Dataset columns:
  doi: string (lengths 0–570)
  pub_date: string (355 distinct values)
  sections: list (lengths 1–245)
  abstract: string (lengths 0–5.25k)
  title: string (lengths 0–228)
  figures: list (lengths 0–130)
  authors: string (lengths 0–11.9k)
  references: list (lengths 0–835)
  formulas: list (lengths 0–679)
10.1006/csla.2000.0138
2023-05-16
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b37", "b15", "b21", "b19", "b13", "b38", "b12", "b17", "b37", "b0", "b36", "b25", "b8", "b17", "b36" ], "table_ref": [], "text": "A spectrum of studies recently arose in Natural Language Processing (NLP), which incorporates intermediate supervision signals into the model by simply converting the intermediate signals into textual sequences and prepending or appending these sequences to the output sequence. It benefits tasks such as math word problems (Wei et al., 2022), commonsense reasoning (Liu et al., 2022), programs execution (Nye et al., 2022), summarisation (Narayan et al., 2021), etc. This trend further There she is. That she is. That is she. triggered the collection of a new dataset with intermediate results (Lewkowycz et al., 2022) and corresponding theoretical analysis (Wies et al., 2022). Intermediate supervision signals show consistent benefits to these various sequence generation tasks and Neural Machine Translation (NMT) is a basic and typical sequence generation task in the NLP community. However, it remains an open question whether and how intermediate signals can be defined and leveraged for NMT. Meanwhile, previous studies (Koehn and Knowles, 2017;Müller et al., 2020) found that NMT suffers from poor domain robustness, i.e. the generalisation ability to unseen domains. Such an ability not only has theoretical meaning, but also has practical value since: 1) the target domain(s) may be unknown when a system is built; 2) some language pairs may only have training data for limited domains. Since the recent study (Wei et al., 2022) Different from math problem-solving tasks, machine translation tasks do not have explicit intermediate results to serve as the intermediate signals. A recent work (Voita et al., 2021b) found that NMT acquires the three core SMT competencies, target-side language modelling, lexical translation and reordering in order during the course of the training. Inspired by this work, we borrow tech-niques in SMT to produce intermediate sequences as the intermediate signals for NMT. Specifically, we first obtain the word alignments for the parallel corpus and use it to produce the word-for-word translations (lex) and the aligned word-for-word translations (ali) to resemble the lexical translation and reordering competencies in SMT. As shown in Figure 1, the intermediate sequences resemble structurally approaching the target from the source progressively, which shares a similar spirit of how humans do translation or reasoning about translation step by step, thus named Progressive Translation.\nOur intuition is that these intermediate sequences inject an inductive bias about a domain-agnostic principle of the transformation between two languages, i.e. word-for-word mapping, then reordering, and finally refinement. Such a bias limits the learning flexibility of the model but prevents the model from building up some spurious correlations (Arjovsky et al., 2019) which harm out-ofdomain performance.\nHowever, previous works have shown that NMT is prone to overly relying on the target history (Wang and Sennrich, 2020;Voita et al., 2021a), which is partially correlated with exposure bias (Ranzato et al., 2016) (a mismatch between training and inference), especially under domainshift. Simply prepending these introduced intermediate sequences to the target would introduce spurious causal relationships from the intermediate sequences to the target. 
As a result, these intermediate sequences would potentially mislead the model about the prediction of the target, due to erroneous intermediate sequences during inference. To alleviate this spurious causal relationship, we introduce the full-permutation multi-task learning framework, where the target and intermediate sequences are fully permuted. The Minimum Bayes Risk (Goel and Byrne, 2000) decoding algorithm is used to select a consensus translation from all permutations to further improve the performance.\nWe first test our proposed framework on IWSLT'14 German→English and find that the proposed intermediate sequence can improve the domain robustness of NMT. The permutation multi-task learning is important for the intermediate sequence which is prone to erroneous during inference. To examine the generality of our methods, we conduct experiments on another two domain-robustness datasets in NMT, OPUS German→English and a low resource German→Romansh scenario. Our methods show consistent out-of-domain improvement over these two datasets.\nMoreover, previous works (Müller et al., 2020;Wang and Sennrich, 2020) found that hallucinated translations are more pronounced in out-of-domain setting. Such translations are fluent but completely unrelated to the input, and they may cause more serious problems in practical use due to their misleading nature. Therefore, we manually evaluate the proportion of hallucinations. Results show that our methods substantially reduce the amount of hallucinations in out-of-domain translation. Finally, since the corpus size in the main experiments is relatively small, we investigate the effectiveness of our methods when scaling up the corpus sizes. Results show that our methods are especially effective under the low-resource scenarios." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b19", "b20", "b27" ], "table_ref": [], "text": "Intermediate Supervision Signals. Some existing works in the broader NLP community try to incorporate intermediate sequences into the model. We take two typical examples of them to better distinguish our work from other works. Narayan et al. (2021) (Ng et al., 2020) or auxiliary tasks where the target history is less informative (Sánchez-Cartagena et al., 2021) named MTL-DA framework. The main difference between our PT framework and the MTL-DA framework is that the MTL-DA framework treats each target-side sequence as an independent task conditioned on the source, whereas PT also encourages the model to learn the transformational relations between any pair of target-side sequences, which may help the model to generalise better across domains." }, { "figure_ref": [], "heading": "Multi-task Learning:", "publication_ref": [], "table_ref": [], "text": "<123> is a control token indicating the order of three sequences. 1: lex; 2: ali; 3: tgt, then <123> is for the task where the target is in order of lex, ali and tgt. <lex>, <ali>, <tgt> is the special tokens prepended to lex, ali, tgt separately.\nSource: Das ist sie. Target: There she is." }, { "figure_ref": [], "heading": "Old training pair:", "publication_ref": [ "b10", "b3", "b5", "b40" ], "table_ref": [], "text": "New training pairs: Target: <lex> That is she. <ali> That she is. <tgt> There she is.\nTarget: <tgt> There she is. <ali> That she is. <lex> That is she.\nSource: <123> Das ist sie.\nSource: <321> Das ist sie. Statistical Machine Translation in NMT. 
The intermediate sequences of PT are produced using the word alignments and reordering components in Statistical Machine Translation (SMT). There are works on improving NMT with SMT features and techniques (He et al., 2016;Chen et al., 2016;Du and Way, 2017;Zhao et al., 2018). However, these works either modify the architecture of the neural network or require more than one model to produce the translation (e.g. a rule-based pre-ordering model and a NMT model etc.). To the best of our knowledge, we are the first to incorporate features from SMT into NMT by converting the features into textual sequences and prepending these to the target without requiring extra models or modifying the neural architecture.\n3 Approach" }, { "figure_ref": [], "heading": "Intermediate Sequences", "publication_ref": [ "b7", "b22", "b11", "b30" ], "table_ref": [], "text": "The traditional SMT decomposes the translation task into distinct components where some features could potentially be the intermediate supervision ali: lex is reordered so that the word alignments from the target to lex is monotonic. The word alignments used here are target-to-source alignments because it is equivalent to the target-to-lex alignments since lex is word-for-word mapped from the source. The words in the target which is assigned to \"NULL\" are omitted during reordering.\nlex, ali and target (tgt) are prefixed with a special token separately for extracting the corresponding sequence from the predicted output. The one-tomany (both source-to-target and target-to-source) word alignments are obtained with mgiza++ (Gao and Vogel, 2008;Och and Ney, 2003) 1 , a SMT word alignments tool, on the in-domain training corpus, following the default parameter provided in train-model.perl by Moses (Koehn et al., 2007) 2 . The one-to-one word alignments are built by computing the intersection between the one-to-many word alignments in both directions. The bilingual lexicon is obtained by associating each source word to the target word it is most frequently aligned within the one-to-one word alignments.\nThe learning of word alignments and transformations of lex and ali are at the word level. The BPE (Sennrich et al., 2016) word segmentation is trained on src-tgt parallel data as normal and applied to both source-target parallel sequences and intermediate sequences (the target-language vocabulary is applied to split the words in the intermediate sequences).\nWe expect that the introduced intermediate sequences would benefit the domain robustness of NMT. Because the proposed intermediate sequences serve as a supervision signal to provide the model with an explicit path for learning the transformational relations from source to target. Such signals inject an inductive bias about one kind of domain-agnostic principle of the transformation between two languages, i.e. word-for-word mapping, then reordering, finally refinement. This injected bias limits the learning flexibility of the neural model but prevents the model from building up some spurious correlations which harm out-ofdomain performance." }, { "figure_ref": [ "fig_4" ], "heading": "Spurious Causality Relationship", "publication_ref": [], "table_ref": [], "text": "To introduce these intermediate sequences as intermediate supervision signals to the model, we prepend them to the output sequence in training. However, simply prepending these produced intermediate sequences to the target would potentially introduce spurious causality relationships from presequence to post-sequence. 
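Before turning to how this issue is addressed, the sketch below illustrates how the lex and ali sequences of Section 3.1 could be derived once a bilingual lexicon and target-to-source word alignments are available. It is a minimal illustration rather than the paper's implementation: the function names and the list-of-source-positions alignment format are our own assumptions.

```python
def make_lex(src_tokens, lexicon):
    # Word-for-word translation: map each source token through the bilingual
    # lexicon; tokens missing from the lexicon are copied unchanged.
    return [lexicon.get(tok, tok) for tok in src_tokens]


def make_ali(lex_tokens, tgt2src):
    # Reorder lex so that the target-to-lex alignment becomes monotonic:
    # walk the target positions in order and emit the lex token(s) each one
    # aligns to; target words aligned to NULL (None) are omitted.
    ali = []
    for src_positions in tgt2src:          # one entry per target position
        if src_positions is None:          # aligned to NULL
            continue
        ali.extend(lex_tokens[i] for i in src_positions)
    return ali


# Figure 1/2 example: source "Das ist sie", target "There she is"
# (the target-to-source alignment below is assumed for illustration).
lexicon = {"Das": "That", "ist": "is", "sie": "she"}
lex = make_lex("Das ist sie".split(), lexicon)   # ['That', 'is', 'she']
ali = make_ali(lex, [[0], [2], [1]])             # ['That', 'she', 'is']
```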
For example, prepending lex, ali to the target would introduce the causal relationships of lex → ali → tgt. These are spurious causality relationships because the model is highly unlikely to get the gold-standard pre-sequences (lex or ali) during inference as it does in training, especially under the domain shift where the performance is relatively poor. Therefore, the model should learn that the source (input) is the only reliable information for any target-side sequences. Note that such a spurious causality relationship in principle results from a mismatch between training and inference in the standard training-inference paradigm of NMT, which is termed exposure bias by the community.\nIntuitively, if the model could predict the target-side sequences in any order, then the causality relationship between target-side sequences should be reduced. Therefore, we propose to fully permute the target-side sequences, i.e. the intermediate sequences (lex or ali) and the target sequence (tgt). Figure 2 illustrates the training data after permutation when we prepend both lex and ali to the target. The source is prefixed with a control token for each permutation, i.e. 1: lex; 2: ali; 3: tgt, then <123> is the control token for the permutation where the target is in the order of lex, ali and tgt.\nAs shown in Figure 3, with the permutation, we create counterfactual data which disentangles the causal relations of lex → ali → tgt and enhances the causal relations from the source to each of these three sequences. Therefore, the full-permutation multi-task training better balances the model's reliance on the source and the target history, at least on the pre-sequence(s)." }, { "figure_ref": [], "heading": "Minimum Bayes Risk Decoding", "publication_ref": [ "b6", "b6", "b31" ], "table_ref": [], "text": "From our preliminary experiments, we found that various test sets prefer different generation orders of the permutation. For example, the order lex-ali-tgt performs best on some test sets whereas tgt-ali-lex performs best on some other test sets. Therefore, we suspect that the translation quality would be further improved if we could dynamically select the best candidate translation from all permutations. Inspired by Eikema and Aziz (2021), we use Minimum Bayes Risk (MBR) decoding to select a consensus translation from all permutations.\nMBR aims to find a translation that maximises expected utility (or minimises expected risk) over the posterior distribution. In practice, the posterior distribution is approximated by drawing a pool of samples S = (s_1, ..., s_n) of size n from the model:\n$y = \operatorname*{argmax}_{s_i \in S} \frac{1}{n} \sum_{s_j \in S} u(s_i, s_j)$ (1)\nwhere u is the utility function to compute the similarity between two sequences. In our experiment, the samples S are the translations from all permutations.\nFollowing Eikema and Aziz (2021), we use BEER (Stanojević and Sima'an, 2014) as the utility function, and the released toolkit 3 for MBR decoding."
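As a rough illustration of how Equation (1) could be applied on top of the permutation orders, here is a minimal Python sketch. The control-token orders, the dictionary of decoded candidates, and the unigram-overlap utility are stand-ins chosen for illustration; the paper itself uses BEER as the utility and a released MBR toolkit.

```python
from itertools import permutations

# Control-token orders for the three target-side sequences (1: lex, 2: ali, 3: tgt).
ORDERS = ["".join(p) for p in permutations("123")]   # '123', '132', ..., '321'


def unigram_overlap(a, b):
    # Crude stand-in for a sentence-level utility such as BEER:
    # Jaccard overlap of whitespace-separated tokens.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)


def mbr_select(candidates, utility=unigram_overlap):
    # candidates: {order: final tgt translation decoded under that control token}.
    # Pick the hypothesis with the highest average utility against the whole
    # pool (Equation 1), i.e. the consensus translation.
    hyps = list(candidates.values())
    return max(hyps, key=lambda h: sum(utility(h, other) for other in hyps) / len(hyps))
```

In this setting, the pool would contain the six tgt hypotheses extracted from the outputs of the six permutation tasks for the same source sentence.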
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b27", "b20", "b2", "b27", "b14", "b17", "b17", "b20", "b28", "b30" ], "table_ref": [], "text": "We work on three datasets involving two language pairs, which were used in previous works on the domain robustness in NMT (Sánchez-Cartagena et al., 2021;Ng et al., 2020).\nIWSLT'14 DE→EN IWSLT'14 (Cettolo et al., 2014) German→English (DE→EN) is a commonly used small-scale dataset in NMT, which consists of 180 000 sentence pairs in the TED talk domain. Following Sánchez-Cartagena et al. (2021), the validation and in-domain (ID) testing sets are tst2013 and tst2014 separately; and out-of-domain (OOD) test sets consist of IT, law and medical domains from OPUS (Lison and Tiedemann, 2016) collected by Müller et al. (2020) 4 .\nOPUS DE→EN & Allegra DE→RM are two benchmarks of domain-robustness NMT released by Müller et al. (2020). OPUS comprises five domains: medical, IT, law, koran and subtitles. Following Ng et al. (2020), we use medical as ID for training (which consists of 600 000 parallel sentences) and validation and the rest of four domains as OOD test sets. Allegra (Scherrer and Cartoni, 2012) German→Romansh (DE→RM) has 100 000 sentence pairs in law domain. The test OOD domain is blogs, using data from Convivenza.\nWe tokenise and truecase all datasets with Moses 3 https://github.com/Roxot/mbr-nmt 4 https://github.com/ZurichNLP/ domain-robustness and use shared BPE with 10 000 (on IWSLT'14) and 32 000 (on OPUS and Allegra) for word segmentation (Sennrich et al., 2016)." }, { "figure_ref": [], "heading": "Models and Evaluation", "publication_ref": [ "b29", "b32", "b36", "b20", "b23", "b24" ], "table_ref": [], "text": "All experiments are done with the Nematus toolkit (Sennrich et al., 2017) based on the Transformer architecture (Vaswani et al., 2017) 5 . The baseline is trained on the training corpus without using intermediate sequences. We follow Wang and Sennrich (2020) to set hyperparameters (see Appendix) on three datasets. For our framework, we scale up the token batch size proportional to the length of the target for a fair comparison, e.g. if the target-side sequence is three times longer than the original target, we scale up the batch size to three times as well. 6 . The performance of the original order (lex)-(ali)-tgt is used for validation and testing. We conduct early-stopping if the validation performance underperforms the best one over 10 times of validation in both the translation quality (BLEU) and the cross entropy loss.\nWe also compare to two recently proposed methods of domain robustness in NMT. SSMBA (Ng et al., 2020) generates synthetic training data by moving randomly on a data manifold with a pair of corruption and reconstruction functions. Re-verse+Mono+Replace (Sánchez-Cartagena et al., 2021) (RMP) introduces three auxiliary tasks where the target history is less informative.\nWe report cased, detokenised BLEU (Papineni et al., 2002) with SacreBLEU (Post, 2018) 7 . Each experiment is independently run for three times, and we report the average and standard deviation to account for optimiser instability." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b36" ], "table_ref": [ "tab_1", "tab_2" ], "text": "We test our proposal mainly on IWSLT'14 DE→EN. Table 1 summarises the results. 1 is the baseline system which is trained on parallel corpus only without any data augmentation. 
The average OOD is computed by averaging results across all OOD test sets. Single lex benefits OOD whereas ali does not. Firstly, we simply prepend the produced intermediate sequence(s) (any one of them and both of them in the order of lex-ali) to the target sequence. Results show that single lex ( 2 ) significantly improves the OOD performance by 2.2 BLEU, at the cost of 0.9 BLEU decrease in in-domain performance. However, the introduction of ali deteriorates the performance on both in-domain (ID) and OOD test sets ( 3 and 4 ). We argue that this comes from the reason that the learning of generating ali is more difficult than generating lex (ali needs an extra reordering step and also the produced ali is noisy due to the word alignment errors). As a result, ali is more erroneous than lex during inference. Therefore, generation quality of the target deteriorates due to its causal dependency on ali. ali benefits OOD with the support of permutation multi-task learning. We try to alleviate the problem by introducing the permutation multi-task learning on top of 2 ∼ 4 . Results show that the permutation successfully alleviates the deterioration of introducing ali, bringing positive results for both ID and OOD ( 3 → 6 , 4 → 7 ). With the permutation, a single ali intermediate sequence ( 6) can improve OOD over the baseline by 2 BLEU and the combination of lex and ali ( 7 ) bring further improvement on OOD over single lex ( 2 ) or single ali ( 6 ) by 0.5 and 0.7 BLEU respectively. The permutation shows a negative effect on single lex ( 2 → 5 ). Because the lex is very easy to learn, few error would occur when predicting lex. Therefore, permutation is not effective and even has negative effects as it makes the neural model hard to focus on learning the task of lex-tgt, leading to inferior performance. MBR decoding brings further improvement. For the lex, ali, tgt with permutation, there are six permutations in total. We dynamically select a consensus translation over each input data by performing MBR decoding over translation from all permu-tations. Results show MBR ( 7 → 8 ) could further improve the OOD and ID performances by 0.4 and 0.6 BLEU respectively, and outperforms baseline OOD by 3.1 BLEU at the cost of 1.6 BLEU decrease in ID. Results on other datasets and comparison with existing methods. As 8 achieves the highest OOD performance and 2 achieves relatively high OOD and ID performance with simpler techniques, we name 8 as PT f ull and 2 as PT simple and evaluate these two methods on another two domainrobustness datasets (OPUS DE→EN and Allegra DE→RM). Table 2 lists the results.\nBaselines (Transformer) in cited works (RMP and SSMBA) are trained under inappropriate hyperparameters, e.g. on IWSLT'14, the cited works uses default hyperparameters for the WMT dataset (more than 10 times larger than IWSLT'14). To enable better comparison by other researchers, we train the Transformer with the appropriate hyperparameters provided by Wang and Sennrich (2020) to build strong baselines, which outperform those in the cited works. We re-implement the other two DA methods based on our baseline for comparison.\nResults show that both PT simple and PT f ull perform most effectively on IWSLT'14 OOD, surpassing the existing methods by 0.7-2.3 BLEU. On the other two new datasets, PT simple and PT f ull show consistent OOD improvement, outperforming our baseline (Transformer) by 1.1-1.6 BLEU and 1.1-1.2 BLEU on OPUS and DE→RM dataset respectively. 
The ID performance of PT simple and PT f ull on these two datasets is less affected than on IWSLT'14, at the cost of 0.3-0.4 BLUE decrease on OPUS and even no decrease on the Allegra DE→RM.\nPT f ull significantly outperforms PT simple OOD on OPUS DE→EN and they show negligible ID differences. For Allegra DE→RM, PT simple and PT f ull shows similar OOD and ID performance." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "BLEU score indicates that the proposed methods can improve domain robustness. In this section, we investigate the reduction of hallucinations and performance on larger datasets of our methods." }, { "figure_ref": [ "fig_5" ], "heading": "Hallucinations", "publication_ref": [ "b39", "b16", "b18", "b17", "b1" ], "table_ref": [ "tab_4", "tab_5" ], "text": "Hallucinations are more pronounced in out-ofdomain translation, and their misleading nature makes them particularly problematic. Therefore, et al., 2021;Yan et al., 2022), and finding solutions for hallucinations (Miao et al., 2021;Müller and Sennrich, 2021) \nTo test our methods for reducing the hallucinations under domain shift, we manually evaluate the proportion of hallucinations on IWSLT'14 and OPUS (DE→EN) OOD test sets. We follow the definition and evaluation by Müller et al. (2020), considering a translation as a hallucination if it is (partially) fluent and its content is not related to the source (inadequate). We report the proportion of such hallucinations in each system.\nThe manual evaluation is performed by two students who have completed an English-medium university program. We collect ∼3000 annotations for 10 configurations. We ask annotators to evaluate translations according to fluency and adequacy. For fluency, the annotator classifies a translation as fluent, partially fluent or not fluent; for adequacy, as adequate, partially adequate or inadequate. We report the kappa coefficient (K) (Carletta, 1996) for inter-annotator and intra-annotator agreement in Table 3, and assess statistical significance with Fisher's exact test (two-tailed).\nTable 4 shows the results of human evaluation. All of the DA methods significantly decrease the proportion of hallucinations by 2%-6% on IWSLT'14 and by 9%-11% on OPUS, with the increase in BLEU. Note that the two metrics do not correlate perfectly: for example, PT corpus as the training data separately. We follow the same data preprocessing as for OPUS (medical). The hyperparameters for training the model are the same as those for IWSLT'14 when the corpus size is 0.2M and those for OPUS (medical) when the corpus size is 2M. For the corpus size of 20M, we increase the token batch size to 16384 instead of 4096 and keep the rest of the hyperparameters the same as for the 2M corpus size. Similarly, each experiment is independently run for three times and we report the average result.\nResults are shown in Figure 4. As expected, increasing the corpus size (0.2M-20M) improves both ID and OOD performance for all systems. When the corpus size is small (0.2M), PT f ull (red line) shows a considerable improvement in OOD over the baseline (blue line) by 4.3 BLEU and even slightly benefits ID, surpassing the baseline by around 0.9 BLEU. However, scaling up the corpus size (0.2M-20M) narrows the gap of OOD improvement (4.3-0.9 BLEU) between the baseline and PT f ull , and widens the ID deterioration from +0.9 to -1.6 BLEU.\nIn general, PT simple (green line) follows a similar tendency as PT f ull , compared to the baseline. 
However, PT simple underperforms the baseline at the corpus size of 2M. By a close inspection, we found that the training of PT simple is relatively unstable. The standard deviations of PT simple for OOD are 1.38, 2.49 and 0.24 on 0.2M, 2M and 20M corpus size respectively, whereas the standard deviations of PT f ull are 0.47, 0.27 and 0.52 respectively. This indicates that the training of PT simple is less stable than PT f ull when the corpus size is 0.2M-2M. The better stability of PT f ull may come from its permutation multi-task learning mechanism.\nPT simple always underperforms PT f ull on OOD for any corpus size. PT simple shows slightly better ID performance than PT f ull when the corpus size is large (2M-20M) but underperforms PT f ull on ID performance in low resource setting where the corpus size is 0.2M." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Our results show that our introduced intermediate signals effectively improve the OOD performance of NMT. Intermediate sequence lex can benefit OOD by simply prepending it to the target. ali is more likely to be erroneous during inference than lex, which results in degenerated target due to the spurious causal relationship. Our proposed permutation multi-task learning successfully alleviates the problem and manifests the effectiveness of ali.\nExperiments also confirm that the MBR algorithm can further improve the performance by dynamically selecting a consensus translation from all permutations. The human evaluation shows that the proposed methods substantially reduce the number of hallucinations of the out-of-domain translation.\nExperiments on the larger corpus sizes indicate that our methods are especially promising in the low-resource scenarios.\nOur work is the first attempt to complete the puzzle of the study of intermediate signals in NMT, and two new ideas may benefit this study in other areas: 1) thinking intermediate signals from the intermediate structures between the transformation from the input to the output; 2) the permutation multi-task learning, instead of only pre/appending intermediate sequences to the output sequence. The permutation multi-task learning + MBR decoding framework is also a potential solution for any multi-pass generation tasks (e.g. speech translation), which suffer from the error propagation problem. The problem is alleviated with the permutation which disentangles causal relations between intermediate and final results. Finally, our work provides a new perspective of data augmentation in NMT, i.e. augmenting data by introducing extra sequences instead of directly modifying the source or target." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The way we use the intermediate sequences is to concatenate new sequences and the target sequence as the new target. As a result, the length of the target increases linearly with the number of intermediate sequences introduced, which increases the cost of inference. In the meantime, Minimum Bayes Risk decoding needs to do prediction multiple times under different control tasks, which further increases the computational cost. However, there are potential solutions to compromise between the computational cost and quality, e.g. learning a student model by distilling the domainrobust knowledge from Progressive Translation." 
}, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "The datasets used in the experiments are all wellknown machine translation datasets and publicity available. Data preprocessing does not involve any external textual resources. Intermediate sequences generated in our data augmentation method are new symbolic combinations of the tokens in the target language. However, the final output of the model is the tgt sequence which is the same as the target sequence in the original training set. Therefore, we would not expect the model trained with our data augmentation method would produce more harmful biases. Finally, we declare that any biases or offensive contexts generated from the model do not reflect the views or values of the authors." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "A.1 Discussion of Intermediate Sequences lex and ali intermediate sequences may come from certain intermediate topological spaces between the transformation from the topological spaces of the source into the target languages. We empirically confirm that such intermediate sequences might look strange but are easier for the neural model to learn and predict, since they are structurally closer to the source. We use the standard Transformer model to learn to predict lex, ali and tgt (this is just the baseline) directly on IWSLT'14 dataset and report the results on both in-domain and outof-domain test sets. Note that the gold-standard sequences of lex and ali on the out-of-domain test sets are produced on the corresponding out-of-domain training sets.\nTable 5 shows that lex is easier to be predicted than ali, and ali is easier to be predicted than tgt by the NMT model, over both in-domain and out-ofdomain test sets. " }, { "figure_ref": [], "heading": " * ", "publication_ref": [], "table_ref": [], "text": "The work described in this paper is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14200620)." } ]
Previous studies show that intermediate supervision signals benefit various Natural Language Processing tasks. However, it is not clear whether there exist intermediate signals that benefit Neural Machine Translation (NMT). Borrowing techniques from Statistical Machine Translation, we propose intermediate signals which are intermediate sequences from the "source-like" structure to the "target-like" structure. Such intermediate sequences introduce an inductive bias that reflects a domain-agnostic principle of translation, which reduces spurious correlations that are harmful to out-of-domain generalisation. Furthermore, we introduce a full-permutation multi-task learning framework to alleviate the spurious causal relations from intermediate sequences to the target, which result from exposure bias. The Minimum Bayes Risk decoding algorithm is used to pick the best candidate translation from all permutations to further improve the performance. Experiments show that the introduced intermediate signals can effectively improve the domain robustness of NMT and reduce the amount of hallucinations on out-of-domain translation. Further analysis shows that our methods are especially promising in low-resource scenarios.
Progressive Translation: Improving Domain Robustness of Neural Machine Translation with Intermediate Sequences *
[ { "figure_caption": "Figure 1 :1Figure 1: An illustration of the transformation from a source sentence to the target translation and its analogy with vision. src: source; tgt: target; lex: word-by-word translation; ali: reorders lex monotonically based on word alignments.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "in intermediate supervision signals showed a benefit of such signals on out-of-domain generalisation, we expect intermediate signals may benefit domain robustness in NMT.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An illustration of the proposed intermediate sequences and multi-task learning framework. src: source.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "signals. More recently,Voita et al. (2021b) found that NMT acquires the three core SMT competencies, i.e. target-side language modelling, lexical translation and reordering, in order during the course of training. Inspired by this work, we produce word-for-word translations and aligned wordfor-word translations as the intermediate sequences to resemble the lexical translation and reordering components separately using the word alignments component in SMT. As shown in Figure 2 Data Augmentation part, for each source-target parallel sequence in the training corpus, we augment their target sequences with two extra intermediate sequences, lex and ali. The two intermediate sequences are prepended to the target to form an augmented target. lex: The source sequence is word-for-word translated based on a bilingual lexicon obtained from the parallel training corpus. Tokens that are not in the lexicon are copied into lex.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Causal graphs for the source and three targetside sequences. Solid arrow denotes casual dependence and dashed arrow represents the statistical correlation between two variables. Left: relations if we simply prepend lex and ali to the target. Right: relations after full-permutation multi-task learning.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Average BLEU (↑) on in-domain and out-of-domain test sets for models trained on OPUS DE→EN (subtitles) with various sizes of the training corpus.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Average BLEU (↑) and standard deviation of ablation results on in-domain and out-of-domain test sets on IWSLT'14 DE→EN. permu: permutation.", "figure_data": "ID Augmentation In-DomainITLawMedicalaverage OOD1Transformer32.1±0.3814.7±0.21 10.1±0.38 17.0±0.2513.9±0.192lex+tgt31.2±0.5016.6±0.26 11.1±0.23 20.7±0.6616.1±0.303ali+tgt25.8±3.5714.4±2.544.5±6.00 17.9±1.3212.2±3.254lex+ali+tgt25.5±7.829.4±1.143.1±2.31 11.3±6.707.9±1.7152 + permu30.1±1.5515.5±0.507.2±5.48 19.0±1.0813.9±2.1863 + permu30.6±0.3016.9±1.00 10.8±0.40 19.9±0.6015.9±0.5374 + permu29.9±0.3218.2±0.89 10.8±0.10 20.7±0.4016.6±0.3787 + MBR30.5±0.2117.7±0.7211.8±0.121.6±0.4917.0±0.35", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Average BLEU (↑) and standard deviation on in-domain and out-of-domain test sets for models trained on IWSLT'14 DE→EN, OPUS DE→EN and Allegra DE→RM. 
PT simple : method 2 in Table1; PT f ull : method 8 in Table1; RMP: Reverse+Mono+Replace many works have been conducted on hallucinations, involving detection of hallucinations(Zhou et al., 2021;Guerreiro et al., 2022;Dale et al., 2022), exploration of the causes of hallucinations (Raunak", "figure_data": "IWSLT'14OPUSDE→RMaugmentation in-domain average OOD in-domain average OOD in-domain average OODResults reported by Sánchez-Cartagena et al. (2021):Transformer30.0±0.108.3±0.85----RMP31.4±0.3011.8±0.48----Results reported by Ng et al. (2020):Transformer--57.010.251.512.2SSMBA--54.910.752.014.7Our experiments:Transformer32.1±0.3813.9±0.1958.8±0.3811.0±0.2254.4±0.2519.2±0.23SSMBA31.9±0.1515.4±0.1058.4±0.2012.1±0.2154.7±0.2020.4±0.15RMP32.2±0.0614.7±0.1759.2±0.2512.6±0.4155.1±0.2121.5±0.23PT simple31.2±0.5016.1±0.3058.5±0.6412.1±0.1854.6±0.1220.3±0.31PT f ull30.5±0.2117.0±0.3558.4±0.1212.6±0.1054.4±0.2120.4±0.51", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "f ull has", "figure_data": "inter-annotatorintra-annotatorannotation P (A) P (E) K P (A) P (E) Kfluency0.520.31 0.30 0.840.39 0.73adequacy0.680.38 0.48 0.880.38 0.81", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Inter-annotator (N=300) and intra-annotator agreement (N=150) of manual evaluation. a higher BLEU than PT simple but PT simple has a similar or even lower proportion of hallucinations than PT f ull . This indicates that PT f ull improves translation quality in other aspects.", "figure_data": "% hallucinations (BLEU)Augmentation IWSLT'14OPUSTransformer11% (13.9) 39% (11.0)RMP9% (14.7) 30% (12.6)SSMBA6% (15.4) 28% (12.1)PT simple5% (16.1) 28% (12.1)PT f ull7% (17.0) 30% (12.6)", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Proportion of hallucinations (↓) and BLEU (↑)on out-of-domain test sets over IWSLT'14 and OPUS(DE→EN).5.2 Tendency by scaling up the corpus sizeSince the size of the training corpus in the previousexperiments ranges from 0.1M to 0.6M (million)samples, which is a low-resource setting for NMT,here we investigate the performance of our methodswhen scaling up the corpus size. We use subtitlesdomain from OPUS as the in-domain training data(because it has around 20M sentence pairs) andthe rest four domains as the OOD test sets. Weuse the first 0.2M, 2M and 20M samples in the", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Average BLEU (↑) and standard deviation on in-domain and out-of-domain test sets on IWSLT'14 DE→EN when the target is lex, ali or tgt separately.", "figure_data": "DomainlexalitgtID94.0±0.20 61.1 ±0.12 32.1±0.38OOD72.6±0.60 47.9 ±0.48 13.9±0.19", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Configurations of NMT systems over three datasets.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
Chaojun Wang; Yang Liu; Wai Lam
[ { "authors": "Martin Arjovsky; Léon Bottou; Ishaan Gulrajani; David Lopez-Paz", "journal": "", "ref_id": "b0", "title": "Invariant risk minimization", "year": "2019" }, { "authors": "Jean Carletta", "journal": "Computational Linguistics", "ref_id": "b1", "title": "Assessing agreement on classification tasks: The kappa statistic", "year": "1996" }, { "authors": "Mauro Cettolo; Jan Niehues; Sebastian Stüker; Luisa Bentivogli; Marcello Federico", "journal": "", "ref_id": "b2", "title": "Report on the 11th IWSLT evaluation campaign", "year": "2014" }, { "authors": "Wenhu Chen; Evgeny Matusov; Shahram Khadivi; Jan-Thorsten Peter", "journal": "", "ref_id": "b3", "title": "Guided alignment training for topic-aware neural machine translation", "year": "2016" }, { "authors": "David Dale; Elena Voita; Loïc Barrault; Marta R ", "journal": "", "ref_id": "b4", "title": "Detecting and mitigating hallucinations in machine translation: Model internal workings alone do well, sentence similarity even better", "year": "2022" }, { "authors": "Jinhua Du; Andy Way", "journal": "Prague Bulletin of Mathematical Linguistics", "ref_id": "b5", "title": "Pre-reordering for neural machine translation: Helpful or harmful?", "year": "2017" }, { "authors": "Bryan Eikema; Wilker Aziz", "journal": "", "ref_id": "b6", "title": "Sampling-based minimum bayes risk decoding for neural machine translation", "year": "2021" }, { "authors": "Qin Gao; Stephan Vogel", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Parallel implementations of word alignment tool", "year": "2008" }, { "authors": "Vaibhava Goel; William J Byrne", "journal": "Computer Speech&Language", "ref_id": "b8", "title": "Minimum bayes-risk automatic speech recognition", "year": "2000" }, { "authors": "M Nuno; Elena Guerreiro; Voita; F T André; Martins", "journal": "", "ref_id": "b9", "title": "Looking for a needle in a haystack: A comprehensive study of hallucinations in neural machine translation", "year": "2022" }, { "authors": "Wei He; Zhongjun He; Hua Wu; Haifeng Wang", "journal": "", "ref_id": "b10", "title": "Improved neural machine translation with smt features", "year": "2016" }, { "authors": "Philipp Koehn; Hieu Hoang; Alexandra Birch; Chris Callison-Burch; Marcello Federico; Nicola Bertoldi; Brooke Cowan; Wade Shen; Christine Moran; Richard Zens; Chris Dyer; Ondřej Bojar; Alexandra Constantin; Evan Herbst", "journal": "", "ref_id": "b11", "title": "Moses: Open source toolkit for statistical machine translation", "year": "2007" }, { "authors": "Philipp Koehn; Rebecca Knowles", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Six challenges for neural machine translation", "year": "2017" }, { "authors": "Aitor Lewkowycz; Anders Andreassen; David Dohan; Ethan Dyer; Henryk Michalewski; Vinay Ramasesh; Ambrose Slone; Cem Anil; Imanol Schlag; Theo Gutman-Solo; Yuhuai Wu; Behnam Neyshabur; Guy Gur-Ari; Vedant Misra", "journal": "", "ref_id": "b13", "title": "Solving quantitative reasoning problems with language models", "year": "2022" }, { "authors": "Pierre Lison; Jörg Tiedemann", "journal": "European Language Resources Association (ELRA", "ref_id": "b14", "title": "OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles", "year": "2016" }, { "authors": "Jiacheng Liu; Alisa Liu; Ximing Lu; Sean Welleck; Peter West; Le Ronan; Yejin Bras; Hannaneh Choi; Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Generated knowledge 
prompting for commonsense reasoning", "year": "2022" }, { "authors": "Mengqi Miao; Fandong Meng; Yijin Liu; Xiao-Hua Zhou; Jie Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Prevent the language model from being overconfident in neural machine translation", "year": "2021" }, { "authors": "Mathias Müller; Annette Rios; Rico Sennrich", "journal": "", "ref_id": "b17", "title": "Domain robustness in neural machine translation", "year": "2020" }, { "authors": "Mathias Müller; Rico Sennrich", "journal": "", "ref_id": "b18", "title": "Understanding the properties of minimum Bayes risk decoding in neural machine translation", "year": "2021" }, { "authors": "Shashi Narayan; Yao Zhao; Joshua Maynez; Gonçalo Simões; Vitaly Nikolaev; Ryan Mcdonald", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b19", "title": "Planning with learned entity prompts for abstractive summarization", "year": "2021" }, { "authors": "Nathan Ng; Kyunghyun Cho; Marzyeh Ghassemi", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "SSMBA: Self-supervised manifold based data augmentation for improving out-of-domain robustness", "year": "2020" }, { "authors": "Maxwell Nye; Anders Johan Andreassen; Guy Gur-Ari; Henryk Michalewski; Jacob Austin; David Bieber; David Dohan; Aitor Lewkowycz; Maarten Bosma; David Luan; Charles Sutton; Augustus Odena", "journal": "", "ref_id": "b21", "title": "Show your work: Scratchpads for intermediate computation with language models", "year": "2022" }, { "authors": "Josef Franz; Hermann Och; Ney", "journal": "Computational Linguistics", "ref_id": "b22", "title": "A systematic comparison of various statistical alignment models", "year": "2003" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Aurelio Marc; Sumit Ranzato; Michael Chopra; Wojciech Auli; Zaremba", "journal": "", "ref_id": "b25", "title": "Sequence level training with recurrent neural networks", "year": "2016-05-02" }, { "authors": "Arul Vikas Raunak; Marcin Menezes; Junczys-Dowmunt", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "The curious case of hallucinations in neural machine translation", "year": "2021" }, { "authors": "M Víctor; Miquel Sánchez-Cartagena; Juan Esplà-Gomis; Felipe Antonio Pérez-Ortiz; Sánchez-Martínez", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Rethinking data augmentation for low-resource neural machine translation: A multitask learning approach", "year": "2021" }, { "authors": "Yves Scherrer; Bruno Cartoni", "journal": "European Language Resources Association (ELRA", "ref_id": "b28", "title": "The trilingual ALLEGRA corpus: Presentation and possible use for lexicon induction", "year": "2012" }, { "authors": "Rico Sennrich; Orhan Firat; Kyunghyun Cho; Alexandra Birch; Barry Haddow; Julian Hitschler; Marcin Junczys-Dowmunt; Samuel Läubli; Antonio Valerio Miceli; Jozef Barone; Maria Mokry; Nȃdejde", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Nematus: a toolkit for neural machine translation", "year": "2017" }, { 
"authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Neural machine translation of rare words with subword units", "year": "2016" }, { "authors": "Miloš Stanojević; Khalil Sima'an", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Fitting sentence level translation evaluation with many dense features", "year": "2014" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b32", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b33", "title": "", "year": "" }, { "authors": "Elena Voita; Rico Sennrich; Ivan Titov", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Analyzing the source and target contributions to predictions in neural machine translation", "year": "2021" }, { "authors": "Elena Voita; Rico Sennrich; Ivan Titov", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Language modeling, lexical translation, reordering: The training process of NMT through the lens of classical SMT", "year": "2021" }, { "authors": "Chaojun Wang; Rico Sennrich", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "On exposure bias, hallucination and domain shift in neural machine translation", "year": "2020" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Brian Ichter; Fei Xia; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b37", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Noam Wies; Yoav Levine; Amnon Shashua", "journal": "", "ref_id": "b38", "title": "Sub-task decomposition enables learning in sequence to sequence tasks", "year": "2022" }, { "authors": "Jianhao Yan; Fandong Meng; Jie Zhou", "journal": "", "ref_id": "b39", "title": "Probing causes of hallucinations in neural machine translations", "year": "2022" }, { "authors": "Yang Zhao; Jiajun Zhang; Chengqing Zong", "journal": "European Language Resources Association (ELRA", "ref_id": "b40", "title": "Exploiting pre-ordering for neural machine translation", "year": "2018" }, { "authors": "Chunting Zhou; Graham Neubig; Jiatao Gu; Mona Diab; Francisco Guzmán; Luke Zettlemoyer; Marjan Ghazvininejad", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Detecting hallucinated content in conditional neural sequence generation", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 349.72, 703.29, 174.69, 34.41 ], "formula_id": "formula_0", "formula_text": "y = argmax s i ∈S 1 n n s j =1 u (s i , s j ) (1)" } ]
10.33011/lilt.v16i.1417
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b20", "b7", "b2", "b9", "b17", "b3", "b15", "b4", "b6", "b1", "b16", "b20", "b9", "b7" ], "table_ref": [], "text": "In this research, we aim to uncover the inherent generalization properties, i.e., inductive bias, of RNNs with respect to how frequently RNNs switch the outputs through time steps in the sequence classification task, which we call output sequence frequency (Figure 1).\nIn supervised learning settings, a model is trained with finite input-output examples {(x 0 , y 0 ), . . . , (x n , y n )} and then tested with unseen input-output pairs. The models that achieve high accuracy on test data are often said to \"generalize well\". However, the important point is that func-Figure 1: An example showing a train dataset and two candidate generalization patterns, each showing a different output sequence frequency. Here, \"aababba\" is the input sequence, and there are four binary train labels 0, 1, 1, 0 each corresponding to the prefix of length 2, 3, 5, 6.\ntion f that satisfies f (x i ) = y i cannot be uniquely determined by finite train examples. This entails that if a model generalizes well to a certain function f , then the model hardly generalizes to another function f that has different outputs for the same unseen inputs, i.e., f (x test ) = f (x test ) but is consistent with the same train examples; f (x i ) = y i . Therefore, it is crucial to understand what kind of functions a model inherently prefers to learn, which is referred to as inductive bias (White and Cotterell, 2021;Kharitonov and Chaabouni, 2020;Delétang et al., 2022;Lovering et al., 2020).\nOur target is Recurrent Neural Network (RNN): a well-known deep learning architecture. A key feature of RNN is that it processes the input incrementally and predicts the output at each time step, producing a sequence of outputs. This is different from other deep learning architectures, e.g., Feed Forward Network (FFN), Convolutional Neural Network (CNN), and Transformers (Vaswani et al., 2017). Due to the incremental processing feature of RNNs, the inputs can be of variable length; RNNs have been used for various tasks in natural language processing, such as sentence classification and text generation. It has also been used as a subcomponent of more complex architectures (Dyer et al., 2016) and to simulate human sequential processing (Steinert-Threlkeld and Szymanik, 2019). Variants of RNN architectures have been proposed so far. The most basic one is the Elman RNN (Elman, 1990). Later, more complex architectures, such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Cho et al., 2014), have been proposed to improve modeling long-term dependencies.\nAlthough deep learning models, including RNNs, are said to be high-performance models, they are essentially black boxes, and it is not clear what inductive bias they may have. In this research, in order to analyze the inductive bias of RNNs, we propose to calculate the output sequence frequency by regarding the outputs of RNNs as discrete-time signals and applying frequency domain analysis. Specifically, we apply discrete Fourier transform (DFT) to the output signals and compute the dominant frequencies to grasp the overall output patterns.\nInductive bias is not straightforward to analyze since it can be affected by various factors such as the task, dataset, and training method; theoretical analysis has been limited to simple architecture such as FFN (Rahaman et al., 2019;Valle-Perez et al., 2019). 
Therefore, empirical studies have been conducted to clarify the inductive bias in various tasks and settings, such as language modeling (White and Cotterell, 2021), sequence classification (Lovering et al., 2020), and sequenceto-sequence (Kharitonov and Chaabouni, 2020). These works approached the problems by designing synthetic datasets and testing several generalization patterns. However, when examining the output sequence frequency, we cannot directly apply these previous methods since enumerating exponentially many output sequence patterns in longer sequences is computationally difficult. To this end, our method makes use of frequency domain analysis to directly calculate the output sequence frequencies and avoid enumerating the candidate generalization patterns.\nIn the experiment, we randomly generated 500 synthetic datasets and trained models on a few data points (Figure 1). As a result, we found:\n• LSTM and GRU have an inductive bias such that the output changes at lower frequencies compared to Elman RNN, which can easily learn higher frequency patterns, • The inductive bias of LSTM and GRU varies with the number of layers and the size of hidden layers." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Inductive Bias Analysis", "publication_ref": [ "b20", "b7", "b7", "b14", "b7", "b7" ], "table_ref": [], "text": "Inductive bias analysis is usually performed by constructing synthetic datasets. This is because data from real tasks are complex and intertwined with various factors, making it difficult to determine what properties of the dataset affect the behavior of the model. For example, White and Cotterell (2021) targeted LSTM and Transformer and investigated whether easy-to-learn languages differ depending on their typological features in language modeling. White and Cotterell (2021) used Context Free Grammar (CFG) to construct parallel synthetic language corpora with controlled typological features. They trained models on each language and computed their perplexities to find that LSTM performs well regardless of word order while the transformer is affected. Another more synthetic example is Kharitonov and Chaabouni (2020). Kharitonov and Chaabouni (2020) targeted LSTM, CNN, and Transformer. They designed four synthetic tasks in the sequence-to-sequence framework and trained models on very small datasets (containing 1~4 data points). To examine the inductive biases of the models, they prepared a pair of candidate generalization patterns, such as COUNT and MEMORIZATION, for each task and compared the models' preference over the candidate patterns by calculating the Minimum Description Length (Rissanen, 1978). Using extremely small train datasets makes it possible to restrict the information models can obtain during training and analyze the models' inherent inductive bias in a more controlled setup.\nIn this research, we take a similar approach as (Kharitonov and Chaabouni, 2020), restricting the train data to extremely small numbers. However, we cannot directly apply the methods of (Kharitonov and Chaabouni, 2020) because the approach of comparing with candidate generalization patterns can be impractical in our case. 
Specifically, when examining the output sequence frequency, it is necessary to feed the models with longer sequences in order to analyze a wide range of frequencies from low to high; there are exponentially many patterns with the same number of output changes in longer sequences, which makes it difficult to exhaustively enumerate the candidate generalization patterns. Therefore, instead of preparing candidate generalization patterns, we directly calculate the output sequence frequency for each model by regarding the outputs of the model as discrete-time signals and applying frequency domain analysis." }, { "figure_ref": [], "heading": "Frequency Domain Analysis", "publication_ref": [], "table_ref": [], "text": "Discrete Fourier Transform (DFT) is a fundamental analysis technique in digital signal processing. Intuitively, DFT decomposes a signal into a sum of finite sine waves of different frequencies, allowing one to analyze what frequency components the original signal consists of. The DFT for a length N discrete-time signal f [0], . . . , f [N -1] is defined by the following equation:\n$F[k] = \sum_{n=0}^{N-1} f[n] \exp\!\left(-\sqrt{-1}\,\frac{2\pi}{N} kn\right).$ (1)\nWhen f [n] is a real-valued signal, it is sufficient to consider only k ∈ {1, . . . , N/2}." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Task", "publication_ref": [], "table_ref": [], "text": "To analyze the output sequence frequency, i.e., how frequently the output changes through time steps, we focus on a simple case of the binary sequence classification task: the inputs are the prefixes of a binary sequence s ∈ {a, b} * . Specifically, given a binary sequence s ∈ {a, b} * , the input space I and the output space O are defined as follows:\nI = {s 0:i | i = 0, . . . , |s| - 1}, (2)\nO = {(1 - p, p) | p ∈ [0, 1]}, (3)\nwhere O is a set of categorical distributions over the binary labels {0, 1}, and p denotes the probability of predicting label 1.\nWithout loss of generality, we can consider only the model's output probability of predicting label 1 for the sequence s 0:i , which we denote by M(s 0:i ).\nIn this way, we can regard the model's output sequence M(s 0:0 ), . . . , M(s 0:|s|-1 ) as a discrete-time signal taking values in [0, 1]." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Train Dataset", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows an intuitive illustration of our dataset construction. Given a sequence s, we randomly generate the binary labels y 0:|s|-1 , where each y i is the label assigned to the prefix s 0:i . When two successive labels y i and y i+1 differ, we say there is a label change (e.g., y 9 and y 10 in Figure 2). 2 We then make a train dataset D by taking instances where the labels change: {(s 0:i , y i ), (s 0:i+1 , y i+1 ) | y i ≠ y i+1 }. For example, in Figure 2, the train data D contains {(aa, 0), (aab, 1), (aababba, 1), (aababbaa, 0), . . .}. Note that the original labels y 0:|s|-1 can be uniquely recovered from D simply by interpolating or extending the labels for other prefixes.\nThe procedure is formalized as follows:\n1. Sample a sequence s ∈ {0, 1} N , where N is the length of the sequence, 2. Sample the number of label changes m ∈ {1, . . . , M}, where M is the maximum number of label changes, 3. Sample the labels y 0:|s|-1 so that all the m label changes do not overlap 3 , i.e. ∀i, j. i < j ∧ y i ≠ y i+1 ∧ y j ≠ y j+1 ⇒ i + 1 < j, 4. 
4. Create a dataset as D = {(s_{0:i}, y_i), (s_{0:i+1}, y_{i+1}) | y_i ≠ y_{i+1}}.
By training models on random input sequences s, we expect the model predictions to represent the inherent generalization property of the model." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "For the analysis, we apply two evaluation metrics." }, { "figure_ref": [], "heading": "Test Cross-entropy Loss", "publication_ref": [], "table_ref": [], "text": "First, we compare the model's output sequence M(s_{0:0}), . . . , M(s_{0:|s|-1}) with the original labels y_{0:|s|-1} by calculating the test cross-entropy loss L_CE. Intuitively, a near-zero L_CE indicates that the model generalizes by simply interpolating or extending the training labels, since we constructed the train datasets so that the original labels can be recovered by interpolation, as described in section 3.2. The loss is formalized as:
$$L_{CE} = -\frac{1}{|T|} \sum_{i \in T} \Big( y_i \ln(M(s_{0:i})) + (1-y_i) \ln(1 - M(s_{0:i})) \Big), \quad (4)$$
where T = {i | (s_{0:i}, _) ∉ D} is the set of test data indices." }, { "figure_ref": [], "heading": "Dominant Frequency", "publication_ref": [], "table_ref": [], "text": "In case L_CE is high, we consider the model's output sequence M(s_{0:0}), . . . , M(s_{0:|s|-1}) as a discrete-time signal and apply frequency domain analysis to look into the model's behavior. More specifically, we apply the DFT to the output signal and obtain the dominant frequency ω_dom. The dominant frequency ω_dom is calculated by simply replacing f[n] in Equation 1 with M(s_{0:n})." }, { "figure_ref": [], "heading": "Experiment Settings", "publication_ref": [ "b6", "b1", "b4" ], "table_ref": [], "text": "Here, we describe the basic settings of our experiment. We use well-known basic RNN architectures: LSTM (Hochreiter and Schmidhuber, 1997), GRU (Cho et al., 2014), and Elman RNN (Elman, 1990). For the decoding, we use a linear decoder without bias followed by a softmax function. We try 4 combinations of hyperparameters: (num_layers, hidden_size) ∈ {(1, 200), (2, 200), (3, 200), (2, 2000)}, where num_layers denotes the number of layers, and hidden_size denotes the size of the hidden layers. For optimization, we train models to minimize the average cross-entropy loss by gradient descent using Adam (Kingma and Ba, 2015) with a learning rate of 1.0 × 10^-4 for 1000 epochs. Finally, we randomly generate 500 train datasets with N = 100, M = 5 and train 10 models with different random seeds for each dataset, architecture, and parameter setting. Note that this sparse setting (a 10:90 train-test data ratio at maximum) keeps the hypothesis space large and thus enables us to analyze the inductive bias of the models as described in section 2.1.
Training all the models took around 30 hours using 8 NVIDIA A100 GPUs." }, { "figure_ref": [], "heading": "Findings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Models Do Not Learn to Interpolate", "publication_ref": [], "table_ref": [], "text": "In order to see if the models generalize simply to interpolate the given labels, we calculate the median test cross-entropy loss of the multiple models trained for each dataset (Figure 3). The dotted vertical line shows the random baseline loss of -ln(1/2) ≈ 0.7. As can be seen in Figure 3, the median test cross-entropy loss is higher than the random baseline for most datasets for all of LSTM, GRU, and Elman RNN.
This indicates that, in most cases, none of the LSTM, GRU, or Elman RNN learns to interpolate in this extremely simple setup, where only the label-changing part is given as training data. We also observe a similar trend in other hyperparameter settings; The test cross-entropy losses for other settings are shown in Appendix A." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4", "fig_5" ], "heading": "Architectural Difference", "publication_ref": [], "table_ref": [], "text": "Now that the test cross-entropy loss has revealed that the patterns learned by the models contain more output changes than the original pattern in the train data, the next step is to see if there are any architecture-specific trends in the output sequence patterns. We calculate the dominant frequency for each model and take the median over the models trained on the same dataset. Figure 4 shows the distribution of median dominant frequencies for LSTM, GRU, and Elman RNN with different hyperparameters. It is clear that, in all settings, LSTM and GRU tend to learn lower-frequency patterns, while the dominant frequencies of Elman RNN tend to be higher. Comparing LSTM and GRU, LSTM has slightly lower-frequency patterns for hidden_size = 200 (Figure 4 (a,b,c)), though the difference is not as clear for hidden_size = 2000 (Figure 4 (d)).\nAn example of sequential outputs of LSTM and Elman is shown in Figure 5. The top rows show the actual model outputs for a specific sequence, and the bottom rows show the DFT of model outputs. In this example, only 4 labels 0, 1, 1, 0 are given to the prefixes of length 60, 61, 84, 85. It is clear that both LSTM and Elman learn periodic patterns but do not learn to interpolate the given train labels. Besides, it is also notable that LSTMs indeed learn lower- frequency patterns compared to Elman RNNs." }, { "figure_ref": [], "heading": "Effect of Hyperparameters", "publication_ref": [], "table_ref": [], "text": "Here, we describe how hyperparameters affect the observed inductive biases." }, { "figure_ref": [ "fig_6" ], "heading": "Number of Layers", "publication_ref": [], "table_ref": [], "text": "Figure 6 shows the median dominant frequencies of num_layers = 1, 2, 3 for LSTM, GRU, and Elman RNN. As for LSTM, it can be seen that the proportion of patterns in the lower-frequency domain tends to increase as the number of layers increases. In other words, despite the increased complexity of the models, LSTMs tend to learn simpler patterns (in the sense that the output changes less). A similar trend is observed for GRU, although not as clear as for LSTM. On the other hand, Elman RNN does not show such apparent differences." }, { "figure_ref": [ "fig_7" ], "heading": "Hidden Layer Size", "publication_ref": [], "table_ref": [], "text": "Figure 7 shows the median dominant frequencies of hidden_size = 200, 2000 for LSTM, GRU, and Elman RNN. Although the trend is not so clear, for LSTM and GRU, the counts are slightly larger for ω dom = 0.5 ∼ 1.0 when hidden_size = 2000, while the counts are larger for ω dom = 0.0 ∼ 0.5 when hidden_size = 200. This is rather the opposite trend from that of num_layers. However, the above trend does not seem to appear in Elman RNN." 
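To make the dominant-frequency metric used throughout these findings concrete, the following is a minimal sketch (illustrative only, not the authors' code) that treats a model's per-prefix outputs as a discrete-time signal and extracts ω_dom via Equation (1); `model_prob` is an assumed stand-in for a trained RNN that returns M(s_{0:i}) for a given prefix.

```python
import numpy as np

def output_signal(model_prob, s):
    """Evaluate M(s_0:0), ..., M(s_0:|s|-1): the probability of label 1
    for every prefix of the binary sequence s."""
    return np.array([model_prob(s[: i + 1]) for i in range(len(s))])

def dominant_frequency(signal):
    """Dominant frequency of a real-valued discrete-time signal.

    Applies the DFT (Equation 1), keeps the components k = 1, ..., N//2,
    and returns omega_dom = 2*pi*k_max / N for the k of maximum amplitude.
    """
    n = len(signal)
    spectrum = np.fft.fft(signal)            # F[0], ..., F[N-1]
    k = np.arange(1, n // 2 + 1)             # skip the DC component F[0]
    k_max = k[np.argmax(np.abs(spectrum[k]))]
    return 2.0 * np.pi * k_max / n

# Toy scorer that flips its prediction every 10 prefix steps:
def toy_scorer(prefix):
    return float((len(prefix) // 10) % 2)

sig = output_signal(toy_scorer, "ab" * 50)   # length-100 prefix signal
print(dominant_frequency(sig))               # ~0.314, i.e. period 20
```

In the reported experiments, such per-model values would then be aggregated into the median over the 10 random seeds trained on each dataset.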
}, { "figure_ref": [], "heading": "Discussion and Limitation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Expressive Capacity and Output", "publication_ref": [ "b10", "b19", "b10", "b19", "b10" ], "table_ref": [], "text": "Sequence Frequency\nOur results do not align with the expressive capacity of RNNs reported in previous work (Merrill et al., 2020;Weiss et al., 2018). Merrill et al. (2020); Weiss et al. (2018) formally showed that LSTM is strictly more expressive than GRU and Elman RNN. On the other hand, in our experiments, LSTM and GRU show a bias toward lower frequencies, while Elman RNN, which has the same expressive capacity as GRU, according to (Merrill et al., 2020), shows an opposite bias toward higher frequencies. Note that the expressive capacity and the inductive bias of a model are basically different concepts. This is because expressive capacity is the theoretical upper bound on the functions a model can represent with all possible combinations of its parameters, regardless of the training procedure. In contrast, inductive bias is the preference of functions that a model learns from finite train data, possibly depending on training settings. However, they are not entirely unrelated because a function that is impossible to learn in terms of expressive capacity will never be learned, which can emerge as inductive bias. We conjecture that the difference between the expressive capacity and the observed inductive bias is due to the simplicity of our experiment setting. This difference is not a negative result: It indicates that inductive bias in such a simple setting is effective in observing detailed differences that cannot be captured by expressive capacity." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Randomness of Outputs", "publication_ref": [ "b16" ], "table_ref": [], "text": "Previous study showed that FFNs hardly learn random functions since they are inherently biased toward simple structured functions (Valle-Perez et al., 2019). We can find a similar trend for RNNs in our experimental results. In other words, by regarding the outputs of RNNs as discrete-time signals, we can confirm that the signals are not random, i.e., white noises. If we assume that the output signals of the RNNs are random, the dominant frequency should be uniformly distributed from low to high-frequency regions. Therefore, the biased distribution in Figure 4 indicates that the outputs of the RNNs are not random signals. This is also clear from the example outputs in Figure 5, where the models show periodic patterns." }, { "figure_ref": [], "heading": "Practical Implication", "publication_ref": [ "b0", "b5" ], "table_ref": [], "text": "For LSTM and GRU, we observed different inductive biases between increasing the number of layers and hidden layer size. Previous study that investigated whether RNNs can learn parenthesis also reported that LSTM and GRU behaved differently when the number of layers and the hidden layer size were increased (Bernardy, 2018). Although the tasks are different, our findings align with the previous work. 
From a practical point of view, these findings suggest that it may be more effective to increase the number of layers than to increase the hidden layer size depending on the target task.\nBesides, the fact that LSTM and GRU, which are known to be \"more practical\" than Elman RNN, tend to learn lower frequency patterns may support the idea that output sequence frequency aligns with \"practical usefulness.\" Furthermore, a concept similar to output sequence frequency has been proposed as a complexity measure in sequence classification: sensitivity (Hahn et al., 2021). While output sequence frequency focuses on the change in output over string length, sensitivity focuses on the change in output when a string is partially replaced, keeping its length. It would be an interesting future direction to examine the validity of inductive biases in output sequence frequency as an indicator of complexity and practical usefulness." }, { "figure_ref": [], "heading": "Limitation", "publication_ref": [], "table_ref": [], "text": "There are some dissimilarities between our experimental setup and practical sequence classification tasks:\n• The task is limited to the binary classification of binary sequences, • Models are trained only on prefixes of a sequence, • The number of train data is extremely small. Therefore, in order to accurately estimate the impact of our findings on the actual task, it is necessary to expand from sequence to language in a multi-label setting with a larger vocabulary.\nDue to the computational complexity, we only tried 4 combinations of hyperparameters. However, it is still necessary to exhaustively try combinations of hyperparameters for a more detailed analysis." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This study focuses on inductive bias regarding the output sequence frequency of RNNs, i.e., how often RNNs tend to change the outputs through time steps. To this end, we constructed synthetic datasets and applied frequency domain analysis by regarding the model outputs as discrete-time signals.\nExperimental results showed that LSTM and GRU have inductive biases towards having low output sequence frequency, whereas Elman RNN tends to learn higher-frequency patterns. Such differences in inductive bias could not be captured by the expressive capacity of each architecture alone. This indicates that inductive bias analysis on synthetic datasets is an effective method for studying model behaviors.\nBy testing different hyperparameters, we found that the inductive biases of LSTM and GRU vary with the number of layers and the hidden layer size in different ways. This confirms that when increasing the total number of parameters in a model, it would be effective not only to increase the hidden layer size but also to try various hyperparameters, such as the number of layers.\nAlthough the experimental setting was limited to simple cases, we believe this research shed some light on the inherent generalization properties of RNNs and built the basis for architecture selection and design." }, { "figure_ref": [ "fig_9", "fig_9" ], "heading": "A Test Cross-entropy Loss", "publication_ref": [], "table_ref": [], "text": "Figure 8 shows the distributions of median test cross-entropies in all settings we tried. As we can see in Figure 8, the median test cross-entropy loss is higher than the random baseline for most datasets in all cases." 
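For reference, the test loss of Equation (4) summarized in this appendix can be computed directly from the per-prefix outputs. The sketch below is illustrative rather than the authors' implementation; `probs`, `labels`, and `train_idx` are assumed arrays holding M(s_{0:i}), y_i, and the prefix indices included in the train dataset D.

```python
import numpy as np

def test_cross_entropy(probs, labels, train_idx):
    """Equation (4): mean binary cross-entropy over the test prefixes,
    i.e. all prefix indices that were NOT in the train set D."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    train_set = set(train_idx)
    test_idx = np.array([i for i in range(len(labels)) if i not in train_set])
    p = np.clip(probs[test_idx], 1e-12, 1 - 1e-12)   # numerical safety
    y = labels[test_idx]
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

# A dataset with one label change at positions 9 -> 10 gives train_idx = [9, 10];
# a model that simply extends the labels would drive this loss toward zero.
labels = np.array([0] * 10 + [1] * 90)
probs = np.full(100, 0.5)                            # an uninformative model
print(test_cross_entropy(probs, labels, train_idx=[9, 10]))  # ~0.693 = -ln(1/2)
```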
}, { "figure_ref": [ "fig_10", "fig_11", "fig_13", "fig_12", "fig_9", "fig_10", "fig_11", "fig_13", "fig_12" ], "heading": "B Raw Cross-entropy Loss", "publication_ref": [], "table_ref": [], "text": "In Figure 9, Figure 10, Figure 12, and Figure 11, we show the scatter plots of the train/test cross-entropies for LSTM, GRU, and Elman RNN for all the settings. The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median test cross-entropy. The dotted vertical line shows the random baseline loss of -ln(1/2) ≈ 0.7. In Figure 8, the number of datasets having near-zero test cross-entropy is relatively higher for LSTM and GRU. For example, from Figure 9 (a), Figure 10 (a), Figure 12 (a), and Figure 11 (a), we can see that the datasets with near-zero test cross-entropy loss mostly have only 1 label change. This indicates that LSTM and GRU indeed sometimes learn to naively extend the given labels, but mostly in the extreme case where the dataset has only 1 label change. However, for Elman RNN, we cannot find such a trend." }, { "figure_ref": [ "fig_14", "fig_15", "fig_16", "fig_6", "fig_9", "fig_14", "fig_15", "fig_16", "fig_6" ], "heading": "C Raw Dominant Frequency", "publication_ref": [], "table_ref": [], "text": "In Figure 13, Figure 14, Figure 15, and Figure 16, we show the scatter plots of the dominant frequencies for LSTM, GRU, and Elman RNN for all the settings. The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median dominant frequency.
In Figure 8 (b,c), the number of datasets having the lowest frequency pattern is relatively higher for LSTM and GRU. We can see that these lowest-frequency patterns are mostly restricted to the datasets having only 1 label change (Figure 13 (a), Figure 14 (a), Figure 15 (a), and Figure 16 (a)). This is consistent with the findings in Appendix B. When a model simply learns to extend the labels, its dominant frequency is expected to be near its lowest when there is only one label change in the training dataset, since the output sequence then contains only one output change." } ]
A unique feature of Recurrent Neural Networks (RNNs) is that they process input sequences incrementally. In this research, we aim to uncover the inherent generalization properties, i.e., inductive bias, of RNNs with respect to how frequently RNNs switch the outputs through time steps in the sequence classification task, which we call output sequence frequency. Previous work analyzed inductive bias by training models on a few synthetic data points and comparing the models' generalization with candidate generalization patterns. However, when examining the output sequence frequency, previous methods cannot be directly applied since enumerating candidate patterns is computationally difficult for longer sequences. To this end, we propose to directly calculate the output sequence frequency for each model by regarding the outputs of the model as discrete-time signals and applying frequency domain analysis. Experimental results showed that Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have an inductive bias towards lower-frequency patterns, while Elman RNN tends to learn patterns in which the output changes at high frequencies. We also found that the inductive bias of LSTM and GRU varies with the number of layers and the size of hidden layers.
Empirical Analysis of the Inductive Bias of Recurrent Neural Networks by Discrete Fourier Transform of Output Sequences
[ { "figure_caption": "1 Here, k = 1 corresponds to the lowest frequency component and k = N 2 to the highest. One useful measure for analyzing the property of the signal f [n] is the dominant frequency (Ng and Goldberger, 2007). In short, dominant frequency is the frequency component of maximum amplitude and is expected to represent the general periodic pattern of the original signal f [n]. The dominant frequency ω dom is defined by ω dom = 2π N k max , where k max = arg max{|F [k]|}.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of train dataset construction. The train dataset contains only the instances corresponding to the label changes.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The median test cross-entropy loss counts for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (2, 200). The dotted vertical line shows the random baseline loss of -ln( 1 2 ).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "(a) The results for num_layers = 1, hidden_size = 200. (b) The results for our base case num_layers = 2, hidden_size = 200. (c) The results for num_layers = 3, hidden_size = 200. (d) The results for num_layers = 2, hidden_size = 2000.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The median dominant frequency counts for LSTM, GRU, and Elman RNN with different hyperparameters.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: An example of LSTM and Elman RNN with (num_layers, hidden_size) = (2, 200). The top rows show the actual model outputs for a specific sequence, and the bottom rows show the DFT of model outputs. In this example, 4 labels 0, 1, 1, 0 are assigned to the prefixes of length 60, 61, 84, 85. The Red and blue vertical lines correspond to the labels 0, 1, respectively. The results of 10 models with different random seeds are shown.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The median dominant frequencies of num_layers = 1, 2, 3 for LSTM, GRU, and Elman RNN with hidden_size = 200.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The median dominant frequencies of hidden_size = 200, 2000 for LSTM, GRU, and Elman RNN with num_layers = 2.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "(a) The results for num_layers = 1, hidden_size = 200. In this setting, the mean of the median train cross-entropy loss was at most 0.04, and the standard deviation was at most 0.09. (b) The results for our base case num_layers = 2, hidden_size = 200. In this setting, the mean of the median train cross-entropy loss was at most 0.01, and the standard deviation was at most 0.05. (c) The results for num_layers = 3, hidden_size = 200. In this setting, the mean of the median train cross-entropy loss was at most 0.004, and the standard deviation was at most 0.03. 
(d) The results for num_layers = 2, hidden_size = 2000.In this setting, the mean of the median train cross-entropy loss was at most 0.02, and the standard deviation was at most 0.09.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: The median test cross-entropy loss counts for LSTM, GRU, and Elman RNN with different hyperparameters. The dotted vertical line shows the random baseline loss of -ln( 1 2 ) ≈ 0.7.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: The Scatter plot of the train/test cross-entropies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (1, 200). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median test cross-entropy. The dotted vertical line shows the random baseline loss of -ln(12 ) ≈ 0.7.", "figure_data": "", "figure_id": "fig_10", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: The Scatter plot of the train/test cross-entropies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (2, 200). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median test cross-entropy. The dotted vertical line shows the random baseline loss of -ln(12 ) ≈ 0.7.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: The Scatter plot of the train/test cross-entropies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (3, 200). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median test cross-entropy. The dotted vertical line shows the random baseline loss of -ln(12 ) ≈ 0.7.", "figure_data": "", "figure_id": "fig_12", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: The Scatter plot of the train/test cross-entropies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (2, 2000). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median test cross-entropy. The dotted vertical line shows the random baseline loss of -ln(12 ) ≈ 0.7.", "figure_data": "", "figure_id": "fig_13", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: The Scatter plot of the dominant frequencies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (1, 200). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median dominant frequencies.", "figure_data": "", "figure_id": "fig_14", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: The Scatter plot of the dominant frequencies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (2, 200). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median dominant frequencies.", "figure_data": "", "figure_id": "fig_15", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: The Scatter plot of the dominant frequencies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (3, 200). 
The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median dominant frequencies.", "figure_data": "", "figure_id": "fig_16", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16: The Scatter plot of the dominant frequencies for LSTM, GRU, and Elman RNN with (num_layers, hidden_size) = (2, 2000). The horizontal dashed line separates the datasets by the number of label changes. Besides, the datasets are also sorted by the median dominant frequencies.", "figure_data": "", "figure_id": "fig_17", "figure_label": "", "figure_type": "figure" } ]
Taiga Ishii; Ryo Ueda; Yusuke Miyao
[ { "authors": "Jean-Philippe Bernardy", "journal": "", "ref_id": "b0", "title": "Can Recurrent Neural Networks Learn Nested Recursion? Linguistic Issues in Language Technology", "year": "2018" }, { "authors": "Kyunghyun Cho; Bart Van Merriënboer; Caglar Gulcehre; Dzmitry Bahdanau; Fethi Bougares; Holger Schwenk; Yoshua Bengio", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", "year": "2014" }, { "authors": "Grégoire Delétang; Anian Ruoss; Jordi Grau-Moya; Tim Genewein; Kevin Li; Elliot Wenliang; Marcus Catt; Shane Hutter; Pedro A Legg; Ortega", "journal": "", "ref_id": "b2", "title": "Neural networks and the chomsky hierarchy", "year": "2022" }, { "authors": "Chris Dyer; Adhiguna Kuncoro; Miguel Ballesteros; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Recurrent Neural Network Grammars", "year": "2016" }, { "authors": "Jeffrey L Elman", "journal": "Cognitive Science", "ref_id": "b4", "title": "Finding Structure in Time", "year": "1990" }, { "authors": "Michael Hahn; Dan Jurafsky; Richard Futrell", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b5", "title": "Sensitivity as a Complexity Measure for Sequence Classification Tasks", "year": "2021" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural Computation", "ref_id": "b6", "title": "Long Short-Term Memory", "year": "1997" }, { "authors": "Eugene Kharitonov; Rahma Chaabouni", "journal": "", "ref_id": "b7", "title": "What they do when in doubt: A study of inductive biases in seq2seq learners", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b8", "title": "Adam: A Method for Stochastic Optimization", "year": "2015-05-07" }, { "authors": "Charles Lovering; Rohan Jha; Tal Linzen; Ellie Pavlick", "journal": "", "ref_id": "b9", "title": "Predicting Inductive Biases of Pre-Trained Models", "year": "2020" }, { "authors": "William Merrill; Gail Weiss; Yoav Goldberg; Roy Schwartz; Noah A Smith; Eran Yahav", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "A Formal Hierarchy of RNN Architectures", "year": "2020" }, { "authors": "Jason Ng; Jeffrey J Goldberger", "journal": "Journal of Cardiovascular Electrophysiology", "ref_id": "b11", "title": "Understanding and Interpreting Dominant Frequency Analysis of AF Electrograms", "year": "2007" }, { "authors": "Aristide Nasim Rahaman; Devansh Baratin; Felix Arpit; Min Draxler; Fred Lin; Yoshua Hamprecht; Aaron Bengio; Courville", "journal": "", "ref_id": "b12", "title": "On the Spectral Bias of Neural Networks", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b13", "title": "", "year": "" }, { "authors": "J Rissanen", "journal": "Automatica", "ref_id": "b14", "title": "Modeling by shortest data description", "year": "1978" }, { "authors": "Shane Steinert; -Threlkeld ; Jakub Szymanik", "journal": "Semantics and Pragmatics", "ref_id": "b15", "title": "Learnability and semantic universals", "year": "2019" }, { "authors": "Guillermo Valle-Perez; Chico Q Camargo; Ard A Louis", "journal": "", "ref_id": "b16", "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": 
"Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Attention is All you Need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b18", "title": "", "year": "" }, { "authors": "Gail Weiss; Yoav Goldberg; Eran Yahav", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "On the Practical Computational Power of Finite Precision RNNs for Language Recognition", "year": "2018" }, { "authors": "Jennifer C White; Ryan Cotterell", "journal": "", "ref_id": "b20", "title": "Examining the Inductive Bias of Neural Language Models with Artificial Languages", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 70.35, 350.7, 218.78, 69.38 ], "formula_id": "formula_0", "formula_text": "F [k] = N -1 n=0 f [n] exp - √ -1 2π N kn . (1) When f [n] is a real-value signal, it is sufficient to consider only k ∈ {1, . . . , N 2 }." }, { "formula_coordinates": [ 3, 110.8, 707.13, 178.33, 26.35 ], "formula_id": "formula_1", "formula_text": "I = {s 0:i | i = 0, . . . |s| -1}, (2) O = {(1 -p, p) | p ∈ [0, 1]},(3)" }, { "formula_coordinates": [ 4, 82.71, 454.43, 206.42, 46.34 ], "formula_id": "formula_2", "formula_text": "L CE = - 1 |T | i∈T (y i ln(M(s 0:i )) +(1-y i ) ln(1 -M(s 0:i ))),(4)" } ]
2023-05-16
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "", "publication_ref": [ "b2", "b5", "b2", "b6", "b12", "b13", "b14", "b8", "b9" ], "table_ref": [], "text": "from itself to guide its own model training, resulting in improved model accuracy.\nTo support SKD, existing works have explored various methods for extracting useful knowledge from a model itself, as shown in Fig. 1. In general, a neural network model can be divided into several blocks. Each block may contain one or multiple layers in the model. Based on this model architecture, a popular SKD approach named Multi-exit SKD [3]- [6] is to re-train the early layers (also known as shallow layers) of the model under the guidance of counterpart's outputs or the model's own final output, as shown in Fig. 1 (a). For example, Be Your Own Teacher (BYOT) [3] adds an Auxiliary Classifier (AC) to each block of the model. It uses the knowledge extracted from the final output of the model to train the ACs and update corresponding blocks. Multi-exit SKD helps to ensure that all blocks in the model fully learn the features of the training dataset. However, it introduces a high computational overhead for training the additional ACs. For instance, it takes over 5 hours to train BYOT on the CIFAR100 dataset using the ResNet-101 model, compared with about 3.48 hours for training the original model.\nExisting SKD methods in the literature with less computational cost use regularization methods that leverage information from history models (i.e., time-wise SKD (TW-SKD)) [7]- [13], as shown in Fig. 1 (b) and the predictions from the same class of input data (i.e., intra-class SKD (IC-SKD)) [14], [15] as shown in Fig. 1 (c). TW-SKD methods, such as self-Distillation from the Last mini-Batch (DLB) [9], leverage the idea that a \"poor\" teacher that has a low model accuracy may provide useful knowledge compared to a well-trained teacher [10] and use historical models as the \"poor\" teacher. However, the output of the historical model can only provide limited highly abstracted and inexplicable knowledge on account that model at different training stages learns different levels of features of the input data. IC-SKD aims to learn a more generalized model output probability distribution for each class of data by minimizing the distance between the model outputs of different data that belong to the same class. However, IC-SKD overlooks the similarity of inter-class model output probability distributions, which can result in limited model performance and overfitting.\nIn this paper, we aim to answer the following key question: How to design SKD to capture more complete features of input data with relatively low computation cost such that to promote model performance?\nWe answer the above question by developing a novel informative teacher and learning a consistent shape of the model outputs of all data regardless of their belonging classes. Note that, the informative teacher does not mean that the teacher has a high model accuracy. Specifically, preliminary experiments suggest that different layers in a neural network can extract different levels of features for the input data. Typically, shallower layers can capture more shape and edge information while deeper layers can learn more semantic information. This motivates us to construct a teacher by utilizing the feature extracted from the shallow layers to guide the training of the whole model. Therefore, we propose Distillation with Reverse Guidance (DRG). 
DRG employs an AC for a shallow layer and uses the output of the AC to facilitate the student, i.e., the whole model, in learning the shape and edge information from the shallow layer. Thus, the model can simultaneously capture both structural and detailed features of input data, leading to improved model performance. DRG overcomes the high computation cost of BYOT and is able to extract more informative information than TW-SKD.\nFurthermore, to learn a consistent shape of the model outputs for all data, we propose Distillation with Shape-wise Regularization (DSR) that aims to explore the shape of interclass similarity. Different from vanilla KD, where the student mimics the model output distribution of the teacher, and IC-SKD, which focuses on intra-class similarity, DSR learns a consistently ranked model output shape of all data. Our experimental results show that DSR enlarges the decision boundary among classes, contributing to increased model performance.\nOur contribution can be summarized as follows:\n• We design a lightweight SKD framework with multisource information fusion to improve model performance at a low computation cost. • We proposed the DRG method that constructs an informative teacher utilizing the output of a shallow layer to facilitate the model simultaneously learning the structural and detailed features of data. • We propose the DSR method to stimulate the model learning a consistent ranked output shape of all data regardless of their belonging classes. • We evaluate the performance of proposed DRG and DSR methods and their combination over a variety of datasets and models. Notably, our proposed methods outperform the baseline methods by an average of 2% and the stateof-the-art (SOTA) up to 1.15%.\n• We analyze the rationality behind DRG and DSR through experiments and show their superiority in capturing more complete features of data than baselines and enlarging the decision boundary. The remainder of this paper is organized as follows. Section II reviews the related works of KD and SKD. We present preliminaries for the SKD problem in Section III and propose our DRG and DSR methods in Section IV. Sections V and VI demonstrate the experimental results and ablation study, respectively. Section VII discusses the rationality behind DRG and DSR. Finally, Section VIII concludes our paper." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [ "b16", "b17", "b18", "b3", "b20", "b22", "b23", "b24", "b26", "b1", "b2", "b4", "b6", "b7", "b8", "b13", "b14", "b10", "b9", "b27", "b28", "b9", "b10", "b11" ], "table_ref": [], "text": "Knowledge distillation. Vanilla KD employs a teacherstudent framework to facilitate the student learning from the model output logits of the teacher [1] [16]. A unique parameter in KD is the temperature in the softmax function over the teacher's model output logit, by tuning which, the student can benefit more from the teacher with improved model performance [17] [18]. An improved KD method is feature-based distillation, where the student learns the teacher's intermediate feature [19] [20] [4]. Works in the literature also have focused on the privacy issues of KD, such as data-free KD that preserves an inaccessible training dataset of the teacher for the student [21]- [23], private model compression [24], and undistillable model that prevents a student from learning from the model through KD [25]- [27].\nSelf-knowledge distillation. The first SKD work can date back to Born Again Neural Networks (BAN) [2]. 
BAN employs a serial-distillation mechanism, namely asking teachers to guide students with the same architecture which would later be asked to guide other sub-students. The average of all students' outputs are considered as the final outputs. BYOT et. al [3]- [5] developed a multi-exit architecture for a neural network. The final output of the network is utilized to update the shallow layers. However, BYOT exerts a high computation cost due to the training of ACs for each exit of the model.\nIn addition, works in the literature also achieve SKD well by designing a much more delicate regularization to improve model performance. There are three categories of regularization, i.e., TW-SKD, IC-SKD, and SKD with Label Smoothing. TW-SKD uses the model in the history as the teacher to regularize the current model. Specifically, Snapshot distillation (SS-KD) [7] randomly chooses a model from previous iterations. Progressive refinement knowledge distillation (PS-KD) [8] and DLB [9] regard the model in the last epoch as poor-teacher. For IC-SKD, the class-wise SKD (CS-KD) [14] uses two batched of data samples from the same class and minimizes the output discrepancy between the two batches. Data-Distortion Guided Self-Distillation (DDGSD) [15] exerts different pre-processing techniques on the same batch and minimizes their model output difference. Another way to improve the performance of SKD is labelsmoothing. The essence of many label-smoothing works lies in the utility of self-teaching, and they can be viewed as special cases of SKD. Label-Smoothing Regularization (LSR) [11] introduces a method where the ground truth distribution is combined with a uniform distribution to create a virtual teacher with random accuracy [10]. Delving Deep into Label Smoothing (OLS) [28] proposes a more reasonable smoothing technique that constructs labels based on the integrated output information from previous epochs. Inspired by the widespread Zipf's law, Efficient one pass self-distillation with ZipF's Label Smoothing (ZF-LS) [29] seeks to discover the conformality of ranked outputs and introduces a novel counterpart discrepancy loss, minimizing with Zipf's distribution based on self-knowledge. Motivated by ZF-LS, it is promising to achieve consistent model outputs of all data by using ranked outputs from the last iteration as softened targets, which can be seen as a specific form of label smoothing. For SKD with label-smoothing, Teacher-free knowledge distillation (TF-KD) [10], has discovered the entity of Label Smoothing Regularization (LSR) [11] to generate high-accuracy virtual teacher. Adversarial Learning and Implicit regularization for self-Knowledge Distillation (AI-KD) [12] integrates TF-KD and PS-KD and additionally employs a Generative Adversarial Network (GAN) to align distributions between sup-student and student.\nOur work differs from the above work by designing a lightweight SKD framework with multi-source information fusion. We consider the more informative information from shallow layers of the networks and explore a consistent shape of model output for all classes of data." }, { "figure_ref": [], "heading": "III. PRELIMINARIES", "publication_ref": [], "table_ref": [], "text": "In this section, we present the preliminaries including the multi-class classification problem, KD, and SKD." }, { "figure_ref": [], "heading": "A. 
Multi-class Classification", "publication_ref": [], "table_ref": [], "text": "Considering a supervised classification task on a training dataset D, each data sample in the dataset is represented by {x, y} ∈ D, where x indicates the input and y is the corresponding label. We assume there are K classes in total, such that y ∈ {1, . . . , K}. We train a neural network model h(θ, x) parameterized by θ to minimize the loss of a data sample on the model. A typical loss function for classification is the cross-entropy loss. Denote z := h(θ, x) as the output logit of the model. Applying the softmax function (with temperature τ = 1) to the model output, we can obtain the probability distribution p for the input data x:
$$p(z|x) = \mathrm{softmax}(z, \tau) = \frac{\exp(z/\tau)}{\sum_{k=1}^{K} \exp(z_k/\tau)}, \quad (1)$$
where z_k indicates the k-th element of z. When it is clear from the context, we write p for short of p(z|x). The cross-entropy loss function is
$$L_{CE}(p(z|x), y) = -\frac{1}{K} \sum_{k=1}^{K} y_k \log p_k, \quad (2)$$
where p_k indicates the k-th element of p. The objective is to minimize the expected risk of the model on the whole dataset:
$$\min_{\theta}\ \mathbb{E}_{\{x,y\} \in D}\, L_{CE}(p(z|x), y). \quad (3)$$" }, { "figure_ref": [], "heading": "B. Knowledge Distillation", "publication_ref": [ "b0", "b0" ], "table_ref": [], "text": "In KD, there exists another teacher model to guide the training of the target model, i.e., the student. A high temperature τ > 1 is applied to soften the model output probability distribution and facilitate transferring more knowledge from the teacher to the student [1]. Denote the output probability distribution of the teacher with temperature τ for an input x by q(z'|x), where z' is the output logit of the teacher. The Kullback-Leibler (KL) divergence is employed to measure the difference between the teacher's and the student's model output probability distributions (z' and z):
$$L_{KL}(q(z'|x), p(z|x)) = \frac{1}{K} \sum_{k=1}^{K} q_k \log \frac{q_k}{p_k}. \quad (4)$$
Finally, the overall loss function for vanilla KD is:
$$L_{KD}(p, y, q) = L_{CE}(p(z|x), y) + \tau^2 \cdot L_{KL}(q(z'|x), p(z|x)). \quad (5)$$
The coefficient τ^2 balances the cross-entropy and KL divergence losses when the temperature τ changes [1]." }, { "figure_ref": [], "heading": "C. Self-Knowledge Distillation", "publication_ref": [], "table_ref": [], "text": "Self-knowledge distillation applies KD to improve model performance by utilizing the prior knowledge extracted from the model itself, which is different from vanilla KD with a separate teacher model. To train the model h(θ, x), we first extract some information I(θ, x) from the model. I(θ, x) may change with time, layers, and input data, but is not related to any other model. SKD employs a self-knowledge transfer (ST) loss to minimize the discrepancy between the model and the extracted information:
$$L_{ST}(h(\theta, x), I(\theta, x)) := \rho(h(\theta, x), I(\theta, x)), \quad (6)$$
where ρ is a metric function, which varies for different SKD methods. For example, ρ corresponds to an l2-norm in BYOT, the KL divergence in PS-KD, and the adversarial loss in AI-KD, etc. The ST loss function may take effect on different parts of the model h(θ, x). For example, the ST loss function updates the shallow layers of the model in BYOT and updates the whole model in TW-SKD and IC-SKD. Overall, the SKD loss function combines the original loss function using the hard labels and the ST loss function:
$$L_{SKD} = L_{CE}(p(z|x), y) + \zeta \cdot L_{ST}(h(\theta, x), I(\theta, x)), \quad (7)$$
where ζ measures the importance of the ST loss, which may vary for different SKD methods.
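As a concrete reference for the losses defined above, the following PyTorch-style sketch implements the vanilla KD objective of Eq. (5). It is an illustration rather than code from the paper; the temperature value and tensor shapes are arbitrary choices for the example.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, tau=4.0):
    """Vanilla KD objective of Eq. (5): hard-label cross-entropy plus
    tau^2-scaled KL between the temperature-softened distributions."""
    ce = F.cross_entropy(student_logits, labels)
    log_p = F.log_softmax(student_logits / tau, dim=1)   # student log-probs
    q = F.softmax(teacher_logits / tau, dim=1)           # teacher soft targets
    kl = F.kl_div(log_p, q, reduction="batchmean")       # KL(q || p)
    return ce + (tau ** 2) * kl

# Toy usage with random logits for a batch of 8 samples and K = 10 classes:
s = torch.randn(8, 10)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(kd_loss(s, t, y).item())
```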
IV. PROPOSED METHODS
In this section, we propose our DRG and DSR methods to achieve multi-source information fusion for SKD performance improvement." }, { "figure_ref": [ "fig_1" ], "heading": "A. Distillation with Reverse Guidance (DRG)", "publication_ref": [ "b9", "b29" ], "table_ref": [], "text": "Motivation: Different layers in a neural network capture different features of the input data. Typically, shallower layers can capture more shape and edge information, while deeper layers can learn more detailed semantic information. The shape and edge features of the input data vanish gradually as the layers become deeper, so that edge information is ignored in the final model output and the model overfits severely. By adding an AC to a shallow layer, we can construct a teacher model for the original model. The output of the AC is usually more underfitting than that of the whole model, as it has a smaller architecture. Related works have revealed the effectiveness of a "poor" teacher for KD [10]. However, they have neglected the potential of shallow layers for guiding the training of the whole model. Thus, we propose to use a shallow layer to reversely guide the training of the whole model and thereby fuse both the edge and the detailed features of the data.
DRG design: The framework of DRG is demonstrated on the left-hand side of Fig. 2. We consider neural networks with sequential layers/blocks, such as ResNet [30]. DRG introduces an add-on structure, i.e., an AC, at the output of a shallow layer/block, constructing a "poor" teacher. Let w be the parameter of the AC. The teacher model can be represented by g(θ̃, w, x), where θ̃ ⊂ θ is the parameter of the earlier layers of the whole model before the layer connected to the AC. Denote the output logit and the corresponding output probability distribution of g(θ̃, w, x) taking x as input by z' := g(θ̃, w, x) and q(z'|x) := softmax(z', τ), respectively. We use the cross-entropy loss function to train the "poor" teacher model and the whole model simultaneously with the following hard-label loss:
$$L_{HL} = L_{CE}(q(z'|x), y) + L_{CE}(p(z|x), y). \quad (8)$$
To achieve reverse guidance, the "poor" teacher guides the whole model training by minimizing the KL divergence:
$$L_{RG} = \tau^2 \cdot L_{KL}(q(z'|x), p(z|x)). \quad (9)$$
Overall, the whole loss function of DRG is
$$L_{DRG} = L_{HL} + \alpha \cdot L_{RG}, \quad (10)$$
where α is a coefficient balancing the two losses.
The update steps of Algorithm 1 read:
7: Compute the discrepancy L_RG using (9);
8: Compute the loss L_DRG using (10);
9: θ_{t+1} ← θ_t - γ · ∇L_DRG;
10: w_{t+1} ← w_t - γ · ∇L_DRG;
11: end for
Algorithm 1 demonstrates the model training process of DRG, where γ denotes the learning rate and T indicates the total number of training iterations. θ_t and w_t represent the model and AC parameters at iteration t. In each iteration t, a mini-batch of data B_t ⊂ D is randomly sampled to train the model. The mini-batch is simultaneously fed into the model (line 4) and into the teacher, which is constructed from the shallow layers of the model and the AC (line 6). Based on the outputs of the original model and the teacher, we calculate the DRG loss L_DRG (line 8) and update the model and auxiliary parameters (lines 9-10) by SGD.
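The DRG objective of Eqs. (8)-(10) can be sketched in PyTorch roughly as follows. This is an illustration rather than the released implementation; the AC architecture, the tensor shapes, and the defaults α = 0.2 and τ = 1 (taken from the reported settings and ablation) are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxClassifier(nn.Module):
    """Illustrative AC: pools the shallow block's feature map and classifies it."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, feature):
        return self.fc(self.pool(feature).flatten(1))

def drg_loss(teacher_logits, final_logits, labels, alpha=0.2, tau=1.0):
    """DRG objective (Eqs. 8-10): hard-label CE on both heads plus the
    reverse-guidance KL from the shallow 'poor' teacher to the full model."""
    l_hl = F.cross_entropy(teacher_logits, labels) + F.cross_entropy(final_logits, labels)
    log_p = F.log_softmax(final_logits / tau, dim=1)
    q = F.softmax(teacher_logits / tau, dim=1)
    l_rg = (tau ** 2) * F.kl_div(log_p, q, reduction="batchmean")
    return l_hl + alpha * l_rg

# Toy usage: a 64-channel feature map from the second block, 100 classes.
ac = AuxClassifier(64, 100)
feat = torch.randn(8, 64, 16, 16)
final_logits = torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
print(drg_loss(ac(feat), final_logits, labels).item())
```

As in Algorithm 1, the gradient of this loss would update both the model parameters and the AC parameters.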
" }, { "figure_ref": [ "fig_2" ], "heading": "B. Distillation with Shape-wise Regularization (DSR)", "publication_ref": [ "b28" ], "table_ref": [], "text": "Motivation: Existing works have investigated the intra-class similarity of input data, such as CS-KD and DDGSD. However, to the best of our knowledge, no work has stressed the consistency of model outputs among different classes, i.e., inter-class similarity. To illustrate the necessity of exploring a consistent model output property across different classes of data, we evaluate the variance of the ranked model outputs, as demonstrated on the left-hand side of Fig. 3. ResNet, CIFAR100, and TinyImageNet are abbreviated as "Res", "C100", and "Tin", respectively.
Ranking the outputs [29] according to class probability eliminates the class inharmony and puts more focus on the overall interaction between classes. We train various models on CIFAR100 and TinyImageNet until convergence and normalize the training time over the training process. The variance is calculated by averaging the variances of each element of the ranked model outputs over all test data samples. We can observe that the variance of the ranked model output decreases along the training process, which corresponds to increasing model accuracy. On the right-hand side of Fig. 3, we calculate the Pearson coefficients between model accuracy and the variance of the ranked model output for different datasets trained with various models. All results exhibit a strong negative correlation between model accuracy and ranked output variance. This implies that, along with model training, the model outputs of various classes have a consistent tendency after being ranked. This phenomenon motivates us to regularize the ranked model output shape of different input data to improve the performance of SKD." }, { "figure_ref": [ "fig_1" ], "heading": "DSR design:", "publication_ref": [], "table_ref": [], "text": "The framework of DSR is demonstrated on the right-hand side of Fig. 2. In each iteration t, we rank the elements of the model output in non-decreasing order and obtain $\tilde{z}^t = \{\tilde{z}^t_1, \tilde{z}^t_2, \cdots, \tilde{z}^t_K\}$, such that $\tilde{z}^t_1 \le \tilde{z}^t_2 \le \cdots \le \tilde{z}^t_K$. DSR achieves the consistency of the ranked model outputs between different input data by leveraging the ranked model output of the last iteration, i.e., $\tilde{z}^{t-1}$. We use the KL divergence to regularize the model with $\tilde{z}^{t-1}$, defining the loss $L^t_{SR}$:
$$L^t_{SR} = \tau^2 \cdot L_{KL}(p(\tilde{z}^{t-1}|x), p(\tilde{z}^t|x)). \quad (11)$$
Overall, DSR combines the vanilla classification loss and $L^t_{SR}$ for SKD model training:
$$L_{DSR} = L_{CE}(p(z|x), y) + \beta \cdot L^t_{SR}, \quad (12)$$
where β measures the importance of $L^t_{SR}$ compared to the original classification loss.
The update steps of Algorithm 2 read:
3: Randomly sample B_t from D;
4: z ← h(θ_t, B_t);
5: Rank z in ascending order to obtain z̃;
6: Compute the loss L_DSR using (12);
7: θ_{t+1} ← θ_t - γ · ∇L_DSR;
8: Store z̃ for the next iteration;
9: end for
Algorithm 2 shows the training process of DSR. Specifically, in each iteration t, a data batch B_t is randomly sampled to train the model. The outputs of the model, i.e., z, are then ranked in ascending order to obtain z̃ (line 5). The DSR loss is computed using the ranked model output of the last iteration, i.e., $\tilde{z}^{t-1}$ (line 6). After updating the model parameters with SGD (line 7), $\tilde{z}^t$ is recorded (line 8) and used in the next iteration.
We can combine our DRG and DSR methods for SKD using the following overall loss function:
$$L = L_{HL} + \alpha \cdot L_{RG} + \beta \cdot L^t_{SR}. \quad (13)$$
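Analogously, a rough PyTorch-style sketch of the DSR objective in Eqs. (11)-(12) is given below. It is illustrative rather than the released code; the handling of the ranked logits stored from the previous iteration is an assumption, and the defaults β = 1 and τ = 4 follow the reported settings. Combining it with DRG as in Eq. (13) amounts to adding α · L_RG from the previous sketch to this loss.

```python
import torch
import torch.nn.functional as F

def dsr_loss(logits, labels, prev_ranked_logits, beta=1.0, tau=4.0):
    """DSR objective (Eqs. 11-12): the ascending-ranked outputs of the current
    batch are pulled toward the ranked outputs stored from the last iteration."""
    ce = F.cross_entropy(logits, labels)
    ranked = torch.sort(logits, dim=1).values                     # \tilde{z}^t
    if prev_ranked_logits is None:                                # first iteration
        return ce, ranked.detach()
    log_p = F.log_softmax(ranked / tau, dim=1)
    q = F.softmax(prev_ranked_logits / tau, dim=1)                # \tilde{z}^{t-1}
    l_sr = (tau ** 2) * F.kl_div(log_p, q, reduction="batchmean")
    return ce + beta * l_sr, ranked.detach()                      # store for t+1

# Toy two-iteration usage (batch of 8, 100 classes):
prev = None
for _ in range(2):
    logits = torch.randn(8, 100)
    labels = torch.randint(0, 100, (8,))
    loss, prev = dsr_loss(logits, labels, prev)
    print(float(loss))
```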
V. EXPERIMENTS
We conduct experiments for our proposed methods over various datasets and models. First, we introduce the settings, including datasets, models, baselines, etc. Then, we analyze the experimental results for the different datasets. Our code is available at https://github.com/xucong-parsifal/LightSKD." }, { "figure_ref": [], "heading": "A. Settings", "publication_ref": [ "b30", "b31", "b29", "b32", "b33", "b2", "b13", "b7", "b8", "b28", "b9" ], "table_ref": [], "text": "Datasets. We employ five datasets for classification tasks, i.e., CIFAR100, TinyImageNet, Caltech101, Stanford Dogs, and CUB200.
• CIFAR100: CIFAR100 [31] is a classical 100-class classification dataset. It contains 50,000 images for training and 10,000 for testing. The image size is 32x32 pixels.
• TinyImageNet: TinyImageNet is a subset of ImageNet [32], with 100,000 training samples and 10,000 test samples. There are 200 classes in total. The size of an image is 32x32 pixels.
• Caltech101: Caltech101 is a large coarse-grained dataset for classification and object detection. There are 101 main classes and 1 background class in total.
• Stanford Dogs / CUB200: Stanford Dogs and CUB200 are large fine-grained datasets that consist of 120 dog classes and 200 bird classes, respectively.
In all experiments, training samples are processed with RandomCrop (32x32 for CIFAR100 and TinyImageNet; 224x224 for the others) and RandomHorizontalFlip to ensure that all images have a consistent size and to add randomness to the training process.
Models. We employ five classical neural network models for the above datasets, including ResNet18, ResNet50, ResNet101 [30], ResNeXt50 32x4d [33], and DenseNet121 [34]. The ResNet series is well known for its innovative shortcut connections, which help to reduce overfitting. In contrast, the DenseNet architecture was the first to introduce dense connections between layers.
Hyperparameters. We fix the number of epochs to 200 and set the temperature τ using a grid search. We set the hyperparameters α and β to 0.2 and 1, respectively, and employ a manual learning rate adjustment mechanism for our experiments. For CIFAR100, the initial learning rate is set to 0.1 and decreased to 0.2 of its previous value at 60, 120, and 160 epochs. For TinyImageNet, Stanford Dogs, CUB200, and Caltech101, the initial learning rate is set to 0.1 and decreased to 0.1 of its previous value at 100 and 150 epochs. We use a batch size of 128 for CIFAR100 and TinyImageNet, and 64 for the other datasets. The optimizer is SGD with a momentum of 0.9 and a weight decay of 5e-4. For DRG, we add an AC after the second block of the model to construct the "poor" teacher.
Baselines. We compare our proposed methods with the following methods:
• Vanilla: training the original model without SKD;
• BYOT [3]: adding an Auxiliary Classifier (AC) to each block of the model;
• CS-KD [14]: an IC-SKD method that uses two batches of data samples from the same class and minimizes the output discrepancy between the two batches;
• PS-KD [8]: a TW-SKD method that employs the model in the last epoch as a teacher;
• DLB [9]: a TW-SKD method that regards the model in the last iteration as a teacher, meanwhile employing different augmentation techniques for the same data batch. It differs from PS-KD in the supervision granularity and data preprocessing.
• ZF-LS lb [29]: a label-smoothing method that minimizes the cross-entropy between the ranked model outputs and Zipf's distribution;
• TF-KD reg [10]: an SKD method based on ameliorating LSR.
In the face of complex tasks, our results show lower probabilities for the GT class and higher probabilities for other classes.
This suggests that our methods extract more integrated information and are less overconfident and overfitting, resulting in a more careful and delicate decision-making process." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "B. Experimental results", "publication_ref": [ "b34", "b8" ], "table_ref": [ "tab_1", "tab_2", "tab_3" ], "text": "1) Results on CIFAR100 and TinyImageNet: Our results are presented in Table I (for CIFAR100) and Table II (for TinyImageNet).\nCompared with baseline algorithms, we have the following observations:\n• Compared with vanilla single model training: our methods consistently outperform the vanilla single model training in top-1 accuracy, with a significant improvement ranging from 1.26% to 2.87%. • Compared with BYOT, CS-KD, PS-KD, ZF-LS, and TF-KD: Our methods generally achieve higher accuracy than these methods, with an average accuracy boost of 1.08%. Particularly for CIFAR100 over ResNet18 model, our methods exceed their maximum accuracy by 0.97%. • Compared with DLB: To the best of our knowledge, DLB is the current claimed SOTA. Our results show that our methods perform better than DLB. Especially, our methods surpass DLB on large-scale networks, such as the ResNet100 for CIFAR100. This is because DLB uses the same images with different transformations, which may lead to overfitting and diluting the regularization effects in larger networks. Our methods avoid this problem. Notably, the combination of our methods, i.e., DRG+DSR, is particularly effective and has achieved SOTA performance. Although DSR may not individually achieve SOTA, it has contributed significantly to the success of the combination (+0.51% on ResNet18, TinyImageNet; +0.63% on ResNet18 and TinyImageNet), surpassing its individual accuracy boost.\nTime and space costs. The time and space costs of different methods on CIFAR100 dataset with various models are shown in Fig. 4, where the time cost is evaluated by the consuming time of each iteration and the space cost is the storage space of the models. We can observe that BYOT takes about 0.064s per iteration on ResNet18 and spends much more when the model gets larger. Although DLB is faster than BYOT on small models, it incurs a vast time cost on ResNet101, which may result from re-sampling the training dataset to construct minibatches and frequently recording the images and outputs of the last iteration. Remarkably, our combined method DRG+DSR receives the least time and space cost. Specifically, the time cost of our DRG+DSR is about only 70 percent of that of others; the Space-cost of our DRG+DSR is also extraordinarily smaller than others (×0.67 ∼ ×0.83). Most importantly, we can achieve better performance than BYOT and DLB.\nRobustness. Our proposed methods are more robust over different neural network models than baselines. Specifically for CIFAR100, we achieve the best results among all methods, especially for large-scale models such as ResNet100, ResNeXt50 32×4d, and DenseNet-121, indicating the robustness of our methods across different models.\n2) Results on large-scale fine-grained datasets: We extend our experiments to include the large fine-grained datasets of Stanford Dogs and CUB200. Figure 5 shows the ranked model output probability of the top 30 classes for two data examples. The Green bars mark the ground-truth label. Our results indicate that vanilla training of a single model may give a wrong prediction as the predicted label with the highest probability is not consistent with the true label. 
In comparison, our methods produce output probabilities with lower variance, assigning higher probabilities to several classes other than the true label. This means our models can select a range of candidate classes and make decisions more carefully and delicately, rather than making a single overconfident decision that neglects the relationships between different classes.
3) Compatibility Analysis: To validate the effectiveness and compatibility of our methods with existing methods, we plug DRG and DSR into Cutout [35] and PS-KD. Cutout is a popular data augmentation technique, which employs a mask to randomly eliminate part of the input image. We set the number of masks and the mask size to 1 and 16px, respectively, consistent with [9]. Table III presents the performance of these methods before and after the integration of DRG, DSR, and their combination. The results demonstrate that the addition of DRG, DSR, or their combination significantly improves the accuracy of Cutout by 0.82% to 2.42%. Similarly, the integration of these methods with PS-KD results in an accuracy boost of 0.39% to 0.71% compared to vanilla PS-KD." }, { "figure_ref": [], "heading": "VI. ABLATION STUDY", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct an ablation study of our proposed methods. We first explore the number of teachers and the position of the selected blocks in DRG. Then we evaluate the effect of different hyperparameters, including the temperature and the coefficients in the objective loss functions." }, { "figure_ref": [], "heading": "A. AC number and block position in DRG", "publication_ref": [], "table_ref": [], "text": "For DRG, we can choose one block or a subset of blocks in the neural network model and add ACs to the selected blocks, in order to accelerate the model learning process while maintaining accuracy. Table IV displays the accuracy and time cost on CIFAR100 over the ResNet18 model for different sets of selected blocks.
We have the following observation:
• When the AC is added to a single block in a deeper layer of the model, such as the third block (B #3) instead of the first or second block, DRG experiences a sharp decrease in test accuracy. That is because the outputs of deeper layers have a higher level of similarity with the final output, contributing less to the information fusion and possibly leading to overfitting. Therefore, constructing only one "poor" teacher is enough for our DRG, resulting in a lightweight SKD design." }, { "figure_ref": [], "heading": "B. Hyperparameters", "publication_ref": [], "table_ref": [], "text": "Temperature τ : We evaluate the performance of DRG and DSR under varying temperatures on CIFAR100 with ResNet18, as shown in Fig. 6. The results indicate that DRG and DSR achieve the highest accuracy when the temperatures are set to 1 and 4, respectively.
Coefficients α and β: We evaluate the performance of DRG and DSR for different coefficients α and β in (10) and (12) on CIFAR100 with ResNet18. We vary α and β from 0.01 to 1 and from 0.1 to 3, respectively. The results in Fig. 7 show that the best accuracy is achieved when α and β are set to 0.2 and 1, respectively. This suggests that a moderate weighting of both DRG and DSR provides the best performance for SKD." }, { "figure_ref": [], "heading": "VII. DISCUSSION", "publication_ref": [ "b35" ], "table_ref": [], "text": "In this section, we discuss the rationality behind our proposed methods through experiments. First, we show the capacity of DRG in information fusion.
Then, we analyze the double effect of DSR in enlarging decision boundaries and label smoothing.
A. Information Fusion of DRG
DRG achieves the information fusion of features extracted from different parts of a neural network model. To illustrate this, we employ GradCAM [36] to visualize the features captured by different parts of the model and by our DRG method. GradCAM is a method for generating attention heatmaps that visualize the regions of the input data a model focuses on. We present the GradCAM results of the output of the AC after the shallow layer (i.e., the second block of ResNet18 in our experiments), the output of the whole model, and our DRG method in Fig. 8.
The results show that the classifier after the shallow layer mainly focuses on the edge and shape features of the input data, such as the legs of the table and the outline of the panda. In contrast, the whole model with more layers forgets edge features and extracts more detailed information, such as the ears of the panda. By using the classifier after the shallow layer as the "poor" teacher of KD, DRG can capture both the edge and the detailed information of the input data, providing valuable insight into the information fusion of our DRG method." }, { "figure_ref": [], "heading": "B. Double-effect of DSR", "publication_ref": [], "table_ref": [], "text": "We can interpret the rationality behind DSR from the following two perspectives.
First, DSR achieves a consensus on the ranked model output probabilities, which enlarges the decision boundaries among different classes. Fig. 9 shows the visualized decision boundaries of DRG and DSR on CIFAR100 with ResNet18 using FIT-SNE [37]. We randomly sample 50 classes to clearly show the FIT-SNE visualization. We can observe that our DSR method exhibits clearer decision boundaries than vanilla single-model training and DRG.
Moreover, DSR is equivalent to a label smoothing method that progressively derives the soft label from a distribution rather than from a predetermined shape. Specifically, the "soft" label used in DSR is the ranked output of another data sample, which is randomly sampled from the dataset. This contributes to the better generalization of DSR." }, { "figure_ref": [], "heading": "VIII. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a lightweight SKD framework with two methods, DRG and DSR, to promote multi-source information fusion and improve the performance of SKD. We construct only one auxiliary teacher in DRG and highlight the inter-class model output shape in DSR to achieve better test accuracy with a low time cost. Experimental results over numerous datasets and models show that DRG, DSR, and their combination outperform the baselines with lower or competitive time costs and better robustness. In summary, our proposed methods demonstrate significant improvements in self-knowledge distillation through novel approaches to multi-source information fusion." } ]
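To ground the two objectives discussed above, here is a minimal PyTorch sketch following the formulations in Eqs. (8)-(12). The temperature handling, the stop-gradient choices, and the normalisation constants are simplifications, and the cache of the previous iteration's ranked outputs (initialised to zeros in Algorithm 2) is only schematic.

```python
import torch
import torch.nn.functional as F

def kd_kl(student_logits, teacher_logits, tau):
    """tau^2-scaled KL(teacher || student) on temperature-softened distributions."""
    log_p_student = F.log_softmax(student_logits / tau, dim=1)
    p_teacher = F.softmax(teacher_logits / tau, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * tau * tau

def drg_loss(final_logits, aux_logits, targets, alpha=0.2, tau=1.0):
    # Hard-label terms for the whole model and the shallow "poor" teacher (Eq. 8),
    # plus the reverse-guidance term distilling the AC's view into the final output (Eqs. 9-10).
    hard = F.cross_entropy(final_logits, targets) + F.cross_entropy(aux_logits, targets)
    return hard + alpha * kd_kl(final_logits, aux_logits.detach(), tau)

def dsr_loss(logits, prev_ranked, targets, beta=1.0, tau=4.0):
    # Rank the current outputs and pull them towards the ranked outputs kept from the
    # previous iteration, so only the class-agnostic output "shape" is regularised (Eqs. 11-12).
    ranked, _ = torch.sort(logits, dim=1, descending=True)
    loss = F.cross_entropy(logits, targets) + beta * kd_kl(ranked, prev_ranked, tau)
    return loss, ranked.detach()   # the detached ranked outputs become the next iteration's teacher
```

In a usage sketch, the ranked tensor returned by dsr_loss is cached each iteration and passed back in as prev_ranked for the next batch, assuming a fixed batch size; the temperatures and coefficients above follow the ablation (τ of 1 for DRG and 4 for DSR, α = 0.2, β = 1).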
Knowledge Distillation (KD) is a powerful technique for transferring knowledge between neural network models, where a pre-trained teacher model is used to facilitate the training of the target student model. However, the availability of a suitable teacher model is not always guaranteed. To address this challenge, Self-Knowledge Distillation (SKD) attempts to construct a teacher model from the student model itself. Existing SKD methods add Auxiliary Classifiers (AC) to intermediate layers of the model, or use historical models or models fed with different input data from the same class. However, these methods are computationally expensive and only capture time-wise and class-wise features of the data. In this paper, we propose a lightweight SKD framework that utilizes multi-source information to construct a more informative teacher. Specifically, we introduce a Distillation with Reverse Guidance (DRG) method that considers different levels of information extracted by the model, including the edge, shape, and detail of the input data, to construct a more informative teacher. Additionally, we design a Distillation with Shape-wise Regularization (DSR) method that ensures a consistent shape of the ranked model output for all data. We validate the performance of the proposed DRG, DSR, and their combination through comprehensive experiments on various datasets and models. Our results demonstrate the superiority of the proposed methods over baselines (up to 2.87%) and state-of-the-art SKD methods (up to 1.15%), while being computationally efficient and robust. The code is available at https://github.com/xucong-parsifal/LightSKD.
Lightweight Self-Knowledge Distillation with Multi-source Information Fusion
[ { "figure_caption": "Fig. 1 :1Fig.1: Overview of existing SKD methods, i.e., multi-exit SKD, TW-SKD, and IC-SKD, and our methods, i.e., DRG and DSR.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Illustrations of proposed methods. Left: DRG, where an AC is added to the output of a shallow layer to construct a \"poor\" teacher to guide the whole model training. Right: DSR, where model outputs are ranked to form a inter-class regularization.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig.3: The variance of ranked outputs in one epoch along the training process (left) and Pearson's coefficient of variance and accuracy (right) for different datasets trained with various models. ResNet, CIFAR100, TinyImageNet are abbreviated as \"Res\", \"C100\", and \"Tin\" respectively.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Algorithm 22Distillation with Shape-wise Regularization. Input: D, γ, τ, β, T 1: Initialize θ ← θ 0 , z-1 ← 0; 2: for t ∈ {0, . . . , T -1} do 3:", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig.4: Time and space cost of different methods trained with various models on CIFAR100. ResNet is abbreviated as \"Res\". Blue, Green and Red points represent experiments of BYOT, DLB and our methods respectively.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: Example experimental results on Stanford Dogs (top) and CUB200 (bottom). All bar figures show the ranked predictive probability of the top 30 classes, with ground-truth (GT) classes marked in Green. The baseline results for vanilla single model training are shown in the second column, while the other columns display results from DRG and DSR, and their combination.In the face of complex tasks, our results show lower probabilities for the GT class and higher probabilities for other classes. This suggests that our methods extract more integrated information and are less overconfident and overfitting, resulting in a more careful and delicate decision-making process.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "•Fig. 6 :Fig. 7 :67Fig. 6: Performance of DRG and DSR under varying temperature.", "figure_data": "", "figure_id": "fig_6", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Fig. 8 :Fig. 9 :89Fig. 8: GradCAM heatmaps of different methods on Caltech101 over ResNet18. From left to right: input images, output of AC after shallow layer, output of model by BYOT, and the output of DSR (ours). As the heatmaps exemplify, instead of excessive care of one single feature, DRG merges the feature of both classifiers after the shallow layer and the whole model.", "figure_data": "", "figure_id": "fig_7", "figure_label": "89", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 Distillation with Reverse Guidance (DRG). Input: D, γ, τ, α, T 1: Initialize θ ← θ 0 , w ← w 0 ;", "figure_data": "5:Compute loss L HL using (8);6:", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Top-1 test accuracy on CIFAR100. 
Values marked in Red, Blue are the best and the second best accuracy respectively.", "figure_data": "METHODSRESNET18RESNET50RESNET101RESNEXT50 32X4DDENSENET-121VANILLA77.29%77.07%78.52%78.87%78.70%BYOT78.25%79.63%80.71%80.18%79.63%CS-KD78.55%76.91%77.43%79.69%78.92%PS-KD78.67%79.02%79.41%80.38%79.52%DLB79.52%79.88%80.02%80.52%79.64%ZF-LS lb77.49%77.38%77.27%79.42%78.87%TF-KDreg78.33%78.30%79.19%79.27%79.38%DRG (OURS)79.07% (+1.78%) 79.87% (+2.80%) 80.86% (+2.34%)81.01% (+2.14%)79.99% (+1.29%)DSR (OURS)78.15% (+0.88%) 79.12% (+2.05%) 79.78% (+1.26%)79.01% (+0.14%)79.08% (+0.38%)DRG+DSR (OURS) 79.30% (+2.01%) 79.94% (+2.87%) 80.72% (+2.20%)80.91% (+2.04%)79.76% (+1.26%)", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Top-1 test accuracy on TinyImageNet. Values marked in Red, Blue are the best and the second best accuracy respectively.", "figure_data": "METHODSRESNET18RESNET50RESNEXT50 32X4DVANILLA56.69%58.07%59.55%BYOT57.69%60.59%60.07%PS-KD57.05%60.70%60.87%DLB57.09%59.89%60.65%DRG (OURS)57.57% (+0.88%)60.41% (+2.34%)60.94% (+1.39%)DSR (OURS)56.75% (+0.06%)58.34% (+0.27%)60.34% (+0.79%)DRG+DSR (OURS)58.08% (+1.39%)61.04% (+2.97%)61.14% (+1.59%)fully-connected blocks as a means of improving feature reuseand facilitating information flow between layers.Environment and hardwares: Our implementations arebased on PyTorch, with Python version 3.8.5, Torch version1.13.0, and Torchvision version 0.14.0. All experiments wereconducted using an NVIDIA RTX 3090 with 24GB memory.", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Results of different combinations of our methods and existing methods for CIFAR100 over ResNet18.", "figure_data": "METHODSACCURACYCUTOUT77.39%CUTOUT+DRG80.12%(+2.73%)CUTOUT+DSR78.21%(+0.82%)CUTOUT+DRG+DSR79.81%(+2.42%)PS-KD78.67%PS-KD+DRG79.18%(+0.51%)PS-KD+DSR79.38%(+0.71%)PS-KD+DRG+DSR79.06%(+0.39%)", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Accuracy and time-cost of different block subsets in DRG for CIFAR100 over ResNet18.", "figure_data": "B #1 B #2 B #3ACCURACY %TIME-COST (S/ITER).78.93%(×0.9953)0.044(×0.99)79.30%0.04576.96%(×0.9705)0.046(X1.01)79.42%(×1.0015)0.052(×1.17)78.54%(×0.990)0.054(×1.21)79.32%(×1.0002)0.055(×1.22)", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" } ]
Xucong Wang; Pengchao Han; Lei Guo
[ { "authors": "G Hinton; O Vinyals; J Dean", "journal": "", "ref_id": "b0", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "T Furlanello; Z Lipton; M Tschannen; L Itti; A Anandkumar", "journal": "PMLR", "ref_id": "b1", "title": "Born again neural networks", "year": "2018" }, { "authors": "L Zhang; J Song; A Gao; J Chen; C Bao; K Ma", "journal": "", "ref_id": "b2", "title": "Be your own teacher: Improve the performance of convolutional neural networks via self distillation", "year": "2019" }, { "authors": "M Ji; S Shin; S Hwang; G Park; I.-C Moon", "journal": "", "ref_id": "b3", "title": "Refine myself by teaching myself: Feature refinement via self-knowledge distillation", "year": "2021" }, { "authors": "S Li; M Lin; Y Wang; Y Wu; Y Tian; L Shao; R Ji", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b4", "title": "Distilling a powerful student model via online knowledge distillation", "year": "2022" }, { "authors": "M Phuong; C H Lampert", "journal": "", "ref_id": "b5", "title": "Distillation-based training for multi-exit architectures", "year": "2019" }, { "authors": "C Yang; L Xie; C Su; A L Yuille", "journal": "", "ref_id": "b6", "title": "Snapshot distillation: Teacherstudent optimization in one generation", "year": "2019" }, { "authors": "K Kim; B Ji; D Yoon; S Hwang", "journal": "", "ref_id": "b7", "title": "Self-knowledge distillation with progressive refinement of targets", "year": "2021" }, { "authors": "Y Shen; L Xu; Y Yang; Y Li; Y Guo", "journal": "", "ref_id": "b8", "title": "Self-distillation from the last mini-batch for consistency regularization", "year": "2022" }, { "authors": "L Yuan; F E Tay; G Li; T Wang; J Feng", "journal": "", "ref_id": "b9", "title": "Revisiting knowledge distillation via label smoothing regularization", "year": "2020" }, { "authors": "C Szegedy; V Vanhoucke; S Ioffe; J Shlens; Z Wojna", "journal": "", "ref_id": "b10", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "H Kim; S Suh; S Baek; D Kim; D Jeong; H Cho; J Kim", "journal": "", "ref_id": "b11", "title": "Aikd: Adversarial learning and implicit regularization for self-knowledge distillation", "year": "2022" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Communications of the ACM", "ref_id": "b12", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "S Yun; J Park; K Lee; J Shin", "journal": "", "ref_id": "b13", "title": "Regularizing class-wise predictions via self-knowledge distillation", "year": "2020" }, { "authors": "T.-B Xu; C.-L Liu", "journal": "", "ref_id": "b14", "title": "Data-distortion guided self-distillation for deep neural networks", "year": "2019" }, { "authors": "J Kim; S Park; N Kwak", "journal": "Advances in neural information processing systems", "ref_id": "b15", "title": "Paraphrasing complex network: Network compression via factor transfer", "year": "2018" }, { "authors": "Z Li; X Li; L Yang; B Zhao; R Song; L Luo; J Li; J Yang", "journal": "", "ref_id": "b16", "title": "Curriculum temperature for knowledge distillation", "year": "2022" }, { "authors": "X.-C Li; W.-S Fan; S Song; Y Li; B Li; Y Shao; D.-C Zhan", "journal": "", "ref_id": "b17", "title": "Asymmetric temperature scaling makes larger networks teach well again", "year": "2022" }, { "authors": "A Romero; N Ballas; S E Kahou; A Chassang; C Gatta; Y Bengio", "journal": "", "ref_id": "b18", 
"title": "Fitnets: Hints for thin deep nets", "year": "2014" }, { "authors": "B Heo; J Kim; S Yun; H Park; N Kwak; J Y Choi", "journal": "", "ref_id": "b19", "title": "A comprehensive overhaul of feature distillation", "year": "2019" }, { "authors": "H Chen; Y Wang; C Xu; Z Yang; C Liu; B Shi; C Xu; C Xu; Q Tian", "journal": "", "ref_id": "b20", "title": "Data-free learning of student networks", "year": "2019" }, { "authors": "K Binici; S Aggarwal; N T Pham; K Leman; T Mitra", "journal": "", "ref_id": "b21", "title": "Robust and resource-efficient data-free knowledge distillation by generative pseudo replay", "year": "2022" }, { "authors": "B Zhao; Q Cui; R Song; Y Qiu; J Liang", "journal": "", "ref_id": "b22", "title": "Decoupled knowledge distillation", "year": "2022" }, { "authors": "J Wang; W Bao; L Sun; X Zhu; B Cao; S Y Philip", "journal": "", "ref_id": "b23", "title": "Private model compression via knowledge distillation", "year": "2019" }, { "authors": "H Ma; T Chen; T.-K Hu; C You; X Xie; Z Wang", "journal": "", "ref_id": "b24", "title": "Undistillable: Making a nasty teacher that cannot teach students", "year": "2021" }, { "authors": "S Kundu; Q Sun; Y Fu; M Pedram; P Beerel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b25", "title": "Analyzing the confidentiality of undistillable teachers in knowledge distillation", "year": "2021" }, { "authors": "S Jandial; Y Khasbage; A Pal; V N Balasubramanian; B Krishnamurthy", "journal": "Springer", "ref_id": "b26", "title": "Distilling the undistillable: Learning from a nasty teacher", "year": "2022" }, { "authors": "C.-B Zhang; P.-T Jiang; Q Hou; Y Wei; Q Han; Z Li; M.-M Cheng", "journal": "IEEE Transactions on Image Processing", "ref_id": "b27", "title": "Delving deep into label smoothing", "year": "2021" }, { "authors": "J Liang; L Li; Z Bing; B Zhao; Y Tang; B Lin; H Fan", "journal": "Springer", "ref_id": "b28", "title": "Efficient one pass self-distillation with zipf's label smoothing", "year": "2022" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b29", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b30", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "Ieee", "ref_id": "b31", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "S Xie; R Girshick; P Dollár; Z Tu; K He", "journal": "", "ref_id": "b32", "title": "Aggregated residual transformations for deep neural networks", "year": "2017" }, { "authors": "G Huang; Z Liu; L Van Der Maaten; K Q Weinberger", "journal": "", "ref_id": "b33", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "T Devries; G W Taylor", "journal": "", "ref_id": "b34", "title": "Improved regularization of convolutional neural networks with cutout", "year": "2017" }, { "authors": "R R Selvaraju; M Cogswell; A Das; R Vedantam; D Parikh; D Batra", "journal": "", "ref_id": "b35", "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "year": "2017" }, { "authors": "G C Linderman; M Rachh; J G Hoskins; S Steinerberger; Y Kluger", "journal": "Nature methods", "ref_id": "b36", "title": "Fast interpolation-based t-sne for improved visualization of single-cell rna-seq data", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 342.78, 338.45, 220.25, 26.56 ], "formula_id": "formula_0", "formula_text": "p (z|x) = softmax (z, τ ) = exp (z/τ ) K k=1 exp(z k /τ ) ,(1)" }, { "formula_coordinates": [ 3, 363.69, 407.91, 199.35, 30.55 ], "formula_id": "formula_1", "formula_text": "L CE (p (z|x) , y) = 1 K K k=1 y k log p k ,(2)" }, { "formula_coordinates": [ 3, 374.39, 474.44, 188.65, 14.66 ], "formula_id": "formula_2", "formula_text": "min θ E {x,y}∈D L CE (p (z|x) , y).(3)" }, { "formula_coordinates": [ 3, 348.81, 641.07, 214.23, 30.55 ], "formula_id": "formula_3", "formula_text": "L KL (q (z |x) , p (z|x)) = 1 K K k=1 q k log q k p k .(4)" }, { "formula_coordinates": [ 3, 344.13, 694.09, 218.91, 24.83 ], "formula_id": "formula_4", "formula_text": "L KD (p, y, q) =L CE (p (z|x) , y) + τ 2 • L KL (q (z |x) , p (z|x))(5)" }, { "formula_coordinates": [ 4, 66.11, 185.75, 233.91, 9.68 ], "formula_id": "formula_5", "formula_text": "L ST (h (θ, x) , I (θ, w)) := ρ (h (θ, w) , I (θ, x)) ,(6)" }, { "formula_coordinates": [ 4, 56.48, 316.67, 243.55, 9.68 ], "formula_id": "formula_6", "formula_text": "L SKD = L CE (p (z|x) , y) + ζ • L ST (h (θ, x) , I (θ, x)) (7)" }, { "formula_coordinates": [ 4, 313.75, 177.56, 129.76, 22.06 ], "formula_id": "formula_7", "formula_text": "θ t+1 ← -θ t -γ • ∇L DRG ; 10:" }, { "formula_coordinates": [ 4, 348.62, 366.57, 214.41, 9.65 ], "formula_id": "formula_8", "formula_text": "L HL = L CE (q(z |x), y) + L CE (p(z|x), y).(8)" }, { "formula_coordinates": [ 4, 368.18, 418.13, 194.85, 11.72 ], "formula_id": "formula_9", "formula_text": "L RG = τ 2 • L KL (q(z |x), p(z|x))(9)" }, { "formula_coordinates": [ 4, 385.4, 460.19, 173.49, 9.65 ], "formula_id": "formula_10", "formula_text": "L DRG = L HL + α • L RG , (10" }, { "formula_coordinates": [ 4, 558.89, 460.51, 4.15, 8.64 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 5, 49.56, 519.96, 250.46, 24.43 ], "formula_id": "formula_12", "formula_text": "t 1 , zt 1 , • • • zt K }, such that zt 1 ≤ zt 1 ≤ . . . ≤ zt K ." }, { "formula_coordinates": [ 5, 98.06, 599.29, 201.96, 12.69 ], "formula_id": "formula_13", "formula_text": "L t SR = τ 2 • L KL (p(z t-1 |x), p(z t |x)).(11)" }, { "formula_coordinates": [ 5, 101.65, 656.88, 198.37, 12.69 ], "formula_id": "formula_14", "formula_text": "L DSR = L CE (p(z|x), y) + β • L t SR ,(12)" }, { "formula_coordinates": [ 5, 317.73, 142.14, 6.2, 6.91 ], "formula_id": "formula_15", "formula_text": "6:" }, { "formula_coordinates": [ 5, 317.73, 150.9, 124.23, 22.06 ], "formula_id": "formula_16", "formula_text": "θ t+1 ← -θ t -γ • ∇L DSR ; 8:" }, { "formula_coordinates": [ 5, 372.97, 277.43, 190.06, 12.69 ], "formula_id": "formula_17", "formula_text": "L = L HL + α • L RG + β • L t SR .(13)" } ]
10.1145/1553374.1553380
2023-05-19
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b1", "b32", "b23", "b41", "b8", "b35", "b4", "b18", "b38", "b34", "b47", "b26", "b30", "b25", "b15" ], "table_ref": [], "text": "Information extraction (IE) is a crucial task in natural language processing (NLP) that involves extracting structured knowledge from unstructured text data (Bing et al., 2013(Bing et al., , 2015)), enabling various applications such as information retrieval (Ruambo and Nicholaus, 2019), knowledge graph construction (Oramas et al., 2016;Wang et al., 2019), and question answering (Khot et al., 2017). Depending on what kind of information is to be extracted, IE consists of a wide range of tasks, including named entity recognition (NER) (Li et al., 2022a), joint entity and relation extraction (RE) (Taillé et al., 2020;Chia et al., 2022), event extraction (EE) (Li et al., 2022b), and aspect-based sentiment analysis (ABSA) (Zhang et al., 2022b).\nTraditionally, IE has been approached with specialized models that are designed to handle specific IE tasks. For example, NER is often formulated as a sequence labeling (Ma and Hovy, 2016;Xu et al., 2021b) or span-based classification (Wang et al., 2020) problem. The more complex RE or EE task is usually solved with pipeline approaches that split the original task into several sequential subtasks and design specific models for each subtask (Subburathinam et al., 2019;Yang et al., 2019;Peng et al., 2020). These models often require extensive task-specific knowledge to design dedicated model architectures and thus suffer from poor generalization. Recently, motivated by pre-trained generative models such as T5 (Raffel et al., 2020) that handle multiple tasks with the unified text-to-text format, there has been a shift towards the use of unified models for IE as well, which can tackle all IE tasks with a single model structure. For example, TANL (Paolini et al., 2021) tackles various IE tasks with a text-to-text generative model by framing them as translation between augmented natural languages. UIE (Lu et al., 2022) models heterogeneous IE structures into a uniform representation via a structural extraction language.\nDespite the success of existing unified models on various IE tasks, they typically adopt a one-stage learning paradigm, i.e., directly learning to predict the target structure given the input text. In contrast, humans often learn to tackle a task in an easy-tohard manner. They learn basic concepts or skills before solving more complex problems and often tackle harder examples to gain a better understanding of the problem. Taking the RE task as an example, it aims to extract relational triplets, where each triplet consists of a head entity, a relation, and a tail entity. To tackle it, humans first learn some basic skills, such as identifying entities, recognizing relations, and associating entities and relations, before extracting complex relational triplets. This process facilitates humans to learn meaningful substructures and the dependencies among them. Moreover, in practical scenarios, humans usually encounter harder cases, i.e., long input context of multiple sentences containing more entities and relations. By solving hard cases, humans improve their understanding of the task and problem-solving skills. By comparison, models are only trained with the provided training data. 
The gap between the model and human learning strategies hinders IE models from further development.
To bridge the gap, we propose an easy-to-hard (E2H) learning framework for IE tasks in this paper. E2H mimics the human learning procedure to learn each IE task in stages, i.e., the easy stage, the hard stage, and the main stage. The easy stage aims to help the model acquire basic skills of the task, and the hard stage aims to assist the model in handling broad-range variations of the task by training the model with diverse and harder data. Finally, the main stage trains the model on the main task at hand. An immediate question is thus how to prepare the data with different levels of difficulty for the easy and hard stages. It is labor-intensive and challenging to construct such data manually. In this work, we only leverage the existing data of the main task to construct such data. Specifically, for the easy stage, we observe that the target IE structure often has meaningful substructures. Therefore, we identify several basic skills for each task according to the substructures of its target structure. Returning to the RE example, the skills can be recognizing the entities, relations, and dependencies between them. We can automatically construct training data for learning these skills by modifying the input prompt and decomposing the target structure of the main task. For the hard stage, we combine two training instances of the main task to build a harder training instance by concatenating their input texts to form the new text and their targets to build the new target. The new instance contains more entities, relations, and complicated contexts, making it harder than the original instances. Through these two novel construction strategies, we can greatly reduce the human effort needed to obtain the data for different stages.
To summarize, our contributions are three-fold:
(1) We propose a unified easy-to-hard (E2H) learning framework for IE tasks by imitating the human learning process; (2) We develop two novel strategies to build the easy and hard stages of our framework without using any additional resources;
(3) We conduct comprehensive evaluations on 17 datasets across four IE tasks and achieve state-of-the-art results on 13 datasets. Notably, our E2H method consistently outperforms the one-stage learning counterpart by introducing two extra learning stages, with an average increase of 0.38, 2.96, 1.33, and 1.39 absolute points on the NER, RE, EE, and ABSA tasks, respectively." }, { "figure_ref": [], "heading": "Task Definition", "publication_ref": [], "table_ref": [], "text": "This paper investigates four common IE tasks, i.e., NER, RE, EE, and ABSA. In this section, we provide formal definitions of these tasks.
Detailed examples of these tasks are in Appendix A.3.
Named Entity Recognition (NER) Given an input text T, the task is to identify and classify entities in T into predefined categories, i.e., extract {(e_i, c_i)}, where e_i is the i-th entity, which is a continuous text span in T, c_i ∈ C is its category, and C is the entity category set.
Relation Extraction (RE) Given an input text T, RE is to identify a set of (head entity, relation, tail entity) triplets, i.e., extract {((e_i^h, c_i^h), r_i, (e_i^t, c_i^t))}, where the superscripts h and t denote the head and tail entities, r_i ∈ R is the i-th relation, and R is the relation set.
Event Extraction (EE) Given an input text T, the task is to identify a set of events, where each event consists of an event trigger and a set of corresponding arguments, i.e., extract {((e_i^tri, c_i^tri), (e_i^arg_1, c_i^arg_1), ..., (e_i^arg_m, c_i^arg_m))}, where e_i^tri is the i-th trigger, which is a continuous text span in T, c_i^tri ∈ C_event is its category, e_i^arg_j is the j-th argument of the i-th event, which is also a continuous text span in T, c_i^arg_j ∈ C_event is its category, and C_event consists of all event and argument categories." }, { "figure_ref": [], "heading": "Aspect-based Sentiment Analysis (ABSA)", "publication_ref": [ "b26" ], "table_ref": [], "text": "There are four essential elements in ABSA, namely aspect category c, aspect term a, opinion term o, and sentiment polarity p. We focus on the aspect sentiment triplet extraction (ASTE) task (Peng et al., 2020) and the aspect sentiment quad prediction (ASQP) task (Zhang et al., 2021a) given their popularity. Given an input text T, the ASTE task is to identify a set of {(a_i, o_i, p_i)} triplets, and the ASQP task is to identify a set of {(c_i, a_i, o_i, p_i)} quadruplets, where c_i ∈ C_absa is the i-th aspect category, a_i is the i-th aspect term, o_i is the i-th opinion term, both a_i and o_i are continuous spans in T, p_i ∈ {positive, negative, neutral} is the i-th sentiment polarity, and C_absa is the aspect category set." }, { "figure_ref": [], "heading": "Our E2H Framework", "publication_ref": [], "table_ref": [], "text": "Our proposed easy-to-hard (E2H) framework consists of three sequential stages: the easy stage, the hard stage, and the main stage. In this section, we first introduce our text-to-structure formulation for facilitating three-stage learning in a unified framework. Next, we will describe how to realize the easy and hard stages. Finally, we will discuss the main stage as well as the detailed training and inference process of our framework." }, { "figure_ref": [ "fig_1" ], "heading": "Unified Text-to-Structure Formulation", "publication_ref": [ "b15" ], "table_ref": [], "text": "Similar to UIE (Lu et al., 2022), we formulate all IE tasks as text-to-structure generation: the model input is a prompt, composed of Hint, Constraint, and Schema, followed by the input text, and the output is the linearized target structure. Taking the RE task as an example, as depicted in Figure 1, Hint consists of one or both of an entity hint and a relation hint.
The entity hint, represented by the special token [HE], guides the model to extract entities, and the relation hint, represented by the special token [HR], guides the model to extract relations. Using both hints guides the model to extract both entity and relation information, in the form of (head entity, relation, tail entity) triplets. Constraint is a specific entity or relation, which restricts the target structure to be related to that entity or relation. Lastly, Schema contains pre-defined entity categories or relations or both of them, depending on the information that needs to be extracted. It provides essential information for identifying entities and relations in a text." }, { "figure_ref": [ "fig_1" ], "heading": "The Easy Stage", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "The goal of the easy stage is to enable the model to learn basic skills that will aid in tackling the main task. To achieve this, we identify several skills for each task and automatically construct the training data for them based on the data of the main task. Table 1 presents the basic skills of NER, RE, EE, ASTE, and ASQP. We design each skill to be a subtask of the main task according to its target structure. These skills are more fundamental and well-defined. Combining these skills gives the model a whole picture of how to tackle the main task. For example, the RE task has four skills. Skill 1 and Skill 3 help the model recognize substructures of the relational triplet, i.e., the entity and relation, respectively, and Skill 2 and Skill 4 help the model learn the dependencies between these substructures.
Table 1 (basic skills of each task):
NER: Skill 1: T → a set of entity categories {c_i}; Skill 2: T and an entity category constraint c → a set of entities of category c, {(e_i, c)}.
RE: Skill 1: T → a set of entities {(e_i, c_i)}; Skill 2: T and a head entity constraint (e^h, c^h) → a set of relational triplets {((e^h, c^h), r_i, e_i^t)}; Skill 3: T → a set of relations {r_i}; Skill 4: T and a relation constraint r → a set of relational triplets {((e_i^h, c_i^h), r, e_i^t)}.
EE: Skill 1: T → a set of event triggers {(e_i^tri, c_i^tri)}; Skill 2: T and a trigger constraint (e^tri, c^tri) → the event ((e^tri, c^tri), (e^arg_1, c^arg_1), ..., (e^arg_m, c^arg_m)).
ASTE: Skill 1: T → a set of aspect terms {a_i} and a set of opinion terms {o_i}; Skill 2: T and an aspect term constraint a → a set of triplets {(a, o_i, p_i)}; Skill 3: T → a set of sentiment polarities {p_i}; Skill 4: T and a sentiment polarity constraint p → a set of triplets {(a_i, o_i, p)}.
ASQP: Skill 1: T → a set of aspect categories {c_i}; Skill 2: T → a set of (aspect category, aspect term) tuples {(c_i, a_i)}; Skill 3: T → a set of (aspect category, opinion term) tuples {(c_i, o_i)}; Skill 4: T → a set of (aspect category, sentiment polarity) tuples {(c_i, p_i)}.
To construct the training data for each skill, we modify the input and target of the main task's training data. Specifically, the input text is the same for the skills and the main task, but the prompt is different.
As shown in Figure 1, for the RE task, there is only [HE] in the hint of Skill 1 as it only extracts entities, and only [HR] in the hint of Skill 3 as it only extracts relations. Both [HE] and [HR] are in the hints of Skill 2, Skill 4, and the main task because they extract (head entity, relation, tail entity) triplets. For Skill 2 and Skill 4, there is also a Constraint, i.e., a head entity or a relation, which requires their targets to be triplets related to that specific head entity or relation. The schema of the RE task consists of both entity categories and relations. For a specific skill of RE, the schema only contains entity categories or relations. The target of each skill is a part of the target of the RE task. For Skill 1 and Skill 3, which extract a substructure of the relational triplet, we use the substructure as the target. For Skill 2 and Skill 4, we use the corresponding subset of triplets of the RE task as the target." }, { "figure_ref": [ "fig_1" ], "heading": "The Hard Stage", "publication_ref": [], "table_ref": [], "text": "The hard stage aims to construct training examples that are harder than the original training examples of the main task. Intuitively, a training instance is harder if its input text contains more structural elements and more complicated contexts. To this end, we combine two training instances of the original task to construct a harder instance. Formally, given two training instances (P, T_1, S_1) and (P, T_2, S_2), we can construct a harder training instance (P, T_1 • T_2, S_1 • S_2), where P is the prompt, T_i is the i-th text, S_i is the i-th target structure, and • denotes concatenation. An example is shown in the hard stage part of the RE task in Figure 1. The model has to process and understand the combined information from both instances, making it more challenging to correctly extract the target structure.
Let N denote the number of training examples of the original task. For each training example, we randomly sample M training examples whose target structures are not empty to construct M hard instances. This results in a total of N × M hard instances. This approach allows us to easily construct a large amount of diverse hard training data." }, { "figure_ref": [], "heading": "The Main Stage", "publication_ref": [ "b30" ], "table_ref": [], "text": "After training the model in the easy and hard stages, we train the model on the main task in this stage.
Training We adopt the pre-trained sequence-to-sequence model T5 (Raffel et al., 2020) as the backbone of E2H. The model is trained with a maximum likelihood objective. Given the training example (P, T, S), the loss function L_θ is defined as
L_θ = -∑_{i=1}^{n} log P_θ(S_i | S_{<i}, P, T)    (1)
where θ denotes the model parameters, P is the prompt, T is the text, S is the target structure, and n is the length of S. We train the model in the easy, hard, and main stages sequentially. For the easy stage, we adopt the weights of pre-trained T5 to initialize the model. For the hard and main stages, we initialize the model with the weights of the model trained in the previous stage.
Inference Once the training process is complete, we use the model trained in the main stage to generate the target structure S for any given tuple of the prompt and text (P, T). Although our training process has three stages, the inference is a one-stage process. The computational load is the same as that of the one-stage learning counterpart."
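To illustrate the two data-construction strategies concretely, here is a minimal sketch of how easy-stage and hard-stage instances could be derived from main-task RE data. The instance fields, the flat prompt strings, and the simple target serialization are illustrative assumptions; the paper encodes targets with the structural extraction language of Lu et al. (2022).

```python
import random

def make_hard_instances(instances, M=1, seed=0):
    """Hard stage: pair each instance with M sampled instances that have non-empty
    targets, concatenating both the input texts and the target structures."""
    rng = random.Random(seed)
    pool = [ins for ins in instances if ins["target"].strip()]
    hard = []
    for ins in instances:
        for other in rng.sample(pool, k=min(M, len(pool))):
            hard.append({
                "prompt": ins["prompt"],
                "text": ins["text"] + " " + other["text"],
                "target": ins["target"] + " " + other["target"],
            })
    return hard  # roughly N x M additional training examples

def make_re_skill1_instance(ins, entity_schema):
    """Easy stage, Skill 1 for RE: keep only the entity hint and entity schema in the
    prompt, and keep only the entity substructure of the target."""
    # ins["triplets"]: [(head, head_type, relation, tail, tail_type), ...]  (assumed format)
    entities = {(h, ht) for h, ht, _, _, _ in ins["triplets"]} | \
               {(t, tt) for _, _, _, t, tt in ins["triplets"]}
    return {
        "prompt": "[HE] " + " ".join(f"[Ent] {c}" for c in entity_schema),
        "text": ins["text"],
        "target": " ".join(f"({c}: {e})" for e, c in sorted(entities)),
    }
```

The main-stage data keeps the full prompt (both hints and the complete schema) and the full target, and the three stages are then trained sequentially as described above.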
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b36", "b22", "b37", "b31", "b37", "b16", "b37", "b33", "b44", "b3", "b29", "b28", "b27", "b46", "b40", "b55", "b14", "b7", "b19", "b25", "b15", "b30" ], "table_ref": [], "text": "Datasets We conduct experiments on 17 datasets across four IE tasks, i.e., NER, RE, EE, and ABSA. We evaluate the flat NER task with CoNLL03 (Tjong Kim Sang and De Meulder, 2003), and the nested NER task with ACE04-Ent (Mitchell et al., 2005) and ACE05-Ent (Walker et al., 2006). For RE, we experiment on CoNLL04 (Roth and Yih, 2004), ACE05-Rel (Walker et al., 2006), and Sci-ERC (Luan et al., 2018). Regarding to EE, we use ACE05E, ACE05E+ (Walker et al., 2006), and CASIE (Satyapanich et al., 2020). As for ABSA, we consider the ASTE and ASQP tasks. For ASTE, we adopt four popular datasets, including Rest14, Laptop14, Rest15, and Rest16 provided by Xu et al. (2020). For ASQP, we use R-ACOS and L-ACOS provided by Cai et al. (2021), and Rest15 and Rest16 provided by Zhang et al. (2021a). These ABSA datasets are derived from the datasets provided by the SemEval ABSA challenges (Pontiki et al., 2014(Pontiki et al., , 2015(Pontiki et al., , 2016)) Baselines We divide our baselines into two categories: specialized models and unified models. Specialized models are designed for a particular IE task, while unified models are designed for general IE. For specialized models, we use state-of-theart methods such as BARTNER (Yan et al., 2021) and DeBias (Zhang et al., 2022a) for NER, UniRE (Wang et al., 2021) and PURE (Zhong and Chen, 2021) for RE, Text2Event (Lu et al., 2021) and DEGREE (Hsu et al., 2022) for EE, and PARA-PHRASE (Zhang et al., 2021a) and Seq2Path (Mao et al., 2022) for ABSA. For unified models, we use TANL (Paolini et al., 2021), UIE (Lu et al., 2022), andLasUIE (Fei et al., 2022) as baselines. To make a fair comparison with one-stage learning methods, we also build T5-base and T5-large baselines. We set their inputs and outputs the same as those of E2H and only train them in the main stage.\nImplementation Details E2H has two model sizes: E2H-base and E2H-large, which are initialized with pre-trained T5-base and T5-large models (Raffel et al., 2020), respectively. Other details are reported in Appendix A.2." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b15" ], "table_ref": [], "text": "We compare E2H with state-of-the-art specialized and unified models. and E2H-large obtains an average improvement of 0.38, 2.96, 1.33, and 1.39 absolute points over T5-large on the NER, RE, EE, and ABSA tasks, respectively. This demonstrates the strong generalization ability of our framework. (3) Without using any external resources, our method exhibits comparable or stronger performance than models with large-scale continued pre-training. Compared with UIE (Lu et al., 2022), which is pre-trained with large-scale structured, unstructured, and parallel data, E2H-large achieves better performance on the RE, EE, and ASTE tasks and obtains comparable results on the NER task. (4) Easy-to-hard learning brings more benefits to complex tasks than simple tasks. Specifically, compared with the improvement on the NER task, which only extracts entities, the improvements of E2H over T5 are more significant on the other three tasks, which extract tuples with multiple elements. 
This shows that our method can help the model effectively capture the structural dependency of complex structures." }, { "figure_ref": [ "fig_2" ], "heading": "Low-Resource Results", "publication_ref": [], "table_ref": [], "text": "Our experiments in low-resource scenarios show that E2H is particularly effective in situations where there is limited training data. As shown in Figure 2, by training on a fraction (1%, 5%, and 10%) of the original data, we observe that E2H-base significantly outperforms T5-base on all datasets. For example, when there is only 5% of the training data, E2H-base obtains an average of 7.1, 12.0, 6.4, and 8.2 absolute points of improvement over T5-base on ACE04-Ent, ACE05-Rel, ACE05-E, and Rest14, respectively. This highlights the effectiveness of our easy-to-hard learning framework when data is scarce. On one hand, the easy stage helps the model identify the substructures of the target structure and capture the dependencies among them, which is difficult when data is limited. On the other hand, the hard stage provides diverse and harder data to help the model tackle broad-range variations of the task, which is especially important in low-resource scenarios." }, { "figure_ref": [], "heading": "More Analysis", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "Analysis on different learning strategies In the main result table, we report the results of E2H trained with the easy→hard→main strategy, i.e., training the model in the easy, hard, and main stages sequentially. In this section, we investigate alternative learning strategies. Table 6 reports the results of T5-base models trained with different learning strategies on four datasets across four tasks. We have the following observations: (1) The easy→hard→main strategy is the best among the seven strategies considered. It performs better than the other strategies on all datasets.
(2) Easy-to-hard multi-stage learning outperforms multi-task learning (i.e., easy+main+hard). When the easy, main, and hard parts of the training data are used, the easy→hard→main and easy→main→hard strategies show superiority over the easy+main+hard strategy on all datasets. This indicates that easy-to-hard multi-stage learning is essential to the model's performance.
(3) Each stage is critical to our E2H framework. Removing any of the stages reduces the performance of E2H. (4) In general, three-stage learning is better than two-stage learning, and both are better than one-stage learning.
Is each skill necessary in the easy stage? To quantify the contribution of each skill, we examine the performance of E2H-base after removing a basic skill for training in the easy stage. Ablation results on four datasets across four tasks are shown in Table 5. Removing any skill degrades the performance of E2H on the main task, indicating that recognizing substructures and the dependency between them is crucial to the model's performance." }, { "figure_ref": [], "heading": "Does easy-to-hard learning improve the model's cross-domain generalization ability?", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "To answer this question, we compare the performance of E2H-base and T5-base models trained on one dataset and evaluated on another dataset from a different domain of the same task. Table 7 reports the cross-domain generalization performance of different models on two dataset pairs: CoNLL03↔ACE04-Ent of the NER task and Rest16↔Laptop14 of the ASTE task. E2H-base performs better than T5-base in all scenarios.
This indicates that easy-to-hard learning can enhance the model's cross-domain generalization ability." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b21", "b18", "b53", "b12", "b17", "b38", "b54", "b57", "b56", "b5", "b11", "b20", "b45", "b46", "b14", "b25", "b15", "b6", "b0", "b39" ], "table_ref": [], "text": "IE is a long-standing research area in natural language processing. Over the years, the paradigm for IE has undergone several transitions. Early approaches to IE focus on sequence labeling techniques (McCallum and Li, 2003; Ma and Hovy, 2016; Zhang et al., 2018; Li et al., 2019; Zhang et al., 2021b), in which each word in a text is assigned a label indicating its role in the extraction task. Span-based approaches (Luan et al., 2019; Wang et al., 2020; Zhao et al., 2020; Xu et al., 2021a; Zhou et al., 2022, 2023), which involve identifying spans in the text that correspond to the desired information, are later introduced for IE. MRC-based methods (Du and Cardie, 2020; Li et al., 2020; Mao et al., 2021; Xu et al., 2023) that frame the extraction task as a reading comprehension problem and generation-based methods (Yan et al., 2021; Lu et al., 2021; Zhang et al., 2021c) that generate the extracted information directly from the text have gained popularity in recent years for IE. They have been shown to be more effective and flexible. Most of these methods target a specific IE task. There have been some efforts to develop unified IE methods (Paolini et al., 2021; Lu et al., 2022; Fei et al., 2022), which can unify various IE tasks with one framework. Our E2H framework, a unified IE framework, introduces a novel easy-to-hard learning paradigm for IE to reduce the gap between model and human learning.
From the perspective of improving the learning process, E2H shares a similar spirit with transfer learning (Pan and Yang, 2010), which uses the knowledge gained from solving one task to help solve another related task. By comparison, E2H learns basic skills specifically designed to assist with the target task. E2H is also related to curriculum learning (Bengio et al., 2009; Wang et al., 2022) in its fundamental motivation of learning from easy to hard. Curriculum learning, inspired by the human learning process, presents training examples starting from the easiest samples and then gradually introduces more complex ones. However, curriculum learning involves the intricate task of ordering instances based on their difficulty. This requires a reliable difficulty criterion or a ranking system, which can be challenging to define and often necessitates substantial human effort. In contrast, E2H emphasizes mastering certain fundamental skills prior to tackling more intricate tasks, eliminating the requirement for a difficulty criterion. This approach can be particularly beneficial in scenarios where the target task requires a distinct set of skills, or when the learning setting does not naturally provide a straightforward measure of difficulty." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper proposes an easy-to-hard learning framework consisting of the easy stage, the hard stage, and the main stage for IE. Two novel strategies are proposed to build the easy and hard parts of the framework to enable the learning process. Experimental results in both full and low-resource scenarios demonstrate the effectiveness of our framework and its superiority over one-stage learning methods.
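As a concrete summary of the training procedure, the sketch below shows sequential three-stage fine-tuning of a T5 backbone. It assumes HuggingFace Transformers and that easy_dataset, hard_dataset, and main_dataset are pre-tokenized datasets yielding input_ids, attention_mask, and labels; the epoch counts and learning rate follow Appendix A.2, and other details are simplified.

```python
import torch
from torch.utils.data import DataLoader
from transformers import T5ForConditionalGeneration, T5TokenizerFast, get_linear_schedule_with_warmup

def run_stage(model, dataset, epochs, lr=1e-4, batch_size=64, device="cuda"):
    """One stage of maximum-likelihood fine-tuning (Eq. 1); the next stage starts
    from the weights this stage ends with."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = get_linear_schedule_with_warmup(optimizer, 0, epochs * len(loader))
    model.to(device).train()
    for _ in range(epochs):
        for batch in loader:                      # dict with input_ids, attention_mask, labels
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch).loss            # token-level cross-entropy over the target sequence
            loss.backward()
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
    return model

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
# Easy -> hard -> main: each stage is initialised with the weights of the previous one.
model = run_stage(model, easy_dataset, epochs=15)
model = run_stage(model, hard_dataset, epochs=30)
model = run_stage(model, main_dataset, epochs=30)
```

Inference then uses only the final model (e.g., model.generate on the prompt plus text), so the decoding cost matches that of the one-stage counterpart.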
}, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "While the results have shown the effectiveness of our framework in IE without using any additional resources, we did not explore the potential enhancement by utilizing existing resources in the easy-tohard learning process. On one hand, we can build the easy stage with the help of existing data of simpler tasks. On the other hand, the data of harder tasks can be used for the hard stage. To enhance the E2H framework via effectively using existing resources is an interesting and promising direction. Another limitation is that we did not extensively explore the possible skill sets for each task. Exploring more approaches to obtain the skill sets is also open for future research. We plan to investigate these possibilities in our future work. " }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.2 Implementation Details", "publication_ref": [ "b15", "b13" ], "table_ref": [], "text": "We set the maximum input length to 384 and the maximum target length to 256. Following the practices of Lu et al. (2022), we use a batch size of 64 for E2H-base and 32 for E2H-large. The learning rate is chosen from {1e-4, 3e-4} for E2H-base and {5e-5, 1e-4} for E2H-large, and we use the AdamW optimizer (Loshchilov and Hutter, 2019) with linear learning rate decay. The number of training epochs for the easy, hard, and main stages are set to [15,30,30] or [25,50,50], with the easy stage having fewer epochs as it typically has more data.\nFor the hard stage, we choose M from {1, 2} for the datasets of the NER, RE, and EE tasks and from {1, 2, 3} for the datasets of the ABSA task. The parameters are chosen based on the model's performance on the development set. Generally, for large datasets such as ACE05-E, a smaller value of M like 1 is more appropriate, while for smaller datasets such as Laptop14, a larger value of M such as 3 is preferred. All experiments are conducted on NVIDIA Tesla A100." }, { "figure_ref": [], "heading": "A.3 Examples of IE tasks", "publication_ref": [ "b15" ], "table_ref": [ "tab_14", "tab_15", "tab_16", "tab_17", "tab_7", "tab_7" ], "text": "Detailed examples of different IE tasks are shown in Tables 910111213. We use the structural extraction language proposed by Lu et al. (2022) to encode the target structure. The demonstrator embodies an interesting combination of hand-built, symbolic resources and stochastic processes.\n((method: stochastic processes (part of: demonstrator)))\nSkill 3 [HR] [Rel] compare [Rel] conjunction [Rel] evaluate for [Rel] feature of [Rel] hyponym of [Rel] part of [Rel] used for [Text]\nThe demonstrator embodies an interesting combination of hand-built, symbolic resources and stochastic processes.\n((part of) (conjunction))\nSkill 4 [HE] [HR] [Rel] conjunction [Ent] generic [Ent] mate- rial [Ent] method [Ent] metric [Ent] other scientific term [Ent] task [Text]\nThe demonstrator embodies an interesting combination of hand-built, symbolic resources and stochastic processes.\n((material: hand-built, symbolic resources (conjunction: stochastic processes))) The pizza is delicious.\n((category: food quality (polarity: positive))\nTable 13: Detailed Examples for ASQP. We provide an instance for the main task and each skill. We highlight Hint in red, Constraint in brown, and Schema in blue. 
We treat the aspect term, opinion term, and sentiment polarity as the arguments of the aspect category.
[HC] and [HA] are the aspect category hint and argument hint, respectively.
[Cat] and [Arg] are special tokens to denote the aspect category and its arguments, respectively." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Our code is available at https://github.com/DAMO-NLP-SG/IE-E2H. This work was supported by Alibaba Group through the Alibaba Research Intern Program. It was also partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Code: 14200719). This work was done when Chang Gao was an intern at Alibaba DAMO Academy." } ]
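For completeness, here is a minimal sketch of the tuple-level Micro-F1 used to score the extracted structures in the experiments, assuming one list of predicted and one list of gold tuples per sentence and exact match of every element (as with the sentiment quad criterion).

```python
def micro_f1(pred_tuples, gold_tuples):
    """Micro-F1 over extracted structures: a prediction counts as correct only if
    every element of the tuple exactly matches a gold tuple of the same sentence."""
    tp = fp = fn = 0
    for preds, golds in zip(pred_tuples, gold_tuples):   # one list of tuples per sentence
        preds, golds = set(preds), set(golds)
        tp += len(preds & golds)
        fp += len(preds - golds)
        fn += len(golds - preds)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

The same function applies to entities, relational triplets, events, and sentiment triplets or quads by changing what each tuple contains.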
Information extraction (IE) systems aim to automatically extract structured information, such as named entities, relations between entities, and events, from unstructured texts. While most existing work addresses a particular IE task, universally modeling various IE tasks with one model has achieved great success recently. Despite their success, they employ a one-stage learning strategy, i.e., directly learning to extract the target structure given the input text, which contradicts the human learning process. In this paper, we propose a unified easy-to-hard learning framework consisting of three stages, i.e., the easy stage, the hard stage, and the main stage, for IE by mimicking the human learning process. By breaking down the learning process into multiple stages, our framework facilitates the model to acquire general IE task knowledge and improve its generalization ability. Extensive experiments across four IE tasks demonstrate the effectiveness of our framework. We achieve new state-of-the-art results on 13 out of 17 datasets.
Easy-to-Hard Learning for Information Extraction *
[ { "figure_caption": "Figure 1 :1Figure1: Overview of E2H consisting of three stages, i.e., the easy stage, the hard stage, and the main stage. We highlight Hint in red, Constraint in brown, and Schema in blue.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Results of E2H-base and T5-base in lowresource scenarios.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Skill 1[HT] [Tri] acquit[Tri] appeal[Tri] arrest jail[Tri] attack[Tri] born[Tri] charge indict[Tri] convict [Tri] declare bankruptcy [Tri] demonstrate [Tri] die [Tri] divorce [Tri] elect [Tri] end organization [Tri] end position [Tri] execute [Tri] extradite [Tri] fine [Tri] injure [Tri] marry [Tri] meet [Tri] merge organization [Tri] nominate [Tri] pardon [Tri] phone write [Tri] release parole [Tri] sentence [Tri] start organization [Tri] start position [Tri] sue [Tri] transfer money [Tri] transfer ownership [Tri] transport [Tri] trial hearing [Text] It was talking something about the war in Iraq. I guess it's a good thing about the elections that are going on.((attack: war) (elect: elections))Skill 2 [HT] [HA] [Tri] attack: war [Arg] adjudicator [Arg] agent [Arg] artifact [Arg] attacker [Arg] beneficiary [Arg] buyer [Arg] defendant [Arg] destination [Arg] entity [Arg] giver [Arg] instrument [Arg] organization [Arg] origin [Arg] person [Arg] place [Arg] plaintiff [Arg] prosecutor [Arg] recipient [Arg] seller [Arg] target [Arg] vehicle [Arg] victim [Text] It was talking something about the war in Iraq. I guess it's a good thing about the elections that are going on. ((attack: war (place: Iraq)))", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "] [HA] [Cat] category [Arg] aspect [Arg] opinion [Arg] polarity [Text] The pizza is delicious. ((category: food quality (aspect: pizza) (opinion: delicious) (polarity: positive)) Skill 1 [HC] [Cat] category [Text] The pizza is delicious. ((category: food quality)) Skill 2 [HC] [HA] [Cat] category [Arg] aspect [Text] The pizza is delicious.((category: food quality (aspect: pizza))Skill 3 [HC] [HA] [Cat] category [Arg] opinion [Text]The pizza is delicious.((category: food quality (opinion: delicious)) Skill 4 [HC] [HA] [Cat] category [Arg] polarity [Text]", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "{T} : Pichai is the CEO of Google which is located in California. {T1} : Bob is a Microsoft engineer.[tokens to denote the entity category and relation, respectively {ES} : Entity Schema, i.e., [", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Schema, i.e., [", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Basic skills for NER, RE, EE, ASTE, and ASQP. We omit Hint and Schema for simplicity. Detailed examples are in Appendix A.3.", "figure_data": "", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": ", except L-ACOS which is collected from the Amazon Laptop domain. Statistics of these datasets are provided in Appendix A.1.Evaluation We use Micro-F1 as the primary evaluation metric. 
For each experimental result, we report the average performance on three random seeds. For NER, RE, EE, and ASTE, we followLu et al. (2022) to use Entity F1, Relation Strict F1, Event Trigger F1 and Argument F1, and Sentiment Triplet F1 as the evaluation metrics and map the generated string-level extraction results to offsetlevel for evaluation. For ASQP, we followZhang et al. (2021a) to use Sentiment Quad F1 to evaluate the model. A sentiment quad is correct if and only if the four elements are exactly the same as those in the gold sentiment quad.", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Experimental results on the NER and RE tasks. The best results are in bold and the second-best results are underlined. Models marked with * conduct large-scale continued pre-training with external resources. Except for T5-base and T5-large, the results of baselines are taken from their original papers.", "figure_data": "Tables 2-4 report the experi-mental results on 17 datasets across four IE tasks.We have the following observations: (1) E2H isan effective framework for various IE tasks. E2H-", "figure_id": "tab_6", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Experimental results on the EE task. The best results are in bold and the second-best results are underlined. Models marked with * conduct large-scale continued pre-training with external resources. Except for T5-base and T5-large, the results of baselines are taken from their original papers.", "figure_data": "ASTEASQPModelsRest14 Laptop14 Rest15 Rest16 Avg R-ACOS L-ACOS Rest15 Rest16 AvgSpecialized ModelsPARAPHRASE (Zhang et al., 2021a) 72.0361.1362.56 71.70 66.86--46.93 57.93-Seq2Path (Mao et al., 2022)75.5264.8265.88 72.87 69.77 58.41 42.97---Unified ModelsUIE * (Lu et al., 2022)74.5263.8867.15 75.07 70.16-----T5-base (Raffel et al., 2020)72.1163.0666.27 72.24 68.42 59.26 43.12 48.24 58.92 52.39T5-large (Raffel et al., 2020)73.4863.6267.08 74.85 69.76 61.24 44.37 51.76 60.93 54.58E2H-base75.4065.7868.58 73.83 70.90 60.66 43.51 49.45 59.55 53.29E2H-large75.9265.9868.80 75.46 71.54 63.50 44.51 52.39 61.86 55.57", "figure_id": "tab_7", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_8", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Experimental results of T5-base models trained with different learning strategies. The easy+main+hard strategy represents that the model is trained with the easy, main, and hard parts in a multi-task learning manner. 
The arrow → indicates the order between different stages.", "figure_data": "Learning StrategyTypeNER ACE04-Ent ACE05-Rel ACE05-E Rest14 RE EE ABSAAvgeasy→hard→main three-stage86.2465.4450.9875.40 69.52easy→main→hard three-stage86.2365.4049.7674.45 68.96easy+main+hardmulti-task86.1064.4649.1673.94 68.42easy→maintwo-stage85.9363.8550.3174.52 68.65hard→maintwo-stage85.9964.4149.2674.67 68.58easy→hardtwo-stage86.1865.3546.6975.34 68.39mainone-stage85.6062.9149.6872.11 67.58Models CoNLL03→ACE04-Ent ACE04-Ent→CoNLL03T5-base19.5417.45E2H-base19.7130.08ModelsRest16→Laptop14Laptop14→Rest16T5-base42.3760.50E2H-base44.8662.32", "figure_id": "tab_10", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Cross-domain generalization performance of E2H-base and T5-base.", "figure_data": "", "figure_id": "tab_11", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Statistics of datasets.", "figure_data": "A.1 Statistics of DatasetsStatistics of datasets are reported in Table 8.#Train #Val #TestCoNLL0314,041 3,250 3,453ACE04-Ent6,202745812ACE05-Ent7,299971 1,060CoNLL04922231288ACE05-Rel10,051 2,420 2,050SciERC1,861275551ACE05-E17,172 923832ACE05-E+19,216 901676CASIE11,189 1,778 3,208Rest141,266310492Laptop14906219328Rest15-ASTE605148322Rest16-ASTE857210326R-ACOS1,530171583L-ACOS2,934326816Rest15-ASQP834209537Rest16-ASQP 1,264316544", "figure_id": "tab_12", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Detailed Examples for NER. We provide an instance for the main task and each skill. We highlight Hint in red, Constraint in brown, and Schema in blue.[HEC] and [HES] are the entity category hint and entity span hint, respectively. [Ent] is a special token to denote the entity category.", "figure_data": "Task Input", "figure_id": "tab_14", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Task Input", "figure_id": "tab_15", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Detailed Examples for EE. We provide an instance for the main task and each skill. We highlight Hint in red, Constraint in brown, and Schema in blue. [HT] and [HA] are the event trigger hint and event argument hint, respectively. [Tri] and [Arg] are special tokens to denote the event category and argument category, respectively. Skill 3 [HR] [Rel] negative [Rel] neutral [Rel] positive [Text] Great food but the service was dreadful! ((positive) (negative)) Skill 4 [HE] [HR] [Rel] positive [Ent] aspect [Ent] opinion [Text] Great food but the service was dreadful!", "figure_data": "Task InputTargetASTE [HE] [HR] [Ent] aspect [Ent] opinion [Rel] nega-((opinion: Great) (aspect: foodtive [Rel] neutral [Rel] positive [Text] Great food(positive: Great)) (aspect: ser-but the service was dreadful!vice (negative: dreadful))(opinion: dreadful))Skill 1 [HE] [Ent] aspect [Ent] opinion [Text] Great food((opinion: Great) (aspect: food)but the service was dreadful!(aspect: service) (opin-ion: dreadful))Skill 2 [HE] [HR] [Ent] aspect: sevice [Rel] negative((aspect: service (nega-[Rel] neutral [Rel] positive [Text] Great food buttive: dreadful)))the service was dreadful!((aspect: food (posi-tive: Great)))", "figure_id": "tab_16", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Detailed Examples for ASTE. We provide an instance for the main task and each skill. We highlight Hint in red, Constraint in brown, and Schema in blue. FollowingLu et al. 
(2022), we formulate ASTE as the RE task, where aspect terms and opinion terms are entities, and sentiment polarities are relations.[HE] and[HR] are the entity hint and relation hint, respectively. [Ent] and[Rel] are special tokens to denote the entity category and relation, respectively.", "figure_data": "", "figure_id": "tab_17", "figure_label": "12", "figure_type": "table" } ]
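The evaluation described in the captions above (Micro-F1 over extracted structures, with a sentiment quad counted as correct only when all four elements exactly match a gold quad) reduces to set intersection over tuples. A minimal sketch, with made-up quads for illustration:

```python
def micro_f1(pred_sets, gold_sets):
    """Micro-F1 over extracted tuples: a prediction counts only on an exact match.

    pred_sets / gold_sets: one set of tuples per sentence, e.g. a sentiment quad
    (category, aspect, opinion, polarity).
    """
    tp = sum(len(p & g) for p, g in zip(pred_sets, gold_sets))
    n_pred = sum(len(p) for p in pred_sets)
    n_gold = sum(len(g) for g in gold_sets)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Toy check with one sentence: one correct quad, one spurious prediction, one miss.
gold = [{("food quality", "pizza", "delicious", "positive"),
         ("service general", "waiter", "rude", "negative")}]
pred = [{("food quality", "pizza", "delicious", "positive"),
         ("ambience general", "room", "loud", "negative")}]
print(micro_f1(pred, gold))   # (0.5, 0.5, 0.5)
```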
Chang Gao; Wenxuan Zhang; Wai Lam; Lidong Bing
[ { "authors": "Yoshua Bengio; Jérôme Louradour; Ronan Collobert; Jason Weston", "journal": "Association for Computing Machinery", "ref_id": "b0", "title": "Curriculum learning", "year": "2009" }, { "authors": "Lidong Bing; Sneha Chaudhari; Richard Wang; William Cohen", "journal": "", "ref_id": "b1", "title": "Improving distant supervision for information extraction using label propagation through lists", "year": "2015" }, { "authors": "Lidong Bing; Wai Lam; Tak-Lam Wong", "journal": "", "ref_id": "b2", "title": "Wikipedia entity expansion and attribute extraction from the web using semi-supervised learning", "year": "2013" }, { "authors": "Hongjie Cai; Rui Xia; Jianfei Yu", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Aspectcategory-opinion-sentiment quadruple extraction with implicit aspects and opinions", "year": "2021" }, { "authors": "Ken Yew; Lidong Chia; Soujanya Bing; Luo Poria; Si", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "RelationPrompt: Leveraging prompts to generate synthetic data for zero-shot relation triplet extraction", "year": "2022" }, { "authors": "Xinya Du; Claire Cardie", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Event extraction by answering (almost) natural questions", "year": "2020" }, { "authors": "Shengqiong Hao Fei; Jingye Wu; Bobo Li; Fei Li; Libo Li; Meishan Qin; Min Zhang; Tat-Seng Zhang; Chua", "journal": "", "ref_id": "b6", "title": "LasUIE: Unifying information extraction with latent adaptive structure-aware generative language model", "year": "2022" }, { "authors": "I-Hung Hsu; Kuan-Hao Huang; Elizabeth Boschee; Scott Miller; Prem Natarajan; Kai-Wei Chang; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "DEGREE: A data-efficient generation-based event extraction model", "year": "2022" }, { "authors": "Tushar Khot; Ashish Sabharwal; Peter Clark", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Answering complex questions using open information extraction", "year": "2017" }, { "authors": "Jing Li; Aixin Sun; Jianglei Han; Chenliang Li; ; ", "journal": "IEEE Trans. Knowl. 
Data Eng", "ref_id": "b9", "title": "A survey on deep learning for named entity recognition", "year": "2022" }, { "authors": "Qian Li; Jianxin Li; Jiawei Sheng; Shiyao Cui; Jia Wu; Yiming Hei; Hao Peng; Shu Guo; Lihong Wang; Amin Beheshti; Philip S Yu", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b10", "title": "A survey on deep learning event extraction: Approaches and applications", "year": "2022" }, { "authors": "Xiaoya Li; Jingrong Feng; Yuxian Meng; Qinghong Han; Fei Wu; Jiwei Li", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "A unified MRC framework for named entity recognition", "year": "2020" }, { "authors": "Xin Li; Lidong Bing; Piji Li; Wai Lam", "journal": "", "ref_id": "b12", "title": "A unified model for opinion target extraction and target sentiment prediction", "year": "2019" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b13", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Yaojie Lu; Hongyu Lin; Jin Xu; Xianpei Han; Jialong Tang; Annan Li; Le Sun; Meng Liao; Shaoyi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Text2Event: Controllable sequence-tostructure generation for end-to-end event extraction", "year": "2021" }, { "authors": "Yaojie Lu; Qing Liu; Dai Dai; Xinyan Xiao; Hongyu Lin; Xianpei Han; Le Sun; Hua Wu", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Unified structure generation for universal information extraction", "year": "2022" }, { "authors": "Yi Luan; Luheng He; Mari Ostendorf; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction", "year": "2018" }, { "authors": "Yi Luan; Dave Wadden; Luheng He; Amy Shah; Mari Ostendorf; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "A general framework for information extraction using dynamic span graphs", "year": "2019" }, { "authors": "Xuezhe Ma; Eduard Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF", "year": "2016" }, { "authors": "Yue Mao; Yi Shen; Jingchao Yang; Xiaoying Zhu; Longjun Cai", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Seq2Path: Generating sentiment tuples as paths of a tree", "year": "2022" }, { "authors": "Yue Mao; Yi Shen; Chao Yu; Longjun Cai", "journal": "", "ref_id": "b20", "title": "A joint training dual-mrc framework for aspect based sentiment analysis", "year": "2021" }, { "authors": "Andrew Mccallum; Wei Li", "journal": "", "ref_id": "b21", "title": "Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons", "year": "2003" }, { "authors": "Alexis Mitchell; Stephanie Strassel; Shudong Huang; Ramez Zakhary", "journal": "", "ref_id": "b22", "title": "Ace 2004 multilingual training corpus", "year": "2005" }, { "authors": "Sergio Oramas; Luis Espinosa-Anke; Mohamed Sordo; Horacio Saggion; Xavier Serra", "journal": "Data & Knowledge Engineering", "ref_id": "b23", "title": "Information extraction for knowledge base construction in the music domain", "year": "2016" }, { "authors": "Jialin Sinno; Qiang Pan; Yang", "journal": "IEEE Transactions on Knowledge and Data 
Engineering", "ref_id": "b24", "title": "A survey on transfer learning", "year": "2010" }, { "authors": "Giovanni Paolini; Ben Athiwaratkun; Jason Krone; Jie Ma; Alessandro Achille; Rishita Anubhai; Cicero Nogueira Dos Santos; Bing Xiang; Stefano Soatto", "journal": "", "ref_id": "b25", "title": "Structured prediction as translation between augmented natural languages", "year": "2021" }, { "authors": "Haiyun Peng; Lu Xu; Lidong Bing; Fei Huang; Wei Lu; Luo Si", "journal": "AAAI Press", "ref_id": "b26", "title": "Knowing what, how and why: A near complete solution for aspect-based sentiment analysis", "year": "2020-02-07" }, { "authors": "Maria Pontiki; Dimitris Galanis; Haris Papageorgiou; Ion Androutsopoulos; Suresh Manandhar; Al-Smadi Mohammad; Mahmoud Al-Ayyoub; Yanyan Zhao; Bing Qin; Orphée De Clercq; Véronique Hoste; Marianna Apidianaki; Xavier Tannier; Natalia Loukachevitch; Evgeniy Kotelnikov; Nuria Bel; Salud María Jiménez-Zafra; Gülşen Eryigit", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "SemEval-2016 task 5: Aspect based sentiment analysis", "year": "2016" }, { "authors": "Maria Pontiki; Dimitris Galanis; Haris Papageorgiou; Suresh Manandhar; Ion Androutsopoulos", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "SemEval-2015 task 12: Aspect based sentiment analysis", "year": "2015" }, { "authors": "Maria Pontiki; Dimitris Galanis; John Pavlopoulos; Harris Papageorgiou; Ion Androutsopoulos; Suresh Manandhar", "journal": "", "ref_id": "b29", "title": "SemEval-2014 task 4: Aspect based sentiment analysis", "year": "2014" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b30", "title": "Exploring the limits of transfer learning with a unified text-totext transformer", "year": "2020" }, { "authors": "Dan Roth; Wen-Tau Yih", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "A linear programming formulation for global inference in natural language tasks", "year": "2004" }, { "authors": "Francis A Ruambo; Mrindoko R Nicholaus", "journal": "", "ref_id": "b32", "title": "Towards enhancing information retrieval systems: A brief survey of strategies and challenges", "year": "2019" }, { "authors": "Taneeya Satyapanich; Francis Ferraro; Tim Finin", "journal": "", "ref_id": "b33", "title": "Casie: Extracting cybersecurity event information from text", "year": "2020" }, { "authors": "Ananya Subburathinam; Di Lu; Heng Ji; Jonathan May; Shih-Fu Chang; Avirup Sil; Clare Voss", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Cross-lingual structure transfer for relation and event extraction", "year": "2019" }, { "authors": "Bruno Taillé; Vincent Guigue; Geoffrey Scoutheeten; Patrick Gallinari", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Let's Stop Incorrect Comparisons in End-to-end Relation Extraction!", "year": "2020" }, { "authors": "Erik F Tjong; Kim Sang; Fien De; Meulder ", "journal": "", "ref_id": "b36", "title": "Introduction to the conll-2003 shared task: Language-independent named entity recognition", "year": "2003" }, { "authors": "Christopher Walker; Stephanie Strassel; Julie Medero; Kazuaki Maeda", "journal": "", "ref_id": "b37", "title": "Ace 2005 multilingual training corpus", "year": "2006" }, { "authors": "Jue Wang; Lidan Shou; Ke Chen; Gang Chen", 
"journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Pyramid: A layered model for nested named entity recognition", "year": "2020" }, { "authors": "Xin Wang; Yudong Chen; Wenwu Zhu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b39", "title": "A survey on curriculum learning", "year": "2022" }, { "authors": "Yijun Wang; Changzhi Sun; Yuanbin Wu; Hao Zhou; Lei Li; Junchi Yan", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "UniRE: A unified label space for entity relation extraction", "year": "2021" }, { "authors": "Zihao Wang; Kwunping Lai; Piji Li; Lidong Bing; Wai Lam", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Tackling long-tailed relations and uncommon entities in knowledge graph completion", "year": "2019" }, { "authors": "Lu Xu; Yew ; Ken Chia; Lidong Bing; ; ", "journal": "", "ref_id": "b42", "title": "Learning span-level interactions for aspect sentiment triplet extraction", "year": "2021" }, { "authors": "Lu Xu; Zhanming Jie; Wei Lu; Lidong Bing", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Better feature integration for named entity recognition", "year": "2021" }, { "authors": "Lu Xu; Hao Li; Wei Lu; Lidong Bing", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Position-aware tagging for aspect sentiment triplet extraction", "year": "2020" }, { "authors": "Weiwen Xu; Xin Li; Yang Deng; Lidong Bing; Wai Lam", "journal": "", "ref_id": "b45", "title": "Peerda: Data augmentation via modeling peer relation for span identification tasks", "year": "2023" }, { "authors": "Hang Yan; Tao Gui; Junqi Dai; Qipeng Guo; Zheng Zhang; Xipeng Qiu", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "A unified generative framework for various NER subtasks", "year": "2021" }, { "authors": "Sen Yang; Dawei Feng; Linbo Qiao; Zhigang Kan; Dongsheng Li", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Exploring pre-trained language models for event extraction and generation", "year": "2019" }, { "authors": "Shuai Zhang; Yongliang Shen; Zeqi Tan; Yiquan Wu; Weiming Lu; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "De-bias for generative extraction in unified NER task", "year": "2022" }, { "authors": "Wenxuan Zhang; Yang Deng; Xin Li; Yifei Yuan; Lidong Bing; Wai Lam; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b49", "title": "Aspect sentiment quad prediction as paraphrase generation", "year": "2021" }, { "authors": "Wenxuan Zhang; Ruidan He; Haiyun Peng; Lidong Bing; Wai Lam", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "Cross-lingual aspectbased sentiment analysis with aspect term codeswitching", "year": "2021" }, { "authors": "Wenxuan Zhang; Xin Li; Yang Deng; Lidong Bing; Wai Lam", "journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "Towards generative aspect-based sentiment analysis", "year": "2021" }, { "authors": "Wenxuan Zhang; Xin Li; Yang Deng; Lidong Bing; Wai Lam", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b52", "title": "A survey on aspect-based sentiment analysis: Tasks, methods, and challenges", "year": "2022" }, { "authors": "Yuan Zhang; Hongshen Chen; Yihong Zhao; Qun Liu; Dawei Yin", "journal": "International Joint Conferences on 
Artificial Intelligence Organization", "ref_id": "b53", "title": "Learning tag dependencies for sequence tagging", "year": "2018" }, { "authors": "He Zhao; Longtao Huang; Rong Zhang; Quan Lu; Hui Xue", "journal": "Association for Computational Linguistics", "ref_id": "b54", "title": "SpanMlt: A span-based multi-task learning framework for pair-wise aspect and opinion terms extraction", "year": "2020" }, { "authors": "Zexuan Zhong; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b55", "title": "A frustratingly easy approach for entity and relation extraction", "year": "2021" }, { "authors": "Ran Zhou; Xin Li; Lidong Bing; Erik Cambria; Chunyan Miao", "journal": "", "ref_id": "b56", "title": "Improving self-training for cross-lingual named entity recognition with contrastive and prototype learning", "year": "2023" }, { "authors": "Ran Zhou; Xin Li; Lidong Bing; Erik Cambria; Luo Si; Chunyan Miao", "journal": "", "ref_id": "b57", "title": "ConNER: Consistency training for cross-lingual named entity recognition", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 306.14, 589.38, 212.89, 15.4 ], "formula_id": "formula_0", "formula_text": "{ (e tri i , c tri i ), (e arg 1 i , c arg 1 i ), • • • , (e argm i , c argm i ) }" }, { "formula_coordinates": [ 2, 507.42, 615.65, 15.64, 16.23 ], "formula_id": "formula_1", "formula_text": "arg j i" }, { "formula_coordinates": [ 2, 457.28, 642.75, 15.64, 16.23 ], "formula_id": "formula_2", "formula_text": "arg j i" }, { "formula_coordinates": [ 4, 78.64, 216.79, 335.95, 87 ], "formula_id": "formula_3", "formula_text": "{(a, o i , p i )} Skill 3 : T → a set of sentiment polarities {p i } Skill 4 : T and a sentiment polarity constraint p → a set of triplets {(a i , o i , p)} ASQP Skill 1 : T → a set of aspect categories {c i } Skill 2 : T → a set of (aspect category, aspect term) tuples {(c i , a i )} Skill 3 : T → a set of (aspect category, opinion term) tuples {(c i , o i )} Skill 4 : T → a set of (aspect category, sentiment polarity) tuples {(c i , p i )}" }, { "formula_coordinates": [ 4, 457.26, 572.79, 68.51, 10.63 ], "formula_id": "formula_4", "formula_text": "T 1 •T 2 , S 1 •S 2 )," }, { "formula_coordinates": [ 5, 104.11, 202.34, 185.02, 31.85 ], "formula_id": "formula_5", "formula_text": "L θ = - n i=1 log P θ (S i | S <i , P, T ) (1)" }, { "formula_coordinates": [ 14, 78.73, 573.51, 255.12, 33.02 ], "formula_id": "formula_6", "formula_text": "Skill 3 [HR] [Rel] compare [Rel] conjunction [Rel] evaluate for [Rel] feature of [Rel] hyponym of [Rel] part of [Rel] used for [Text]" }, { "formula_coordinates": [ 14, 78.73, 639.66, 256.58, 33.02 ], "formula_id": "formula_7", "formula_text": "Skill 4 [HE] [HR] [Rel] conjunction [Ent] generic [Ent] mate- rial [Ent] method [Ent] metric [Ent] other scientific term [Ent] task [Text]" } ]
2023-05-16
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b23", "b16", "b38", "b16", "b1", "b1", "b13", "b15", "b18", "b21", "b34", "b37", "b3", "b6", "b31", "b23", "b38", "b16", "b10" ], "table_ref": [], "text": "This paper focuses on 3D LiDAR single object tracking (SOT), which is emerged in recent years but is an essential task for 3D applications like autonomous driving, robotics, and surveillance system with the development of 3D sensors like LiDAR. The task aims at tracking a specific target in a video by giving the corresponding 3D target bounding box in the first frame. It is challenging since the target will undergo several changes like occlusions and fast motions, and be sparse and incomplete.\nPrior arts mainly addressed the above challenges with a conventional extractor-matcher-decoder paradigm. The extractor is used to encode the features of the template and the search areas. The matcher is employed to build the template-search relationship and enhance the potential target features, i.e., embedding the template features into the features of the search region, which is also called correlation operation. The decoder is leveraged to generate bunches of 3D target proposals based on the features from the matcher. Since the modern backbones [23,24] have become the mainstream and even default choices for the extractor, most trackers in this paradigm dedicate to design more robust and elaborate matchers [5, 16,25,36,38], and more powerful decoders [16,25]. Despite their great success, we find they still suffer from two limitations. (1) The relationship between the template and the search features is always modeled only in the matcher, which is not sufficient for completely cross-source interaction and target enhancement. (2) The point downsampling in the commonly used backbones will inevitably exacerbate the sparsity of the point clouds.\nThe first limitation is mainly caused by the conventional paradigm structure, which separates the extractor and matcher, and makes the matcher be responsible for the template feature enhancement. Inspired by the 2D SOT methods [1,7,13,15,18,21,30,34], previous methods in this paradigm [9, 11, 16, 17, 25-27, 36, 38] always employ a Siamese-like backbone in the extractor that independently embeds the template and search frames without any intermediate interaction. They then design various matchers to fuse the template features into the search features. However, a standalone matcher is redundant, and the extracted highlevel features used in the matcher are not sufficient. M 2 -Track [37] has realized part of this problem and proposed a motion-centric paradigm to avoid the Siamese-like backbone and predicts the motion directly. Nevertheless, they still need a motion transformation module to integrate the template information into the search representations, and a two-stage refinement is used to ensure the performance. Inspired by the recent trend in 2D SOT [3,6,29,31], our key insight is that the extractor should be responsible for feature representation and matching simultaneously, so no extra matcher is needed.\nFor the second limitation, we have noticed that the backbone of previous extractors in 3D SOT remains unexplored. The default configuration is the Siamese-like PointNet [23]/PointNet++ [24], which are not originally designed for 3D SOT. 
For example, the most commonly used PointNet++ usually downsamples the input points (typically 1024 or 512 points) by 8x, leaving fewer points (128 or 64) for the matcher and decoder and making the intermediate features from the extractor much sparser than the already sparse input. Actually, the feature embedding backbone plays a core role in object tracking but is overlooked by previous 3D trackers. It needs to provide a discriminative target representation of the sparse input point clouds, together with the surrounding background that inevitably includes distractors and noise. To ease this problem, PTTR [38] proposes a relation-aware sampling strategy for preserving more template-relevant points. However, the preserved points are still sparse, so it needs a two-stage refinement to maintain the performance. We address this problem from another perspective, which is to adjust the architecture of the backbone and keep the points from all the stages to formulate multi-scale representations, strengthening the representational capacity of the proposed framework.\nConsidering the above two limitations, we propose a novel Correlation Pyramid Network (CorpNet). To be specific, the encoder of CorpNet introduces Self Attention (SA) and Cross Attention (CA) modules in multiple stages of the backbone, strengthening the frame representation at multiple levels with SA and enabling sufficient interaction with CA, which replaces the original matcher. Afterward, to cope with the sparsity brought by the downsampling operation, we formulate a correlation pyramid architecture in the encoder to preserve as many points as possible. More specifically, a lateral correlation pyramid structure is devised to effectively combine the point features from all the stages, which have different numbers of points and feature dimensions. Then the pyramidally fused features are voxelized into a volumetric representation and fed to the decoder. The main branch of the encoder deeply embeds the cross-source features in multiple layers, and the lateral correlation pyramid extensively and directly combines correlated features from low level to high level, resulting in sufficient target-aware feature extraction. Moreover, our CorpNet builds a new decoder based on [16] to process the powerful representation from the encoder, considering that the motion in the x-y plane and the motion along the z axis are not exactly identical. Finally, as shown in Fig. 1, with the proposed CorpNet, we achieve state-of-the-art performance on two widely adopted datasets (i.e., KITTI [10] and NuScenes [2]).\nThe main contributions of our paper can be summarized as follows:\n• We propose a novel Correlation Pyramid Network dubbed CorpNet, which integrates multi-level self attentions to enrich the representations and multi-scale cross attentions to enable sufficient interaction/matching between them.\n• We address the sparsity problem caused by the downsampling operation with a new correlation pyramid structure that fuses the hierarchical correlated features to preserve the features of all the points inside different stages.\n• We design a new motion-factorized decoder that explicitly decouples the prediction of the x-y plane and the z axis for their different motion patterns."
}, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b11", "b14", "b33", "b11", "b11", "b16", "b26", "b32", "b38", "b16", "b37", "b38", "b16", "b16", "b26", "b26", "b16", "b23" ], "table_ref": [], "text": "3D object tracking [4,11,14,22,33] works with 3D input data like point clouds, stereo images and even monocular images. Here we discuss the 3D single object tracking task. This is a new task that has emerged in recent years and was first defined in SC3D [11]. Inspired by the structure of 2D SOT, previous 3D SOT methods [5,9,11,16,25,26,32,36,38] all inherit an extractor-matcher-decoder paradigm. As a pioneer, SC3D lays the foundation of this paradigm with simple components: it matches the cosine similarity (matcher) of features between candidates and the target (Siamese-like extractor) and regularizes the training using shape completion (decoder).\nThe following trackers mainly focus on improving SC3D in two directions. The first dedicates itself to designing more robust matchers [5, 16, 25, 27, 36-38]. For instance, MLVSNet [27] uses the CBAM module [28] to enhance the vote cluster features with channel attentions and spatial attentions. V2B [16] employs global and local template feature embedding to strengthen the correlation between the template and the search area. M2-Track introduces a motion-centric paradigm and uses input merging and motion transformation to combine the template and search features instead of the conventional correlation operation in the matcher. However, a two-stage refinement is needed to ensure the performance due to the lack of appearance matching. The prior effort on the matcher still struggles with standalone matchers, which cannot explicitly benefit the extractor and make the structures redundant. In the second direction, the trackers [9, 16, 25-27] have tried to improve the decoder part. P2B [25] employs Hough Voting to predict the target location, and many methods follow [36] or improve [26,27] this manner. LTTR [9] and V2B [16] use center-based regressions to predict several object properties. Even though the matcher and decoder have been explored extensively, we find the extractor is always neglected since modern backbones [23,24] became the mainstream and default choice. We argue that the extractor is crucial for a powerful representation, serving as the foundation of the matcher and decoder. Therefore, we shed light on this point and propose a new single-stage correlation pyramid network to explore a unified backbone specified for the 3D SOT task and merge the extractor and matcher together." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Definition", "publication_ref": [], "table_ref": [], "text": "The 3D LiDAR single object tracking (SOT) task is defined as follows: given a dynamic 3D sequence of T point clouds {P_i}_{i=1}^{T} and an initial bounding box (BBox) B_1 = (x_1, y_1, z_1, w_1, l_1, h_1, θ_1) of a target, our goal is to localize the target BBoxes {B_i}_{i=2}^{T} in the subsequent frames online, where the subscript stands for the frame id, (x, y, z) indicates the center coordinate of the BBox, (w, l, h) is the BBox size and θ is the heading angle (the rotation around the up-axis). 
Generally, the BBox size is assumed to remain unchanged across all frames in the 3D scenes, even for non-rigid objects (the BBox size of a non-rigid object is defined by its maximum extent in the scene), so we do not need to re-predict the size and can simplify B_i from R^7 to R^4.\nMore specifically, we follow the conventional tracking pipelines to generate the input, consisting of a template area P^t = {P^t_i}_{i=1}^{N_t} and a search area P^s = {P^s_i}_{i=1}^{N_s}, where P_i ∈ R^{3+d} is a 3D (x, y, z) point with d-dimensional features like intensity and elongation. N_t and N_s are the numbers of points in the template and search areas, respectively. The template P^t is cropped and centered in the template frame with its corresponding BBox, and the search region P^s is cropped and centered according to B_{k-1} with an enlarged area in the search frame." }, { "figure_ref": [ "fig_1" ], "heading": "CorpNet", "publication_ref": [], "table_ref": [], "text": "The overall architecture of the proposed CorpNet is shown in Fig. 2, which consists of a unified encoder and a factorized decoder." }, { "figure_ref": [ "fig_1" ], "heading": "Encoder", "publication_ref": [ "b12" ], "table_ref": [], "text": "The function of the encoder is not only to extract features for the template and search regions but also to be responsible for the feature matching or correlation at the same time. The popular Siamese-like backbones are not suitable in this situation, so we design a unified structure to satisfy both requirements.\nAs shown in Fig. 2, the encoder of our CorpNet consists of three stages, each of which contains a set abstraction module [24] to gradually reduce the point number, a self attention (SA) module to enrich the feature representations, and a cross attention (CA) module to implement feature interaction between the template and the search region.\nMost attention mechanisms [8,12] are global attentions with scalar dot products. It is known that computing global attention over all input tokens leads to quadratic complexity, whereas inference speed must be considered for real-time 3D SOT applications. Therefore, a local vectorized self-attention mechanism [35] is leveraged in CorpNet to diminish the computational overhead. In other words, for one specific point, the attention is calculated over several adjacent points around it rather than over all the points. Let x_i and p_i denote the feature and location of the i-th point, respectively. To illustrate the commonalities of the SA and CA modules, we omit the superscript s/t of the search/template region here. An SA/CA module (Fig. 3(a)) consists of a linear layer f_1, an SA/CA block, another linear layer f_2 and a residual connection. Formally,\n\tilde{x}_i = f_1(x_i), \quad \hat{x}_i = \mathrm{SA/CA}(\tilde{x}_i), \quad y_i = f_2(\hat{x}_i) + x_i. (1)\nIn the SA/CA block (Fig. 3(b/c)), we obtain the query, key and value features (q_i, k_i, v_i) as:\nq_i = f_q(\tilde{x}_i), \quad k_i = f_k(\mathrm{kNN}(p_i), \tilde{x}), \quad v_i = f_v(\mathrm{kNN}(p_i), \tilde{x}) (2)\nwhere f_q, f_k, f_v are linear projections and kNN denotes the k nearest neighbors. The position encoding is defined as:\ne_{ij} = f_e(p_i - p_j) (3)\nwhere p_i and p_j are the 3D positions of points i and j, and f_e is a linear projection. Then the core local attention mechanism can be formulated as:\n\hat{x}_i = \sum_{j=1}^{k} \sigma\big(\tfrac{1}{\sqrt{k}} f_a(q_i - k_{ij} + e_{ij})\big) \odot (v_{ij} + e_{ij}) (4)\nwhere k stands for the number of nearest neighbors, f_a is a two-layer MLP, \odot represents element-wise multiplication and σ is the softmax function. As illustrated in Fig. 
3(b), in the SA module, both the kNN and (q_i, k_i, v_i) are calculated from the same source (the template or the search region), and the output of the SA block of Eq. 1 becomes:\ny^s_i = f_2(\hat{x}^s_i) + x^s_i, \quad y^t_i = f_2(\hat{x}^t_i) + x^t_i (5)\nwhere s stands for the search region and t for the template region. In the CA module (Fig. 3(c)), by contrast, the kNN is calculated across different sources. Also, q_i and (k_i, v_i) are computed crosswise so that the template and the search regions interact with each other and absorb useful target-related features. More specifically, we have:\ny^s_i = f_2(\hat{x}^{st}_i) + x^s_i, \quad y^t_i = f_2(\hat{x}^{ts}_i) + x^t_i. (6)\nFrom Eq. 5 and Eq. 6, it can be seen that the original features are aggregated with their attended counterparts in SA and sufficiently interacted with the other source in CA. By stacking multiple stages, the features of both the template and search regions are gradually concentrated on the beneficial target-relevant features."
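A minimal PyTorch sketch of the local vectorized attention of Eqs. (1)-(4) follows. It is an illustration rather than the official implementation: the kNN grouping, the layer widths, whether f_1/f_2 are shared between the template and search branches, and the absence of normalization layers are all assumptions. Called with a single point set it behaves as the SA block; called with two point sets, the queries come from one source and the keys/values from the other, as in the CA block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn(query_xyz, source_xyz, k):
    # query_xyz: (B, Nq, 3), source_xyz: (B, Ns, 3) -> neighbor indices (B, Nq, k)
    dist = torch.cdist(query_xyz, source_xyz)
    return dist.topk(k, dim=-1, largest=False).indices


def gather_neighbors(feats, idx):
    # feats: (B, Ns, C), idx: (B, Nq, k) -> gathered neighbor features (B, Nq, k, C)
    B, Nq, k = idx.shape
    batch = torch.arange(B, device=feats.device).view(B, 1, 1).expand(B, Nq, k)
    return feats[batch, idx]


class LocalVectorAttention(nn.Module):
    """Sketch of the local vectorized attention of Eqs. (1)-(4).

    Sharing f1/f2/fq/fk/fv between the two sources is an assumption for brevity.
    """

    def __init__(self, dim, k=32):
        super().__init__()
        self.k = k
        self.f1 = nn.Linear(dim, dim)   # input projection (Eq. 1)
        self.f2 = nn.Linear(dim, dim)   # output projection before the residual (Eq. 1)
        self.fq = nn.Linear(dim, dim)
        self.fk = nn.Linear(dim, dim)
        self.fv = nn.Linear(dim, dim)
        self.fe = nn.Linear(3, dim)     # relative-position encoding (Eq. 3)
        self.fa = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, q_feats, q_xyz, kv_feats=None, kv_xyz=None):
        # q_feats: (B, Nq, C); if kv_* is omitted the block acts as self attention.
        if kv_feats is None:
            kv_feats, kv_xyz = q_feats, q_xyz
        x_q, x_kv = self.f1(q_feats), self.f1(kv_feats)

        idx = knn(q_xyz, kv_xyz, self.k)                 # (B, Nq, k)
        nbr_xyz = gather_neighbors(kv_xyz, idx)          # (B, Nq, k, 3)
        nbr_feat = gather_neighbors(x_kv, idx)           # (B, Nq, k, C)

        q = self.fq(x_q).unsqueeze(2)                    # (B, Nq, 1, C)
        k = self.fk(nbr_feat)                            # (B, Nq, k, C)
        v = self.fv(nbr_feat)                            # (B, Nq, k, C)
        e = self.fe(q_xyz.unsqueeze(2) - nbr_xyz)        # (B, Nq, k, C)

        # Per-channel weights over the k neighbors, Eq. (4).
        w = F.softmax(self.fa(q - k + e) / self.k ** 0.5, dim=2)
        attended = (w * (v + e)).sum(dim=2)              # (B, Nq, C)
        return self.f2(attended) + q_feats               # residual from the original feature


# Toy usage: search/template features with 64 channels.
B, Nt, Ns, C = 2, 128, 256, 64
t_xyz, s_xyz = torch.rand(B, Nt, 3), torch.rand(B, Ns, 3)
t_feat, s_feat = torch.rand(B, Nt, C), torch.rand(B, Ns, C)
attn = LocalVectorAttention(C, k=16)
self_out = attn(s_feat, s_xyz)                      # SA on the search region
cross_out = attn(s_feat, s_xyz, t_feat, t_xyz)      # CA: search queries attend to the template
print(self_out.shape, cross_out.shape)
```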
}, { "figure_ref": [ "fig_2" ], "heading": "Lateral Correlation Pyramid", "publication_ref": [ "b23", "b16", "b38", "b19" ], "table_ref": [], "text": "In the traditional extractor-matcher-decoder paradigm, researchers always overlook the importance of the extractor, which usually has several downsampling operations [23,24] that reduce the model size but exacerbate the sparsity problem. Besides, due to the separation of the extractor and matcher, using only the highest-level features for the matching operation in the matcher is the default configuration [9, 16, 25, 36, 38], which hinders useful multi-scale combination. Differently, we break the above limitations and explore a pyramid structure to equip our CorpNet with rich semantics at all feature levels, as shown in Fig. 4. The feature pyramid is exploited in 2D detection [19] but remains unexplored in 3D SOT. In particular, to compensate for the exacerbated sparsity caused by the downsampling of the set abstraction modules and to keep as many points as possible, we formulate a correlation pyramidal encoder by leveraging the encoder's pyramidal feature hierarchy, which has semantics from low to high levels. The inherent multi-scale correlated features output by the CA modules of all stages are fused together. Note that the number of points of these features differs due to the downsampling of the set abstraction modules; the feature dimension and the semantic level also differ. To this end, for the feature of each stage F_k ∈ R^{N_k × C_k}, k ∈ {1, 2, 3}, we first unify the feature dimension with a 1D convolution block, which consists of a 1D convolution layer, a BN layer, a ReLU activation, and another 1D convolution layer. Then the obtained features are concatenated with the last features of the pyramid along the point dimension. The output features of the lateral pyramid, F ∈ R^{(N_1+N_2+N_3) × C}, are then voxelized into a volumetric representation F_m ∈ R^{C×H×L×W} by averaging the 3D coordinates and features of the points falling into the same voxel bin. Given the voxel size (v_x, v_y, v_z) and the range of the search region [(x_{min}, x_{max}), (y_{min}, y_{max}), (z_{min}, z_{max})], the resolution (W, L, H) of F_m is:\nW = \lfloor (x_{max} - x_{min}) / v_x \rfloor + 1, \quad L = \lfloor (y_{max} - y_{min}) / v_y \rfloor + 1, \quad H = \lfloor (z_{max} - z_{min}) / v_z \rfloor + 1 (7)\nwhere ⌊·⌋ indicates the floor operation. Since only the search region representations are fed into the decoder for prediction in our CorpNet, the lateral correlation pyramid serves only for the multi-level embeddings of the search region."
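The sketch below illustrates the two operations described above: unifying the per-stage channel widths with 1D convolution blocks before concatenating along the point dimension, and voxelizing the fused points into a dense C×H×L×W volume by averaging, with the resolution computed as in Eq. (7). The stage widths, the clamping of out-of-range points, and the toy coordinates are assumptions for illustration.

```python
import torch
import torch.nn as nn


class UnifyBlock(nn.Module):
    """1D conv block mapping stage features (B, C_k, N_k) to a common width C."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(c_in, c_out, 1), nn.BatchNorm1d(c_out), nn.ReLU(inplace=True),
            nn.Conv1d(c_out, c_out, 1),
        )

    def forward(self, x):
        return self.net(x)


def voxelize_mean(xyz, feats, voxel_size, pc_range):
    """Average point features into a dense (C, H, L, W) grid; resolution follows Eq. (7).

    xyz: (N, 3) search-region points, feats: (N, C), voxel_size: (vx, vy, vz),
    pc_range: (xmin, xmax, ymin, ymax, zmin, zmax). Out-of-range points are clamped here.
    """
    vx, vy, vz = voxel_size
    xmin, xmax, ymin, ymax, zmin, zmax = pc_range
    W = int((xmax - xmin) // vx) + 1
    L = int((ymax - ymin) // vy) + 1
    H = int((zmax - zmin) // vz) + 1

    ix = ((xyz[:, 0] - xmin) / vx).floor().long().clamp(0, W - 1)
    iy = ((xyz[:, 1] - ymin) / vy).floor().long().clamp(0, L - 1)
    iz = ((xyz[:, 2] - zmin) / vz).floor().long().clamp(0, H - 1)
    flat = (iz * L + iy) * W + ix                      # flattened voxel index per point

    C = feats.shape[1]
    grid = feats.new_zeros(H * L * W, C)
    count = feats.new_zeros(H * L * W, 1)
    grid.index_add_(0, flat, feats)
    count.index_add_(0, flat, feats.new_ones(flat.shape[0], 1))
    grid = grid / count.clamp(min=1)                   # mean over points in each bin
    return grid.t().reshape(C, H, L, W)


# Toy usage: three correlated stage outputs with assumed widths and point counts.
stage_feats = [torch.rand(1, c, n) for c, n in [(128, 512), (256, 256), (512, 128)]]
unify = nn.ModuleList(UnifyBlock(c, 64) for c in (128, 256, 512))
fused = torch.cat([m(f) for m, f in zip(unify, stage_feats)], dim=2)   # (1, 64, 896)

# In practice the 896 coordinates are the surviving points of the three stages.
xyz = torch.rand(896, 3) * torch.tensor([11.2, 7.2, 4.8]) - torch.tensor([5.6, 3.6, 2.4])
vol = voxelize_mean(xyz, fused[0].t(), (0.3, 0.3, 0.3), (-5.6, 5.6, -3.6, 3.6, -2.4, 2.4))
print(vol.shape)   # (64, H, L, W)
```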
}, { "figure_ref": [ "fig_1" ], "heading": "Decoder", "publication_ref": [ "b16", "b16", "b17", "b20" ], "table_ref": [], "text": "Since the obtained encoded feature F_m ∈ R^{C×H×L×W} is a voxel representation, we can take advantage of the regular convolutions commonly used on images. The most similar decoder is from [16], which first applies 3D convolutions to its encoded features and pools a BEV feature map to regress the final results, including the z-axis location.\nIn fact, the motion pattern in the x-y plane and the movement along the z axis are not exactly the same. Considering this, we model them separately in our decoder, as presented in the right part of Fig. 2. First, we disentangle the 3D convolution block into a 2D convolution block (consisting of a 2D convolution layer, a batch normalization layer and a ReLU activation layer) and a 1D convolution block (consisting of a 1D convolution layer, a batch normalization layer and a ReLU activation layer), operating successively on the BEV plane and the vertical direction. We further gain another advantage: an additional nonlinear rectification between these two operations. This effectively doubles the number of nonlinearities compared to a single 3D convolution block, rendering the model capable of representing more complex functions. After stacking several decomposed 3D convolution blocks, we apply two max pooling operations, over the z axis and over the x-y plane, to obtain the BEV features F_{BEV} and the vertical features F_z, respectively. Mathematically,\nF_{BEV} = \mathrm{MaxPool}_z(\phi(F_m)), \quad F_z = \mathrm{MaxPool}_{xy}(\phi(F_m)) (8)\nwhere φ denotes the decomposed 3D convolution blocks and the subscript of MaxPool indicates the dimension over which max pooling is applied. Next, different from [16,17], we employ two separate prediction subnetworks on top of F_{BEV} and F_z. Each subnetwork applies a stack of common convolution blocks (2D convolutions for the BEV feature maps and 1D convolutions for the vertical features) to the dense feature map (F_{BEV} or F_z) to strengthen and adjust the features with sufficient local information. Then, for each subnetwork, a center classification head is attached to predict the discrete object center. To compensate for the discretization error, we regress the offset to the continuous ground truth center with an offset regression head. The rotation regression is predicted together with the BEV offset regression head. The final training objective is expressed as:\nL = \lambda_{cls}(L^{BEV}_{cls} + L^{z}_{cls}) + \lambda_{reg}(L^{BEV}_{reg} + L^{z}_{reg}) (9)\nwhere L^{*}_{cls} is the focal loss [20] with default hyperparameters for the center classification heads and L^{*}_{reg} is the L1 loss for the BBox center offset and rotation regression heads. Ground Truth Construction. Here we take the ground truth construction of the BEV heads as an example; the ground truth of the z-axis head can be formulated similarly. Let (x, y, z) denote the 3D target location. The target 2D center (c_x, c_y) in the BEV is computed as:\nc_x = (x - x_{min}) / v_x, \quad c_y = (y - y_{min}) / v_y. (10)\nThe discrete 2D center is defined by \hat{c}_x = \lfloor c_x \rfloor and \hat{c}_y = \lfloor c_y \rfloor. The ground truth of the BEV center classification is y_{cls} ∈ R^{L×W}, where the value at location (i, j) is 1 if i = \hat{c}_x and j = \hat{c}_y, 0 if (i, j) is not in the 2D target BBox, and 1/(γ+1) otherwise, where γ represents the Euclidean distance between the pixel (i, j) and the discrete target center."
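A minimal sketch of the motion-factorized decoder described above: each decomposed 3D block applies a (1,3,3) convolution over the BEV plane followed by a (3,1,1) convolution along the vertical axis, after which the volume is max-pooled separately over z and over the x-y plane and passed to two independent prediction heads, as in Eq. (8). The head channel layouts and block counts are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class Decomposed3DBlock(nn.Module):
    """3D conv factorized into a BEV (1x3x3) conv and a vertical (3x1x1) conv,
    each followed by BN + ReLU, giving two nonlinearities per block."""
    def __init__(self, c):
        super().__init__()
        self.bev = nn.Sequential(nn.Conv3d(c, c, (1, 3, 3), padding=(0, 1, 1)),
                                 nn.BatchNorm3d(c), nn.ReLU(inplace=True))
        self.vert = nn.Sequential(nn.Conv3d(c, c, (3, 1, 1), padding=(1, 0, 0)),
                                  nn.BatchNorm3d(c), nn.ReLU(inplace=True))

    def forward(self, x):            # x: (B, C, H_z, L_y, W_x)
        return self.vert(self.bev(x))


class FactorizedDecoder(nn.Module):
    """Shared decomposed trunk, then separate BEV (x-y) and vertical (z) heads."""
    def __init__(self, c=64, n_blocks=3):
        super().__init__()
        self.trunk = nn.Sequential(*[Decomposed3DBlock(c) for _ in range(n_blocks)])
        # Assumed outputs: center logit, (dx, dy) offset, rotation for the BEV head;
        # center logit and dz offset for the vertical head.
        self.bev_head = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
                                      nn.Conv2d(c, 1 + 2 + 1, 1))
        self.z_head = nn.Sequential(nn.Conv1d(c, c, 3, padding=1), nn.ReLU(inplace=True),
                                    nn.Conv1d(c, 1 + 1, 1))

    def forward(self, vol):          # vol: (B, C, H_z, L_y, W_x)
        x = self.trunk(vol)
        f_bev = x.amax(dim=2)        # Eq. (8): max over the z axis   -> (B, C, L_y, W_x)
        f_z = x.amax(dim=(3, 4))     # Eq. (8): max over the x-y plane -> (B, C, H_z)
        return self.bev_head(f_bev), self.z_head(f_z)


decoder = FactorizedDecoder(c=64)
bev_out, z_out = decoder(torch.rand(2, 64, 17, 25, 38))
print(bev_out.shape, z_out.shape)    # (2, 4, 25, 38) and (2, 2, 17)
```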
Nevertheless, they still need a motion transformation module to integrate the template information into the search representations, and a two-stage refinement is used to improve the performance. As a result, we achieve better performance than M2Track with a single stage. Besides, compared with V2B [16] which is the decoder baseline of our method, our proposed encoder and the improved decoder make CorpNet yield better performance in all categories." }, { "figure_ref": [], "heading": "Experiments on NuScenes", "publication_ref": [ "b16", "b16" ], "table_ref": [], "text": "NuScenes [2] has 1000 scenes, which are divided into 700/150/150 scenes for train/val/test. Officially, the train set is further evenly split into \"train track\" and \"train detect\" to remedy overfitting. Following [16], we train our model with \"train track\" split and test it on the val set.\nNote that the NuScenes dataset only labels keyframes and provides official interpolated results for the remaining frames, so there are two configurations for this dataset. The first is from [36] which trains and tests both only on the keyframes. The other one is from [16] which trains and tests on all the frames. These two configurations result in different datasets with different performances. We believe that the motion in key frames is extremely large, which is not in line with the practical applications. Therefore, we follow the second set in this paper. The methods that follow the second way are compared, and the performance evaluated on the keyframes is reported.\nIn Tab. 2, we report the comparison results on NuScenes. The proposed CorpNet exceeds all the competitors under all categories, mostly by large margins. NuScenes is much more challenging than KITTI due to the label scarcity, pervasive distractors, and drastic appearance changes. Corp-Net still surpasses other methods on both rigid (e.g., Car) and non-rigid (e.g., Pedestrian) objects, both small (Bicycle) and large (Truck) objects. Besides, our method signif-icantly improves categories (i,e, Bicycle and Truck) with fewer data. The thorough cross-source interactions and the lateral pyramid helps to learn useful features even though the annotation may be incorrect (interpolated labels) on this dataset." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b16" ], "table_ref": [], "text": "We comprehensively perform ablation studies to analyze each component in our proposed CorpNet on the Car category of the KITTI dataset. Location of SA modules and CA modules. We study the influence of the SA and CA modules separately in Tab. 3. For both SA and CA, we find that merely adding one module to the last stage has already exceeded all the previous methods. When placing SA and CA on all the three stages, we could obtain the best results. Therefore, we use this configuration in our final CorpNet. These results are intuitive since the representation could be enriched more if the features of all levels are self-aggregated by SA modules. Similarly, the template information could be sufficiently interacted and enhanced if the correlation operations existed in all the stages, yielding better results. Number of neighbors in the local attention. In Tab. 4, we investigate the influence of the number of neighbors k, which determines the considered local neighborhood around each point. We use the same setting for SA and CA modules. In our CorpNet, the best results are obtained when k=32. 
When k is smaller, like 8 and 16, the model has insufficient context for prediction. More specifically, SA may not have enough information to strengthen its features, and CA cannot interact with a sufficient number of related points. When k is larger, like 64, the performance deteriorates due to excessive noise from farther and less relevant points. The best choice of k in CorpNet is 32. How important is the correlation pyramid? We conduct an ablation study on the correlation pyramid in Tab. 5. When only using the correlated features of stage 3 without the pyramid structure, the performance drops significantly (2.7%) compared with the final CorpNet. When fusing the features of stage 2 and stage 3, the results are still worse than fusing all three stages (-1.0%). This demonstrates that using the correlation from all stages in the proposed correlation pyramid is important and beneficial to the model's accuracy. Is the z-axis separation beneficial? We now study our contribution in the decoder in Tab. 6, where 3D Conv is short for 3D convolution blocks, BEV stands for the 2D prediction head operating on the BEV feature maps, Decomposed 3D Conv represents our proposed decomposed 3D blocks and z denotes our standalone prediction head for the z-axis features. "3D Conv + BEV" is actually the original decoder implementation of V2B [16]. The results show that 1) directly using V2B's decoder improves 1.8% (success) over V2B, which validates the effectiveness of our proposed encoder; 2) only factorizing the 3D convolution block while using only the BEV head does not work, and we attribute this to the fact that the decomposed 3D convolution first explicitly embeds the z-axis information, but then pooling it damages the decomposed features and leads to failures;\n3) factorizing the 3D convolution block and adding a separate z-axis prediction head together is beneficial and further improves the performance by 1.3% in success, which verifies our initial consideration that the motion pattern of the x-y plane and the movement along the z axis are not exactly the same. " }, { "figure_ref": [], "heading": "Computational Cost", "publication_ref": [], "table_ref": [], "text": "We analyze the computational cost in Tab. 7. The results are tested under the same PyTorch platform and a single TITAN RTX GPU on the Car category of KITTI. We can see that the speed of our method is real-time and comparable with V2B, while our CorpNet performs better than V2B by 3.1% in success. Although BAT and P2B infer faster than our CorpNet, our performance is significantly better than theirs (+17.4% and +13.1% in success). For the training time, our CorpNet, V2B and BAT all converge fast within about 8 hours, while SC3D and P2B require more time. Besides, the parameters and FLOPs of CorpNet are also comparable with other methods." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper proposes a novel correlation pyramid network (CorpNet) for the 3D single object tracking task, which merges the extractor and matcher of the traditional pipeline to jointly learn the target-aware representation, enabling the extractor and matcher to benefit each other. Particularly, CorpNet integrates multi-level self attentions and cross attentions to enrich the template and search region features and to sufficiently realize their fusion and interaction, respectively. 
Furthermore, a lateral correlation pyramid structure is designed in the encoder to handle the sparsity problem caused by the downsampling operations, by combining the hierarchical correlated features of all the points present in the encoder. The decoder of our method has a new z-axis-separated structure, which explicitly learns the movement along the z axis and in the x-y plane. Finally, our method achieves state-of-the-art results on two commonly-used datasets (KITTI and NuScenes)." } ]
3D LiDAR-based single object tracking (SOT) has gained increasing attention as it plays a crucial role in 3D applications such as autonomous driving. The central problem is how to learn a target-aware representation from the sparse and incomplete point clouds. In this paper, we propose a novel Correlation Pyramid Network (CorpNet) with a unified encoder and a motion-factorized decoder. Specifically, the encoder introduces multi-level self attentions and cross attentions in its main branch to enrich the template and search region features and realize their fusion and interaction, respectively. Additionally, considering the sparsity characteristics of the point clouds, we design a lateral correlation pyramid structure for the encoder to keep as many points as possible by integrating hierarchical correlated features. The output features of the search region from the encoder can be directly fed into the decoder for predicting target locations without any extra matcher. Moreover, in the decoder of CorpNet, we design a motion-factorized head to explicitly learn the different movement patterns of the up axis and the x-y plane together. Extensive experiments on two commonly-used datasets show our CorpNet achieves state-of-the-art results while running in real-time.
Correlation Pyramid Network for 3D Single Object Tracking
[ { "figure_caption": "Figure 1 .1Figure 1. Visualization results of the four different categories. The points of the targets are highlighted in yellow. The red boxes are ground truth bounding boxes. The green boxes are the objects tracked by our CorpNet, while the blue boxes are the results of V2B.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. The overall architecture of the proposed CorpNet. Given template and search point clouds and the BBox of the template, a unified encoder is used to extract the representations and match the template and search area simultaneously. A lateral correlation pyramid structure is proposed to handle the sparsity and incomplete challenges by leveraging the hierarchical correlated features. Then a decomposed decoder is designed to obtain the motion of the x-y plane and z-axis separately. The whole pipeline is end-to-end trained with only a single stage.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. The structure of the lateral correlation pyramid. The left feature maps are obtained by the three stages of the main branch in the encoder. On the right is the lateral correlation pyramid, which combines every correlated feature, feeding through a 1D convolution block and then merging with the features of the last level by concatenation.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 3. Illustration of the SA/CA module and the SA/CA blocks of the encoder. \"att\" is short for attention.", "figure_data": "LinearkNNpos embedkNNpos embedSA/CALinearLinearLinearLinearLinearLinearLinearMLPs,MLPs,softmaxsoftmax(a) SA/CA module(b) SA block(c) CA block", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison among our CorpNet and the state-of-the-art methods on the KITTI datasets. Mean shows the average result weighed by frame numbers. Bold and underline denote the best performance and the second-best performance, respectively.", "figure_data": "MethodsCar (6424) Success Precision Success Precision Success Precision Success Precision Success Precision Cyclist (308) Van (1248) Pedestrian (6088) Mean (14068)SC3D [11]41.357.941.570.440.447.018.237.831.248.5SC3D-RPN [32]36.351.043.081.4--17.947.8--P2B [25]56.272.832.144.740.848.428.749.642.460.0MLVSNet [27]56.074.034.344.552.061.434.161.145.766.63DSiamRPN [9]58.276.236.149.045.652.835.256.246.664.9LTTR [5]65.077.166.289.935.845.633.256.848.765.8PTT [26]67.881.837.247.343.652.544.972.055.174.2BAT [36]60.577.733.745.452.467.042.170.151.272.8V2B [16]70.581.340.849.750.158.048.373.558.475.2PTTR [38]65.277.465.190.552.561.850.981.658.477.8STNet [17]72.184.073.593.758.070.649.977.261.380.1M2Track [37]65.580.873.293.553.870.761.588.262.983.4CorpNet73.684.174.394.258.766.555.682.464.582.0", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison among our CorpNet and the state-of-the-art methods on the NuScenes datasets. Mean shows the average result weighed by frame numbers. 
Bold and underline denote the best performance and the second-best performance, respectively.", "figure_data": "MethodsCar (15578) Success Precision Success Precision Success Precision Success Precision Success Precision Bicycle (501) Truck (3710) Pedestrian (8019) Mean (27808)", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Location of SA and CA modules.", "figure_data": "Stage 1 Stage 2 Stage 3SACA71.2/81.9 70.8/81.971.3/82.8 72.8/83.973.6/84.1 73.6/84.1", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Number of neighbors k.", "figure_data": "k Success Precision869.981.81671.481.43273.684.16472.183.6", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Influence of the pyramid structure.", "figure_data": "Stage 1 Stage 2 Staget 3 Success Precision70.981.572.682.973.684.1", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "z-axis separation in the decoder.", "figure_data": "ConfigurationsSuccess Precision3D Conv + BEV72.381.73D Conv + BEV+z70.682.1Decomposed 3D Conv + BEV59.876.1Decomposed 3D Conv + BEV+z73.684.1", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "The Computational cost of different trackers.", "figure_data": "MethodParameters FLOPs FPS Training time SuccessSC3D [11]6.45 M20.07 G6∼13 h41.3P2B [25]1.34 M4.28 G48∼13 h56.2BAT [36]1.47 M5.53 G54∼8 h60.5V2B [16]1.35 M5.47 G39∼8 h70.5CorpNet1.95 M7.02 G36∼8 h73.6", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
Mengmeng Wang; Teli Ma; Xingxing Zuo; Jiajun Lv; Yong Liu
[ { "authors": "", "journal": "SC3D", "ref_id": "b0", "title": "", "year": "" }, { "authors": "Luca Bertinetto; Jack Valmadre; Joao F Henriques; Andrea Vedaldi; Philip Hs Torr", "journal": "Springer", "ref_id": "b1", "title": "Fully-convolutional siamese networks for object tracking", "year": "2016" }, { "authors": "Holger Caesar; Varun Bankiti; Alex H Lang; Sourabh Vora; Venice Erin Liong; Qiang Xu; Anush Krishnan; Yu Pan; Giancarlo Baldan; Oscar Beijbom", "journal": "", "ref_id": "b2", "title": "nuscenes: A multimodal dataset for autonomous driving", "year": "2020" }, { "authors": "Boyu Chen; Peixia Li; Lei Bai; Lei Qiao; Qiuhong Shen; Bo Li; Weihao Gan; Wei Wu; Wanli Ouyang", "journal": "", "ref_id": "b3", "title": "Backbone is all your need: A simplified architecture for visual object tracking", "year": "2022" }, { "authors": "Alberto Crivellaro; Mahdi Rad; Yannick Verdie; Kwang Moo Yi; Pascal Fua; Vincent Lepetit", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b4", "title": "Robust 3d object tracking from monocular images using stable parts", "year": "2017" }, { "authors": "Yubo Cui; Zheng Fang; Jiayao Shan; Zuoxu Gu; Sifan Zhou", "journal": "", "ref_id": "b5", "title": "3d object tracking with transformer", "year": "2021" }, { "authors": "Yutao Cui; Cheng Jiang; Limin Wang; Gangshan Wu", "journal": "", "ref_id": "b6", "title": "Mixformer: End-to-end tracking with iterative mixed attention", "year": "2022" }, { "authors": "Xingping Dong; Jianbing Shen; Dongming Wu; Kan Guo; Xiaogang Jin; Fatih Porikli", "journal": "IEEE Transactions on Image Processing", "ref_id": "b7", "title": "Quadruplet network with one-shot learning for fast visual object tracking", "year": "2019" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b8", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Zheng Fang; Sifan Zhou; Yubo Cui; Sebastian Scherer", "journal": "IEEE Sensors Journal", "ref_id": "b9", "title": "3d-siamrpn: an end-to-end learning method for real-time 3d single object tracking using raw point cloud", "year": "2020" }, { "authors": "Andreas Geiger; Philip Lenz; Raquel Urtasun", "journal": "IEEE", "ref_id": "b10", "title": "Are we ready for autonomous driving? 
the kitti vision benchmark suite", "year": "2012" }, { "authors": "Silvio Giancola; Jesus Zarzar; Bernard Ghanem", "journal": "", "ref_id": "b11", "title": "Leveraging shape completion for 3d siamese tracking", "year": "2019" }, { "authors": "Meng-Hao Guo; Jun-Xiong Cai; Zheng-Ning Liu; Tai-Jiang Mu; Shi-Min Ralph R Martin; Hu", "journal": "Computational Visual Media", "ref_id": "b12", "title": "Pct: Point cloud transformer", "year": "2021" }, { "authors": "Anfeng He; Chong Luo; Xinmei Tian; Wenjun Zeng", "journal": "", "ref_id": "b13", "title": "A twofold siamese network for real-time object tracking", "year": "2018" }, { "authors": "Hou-Ning Hu; Yung-Hsu Yang; Tobias Fischer; Trevor Darrell; Fisher Yu; Min Sun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b14", "title": "Monocular quasi-dense 3d object tracking", "year": "2022" }, { "authors": "Lianghua Huang; Xin Zhao; Kaiqi Huang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b15", "title": "Got-10k: A large high-diversity benchmark for generic object tracking in the wild", "year": "2019" }, { "authors": "Le Hui; Lingpeng Wang; Mingmei Cheng; Jin Xie; Jian Yang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b16", "title": "3d siamese voxel-to-bev tracker for sparse point clouds", "year": "2021" }, { "authors": "Le Hui; Lingpeng Wang; Linghua Tang; Kaihao Lan; Jin Xie; Jian Yang", "journal": "", "ref_id": "b17", "title": "3d siamese transformer network for single object tracking on point clouds", "year": "2022" }, { "authors": "Bo Li; Junjie Yan; Wei Wu; Zheng Zhu; Xiaolin Hu", "journal": "", "ref_id": "b18", "title": "High performance visual tracking with siamese region proposal network", "year": "2018" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b19", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Tsung-Yi Lin; Priya Goyal; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b20", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Xiankai Lu; Chao Ma; Jianbing Shen; Xiaokang Yang; Ian Reid; Ming-Hsuan Yang", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b21", "title": "Deep object tracking with shrinkage loss", "year": "2020" }, { "authors": "Jonah Ong; Ba-Tuong Vo; Ba-Ngu Vo; Du ; Yong Kim; Sven Nordholm", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b22", "title": "A bayesian filter for multi-view 3d multiobject tracking with occlusion handling", "year": "2020" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b23", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "Charles Ruizhongtai; Qi ; Li Yi; Hao Su; Leonidas J Guibas", "journal": "Advances in neural information processing systems", "ref_id": "b24", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Haozhe Qi; Chen Feng; Zhiguo Cao; Feng Zhao; Yang Xiao", "journal": "", "ref_id": "b25", "title": "P2b: Point-to-box network for 3d object tracking in point clouds", "year": "2020" }, { "authors": "Jiayao Shan; Sifan Zhou; Zheng Fang; Yubo Cui", "journal": "IEEE", "ref_id": "b26", "title": "Ptt: Point-track-transformer 
module for 3d single object tracking in point clouds", "year": "2021" }, { "authors": "Zhoutao Wang; Qian Xie; Yu-Kun Lai; Jing Wu; Kun Long; Jun Wang", "journal": "", "ref_id": "b27", "title": "Mlvsnet: Multi-level voting siamese network for 3d visual tracking", "year": "2021" }, { "authors": "Sanghyun Woo; Jongchan Park; Joon-Young Lee; In So Kweon", "journal": "", "ref_id": "b28", "title": "Cbam: Convolutional block attention module", "year": "2018" }, { "authors": "Fei Xie; Chunyu Wang; Guangting Wang; Yue Cao; Wankou Yang; Wenjun Zeng", "journal": "", "ref_id": "b29", "title": "Correlation-aware deep tracking", "year": "2022" }, { "authors": "Yinda Xu; Zeyu Wang; Zuoxin Li; Ye Yuan; Gang Yu", "journal": "", "ref_id": "b30", "title": "Siamfc++: Towards robust and accurate visual tracking with target estimation guidelines", "year": "2020" }, { "authors": "Botao Ye; Hong Chang; Bingpeng Ma; Shiguang Shan; Xilin Chen", "journal": "Springer", "ref_id": "b31", "title": "Joint feature learning and relation modeling for tracking: A one-stream framework", "year": "2022" }, { "authors": "Jesus Zarzar; Silvio Giancola; Bernard Ghanem", "journal": "", "ref_id": "b32", "title": "Efficient bird eye view proposals for 3d siamese tracking", "year": "2019" }, { "authors": "Yifu Zhang; Chunyu Wang; Xinggang Wang; Wenyu Liu; Wenjun Zeng", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b33", "title": "Voxeltrack: Multi-person 3d human pose estimation and tracking in the wild", "year": "2022" }, { "authors": "Zhipeng Zhang; Houwen Peng", "journal": "", "ref_id": "b34", "title": "Deeper and wider siamese networks for real-time visual tracking", "year": "2019" }, { "authors": "Hengshuang Zhao; Li Jiang; Jiaya Jia; Vladlen Philip Hs Torr; Koltun", "journal": "", "ref_id": "b35", "title": "Point transformer", "year": "2021" }, { "authors": "Chaoda Zheng; Xu Yan; Jiantao Gao; Weibing Zhao; Wei Zhang; Zhen Li; Shuguang Cui", "journal": "", "ref_id": "b36", "title": "Box-aware feature enhancement for single object tracking on point clouds", "year": "2008" }, { "authors": "Chaoda Zheng; Xu Yan; Haiming Zhang; Baoyuan Wang; Shenghui Cheng; Shuguang Cui; Zhen Li", "journal": "", "ref_id": "b37", "title": "Beyond 3d siamese tracking: A motion-centric paradigm for 3d single object tracking in point clouds", "year": "2022" }, { "authors": "Changqing Zhou; Zhipeng Luo; Yueru Luo; Tianrui Liu; Liang Pan; Zhongang Cai; Haiyu Zhao; Shijian Lu", "journal": "", "ref_id": "b38", "title": "Pttr: Relational 3d point cloud object tracking with transformer", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 50.11, 668.33, 236.25, 33.56 ], "formula_id": "formula_0", "formula_text": "B 1 = (x 1 , y 1 , z 1 , w 1 , l 1 , h 1 , θ 1 ) of a target, our goal is to local- ize the target BBoxs {B i } T" }, { "formula_coordinates": [ 4, 132.73, 553.27, 70.28, 24.95 ], "formula_id": "formula_1", "formula_text": "xi = f 1 (x i ), xi = SA/CA(x i )," }, { "formula_coordinates": [ 4, 132.73, 568.99, 153.63, 25.21 ], "formula_id": "formula_2", "formula_text": "y i = f 2 (x i ) + x i . (1)" }, { "formula_coordinates": [ 4, 122.44, 628.75, 163.92, 39.61 ], "formula_id": "formula_3", "formula_text": "q i = f q (x i ), k i = f k (kNN(p i ), xi ), v i = f v (kNN(p i ), xi ) (2)" }, { "formula_coordinates": [ 4, 127.06, 703.88, 159.31, 10.75 ], "formula_id": "formula_4", "formula_text": "e ij = f e (p i -p j )(3)" }, { "formula_coordinates": [ 4, 319.29, 317.21, 225.82, 41.27 ], "formula_id": "formula_5", "formula_text": "xi = k j=1 σ( 1 √ k f a (q i -k ij + e ij )) (v ij + e ij ))(4)" }, { "formula_coordinates": [ 4, 389.32, 440.24, 151.92, 30.07 ], "formula_id": "formula_6", "formula_text": "y s i = f 2 (x s i ) + x s i , y t i = f 2 (x t i ) + x t i (5" }, { "formula_coordinates": [ 4, 541.24, 451.77, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 4, 387.82, 548.98, 157.3, 30.07 ], "formula_id": "formula_8", "formula_text": "y s i = f 2 (x st i ) + x s i , y t i = f 2 (x ts i ) + x t i .(6)" }, { "formula_coordinates": [ 5, 50.11, 324.02, 236.25, 22.89 ], "formula_id": "formula_9", "formula_text": "F k ∈ R N k ×C k , k ∈ {1, 2, 3}" }, { "formula_coordinates": [ 5, 112.97, 484.27, 173.39, 74.31 ], "formula_id": "formula_10", "formula_text": "W = x max -x min v x + 1, L = y max -y min v y + 1, H = z max -z min v z + 1(7)" }, { "formula_coordinates": [ 5, 367.18, 606.67, 174.06, 26.67 ], "formula_id": "formula_11", "formula_text": "F BEV = MaxPool z φ(F m ), F z = MaxPool xy φ(F m ) (8" }, { "formula_coordinates": [ 5, 541.24, 616.38, 3.87, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 6, 64.57, 214.82, 221.8, 12.69 ], "formula_id": "formula_13", "formula_text": "L = λ cls (L BEV cls + L z cls ) + λ reg (L BEV reg + L z reg ) (9)" }, { "formula_coordinates": [ 6, 134.92, 354.47, 147.3, 48.09 ], "formula_id": "formula_14", "formula_text": "c x = x -x min v x , c y = y -y min v y . (10" }, { "formula_coordinates": [ 6, 282.21, 375.02, 4.15, 8.64 ], "formula_id": "formula_15", "formula_text": ")" } ]
2024-02-26
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b1", "b21", "b38", "b48", "b36", "b11", "b13" ], "table_ref": [], "text": "A train runs between Rome and Viterbo. A bus runs between Rome and Viterbo. No, not exactly. The bus only runs when the train does not work. Something believed may turn wrong at later time. How to revise it is belief revision.\nThe first studies [AGM85,Gär88] deemed the factual beliefs like the train running and the bus running sufficient when facing new information like \"either the train or the bus do not run\". They are minimally changed to satisfy it. When more than one minimal change exists, they are all equally likely.\nNo actual agent revises like this. The train usually runs, except when it snows. Or not: the train line is in testing, and only runs occasionally. Beliefs are not sufficient. Their strength is necessary.\nThe strength of their combinations is necessary. That the train runs while the bus does not is more credible than the other way around, and both scenarios are more likely that the two services being both shut down. Beliefs are not independent to each other. Their combined strength may not be just the sum of their individual strength. The bus running is unlikely, but almost certain when the train does not work. The collection of this kind of information is called doxastic state. It tells how much each possible scenario is believed to be the case.\nThe simplest and most used form of doxastic state is a connected preorder between propositional models. Each model stands for a possible situation; the preorder says which is more believed than which.\nEven in the simplest setting, propositional logic, the models are exponentially many. Moreover, the doxastic state is not static: the revisions change it. Scenarios conflicting with new information decrease in credibility. Scenarios supported by new information increase. The problem of iterated belief revision is not only how to revise a doxastic state, but also how to store it in reasonable space. A list of comparisons \"this scenario is more credible that this other one\" is always exponentially longer than the number of individual beliefs. Exponential means intractable.\nNo actual agent holds an exponential amount of information. Artificial or otherwise: computers have limited memory; people do not memorize long lists easily.\nHow can a computer store a doxastic state? How do people remember a doxastic state? Not in the form of a list. Somehow else. In a short way. Maybe by some general rules, with exceptions.\nHow to store a doxastic state is not a new question. It showed up early in the history of iterated belief revision [Rya91, Wil92, Dix93, DW93, Wil95], resurfacing rarely from time to time [BDP99, BKPPP00, JT05, ZPM07, Rot09] until attracting interest recently [GK18, SMV19, SH19, SKLM20, Ara20, SB22, SKPP22]. In spite of the many studies that concentrate on how to change the doxastic state neglecting its size [HSvB + 21, SVM21, SKB22, KHB22, KSB23], some solutions already exist.\nBeside listing the strength of beliefs in each possible scenario one by one, the most common form of doxastic state is a list of logical formulae. The first is true in the most likely scenario. The second is true in the most likely among the remaining ones, and so on [Wil92, Dix93, Wil95, DW93, JT05, MGC02, Rot09, SKLM20].\nA common alternative is to not store the doxastic state itself but what creates it. The current beliefs come from what learned in the past. 
The past revisions make the doxastic state. Rather than strenuously compiling and storing the strength of belief in every single possible scenario, it is only computed when necessary from the sequence of previous revisions. The history is the doxastic state [Rya91, BDP99, BKPPP00, KPP00, RvdHM03, Zha04, HPD07, SKPP22].
Of the many ways of changing a doxastic state [Rot09], the two most studied ones are considered: lexicographic and natural revision [Spo88,Nay94,Bou96,CB23]. They complete the list of the four representations compared:
Explicit representation: the mathematical representation of a connected preorder: a set of pairs of models; each model describes a possible scenario, a pair expresses a stronger belief in the first than in the second; every doxastic state can be represented by such a set of pairs, which may however be very large;
Level representation: a sequence of formulae; the first describes the most strongly believed scenarios, the second describes the most strongly believed remaining ones, and so on; these sequences represent every doxastic state, and may do that in less space than the explicit representation;
Histories of natural revisions: they represent all doxastic states; they can be converted into the level representation and back without exponential growth;
Histories of lexicographic revisions: they represent all doxastic states, and do that in the most space-saving way among the four considered methods: the others can be converted into lexicographic histories with a limited increase of size, while the inverse translations may grow exponentially.
Figure 1 shows the existence of polynomially-bounded translations between the four considered representations.
The following is a summary of the article section by section. Section 2 formally defines the four considered representations of doxastic states and formalizes their equivalence. Section 3 shows the equivalence classes of the connected preorders resulting from natural and lexicographic revisions. This proves that the inductive definitions of the previous section match the usual definitions in terms of equivalence classes, and also proves that both representations are translatable into the level representation.
Section 4 shows that the considered representations are universal, and completes their space efficiency comparison. Lexicographic histories are strictly more compact than the level and natural-history representations, which are equally compact and strictly better than the explicit representation.
A comparison with the related literature follows in Section 5, and Section 6 discusses the results and possible future directions of study." }, { "figure_ref": [], "heading": "Definitions", "publication_ref": [ "b20" ], "table_ref": [], "text": "Doxastic states may take several forms, such as connected preorders, rankings and systems of spheres [FH11]. Connected preorders are studied in this article: an order between propositional models that is reflexive (I ≤ I), transitive (I ≤ J and J ≤ H entail I ≤ H) and connected (either I ≤ J or J ≤ I).
Epistemically, I ≤ J means that I is a scenario believed at least as much as J.
Mathematically, an order between propositional models is a set of pairs of models: it contains the pair I, J if I is less than or equal to J, that is, I ≤ J. 
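As a quick illustration of the explicit representation just described, the sketch below stores a connected preorder as a Python set of pairs of models and checks reflexivity, transitivity and connectedness; the two-variable alphabet and the helper names are illustrative only, not taken from the article.

```python
from itertools import product

VARIABLES = ("a", "b")
MODELS = list(product((0, 1), repeat=len(VARIABLES)))   # one tuple per model

def is_reflexive(order):
    return all((i, i) in order for i in MODELS)

def is_transitive(order):
    return all((i, h) in order
               for (i, j) in order for (j2, h) in order if j == j2)

def is_connected(order):
    return all((i, j) in order or (j, i) in order
               for i in MODELS for j in MODELS)

# The void doxastic state: every model is at least as believed as every other.
# Even for two variables this needs one pair per comparison, and the number of
# models grows exponentially with the number of variables.
void_state = {(i, j) for i in MODELS for j in MODELS}
print(is_reflexive(void_state), is_transitive(void_state), is_connected(void_state))
print(len(MODELS), "models,", len(void_state), "pairs")
```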
This explicit representation of an order may take space linear in the number of models, which is exponential in the number of variables in propositional logics.\nThe alternative representations of an ordering considered in this article are by a formula for equivalence class and by a sequence of lexicographic or natural revisions. They all comprise a sequence of formulae. This calls for a wording simplification, where a sequence is identified with the order it represents. For example, \"the lexicographic order [S 1 , . . . , S m ]\" is the order that results from the sequence of lexicographic revisions S m , . . . , S 1 starting from a void doxastic state, where models are all equally believed.\nThe same sequence has different meanings in different representations: the natural order [a ∨ b, ¬a] differs from the lexicographic order [a ∨ b, ¬a].\nIn the other way around, the same order is given by different sequences in different representation. For example, the lexicographic order [a, b] is the same as the level order [a ∧ b, a ∧ ¬b, ¬a ∧ b, ¬a ∧ ¬b]. Technically, they are equivalent: I ≤ J holds in the first if and if it holds in the second. Equivalence is equality of I ≤ J for all pair of models." }, { "figure_ref": [], "heading": "The explicit order", "publication_ref": [], "table_ref": [], "text": "The explicit order is the mathematical definition of an order between propositional models: a set of pairs of models. The set of all propositional models over the given alphabet is denoted by M.\nDefinition 1 The explicit order induced by S ⊆ M × M compares I ≤ S J if I, J ∈ S, where M is the set of all models.\nA connected preorder is reflexive, transitive and connected: reflexive: I, I ∈ S for every I ∈ M; transitive: I, J ∈ S and J, H ∈ S imply I, H ∈ S for every I, J, H ∈ M;\nconnected: either I, J ∈ S or J, I ∈ S for every I, J ∈ M.\nA connected preorder S ⊆ M × M is the same as a sequence of disjoint sets of models S = [S 1 , . . . , S m ], where every element S i is a set of models: S i ⊆ M and S i ∩ S j if i = j. The correspondence is:\n• I ≤ S J if and only if I ∈ M i , J ∈ M j and i ≤ j; • S i = {I ∈ M\\(S 1 ∪ • • • ∪ S i-1 ) | ∀J ∈ M\\(S 1 ∪ • • • ∪ S i-1 ) . I ≤ J}\nThe first set S 1 comprises all minimal models according to ≤. The second set S 2 comprises the minimal models except for S 1 . They are minimal among what remains: M\\S 1 ." }, { "figure_ref": [], "heading": "The level order", "publication_ref": [], "table_ref": [], "text": "The explicit order takes quadratic space in the number of models, which is exponential in the number of variables. Space can often be significantly reduced by turning every set M i into a propositional formula. This is the most used realistic representation of a doxastic state in iterated belief revision [Wil92, Dix93, Wil95, DW93, JT05, MGC02, Rot09, SKLM20].\nDefinition 2 (Level order) The level order induced by the sequence of formulae S = [S 1 , . . . , S m ] that are mutually inconsistent and whose disjunction is tautological compares I ≤ S J if i ≤ j with I |= S i and J |= S j .\nVariants lift the condition of mutual inconsistency or tautological disjunction or add the requirement of no single inconsistent formula. In the first, i and j are the minimal indexes of formulae satisfied by I and J. In the second, the definition is added \"or J does not satisfy any formula of S\". The third does not require any modification. These changes are inessential:\n1. the order [S 1 , . . . , S m ] is the same as [S 1 , S 2 ∧ ¬S 1 , . . . 
, S m ∧ ¬S m-1 ∧ • • • ∧ S 1 ], which comprises mutually inconsistent formulae; 2. the order [S 1 , . . . , S m ] is the same as [S 1 , . . . , S m , ¬S 1 ∨• • •∨¬S m ],\nwhose disjunction of formulae is tautological;\n3. the order [S 1 , . . . , S m ] is the same as the same sequence with all inconsistent formulae removed." }, { "figure_ref": [], "heading": "The lexicographic order", "publication_ref": [ "b48", "b36", "b31" ], "table_ref": [], "text": "The lexicographic order is what results from a sequence of lexicographic revisions [Spo88,Nay94] applied to a void doxastic state, where all models compare the same. A number of other iterated revision operators have been proved to be reducible to it [Lib23], making it a good candidate for representing arbitrary doxastic states. The first step in the definition of this order is the order induced by a single formula: believing a formula is the same as believing that every scenario where it is true is more likely than every scenario where it is false. Mathematically, its models are less than the others.\nDefinition 3 The order induced by a formula\nF compares I ≤ F J if either I |= F or J |= F .\nThe definition implies that I ≤ F J holds in exactly three cases:\n• I |= F and J |= F (strict order),\n• I |= F and J |= F (equivalence, first case), or\n• I |= F and J |= F (equivalence, second case).\nThe principle of the lexicographic order is that the last-coming formula makes the bulk of the ordering, separating its satisfying models from the others. The previous formulae matter only for ties. The following definition applies this principle to the condition I ≤ S J, where S = [S 1 , . . . , S m ] is a sequence of lexicographic revisions in reverse order: S m is the first, S 1 the last.\nDefinition 4 The lexicographic order induced by the sequence of formulae S = [S 1 , . . . , S m ] compares I ≤ S J if\n• either S = [] or • I ≤ S 1 J and either J ≤ S 1 I or I ≤ R J, where R = [S 2 , . . . , S m ].\nThe sequence S is identified with the order, giving the simplified wording \"the lexicographic order S\".\nThe lexicographic order I ≤ S J is equivalently defined in terms of the strict part and the equivalence relation of ≤ F :\n• either I < S 1 J, or • I ≡ S 1 J and I ≤ R J, where R = [S 2 , . . . , S m ]." }, { "figure_ref": [], "heading": "The natural order", "publication_ref": [ "b48", "b36", "b13" ], "table_ref": [], "text": "Like the lexicographic order is what results from a sequence of lexicographic revisions, every other revision gives a way to represent an ordering. One early and much studied such operator is the natural revision [Spo88,Nay94]. Along with lexicographic and restrained revision is one of the three elementary revision operators [CB23].\nThe founding principle of natural revision is to alter the doxastic state as little as possible to make the revising formula believed. A scenario becomes believed when it is one of the most believed scenario according to the formulae. The comparison is otherwise unchanged.\nDefinition 5 (Natural order) The natural order induced by the sequence of formulae S = [S 1 , . . . , S m ] compares I ≤ S J if either S = [] or:\n• I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ R K, or • I ≤ R J and either J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ R K, where R = [S 2 , . . . 
, S m ].\nThe simplified wording \"the natural order S\" stands for the order induced by the sequence S.\nBeing ≤ R a connected preorder, the recursive subcondition J ≤ R K is the same as K < R J.\nThis definition implements its justification: that the minimal models of the revising formula are made minimal while not changing the relative order among the others. The next section will prove it by expressing the natural order on equivalence classes." }, { "figure_ref": [], "heading": "Different sequences, same order", "publication_ref": [], "table_ref": [], "text": "The same order can be represented in different ways. The explicit order S may be the same as the level order R, the lexicographic orders Q and T and the natural order V . Same order, different representations or different sequences in the same representation. These sequences are equivalent on their induced order. Definition 6 (Equivalence) Two orders S and R are equivalent, denoted S ≡ R, if I ≤ S J and I ≤ R J coincide for all pairs of models I and J.\nThis definition allows writing \"the level order R is equivalent to the lexicographic order Q and to the lexicographic order T \", meaning that the three sequences R, Q and T represent the same order.\nSuch statements are used when comparing two different representations, like when proving that every natural order S is equivalent to a lexicographic order R.\nSometimes, non-equivalence is easier to handle than equivalence: two sequences S and R are not equivalent if I ≤ S J and I ≤ R J or the same with S and R swapped for some models I and J. The same conditions with I and J swapped is not necessary because I and J are arbitrary." }, { "figure_ref": [], "heading": "Classes", "publication_ref": [ "b38" ], "table_ref": [], "text": "Four representations of doxastic states are given in the previous section: explicit, level, lexicographic and natural. They are all defined in terms of whether I ≤ J holds or not.\nIterated belief revision is often defined in terms of how they change the doxastic state expressed in terms of its equivalence classes [Rot09]. For example, natural revision moves the models of the first class having models consistent with the new information to a new, first equivalence class.\nA sequence of equivalence classes is the same as the level order. These definitions say how lexicographic and natural revision change a level order. This section does that for the definitions in the previous section. It shows how to translate lexicographic and natural orders into level orders. This also proves that the definitions of the lexicographic and natural revisions match the definitions in terms of equivalence classes from the literature.\nThe proof scheme is:\n• the natural order [] is equivalent to the level order [true] since I ≤ J holds for all models in both;\n• the two orders are kept equivalent while adding formulae at the front of the natural order; this requires:\n-showing the order resulting from adding a single formula in front of the natural order;\n-expressing that order in the level representation.\nThe lexicographic representation is treated similarly. Details are in Appendix A." }, { "figure_ref": [], "heading": "From natural to level orders", "publication_ref": [], "table_ref": [], "text": "The reduction from natural orders to level orders follows the scheme outlined above: the base case is a correspondence between the level order [true] and the natural order []; the induction case maintains the correspondence while adding a formula at time to the natural order. 
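Since equivalence of two representations simply means agreeing on I ≤ J for every pair of models (Definition 6), the reductions in this and the following sections can be sanity-checked by brute force on small alphabets. The sketch below implements the three comparisons inductively; encoding formulae as Python predicates and the helper names are illustrative choices, not the article's notation.

```python
from itertools import product

MODELS = list(product((0, 1), repeat=2))      # models over the variables (a, b)

def leq_level(seq, i, j):
    """Level order: compare the indexes of the first satisfied formulae."""
    def cls(m):
        return next((k for k, f in enumerate(seq) if f(m)), len(seq))
    return cls(i) <= cls(j)

def leq_lex(seq, i, j):
    """Lexicographic order: the most recent revision (first formula) dominates."""
    if not seq:
        return True
    if seq[0](i) != seq[0](j):
        return seq[0](i)                      # strict comparison on the first formula
    return leq_lex(seq[1:], i, j)             # tie: defer to the older revisions

def leq_nat(seq, i, j, models=MODELS):
    """Natural order: the minimal models of the newest formula become the minimum."""
    if not seq:
        return True
    first, rest = seq[0], seq[1:]
    sat = [m for m in models if first(m)]
    minimal = [m for m in sat if all(leq_nat(rest, m, k, models) for k in sat)]
    return i in minimal or (j not in minimal and leq_nat(rest, i, j, models))

def equivalent(cmp1, cmp2):
    """Definition 6: two orders coincide on every pair of models."""
    return all(cmp1(i, j) == cmp2(i, j) for i in MODELS for j in MODELS)

a_or_b = lambda m: bool(m[0] or m[1])
not_a = lambda m: not m[0]
history = [a_or_b, not_a]                     # a ∨ b is the most recent revision
print(equivalent(lambda i, j: leq_lex(history, i, j),
                 lambda i, j: leq_nat(history, i, j)))   # False: the orders differ
```

Running it on the example from Section 2 confirms that the natural and lexicographic orders [a ∨ b, ¬a] differ.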
The first proof step shows the correspondence for the natural order []." }, { "figure_ref": [], "heading": "Lemma 1 The natural order [] is equivalent to the level order [true].", "publication_ref": [ "b38" ], "table_ref": [], "text": "The induction step starts from the equivalence of a natural and a level order and maintains the equivalence while adding a formula at time at the front of the natural order, which is the same as a single natural revision.\nThis requires a property of natural orders: the models of the first formula that is consistent with a new formula are also the minimal models of the new formula according to the order.\nLemma 2 If Q c is the first formula of the level order Q that is consistent with the formula S 1 , then I |= S 1 ∧Q c is equivalent to I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ Q K.\nThe following lemma shows how a single natural revision by a formula S 1 changes a level order Q. Technically, \"a single natural revision\" is formalized as an addition to the front of a natural order. Since a natural order is a sequence of natural revisions, revising [S 2 , . . . , S m ] makes it S = [S 1 , S 2 , . . . , S m ]. The lemma expresses I ≤ S J. The expression is in terms of a natural order [S 2 , . . . , S m ] equivalent to a level order Q. In other words, it tells I ≤ S J where S is equivalent to naturally revising Q by a formula S 1 .\nLemma 3 If S = [S 1 , S 2 , . . . , S m ] is a natural order, Q is a level order equivalent to the natural order [S 2 , . . . , S m ] and Q c is the first formula of Q that is consistent with S 1 , then I ≤ S J is: • true if I |= S 1 ∧ Q c ; • false if I |= S 1 ∧ Q c and J |= S 1 ∧ Q c ; • same as I ≤ Q J otherwise.\nThe plan of the proof is to start from the equivalent natural order [] and level order [true] and to keep adding a single formula at time to the front of the first while keeping the second equivalent to it. This requires the level order that results from applying a natural revision. The previous lemma shows the order in terms of a condition equivalent to I ≤ J. The following shows this order in the level representation.\nLemma 4 If the natural order [S 2 , . . . , S m ] is equivalent to the level order Q = [Q 1 , . . . , Q k ], then the natural order S = [S 1 , S 2 , . . . , S m ] is equivalent to the level order R = [R 1 , R 2 , . . . , R k+1 ], where R 1 = S 1 ∧Q c , R i = ¬R 1 ∧Q i-1 for every i > 1 and Q c is the first formula of Q such that S 1 ∧Q c is consistent.\nThe two requirements for induction are met: the base step by Lemma 1 and the induction step by Lemma 4. Starting from the natural order [] and the level order [true] and adding S m , then S m-1 , and continuing until S 1 to the first results in a level order equivalent to [S 1 , . . . , S m-1 , S m ]. This is enough for translating a natural order into a level order of comparable size. Yet, it is not the way natural revision is normally expressed in terms of equivalence class. That would prove that the definition of natural revision of the previous section matches that commonly given. The following theorem provides that.\nTheorem 1 If the natural order [S 2 , . . . , S m ] is equivalent to the level order Q = [Q 1 , . . . , Q k ], then the natural order S = [S 1 , S 2 , . . . , S m ] is equivalent to the following level order R, where Q c is the first formula of Q that is consistent with S 1 . R = [S 1 ∧ Q c , Q 1 , . . . , Q c-1 , ¬S 1 ∧ Q c , Q c+1 , . . . , Q k ] The level order R = [S 1 ∧ Q c , Q 1 , . . . , Q c-1 , ¬S 1 ∧ Q c , Q c+1 , . . . 
, Q k ] expresses the natural revision of a level order [Q 1 , . . . , Q k ]\n. This is the same as naturally revising an order given as its sequence of equivalence classes [Rot09].\nThe first resulting equivalence class comprises some of the models of the revising formula S 1 . Namely, they are the ones in the first class that contains some models of S 1 . All subsequent classes comprise the remaining models in the same relative order as before. This correspondence of definitions was a short detour in the path from expressing a natural order by a level order. The proof was already in place, since both the base and the induction steps are proved.\nTheorem 2 Every natural order is equivalent to a level order of size bounded by a polynomial in the size of the natural order." }, { "figure_ref": [], "heading": "From lexicographic to level orders", "publication_ref": [], "table_ref": [], "text": "The reduction from lexicographic to level orders follows the same scheme as that of natural orders: base case and induction case. Only the reduction is shown, details are in Appendix A.\nThe base case proves the level order [true] equivalent to the empty lexicographic order [].\nThe induction step changes the level order to maintain the equivalence when adding a formula to the lexicographic order. Namely, prefixing a formula S 1 to the lexicographic order [S 2 , . . . , S m ] is the same as turning the corresponding level ordering\n[Q 2 , . . . , Q k ] into [S 1 ∧ Q 2 , . . . , S 1 ∧ Q k , ¬S 1 ∧ Q 2 , . . . , ¬S 1 ∧ Q k ].\nThis proves that every lexicographic order is equivalent to some level order.\nTheorem 3 Every lexicographic order is equivalent to a level order.\nThe theorem proves that every lexicographic order can be translated into a level order, but neglects size. It does not say that the level order is polynomial in the size of the lexicographic order. As a matter of facts, it is not. Some lexicographic orders explode into exponentially larger level orders. The next section proves this." }, { "figure_ref": [], "heading": "Comparison", "publication_ref": [], "table_ref": [], "text": "Which representations are able to represent all doxastic states? Which do it shortly?" }, { "figure_ref": [], "heading": "Expressivity", "publication_ref": [ "b22", "b46" ], "table_ref": [], "text": "In propositional logic on a finite alphabet, all considered four representations are universal [GK18,SKPP22]: each represents all connected preorders. The explicit representation is actually just the mathematical formalization of a connected preorder. Every connected preorder is representable by definition. The explicit representation is universal.\nTheorem 4 Every connected preorder is the level ordering of a sequence of mutually inconsistent formulae.\nThe natural and lexicographic representations are proved universal indirectly: the level representation is translated into each of them. Since the level representation is universal, these are as well. These two translations are in the next section." }, { "figure_ref": [], "heading": "Compactness", "publication_ref": [ "b31" ], "table_ref": [], "text": "The translations from natural and lexicographic orders to level orders are in the previous section. A translation from natural to lexicographic orders is in a previous article [Lib23], which however neglects size.\nSince natural and lexicographic orders are defined inductively, an inductive definition of level orders facilitates the translations." 
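As a sketch of the construction behind Theorems 1 and 2, the snippet below represents each formula extensionally, as the set of models satisfying it, and folds a history of natural revisions (most recent first) into an equivalent level order one revision at a time; the encoding and the names are illustrative, not the article's notation.

```python
def natural_to_level(history, models):
    """Fold a history of natural revisions (most recent first) into an
    equivalent level order, one set of models per class, as in Theorem 1."""
    level = [frozenset(models)]                    # void doxastic state: one class
    for s in reversed(history):                    # apply the oldest revision first
        if not s:                                  # skip inconsistent formulae
            continue
        c = next(k for k, q in enumerate(level) if q & s)   # first class meeting s
        rest = level[:c] + [level[c] - s] + level[c + 1:]
        level = [level[c] & s] + [q for q in rest if q]     # drop emptied classes
    return level

MODELS = [(a, b) for a in (0, 1) for b in (0, 1)]
S1 = frozenset(m for m in MODELS if m[0] or m[1])  # a or b (most recent revision)
S2 = frozenset(m for m in MODELS if not m[0])      # not a
for cls in natural_to_level([S1, S2], MODELS):
    print(sorted(cls))   # [(0, 1)] then [(0, 0)] then [(1, 0), (1, 1)]
```

The printed classes are the doxastic state reached by revising the void state naturally by ¬a and then by a ∨ b.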
}, { "figure_ref": [], "heading": "Lemma 5 It holds I ≤ S J holds if and only if the following condition holds,", "publication_ref": [], "table_ref": [], "text": "where S is a level order, S 1 is its first formula and R the sequence of the following ones. S = [] or I ∈ Mod(S 1 ) or (J ∈ Mod(S 1 ) and I ≤ R J)" }, { "figure_ref": [], "heading": "From level to natural orders", "publication_ref": [], "table_ref": [], "text": "Level orders translate into natural orders in polynomial time and space: every level order of a sequence of mutually inconsistent formulae is the natural order of the same sequence.\nThe proof comprises two steps. The first is a technical result: the models that falsify all formulae of a natural order are greater than or equal to every other model. This is the case because the formulae of a natural order state a belief in the truth of their models. The falsifying models are unsupported, and therefore unbelieved.\nThe second step of the proof is an inductive expression of the natural order I ≤ S J: it holds if S is either empty, or the first formula of S supports I, it denies J, or I is less than or equal to J according to the rest of S. This expression is the same as the level order of the same sequence.\nTheorem 5 Level orders translate into natural orders in polynomial time and space." }, { "figure_ref": [], "heading": "From level to lexicographic orders", "publication_ref": [ "b31" ], "table_ref": [], "text": "That level orders translate to lexicographic orders is a consequence of the translation from level to natural orders shown above and the translation from natural to lexicographic orders proved in a previous article [Lib23]. Yet, the latter translation is not polynomial in time. What shown next is one that is.\nThe translation is the identity: every level order of a sequence of mutually inconsistent formulae is the lexicographic order of the same sequence. This is proved by showing that the lexicographic order I ≤ S J holds if and only if either S is empty, its first formula makes I true, or it makes J false or the rest of the order compares it greater than or equal to I. This is the same as the expression of level orders proved by Lemma 5.\nTheorem 6 Level orders translate into lexicographic orders in polynomial time and space." }, { "figure_ref": [], "heading": "From natural to lexicographic orders", "publication_ref": [], "table_ref": [], "text": "This translation follows from two previous results: Theorem 2 shows a polynomial translation from natural to level orders; Corollary 5 show the same from level to lexicographic orders.\nTheorem 7 Natural orders translate into lexicographic orders in polynomial space." }, { "figure_ref": [], "heading": "From lexicographic to level and natural orders", "publication_ref": [], "table_ref": [], "text": "All three representations are universal: they express all connected preorders. Therefore, they translate to each other. Whether they do in polynomial time or space is another story. What proved next is that not only polynomiality is unattainable in time, but also in space: some lexicographic orders are only equivalent to exponentially long natural orders.\nThe troublesome lexicographic orders are not even complicated: [x 1 , . . . , x n ] is an example. The equivalence classes of this lexicographic order contain one model each. Therefore, they are exponentially many. The equivalence classes of level and natural orders are bounded by their number of formulae. 
Exponentially many classes equal exponentially many formulae.\nThe proof comprises two parts: the classes of lexicographic orders may be exponentially many; they never are for level and natural orders. Level orders are first." }, { "figure_ref": [], "heading": "Lemma 6", "publication_ref": [], "table_ref": [], "text": "The level order of a sequence of m formulae has at most m + 1 equivalence classes.\nCombining this result with the translation of Theorem 1 proves the same statement for natural orders.\nLemma 7 The natural order of a sequence of m formulae has at most m + 1 equivalence classes.\nThe second part of the proof is that the lexicographic order [x 1 , . . . , x n ] comprises 2 n equivalence classes. A preliminary lemma is necessary.\nLemma 8 The lexicographic comparisons I ≤ S J and J ≤ S I hold at the same time only if I ≤ S k J and J ≤ S k I both hold for every formula S k of S.\nThe number of equivalence classes of [x 1 , . . . , x n ] can now be proved." }, { "figure_ref": [], "heading": "Lemma 9", "publication_ref": [], "table_ref": [], "text": "The lexicographic order S = [x 1 , . . . , x n ] has 2 m equivalence classes.\nThis result negates translations from lexicographic orders to level and natural orders of polynomial size.\nTheorem 8 The lexicographic order [x 1 , . . . , x n ] is only equivalent to level and natural orders comprising at most 2 n -1 formulae." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b41", "b52", "b17", "b53", "b18", "b9", "b46", "b40", "b23", "b47", "b32", "b3", "b0", "b46" ], "table_ref": [], "text": "Most work in the iterated belief revision literature are purely semantical, but computational aspects are not neglected. An early example is the work by Ryan [Rya91], who wrote: \"Belief states are represented as deductively closed theories. This means that they are (in general) impossible to write down fully, or to store on a computer\"; he employed a partial order between a finite number of formulae to represent the doxastic state. Williams [Wil92] and Dixon [Dix93] represented doxastic states by ordered partitions of formulae. Williams [Wil95] later introduced partial entrenchment rankings, functions from a set of formulae to integers. Dixon and Wobcke [DW93] observed: \"it is not possible to represent all entrenchments directly: some entrenchments allow infinitely many degrees of strength of beliefs. Moreover, it is impossible for the user of a system to enter all entrenchment relations: a more compact representation must be found\"; their solution is to allow for a partial specification, an ordered partition of formulae.\nComputational issues are kept into account rarely [BDP99, BKPPP00, JT05, ZPM07, Rot09], but recently attracted interest [GK18, SMV19, SH19, SKLM20, Ara20, SB22, SKPP22]. Most solutions employ structures equivalent or analogous to level orders [Wil92, Dix93, Wil95, DW93, JT05, MGC02, Rot09, SKLM20], lexicographic orders [Rya91, BKPPP00, Zha04], or histories of revisions [BDP99,SKPP22]; the history of revisions may also be necessary for semantical, rather than computational, reasons [KPP00, RvdHM03,HPD07]. Some other solutions change or extend these three solutions, and some others steer away from them. Souza, Moreira and Vieira [SMV19] employ priority graphs [Liu11], strict partial orders over a set of formulae. 
Aravanis [Ara20] follow Areces and Becher [AB01] in their semantics based on a fixed ordering on the models.\nSchwind, Konieczny and Pino Pérez [SKPP22] introduced a concept of doxastic state equivalence: \"two epistemic states are strongly equivalent according to a revision operator if they cannot be distinguished from each other by any such successive revision steps, which means that these epistemic states have the same behavior for that revision operator\". This abstract definition generalizes Definition 6 to representations that are not connected preorders among propositional models." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [ "b5", "b19", "b16", "b6", "b7", "b34" ], "table_ref": [], "text": "How large a doxastic state is? It depends on how it is stored. Four ways are considered: explicit, by a list of formulae expressing equivalence classes and by a history of revisions, either lexicographic or natural.\nTo compare different representation, they are defined inductively. The definitions for the history of revisions are shown equivalent to the definitions based on equivalence classes in the literature, while showing at the same time how they can be converted into the representation by equivalence classes.\nThe comparison is completed by investigating the other reductions, both their existence and their compactness, their ability to store doxastic states in little space. All four representations are universal: each represents all possible connected preorders on propositional models. They radically differ on compactness. The explicit representation is the more wasteful: it is always exponential in the number of state variables, unlike the others. The representation by equivalence classes and by natural revisions are more compact, and equally so. The most compact of the four representations is that by lexicographic revisions. The other three representations can always be converted into it with a polynomial increase in size, while the converse reduction may produce exponentially large results.\nInvestigation can proceed in many directions. Compactness does not only matter when comparing different representations. It also matters within the same. The same doxastic state has multiple lexicographic representations, for example. Some are short, some are long. The question is whether one can be made shorter. This problem is similar to Boolean formulae minimization [McC56, RSV87, Cou94, TNW96, CS02, UVSV06]. A related question is how to shrink a connected preorder below a certain size while minimizing the loss of information. A subcase of interest is revision redundancy, whether a revision can be removed from a history without changing the resulting doxastic state.\nThe level, lexicographic and natural representations of the doxastic states are the most common in iterated belief revisions, but others are possible. An example is a single formula over an alphabet Y ∪ Z that is true on a model I if and only if I[X/Y ] is less than or equal than I[X/Z]. Other representations of the doxastic state have been proposed [APMW18, ARS02, GK18, SMV19], such as prioritized bases [Bre89, Neb91, BCD + 93], weighted knowledge bases [BBB14,EBBD23] and conditionals [Kut19, ARS02, SKB22]. The preference reasoning field offers many alternatives [DHKP11]. An order among models may not suffice [BC17,BC20]. Similar representation issues also arise in belief merging [MDPP23]." 
}, { "figure_ref": [], "heading": "A Proofs Lemma 1 The natural order [] is equivalent to the level order [true].", "publication_ref": [], "table_ref": [], "text": "Proof. The definition of the level order is satisfied by every pair of models I, J since both models satisfy the first formula of the level order [true]; as a result, both i and j are equal to 1, and they therefore satisfy i ≤ j.\nThe definition of the natural order is satisfied because of its first part S = [].\nLemma 2 If Q c is the first formula of the level order Q that is consistent with the formula S 1 , then I |= S 1 ∧Q c is equivalent to I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ Q K.\nProof. The two cases are considered in turn: either I |= S 1 ∧ Q c holds or it does not.\n• I |= S 1 ∧ Q c\nThe claim is made of two parts: I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ Q K.\nThe first part I ∈ Mod(S 1 ) holds because\nI satisfies S 1 ∧ Q c .\nThe second part ∀K ∈ Mod(S 1 ).I ≤ Q K is now proved.\nLet Q i be the only formula of Q such that I |= Q i . The assumption\nI |= Q c implies c = i.\nLet K be an arbitrary model of S 1 . The claim is I ≤ Q K.\nOnly one formula S k is satisfied by K. Since Q satisfies S 1 and Q k , it also satisfies\nS 1 ∧ Q k . As a result, S 1 ∧ Q k is consistent.\nSince Q c is the first formula of S that is consistent with S 1 , and Q k is also consistent with S 1 , it holds c ≤ k. The equality of c and i proves i ≤ k, which defines I ≤ Q K.\n• I |= S 1 ∧ Q c\nThe claim is that either I ∈ Mod(S 1 ) or ∀K ∈ Mod(S 1 ).I ≤ Q K is false.\nIf I |= S 1 the first part of this condition is falsified and the claim is therefore proved. It remains to be proved when I |= S 1 .\nLet Q i be the only formula satisfied by I. Since I satisfies S 1 , it also satisfies S 1 ∧ Q i . This formula is therefore satisfiable. Since Q c is the first formula of the sequence that is consistent with S 1 , the index c is less than or equal than the index i. If c were equal to i, then S 1 ∧ Q c would be S 1 ∧ Q i . Yet, I is proved to satisfy the latter and assumed not to satisfy the former. The conclusion is c < i.\nSince Q c is by assumption the first formula of Q that is consistent with S 1 , the conjunction S 1 ∧ Q c is consistent. Let K be one of its models. Let Q k be the only formula of Q that K satisfies. Since K satisfies S 1 ∧ Q c , it also satisfies Q c . As a result, k coincides with c.\nThe conclusions of the last two paragraphs c < i and k = c imply k < i. This is the opposite of i ≤ k, which defined I ≤ Q K. The conclusion is that a model K of S 1 that falsifies I ≤ Q K exists. This proves the falsity of ∀K ∈ Mod(S 1 ).I ≤ Q K, as the claim requires.\nLemma 3 If S = [S 1 , S 2 , . . . , S m ] is a natural order, Q is a level order equivalent to the natural order [S 2 , . . . , S m ] and Q c is the first formula of Q that is consistent with S 1 , then I ≤ S J is:\n• true if I |= S 1 ∧ Q c ; • false if I |= S 1 ∧ Q c and J |= S 1 ∧ Q c ; • same as I ≤ Q J otherwise.\nProof. The definition of natural order is that I ≤ S J holds if and only if:\nS = [] or (I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ R K) or I ≤ R J and (J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ R K))\nSince the statement of the lemma predicates about a first formula S 1 of S, whose existence falsifies the first part of this definition. The statement also assumes that a formula of Q is consistent with S 1 , which proves its satisfiability. The statement also assumes that ≤ R is the same as ≤ Q . 
The definition of natural order therefore becomes: (I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ Q K) or (I ≤ Q J and (J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ Q K))\nThe three cases are considered in turn.\nI |= S 1 ∧ Q c Lemma 2 proves that I |= S 1 ∧Q c implies I ∈ Mod(S 1\n) and ∀K ∈ Mod(S 1 ).I ≤ Q K. This is the first part of the rewritten definition of I ≤ S J, which therefore holds.\nI |= S 1 ∧ Q c e J |= S 1 ∧ Q c Lemma 2 proves that I |= S 1 ∧ Q c implies that I ∈ Mod(S 1\n) and ∀K ∈ Mod(S 1 ).I ≤ Q K is false. This is the first part of the rewritten definition of I ≤ S J, which is therefore equivalent to its second part:\nI ≤ Q J and (J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ Q K)\nLemma 2 also applies to J |= S 1 ∧ Q c with J in place of I. It proves J ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).J ≤ Q K. Its negation is therefore false: J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ Q K. This is the second part of the rewritten definition of I ≤ S J, which is therefore false, as the conclusion of the lemma requires in this case.\nI |= S 1 ∧ Q c e J |= S 1 ∧ Q c\nAs proved above, the assumption I |= S 1 transforms the definition of I ≤ S J into I ≤ Q J and (J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ Q K).\nLemma 2 applies to J |= S 1 ∧Q c with J in place of I. It proves that J |= S 1 ∧Q c implies the falsity of J ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).J ≤ Q K. This condition is the second part of the rewritten definition of I ≤ S J, which is therefore equivalent to the first part I ≤ Q J, as the conclusion of the lemma requires in this case.\nLemma 4 If the natural order [S 2 , . . . , S m ] is equivalent to the level order\nQ = [Q 1 , . . . , Q k ], then the natural order S = [S 1 , S 2 , . . . , S m ] is equivalent to the level order R = [R 1 , R 2 , . . . , R k+1 ], where R 1 = S 1 ∧Q c , R i = ¬R 1 ∧Q i-1 for every i > 1 and Q c is the first formula of Q such that S 1 ∧Q c is consistent.\nProof. The sequence R is proved to be a level order and the disjunction of all its formulae to be tautologic. The sequence Q has the same properties by assumption. Each formula R i with i > 1 is ¬R 1 ∧ Q i . Its models are the models of Q i minus some models. Since Q i and Q j do not share models, R i and R j do not either. Since the models subtracted from each Q i when forming R i are moved to R 1 , which is also in R, the union of the models of R is exactly the union of the models of Q, the set of all models. Lemma 3 proves that I ≤ S J is:\n• true if I |= S 1 ∧ Q c ; • false if I |= S 1 ∧ Q c and J |= S 1 ∧ Q c ; • same as I ≤ Q J otherwise.\nThe proof shows that ≤ R has the same values in the same cases.\nI |= S 1 ∧ Q c The first formula of R is R 1 = S 1 ∧ Q c by definition. Since I satisfies S 1 ∧ Q c by assumption, it satisfies R 1 . This proves that I |= R i with i = 1.\nLet R j be the formula of R such that J |= R j . Since indexes start at 1, it holds 1 ≤ j. The equality i = 1 proves i ≤ j. The claim I ≤ R J follows.\nI |= S 1 ∧ Q c and J |= S 1 ∧ Q c\nAs in the previous case,\nJ |= S 1 ∧ Q c implies J |= R j with j = 1.\nLet R i be the formula of R satisfies by I. If i were 1, then I would satisfy R 1 , which is S 1 ∧ Q c . Since I does not satisfy this formula by assumption, i is not 1. Since indexes start at one, i is strictly greater than one: i > 1.\nThe conclusions j = 1 and i > 1 prove i > j, which is the exact opposite of i ≤ j. The claimed falsity of I ≤ R J is therefore proved.\nI |= S 1 ∧ Q c and J |= S 1 ∧ Q c\nLet R i and R j be the formulae respectively satisfied by I and J. 
Since neither model satisfies R 1 = S 1 ∧ Q c by assumption, both i and j are strictly greater than one: i > 1 and j > 1.\nThe formulae R i and R j for indexes greater than one are respectively defined as ¬R 1 ∧ Q i-1 and ¬R 1 ∧ Q j-1 . Since I and J respectively satisfy them, they respectively satisfy Q i-1 and Q j-1 . The level order I ≤ Q J is therefore equivalent to i -1 ≤ j -1, which is the same as i ≤ j. This is also the definition of I ≤ S J.\nThis proves the claimed equality of I ≤ S J and I ≤ Q J.\nTheorem 1 If the natural order [S 2 , . . . , S m ] is equivalent to the level order\nQ = [Q 1 , . . . , Q k ],\nthen the natural order S = [S 1 , S 2 , . . . , S m ] is equivalent to the following level order R, where\nQ c is the first formula of Q that is consistent with S 1 . R = [S 1 ∧ Q c , Q 1 , . . . , Q c-1 , ¬S 1 ∧ Q c , Q c+1 , . . . , Q k ]\nProof. Lemma 4 proves that the natural order S is equivalent to the level order\nR = [R 1 , R 2 , . . . , R k+1 ], where R 1 = S 1 ∧ Q c and R i = ¬R 1 ∧ Q i-1 for every i > 1.\nThis is the same sequence as in the statement of the lemma. The first formula is the same in both sequences: S 1 ∧ Q c . The formula R i for i = c + 1 in the sequence of the lemma is also the same as the formula in the sequence of the theorem. Since\nR 1 is S 1 ∧ Q c , the formula R i = ¬R 1 ∧ Q i-1 for i = c + 1 in the sequence of the lemma is the same as ¬(S 1 ∧ Q c ) ∧ Q c , which is equivalent to (¬S 1 ∨ ¬Q c ) ∧ Q c , in turn equivalent to (¬S 1 ∧ Q c ) ∨ (¬Q c ∧ Q c ) and to ¬S 1 ∧ Q c ,\nwhich is the formula in the sequence of the theorem.\nThe formulae R i with i strictly greater than one and different from c + 1 are\nR i = ¬R 1 ∧ Q i-1 in the sequence of the lemma. Since R 1 is S 1 ∧ Q c , this formula R i = ¬R 1 ∧ Q i-1 is the same as R i = ¬(S 1 ∧ Q c ) ∧ Q i-1 , which is equivalent to (¬S 1 ∧ Q i-1 ) ∨ (¬Q c ∨ Q i-1 ). The two formulae Q c and Q i-1 are mutually inconsistent since i is not equal to c + 1. As a result, Q i-1 implies ¬Q c . This proves ¬Q c ∨ Q i-1 equivalent to Q i-1 . Therefore, R i in the sequence of the lemma is equivalent to (¬S 1 ∧ Q i-1 ) ∨ Q i-1 , which is equivalent to Q i-1 ,\nas in the sequence of the theorem.\nTheorem 2 Every natural order is equivalent to a level order of size bounded by a polynomial in the size of the natural order.\nProof. How to translate a natural order S into a level order R is shown by induction.\nThe base is case is S = [], which translates into R = [true] by Lemma 1.\nThe induction case requires a way to translate a natural order comprising at least a formula into a level order. Let S = [S 1 , S 2 , . . . , S m ] be the natural order. By the inductive assumption, [S 2 , . . . , S m ] translates into a level order Q. Theorem 1 proves S equivalent to the level order R:\nR = [S 1 ∧ Q c , Q 1 , . . . , Q c-1 , ¬S 1 ∧ Q c , Q c+1 , . . . , Q k ]\nThis order is larger than R only by 2 × |S 1 |. This inductively proves that the translation is possible and produces a sequence that is at most twice the size of the original.\nThe following lemmas and theorems show that every lexicographic order is equivalent to a level order. The proof scheme is the same as that of natural orders: base case and induction case. The base case is that the level order [true] is equivalent to the empty lexicographic order []. The induction case adds a single formula at the front of the lexicographic sequence and shows how it changes the corresponding level order." 
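The induction step just described (stated formally as Lemma 12 below) is easy to run mechanically when formulae are represented extensionally as sets of models. The sketch below folds a lexicographic history (most recent first) into an equivalent level order and reproduces the example from Section 2, where the lexicographic order [a, b] equals the level order [a ∧ b, a ∧ ¬b, ¬a ∧ b, ¬a ∧ ¬b]; the names and the encoding are illustrative.

```python
def lex_to_level(history, models):
    """Fold a lexicographic history (most recent first) into an equivalent
    level order: each added formula splits every existing class, with the
    satisfying halves placed first."""
    level = [frozenset(models)]                   # base case: the void state
    for s in reversed(history):                   # fold in the oldest revision first
        level = [q & s for q in level] + [q - s for q in level]
        level = [q for q in level if q]           # drop inconsistent (empty) classes
    return level

MODELS = [(a, b) for a in (0, 1) for b in (0, 1)]
A = frozenset(m for m in MODELS if m[0])          # formula a
B = frozenset(m for m in MODELS if m[1])          # formula b
for cls in lex_to_level([A, B], MODELS):
    print(sorted(cls))   # [(1, 1)], [(1, 0)], [(0, 1)], [(0, 0)]: a ∧ b comes first
```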
}, { "figure_ref": [], "heading": "Lemma 10 The lexicographic order [] is equivalent to the level order [true].", "publication_ref": [], "table_ref": [], "text": "Proof. The two definitions are: level order: i ≤ j where I |= S i and J |= S j lexicographic order:\nS = [] or (I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J))\nThe first definition is satisfied by every pair of models I, J since both models satisfy the first formula of the level order [true]; as a result, both i and j are equal to 1, and they therefore satisfy i ≤ j.\nThe second definition is satisfied because of its first part S = [].\nThe induction step changes the level order to keep it equivalent to the lexicographic order while adding a formula at time to the latter.\nLemma 11 If S = [S 1 , S 2 , . . . , S m ] is a lexicographic order and Q is a level order equivalent to the lexicographic order [S 2 , . . . , S m ], then I ≤ S J is:\n• true if I |= S 1 and J |= S 1 • false if I |= S 1 and J |= S 1 • same as I ≤ Q J otherwise\nProof. The definition of the lexicographic order I ≤ S J is:\nS = [] or (I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J))\nThe lemma implicitly assumes S not empty. What results from removing the case S = [] is:\nI ≤ S 1 J and (J ≤ S 1 I or I ≤ R J)\nThe comparison I ≤ S J is evaluated in the four cases where I or J satisfy S 1 or not.\nI |= S 1 and J |= S 1 The definition of I ≤ S 1 J is I |= S 1 or J |= S 1 , and is therefore satisfied. This is the first part of I ≤ S J.\nSimilarly, J ≤ S 1 I is J |= S 1 or I |= S 1 . Both are false. Therefore, J ≤ S 1 I is false. Its negation J ≤ S 1 I is true. The second part of I ≤ S J is therefore true, being J ≤ S 1 I or I ≤ R J I |= S 1 and J |= S 1 The definition of I ≤ S 1 J is I |= S 1 or J |= S 1 ; both conditions are false. Since I ≤ S 1 J is false, its conjunction with J ≤ S 1 I or I ≤ R J is also false. Since this conjunction is equivalent to I ≤ S J, this comparison is false as well.\nI |= S 1 and J |= S 1 The first assumption I |= S 1 implies I ≤ S 1 J.\nThe second assumption J |= S 1 implies J ≤ S 1 I.\nThe condition I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J) simplifies to the equivalent condition true and (false or I ≤ R J), which is the same as I ≤ R J.\nI |= S 1 and J |= S 1 :\nThe first assumption I |= S 1 implies J ≤ S 1 I.\nThe second assumption J |= S 1 implies I ≤ S 1 J.\nThe condition I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J) simplifies to the equivalent condition true and (false or I ≤ R J), which is the same as\nI ≤ R J.\nThe second part of the induction step is representing the order I ≤ S J in the previous lemma as a level order.\nLemma 12 If the lexicographic order [S 2 , . . . , S m ] is equivalent to the level order Q = [Q 1 , . . . , Q k ], then the lexicographic order [S 1 , S 2 , . . . , S m ] is equiv- alent to the following level order. R = [S 1 ∧ Q 1 , . . . , S 1 ∧ Q k , ¬S 1 ∧ Q 1 , . . . , ¬S 1 ∧ Q k ]\nProof. The lemma assumes that Q is a level order (its formulae are mutually inconsistent) and the disjunction of all its formulae is tautologic. The models of every formula Q i are split among S 1 ∧ Q i and ¬S 1 ∧ Q i . Therefore, R has the same properties of Q.\nThe rest of the proof shows that S is equivalent to R. Lemma 11 proves that I ≤ S J is:\n• true if I |= S 1 and J |= S 1 • false if I |= S 1 and J |= S 1 • same as I ≤ Q J otherwise\nThe same is proved for I ≤ R J. The starting point is the definition of level order: I ≤ R J is i ≤ j and I |= R i and J |= R j . 
It is evaluated in the three cases above.\nI |= S 1 and J |= S 1 Let R i and R j be the formulae of R respectively satisfied by I and J.\nSince I also satisfies S 1 by assumption, it satisfies S 1 ∧ R i . Since J falsifies S 1 by assumption, it satisfies ¬S 1 and therefore also ¬S 1 ∧ R j .\nThese formulae S 1 ∧ R i and ¬S 1 ∧ R j are in the positions i and k + j in the sequence R. Since i is an index a sequence of length k, it holds i ≤ k. As a result, i < k + j. This inequality implies i ≤ k + j, which defines I ≤ S J.\nI |= S 1 and J |= S 1 Let R i and R j be the formulae of R respectively satisfied by I and J. The assumptions I |= S 1 implies I |= ¬S 1 . As a result, I satisfies ¬S 1 ∧ R i . Since J satisfies J |= S 1 by assumption, it also satisfies S 1 ∧ R j .\nThe formulae ¬S 1 ∧R i and S 1 ∧R j are in the positions k +i and j in the sequence R. Since j is an index a sequence of length k, it holds j ≤ k.\nAs a result, j < k + i. This inequality is the opposite of k + i ≤ j, which defines I ≤ S J. This comparison is therefore false, as required.\notherwise The two remaining cases are I |= S 1 and J |= S 1 and I |= S 1 and J |= S 1 .\nLet R i and R j be the formulae of R respectively satisfied by I and J.\nThe conditions I |= S 1 and J |= S 1 imply I |= S 1 ∧ R i and J |= S 1 ∧ R j . These formulae are in the sequence R at positions i and j. The definition of I ≤ S J is i ≤ j, which is also the definition of I ≤ R J in this case.\nThe conditions I |= S 1 and J |= S 1 imply I |= ¬S 1 and J |= ¬S 1 , which imply I |= ¬S 1 ∧ R i and J |= ¬S 1 ∧ R j . These formulae are in the sequence R at positions k + i and k + j. The definition of I ≤ S J is i ≤ j, which is equivalent to k + i ≤ k + j, which defines I ≤ R J in this case.\nThis lemma shows how to keep a level order equivalent to a lexicographic order while adding a formula to the latter.\nTheorem 3 Every lexicographic order is equivalent to a level order.\nProof. The claim is proved by induction on the length of the lexicographic order S.\nThe base case is S = []. Its equivalent level order is R = [true]. It is equivalent because both compare I ≤ J for all models. The first because S = [] is one of the condition of its definition. The second because both I and J always satisfy true, the first formula of R.\nIn the induction case, the sequence S has length one or more. Let S 1 , S 2 , . . . , S m be its formulae. Lemma 12 requires a level order Q to be equivalent to [S 2 , . . . , S m ]; it exists by the induction assumption. As a result, the claim of the lemma holds: S is equivalent to a level order R.\nTheorem 4 Every connected preorder is the level ordering of a sequence of mutually inconsistent formulae.\nProof. Every connected preorder corresponds to a sequence of disjoint sets E = [E 1 , . . . , E m ], where I ≤ J corresponds to I ∈ E i , J ∈ E j and i ≤ j. In the specific case of propositional models, every set of models E i is the set of models of a formula S i . Therefore, a connected preorder is also the level order [S 1 , . . . , S m ].\nLemma 5 It holds I ≤ S J holds if and only if the following condition holds, where S is a level order, S 1 is its first formula and R the sequence of the following ones. S = [] or I ∈ Mod(S 1 ) or (J ∈ Mod(S 1 ) and I ≤ R J)\nProof. The definition of the level order I ≤ S J is: ∀j.J |= S j or i ≤ j where I |= S i and J |= S j This definition is proved to coincide with the condition in the statement of the lemma.\nSince the condition is \"S = [] or something else\", it is true when S = []. 
This is also the case for the definition, since no formula of the sequence is satisfied by a model J.\nIf S is not empty, the condition of the statement of the lemma becomes:\nI ∈ Mod(S 1 ) or (J ∈ Mod(S 1 ) and\nI ≤ R J)\nThis is proved to coincide with the definition of the level orders in all cases: I satisfies S 1 , it does not and J does, and none of them do. The proof is by induction: it is assumed true on sequences strictly shorter than S.\nI |= S 1 The condition is true because it is \"I ∈ Mod(S 1 ) or something else\", and I ∈ Mod(S 1 ) is true because of the assumption I |= S 1 .\nThe definition is also true. The assumption I |= S 1 implies that I |= S i with i = 1. Let S j be the formula such that J |= S j . Since the sequence starts at 1, this index j is larger or equal than 1. Since i is equal to one, j ≥ i follows. This is the same as i ≤ j, which defines I ≤ S J.\nI |= S 1 and J |= S 1 The condition simplifies from I ∈ Mod(S 1 ) or (J ∈ Mod(S 1 ) and I ≤ R J) to false or ( true and I ≤ R J, which is false.\nThe definition is not met either. Since J |= S 1 , the first part of the definition ∀j.J |= S j is false. Since J |= S 1 , the index j such that J |= S j is 1. Since I does not satisfy S 1 , it either satisfies S i with i > 1 or it does not satisfy any formula of S. In the first case, j > i implies that i ≤ j is false. In the second case, I |= S i is false for all formulae S i . The second part of the definition i ≤ j where I |= S i and J |= S j is false either way.\nI |= S 1 and J |= S 1 These two assumptions imply that I ∈ Mod(S 1 ) and J ∈ Mod(S 1 ) are false. The condition I ∈ Mod(S 1 ) or (J ∈ Mod(S 1 ) and I ≤ R J) simplifies to false or (true and I ≤ R J)), which is equivalent to\nI ≤ R J, where R = [S 2 , . . . , S m ].\nThe definition of the level order also simplifies.\nIts first part ∀j.J |= S j is equivalent to ∀j > 1.J |= S j , since J is known not to satisfy S 1 . This is the same as ∀j.J |= R j .\nIts second part i ≤ j where I |= S i and J |= S j may only hold with i > 1 and j > 1 since neither I nor J satisfy S 1 . As a result, it simplifies to i + 1 ≤ j + 1 where I |= R i and J |= R j where R = [S 2 , . . . , S m ]. This is equivalent to i ≤ j where I |= R i and J |= R j .\nThe conclusion is that the definition of I ≤ S J is the same as ∀j.J |= R j or i ≤ j where I |= R i and J |= R j , the definition of I ≤ R J.\nThe next results prove that level orders translate into natural orders in polynomial time and space.\nLemma 5 proves that I ≤ S J equates a certain inductive condition on the level order S. The following theorem proves the same for natural orders when S comprises mutually inconsistent formulae. For these sequences the identity translates level orders into equivalent natural orders. This suffices since level orders can be restricted to mutually inconsistent formulae.\nA preliminary lemma on natural orders is necessary.\nLemma 13 If a model J falsifies all formulae of the natural order S, then I ≤ S J holds for every model I.\nProof. The definition of I ≤ S J is:\nS = [] or (I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ R K) or I ≤ R J and (J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ R K))\nIn the base case S = [] this definition is met. As a result, I ≤ S J holds by definition.\nThe induction case is proved as follows. Since J does not satisfy any formula of S = [S 1 , S 2 , . . . , S m ], it does not satisfy S 1 and does not satisfy any formula of R = [S 2 , . . . , S m ]. The latter implies I ≤ R J by the induction assumption. 
The definition of I ≤ S J simplifies as follows when replacing J ∈ Mod(S 1 ) and I ≤ R J by true." }, { "figure_ref": [], "heading": "S = [] or", "publication_ref": [], "table_ref": [], "text": "(I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ R K) or (I ≤ R J and (J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ R K)) ≡ false or (I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ R K) or (true and (true or ∃K ∈ Mod(S 1 ).J ≤ R K)) ≡ (I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ R K) or (true and true) ≡ (I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ R K) or true true This lemma allows expressing a natural order in the same way of a level order when its formulae are mutually inconsistent. Lemma 14 If S is a sequence of mutually inconsistent formulae, the natural order I ≤ S J holds if and only if the following condition holds, where S 1 is the first formula of S and R the sequence of the following formulae of S. S = [] or I ∈ Mod(S 1 ) or (J ∈ Mod(S 1 ) and I ≤ R J)\nProof. Since the formulae of S do not share models, if a model K satisfies S 1 it falsifies all other formulae S 2 , . . . , S m . The latter implies I ≤ R K by Lemma 13 since R = [S 2 , . . . , S m ]. In other words, I ≤ R K holds for all formulae that satisfy S 1 . In formulae, ∀K ∈ Mod(S 1 ).I ≤ R K.\nFor the same reason, ∀K ∈ Mod(S 1 ).J ≤ R K holds as well. This is the contrary of ∃K ∈ Mod(S 1 ).J ≤ R K, which is therefore false.\nReplacing ∀K ∈ Mod(S 1 ).I ≤ R K with true and ∃K ∈ Mod(S 1 ).J ≤ R K with false in the definition of the natural order I ≤ S J yields: The final condition is the claim of the lemma.\nSince both level orders and natural orders are equivalent to the same condition when their formulae are mutually inconsistent, they are equivalent.\nCorollary 1 Every level order of a sequence of mutually inconsistent formulae is the natural order of the same sequence.\nThe translation is the identity. It takes polynomial time and space. This concludes the proof.\nThe following results prove that level orders translate into lexicographic order in polynomial time and space. This is proved by showing that every level order of a sequence of mutually inconsistent formulae is the lexicographic order of the same sequence.\nThe first step is a preliminary lemma.\nLemma 15 If a model J falsifies all formulae of the sequence S, then the lexicographic ordering I ≤ S J holds for every model I.\nProof. The definition of the lexicographic order I ≤ S J is:\nS = [] or (I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J))\nThis definition is the disjunction of S = [] with another condition. The base case of induction S = [] therefore meets the definition.\nIn the induction case, S = [] is false. The definition reduces to:\nI ≤ S 1 J and (J ≤ S 1 I or I ≤ R J)\nSince J does not satisfy any formula of S, it does not satisfy its first formula S 1 . In turn, J |= S 1 imply I |= S 1 or J |= S 1 , the definition of I ≤ S 1 J. Since J does not satisfy any formula of S, it does not satisfy any formula of its subsequence R. By the induction assumption, I ≤ R J holds. The definition of the lexicographic order further simplifies to: In the induction case, S is not empty. 
The definition and the condition respectively became:
I ≤ S 1 J and (J ≰ S 1 I or I ≤ R J)
I ∈ Mod(S 1 ) or (J ∉ Mod(S 1 ) and I ≤ R J)
The induction assumption implies that the two occurrences of I ≤ R J coincide since R is strictly shorter than S.
Two cases are considered: either I satisfies S 1 or not.
I ⊨ S 1 : The condition in the statement of the lemma is true since it is a disjunction comprising I ∈ Mod(S 1 ).
The assumption I ⊨ S 1 implies I ≤ S 1 J by definition. It also simplifies the definition of J ≤ S 1 I from J ⊨ S 1 or I ⊭ S 1 into just J ⊨ S 1 .
The definition of I ≤ S J therefore simplifies:
I ≤ S 1 J and (J ≰ S 1 I or I ≤ R J) ≡ true and (J ⊭ S 1 or I ≤ R J) ≡ J ⊭ S 1 or I ≤ R J
This is true if J ⊭ S 1 , and is now proved true in the other case, J ⊨ S 1 .
Since the formulae S i do not share models and J is a model of S 1 , it is not a model of any other formula S 2 , . . . , S m . These formulae make up R: no formula of R is satisfied by J. Lemma 15 proves I ≤ R J.
Since the definition of I ≤ S J is equivalent to J ⊭ S 1 or I ≤ R J, it is met.
I ⊭ S 1 : This assumption implies J ≤ S 1 I, which makes J ≰ S 1 I false. Since I ≤ S 1 J is defined as I ⊨ S 1 or J ⊭ S 1 , it becomes the same as J ⊭ S 1 .
The definition of I ≤ S J simplifies:
I ≤ S 1 J and (J ≰ S 1 I or I ≤ R J) ≡ J ⊭ S 1 and (false or I ≤ R J) ≡ J ⊭ S 1 and I ≤ R J
The condition in the statement of the lemma also simplifies thanks to the current assumption I ⊭ S 1 :
I ∈ Mod(S 1 ) or (J ∉ Mod(S 1 ) and I ≤ R J) ≡ false or (J ∉ Mod(S 1 ) and I ≤ R J) ≡ J ⊭ S 1 and I ≤ R J
This is the same as the definition of I ≤ S J.
The translation is a corollary.
Corollary 2 Every level order of a sequence of mutually inconsistent formulae is the lexicographic order of the same sequence.
Since the translation is the identity, it takes linear space and time. This concludes the proof that level orders translate into lexicographic orders in polynomial time and space.
Lemma 6 The level order of a sequence of m formulae has at most m + 1 equivalence classes.
Proof. As shown in Section 2, a level order is equivalent to another where every model is satisfied by exactly one formula by the addition of a single formula. This is an increase from m to m + 1 formulae.
In this other order, the condition ∀j.J ⊭ S j is always false. As a result, I ≤ S J holds if and only if i ≤ j, where I ⊨ S i and J ⊨ S j . The reverse comparison J ≤ S I holds if and only if j ≤ i. As a result, I and J are compared the same if and only if i = j, where I ⊨ S i and J ⊨ S j . Two models are equivalent if and only if they satisfy the same formula of S. This is an isomorphism between the equivalence classes and the formulae of the sequence.
Lemma 7 The natural order of a sequence of m formulae has at most m + 1 equivalence classes.
Proof. Theorem 1 translates a natural order into an equivalent level order by iterating over the formulae of the sequence, each time turning a sequence Q into a sequence R:
R = [S 1 ∧ Q c , Q 1 , . . . , Q c-1 , ¬S 1 ∧ Q c , Q c+1 , . . . , Q k ]
The formulae of R are the same as those of Q except for the absence of Q c and the presence of S 1 ∧ Q c and ¬S 1 ∧ Q c . The number of formulae therefore increases by one at each step. Since the process starts from Q = [] and iterates over the m formulae of the natural order, it produces a level order of the same length.
By Lemma 6, this level order has at most m + 1 equivalence classes.
Since the natural order is equivalent, it has the same equivalence classes.
Lemma 8 The lexicographic comparisons I ≤ S J and J ≤ S I hold at the same time only if I ≤ S k J and J ≤ S k I both hold for every formula S k of S.
Proof. The claim is that I ≤ S J and J ≤ S I imply I ≤ S k J and J ≤ S k I for every formula S k of S.
The definition of I ≤ S J is: S = [] or (I ≤ S 1 J and (J ≰ S 1 I or I ≤ R J))
The claim is proved by induction. If S is the empty sequence, the conclusion is vacuously true since S contains no formula S k .
The inductive assumption is that the claim holds for every sequence R shorter than S. The premise of the lemma is that both I ≤ S J and J ≤ S I hold. The claim can be split in three parts: I ≤ S 1 J, J ≤ S 1 I, and the same for every S k with k > 1.
In the inductive case S is not empty. The definition of I ≤ S J becomes:
I ≤ S 1 J and (J ≰ S 1 I or I ≤ R J)
Since I ≤ S J holds by the premise of the lemma, its first conjunct I ≤ S 1 J holds as well. For the same reason, J ≤ S I implies J ≤ S 1 I. This proves the first two parts of the claim.
Since I ≤ S J holds by the premise of the lemma, its second conjunct J ≰ S 1 I or I ≤ R J holds as well. Its first disjunct is contradicted by J ≤ S 1 I, proved above. What is left is I ≤ R J. The same argument proves that J ≤ S I implies J ≤ R I.
By the induction assumption, I ≤ R J and J ≤ R I imply I ≤ S k J and J ≤ S k I for every formula S k of R. These are the formulae S k of S with k > 1.
Lemma 9 The lexicographic order S = [x 1 , . . . , x n ] has 2 n equivalence classes.
Proof. Two different models differ on at least one variable x k : I ⊨ x k and J ⊭ x k .
If I and J were equivalent according to S, then Lemma 8 would apply. It implies I ≤ x k J and J ≤ x k I for every k. The second conclusion J ≤ x k I is the same as J ⊨ x k or I ⊭ x k , both of which are false. Therefore, I and J are not equivalent according to S.
The conclusion is that two different models are not equivalent. Every model is in its own class of equivalence. Since the models are 2 n in number, the classes of equivalence are the same number.
Theorem 8 The lexicographic order [x 1 , . . . , x n ] is only equivalent to level and natural orders comprising at least 2 n -1 formulae.
Proof. Let the lexicographic order [x 1 , . . . , x n ] be equivalent to a level or natural order S. By Lemma 9, the lexicographic order [x 1 , . . . , x n ] has 2 n equivalence classes. Since the level or natural order S is equivalent to it, it has the same equivalence classes. By Lemma 6 or Lemma 7, if S comprises m formulae then it has at most m + 1 equivalence classes; therefore m + 1 ≥ 2 n , that is, S comprises at least 2 n -1 formulae.
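The size results can be checked on small instances with the sketch below: it implements the doubling construction of Lemma 12 for lexicographic orders and counts the equivalence classes of the lexicographic order [x 1 , . . . , x n ] as in Lemma 9, which is what forces the 2 n -1 lower bound of Theorem 8. Formulae are again represented by their model sets; all names are illustrative and not from the paper.

```python
# A small sanity-check sketch, assuming models are frozensets of the indices of
# the variables assigned true among x1..xn.  Names are illustrative only.

from itertools import product

def lexicographic_step(s1, level):
    """Lemma 12: prepending S1 to a lexicographic order doubles the equivalent
    level order: [S1 & Q1, ..., S1 & Qk, ~S1 & Q1, ..., ~S1 & Qk]."""
    return [q & s1 for q in level] + [q - s1 for q in level]

def lexicographic_to_level(lexicographic, all_models):
    level = [frozenset(all_models)]
    for s in reversed(lexicographic):
        level = lexicographic_step(s, level)
    return level                      # 2**m formulae for m lexicographic formulae

def classes_of_lex_variable_order(n):
    """Lemma 9: under [x1, ..., xn] every model is alone in its class, so 2**n classes."""
    models = [frozenset(i for i, v in enumerate(m) if v)
              for m in product([True, False], repeat=n)]
    variables = [frozenset(mod for mod in models if i in mod) for i in range(n)]
    level = lexicographic_to_level(variables, models)
    # two models are equivalent iff they satisfy the same formula of the level order
    return len({next(i for i, q in enumerate(level) if mod in q) for mod in models})
```

Running classes_of_lex_variable_order(3) returns 8, matching 2 3 and illustrating why any equivalent level or natural order needs exponentially many formulae.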
Iterated belief revision requires information about the current beliefs. This information is represented by mathematical structures called doxastic states. Most of the literature concentrates on how to revise a doxastic state and neglects that it may grow exponentially. This problem is studied for the four most common ways of storing a doxastic state. All four methods are able to store every doxastic state, but some do so in less space than others. In particular, the explicit representation (an enumeration of the current beliefs) is the most wasteful of space. The level representation (a sequence of propositional formulae) and the natural representation (a history of natural revisions) are more compact than it. The lexicographic representation (a history of lexicographic revisions) is even more compact than both.
Representing states in iterated belief revision
[ { "figure_caption": "Figure 1 :1Figure 1: Comparison of the four considered representations", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "S= [] or (I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ R K) or (I ≤ R J and (J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ R K)) ≡ S = [] or (I ∈ Mod(S 1 ) and true) or (I ≤ R J and (J ∈ Mod(S 1 ) or false)) ≡ S = [] or I ∈ Mod(S 1 ) or (I ≤ R J and J ∈ Mod(S 1 ))", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "true and (J ≤ S 1 I or true) ≡ true and true ≡ true This lemma allows proving the equivalent condition. Lemma 16 If S is a sequence of mutually inconsistent formulae, the lexicographic order I ≤ S J holds if and only if the following condition holds, where S 1 is the first formula of S and R the sequence of the following formulae of S. S = [] or I ∈ Mod(S 1 ) or (J ∈ Mod(S 1 ) and I ≤ R J) Proof. The condition in the statement of the lemma is proved to coincide with the definition of the lexicographic order I ≤ S J: S = [] or (I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J)) Both the definition and the condition are disjunctions comprising S = []. Therefore, they are both true in the base case of induction S = [].", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" } ]
Paolo Liberatore
[ { "authors": "C Areces; V Becher", "journal": "Springer Science & Business Media", "ref_id": "b0", "title": "Iterable AGM functions", "year": "2001" }, { "authors": "C E Alchourrón; P Gärdenfors; D Makinson", "journal": "Journal of Symbolic Logic", "ref_id": "b1", "title": "On the logic of theory change: Partial meet contraction and revision functions", "year": "1985" }, { "authors": "T I Aravanis; P Peppas; M.-A. Mary-Anne Williams", "journal": "ACM Press", "ref_id": "b2", "title": "Iterated belief revision and Dalal's operator", "year": "2018" }, { "authors": "T I Aravanis", "journal": "Journal of Logic and Computation", "ref_id": "b3", "title": "On uniform belief revision", "year": "2020" }, { "authors": "H Andréka; M Ryan; P.-Y Schobbens", "journal": "Journal of Logic and Computation", "ref_id": "b4", "title": "Operators and laws for combining preference relations", "year": "2002" }, { "authors": "L Bouzar-Benlabiod; S Benferhat; T Bouabana-Tebibel", "journal": "International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems", "ref_id": "b5", "title": "Comparing and weakening possibilistic knowledge bases", "year": "2014" }, { "authors": "R Booth; J Chandler", "journal": "Journal of Philosophical Logic", "ref_id": "b6", "title": "The irreducibility of iterated to single revision", "year": "2017" }, { "authors": "R Booth; J Chandler", "journal": "Artificial Intelligence", "ref_id": "b7", "title": "On strengthening the logic of iterated belief revision: Proper ordinal interval operators", "year": "2020" }, { "authors": "S Benferhat; C Cayrol; D Dubois; J Lang; H Prade", "journal": "", "ref_id": "b8", "title": "Inconsistency management and prioritized syntax-based entailment", "year": "1993" }, { "authors": "S Benferhat; D Dubois; O Papini", "journal": "AAAI Press/The MIT Press", "ref_id": "b9", "title": "A sequential reversible belief revision method based on polynomials", "year": "1999" }, { "authors": "S Benferhat; S Konieczny; O Papini; R Pino Pérez", "journal": "", "ref_id": "b10", "title": "Iterated revision by epistemic states: Axioms, semantics and syntax", "year": "2000" }, { "authors": "C Boutilier", "journal": "Journal of Philosophical Logic", "ref_id": "b11", "title": "Iterated revision and minimal change of conditional beliefs", "year": "1996" }, { "authors": "G Brewka", "journal": "", "ref_id": "b12", "title": "Preferred subtheories: an extended logical framework for default reasoning", "year": "1989" }, { "authors": "J Chandler; R Booth", "journal": "Journal of Philosophical Logic", "ref_id": "b13", "title": "Elementary belief revision operators", "year": "2023" }, { "authors": "O Coudert", "journal": "Integration", "ref_id": "b14", "title": "Two-level logic minimization: an overview", "year": "1994" }, { "authors": "O Coudert; T Sasao", "journal": "Springer", "ref_id": "b15", "title": "Two-level logic minimization", "year": "2002" }, { "authors": "C Domshlak; E Hüllermeier; S Kaci; H Prade", "journal": "Artificial Intelligence", "ref_id": "b16", "title": "Preferences in AI: an overview", "year": "2011" }, { "authors": "S Dixon", "journal": "", "ref_id": "b17", "title": "A finite base belief revision system", "year": "1993" }, { "authors": "S Dixon; W Wobcke", "journal": "IEEE Computer Society", "ref_id": "b18", "title": "The implementation of a first-order logic AGM belief revision system", "year": "1993" }, { "authors": "O Ettarguy; A Begdouri; S Benferhat; C Delenne", "journal": "EasyChair", "ref_id": "b19", "title": "Syntactic computation of fagin-halpern 
conditioning in possibility theory", "year": "2023" }, { "authors": "E L Fermé; S O Hansson", "journal": "Journal of Philosophical Logic", "ref_id": "b20", "title": "AGM 25 years -twenty-five years of research in belief change", "year": "2011" }, { "authors": "P Gärdenfors", "journal": "MIT Press", "ref_id": "b21", "title": "Knowledge in Flux: Modeling the Dynamics of Epistemic States", "year": "1988" }, { "authors": "D P Guralnik; D E Koditschek", "journal": "Computing Research Repository", "ref_id": "b22", "title": "Iterated belief revision under resource constraints: Logic as geometry", "year": "2018" }, { "authors": "A Hunter; J P P Delgrande", "journal": "", "ref_id": "b23", "title": "An action description language for iterated belief change", "year": "2007" }, { "authors": " Hsvb", "journal": "", "ref_id": "b24", "title": "", "year": "" }, { "authors": "J Haldimann; K Sauerwald; M Berg; G Kern-Isberner; C Beierle", "journal": "Springer", "ref_id": "b25", "title": "Conditional descriptor revision and its modelling by a CSP", "year": "2021" }, { "authors": "Y Jin; M Thielscher", "journal": "Belief Change in Rational Agents: Perspectives from Artificial Intelligence, Philosophy, and Economics", "ref_id": "b26", "title": "Actions and belief revision: A computational approach", "year": "2005-08" }, { "authors": "G Kern-Isberner; J Heyninck; C Beierle", "journal": "", "ref_id": "b27", "title": "Conditional independence for iterated belief revision", "year": "2022" }, { "authors": "S Konieczny; R Pino Pérez", "journal": "Journal of Applied Non-classical logics", "ref_id": "b28", "title": "A framework for iterated revision", "year": "2000" }, { "authors": "G Kern-Isberner; M Sezgin; C Beierle", "journal": "Artificial Intelligence", "ref_id": "b29", "title": "A kinematics principle for iterated revision", "year": "2023" }, { "authors": "S Kutsch", "journal": "", "ref_id": "b30", "title": "InfOCF-Lib: A Java library for OCF-based conditional inference", "year": "2019" }, { "authors": "P Liberatore", "journal": "ACM Transactions on Computational Logic", "ref_id": "b31", "title": "Mixed iterated revisions: Rationale, algorithms and complexity", "year": "2023" }, { "authors": "F Liu", "journal": "Springer Science & Business Media", "ref_id": "b32", "title": "Reasoning about preference dynamics", "year": "2011" }, { "authors": "E J Mccluskey", "journal": "The Bell System Technical Journal", "ref_id": "b33", "title": "Minimization of Boolean functions", "year": "1956" }, { "authors": "A Mata Díaz; R Pino Pérez", "journal": "International Journal of Approximate Reasoning", "ref_id": "b34", "title": "On manipulation in merging epistemic states", "year": "2023" }, { "authors": "T Meyer; A Ghose; S Chopra", "journal": "", "ref_id": "b35", "title": "Syntactic representations of semantic merging operations", "year": "2002" }, { "authors": "A Nayak", "journal": "Erkenntnis", "ref_id": "b36", "title": "Iterated belief change based on epistemic entrenchment", "year": "1994" }, { "authors": "B Nebel", "journal": "", "ref_id": "b37", "title": "Belief revision and default reasoning: Syntax-based approaches", "year": "1991" }, { "authors": "H Rott", "journal": "Springer", "ref_id": "b38", "title": "Shifting priorities: Simple representations for twentyseven iterated theory change operators", "year": "2009" }, { "authors": "R L Rudell; A Sangiovanni-Vincentelli", "journal": "IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems", "ref_id": "b39", "title": "Multiple-valued minimization for 
PLA optimization", "year": "1987" }, { "authors": "J.-W Roorda; W Van Der Hoek; J.-J Ch; Meyer", "journal": "Journal of the Interest Group in Pure and Applied Logic", "ref_id": "b40", "title": "Iterated belief change in multi-agent systems", "year": "2003" }, { "authors": "M Ryan", "journal": "", "ref_id": "b41", "title": "Belief revision and ordered theory presentations", "year": "1991" }, { "authors": "K Sauerwald; C Beierle", "journal": "", "ref_id": "b42", "title": "Iterated belief change, computationally", "year": "2022" }, { "authors": "K Sauerwald; J Haldimann", "journal": "", "ref_id": "b43", "title": "WHIWAP: checking iterative belief changes", "year": "2019" }, { "authors": "K Sauerwald; G Kern-Isberner; C Beierle", "journal": "Computing Research Repository", "ref_id": "b44", "title": "A conditional perspective on the logic of iterated belief contraction", "year": "2022" }, { "authors": "N Schwind; S Konieczny; J.-M Lagniez; P Marquis", "journal": "", "ref_id": "b45", "title": "On computational aspects of iterated belief change", "year": "2020" }, { "authors": "N Schwind; S Konieczny; R Pino Pérez", "journal": "", "ref_id": "b46", "title": "On the representation of darwiche and pearl's epistemic states for iterated belief revision", "year": "2022" }, { "authors": "M Souza; A F Moreira; R Vieira", "journal": "AAAI Press/The MIT Press", "ref_id": "b47", "title": "Iterated belief base revision: A dynamic epistemic logic approach", "year": "2019" }, { "authors": "W Spohn", "journal": "Kluwer Academics", "ref_id": "b48", "title": "Ordinal conditional functions: A dynamic theory of epistemic states", "year": "1988" }, { "authors": "M Souzam; R Vieira; A F Moreira", "journal": "Theoretical Computer Science", "ref_id": "b49", "title": "Dynamic preference logic meets iterated belief change: Representation results and postulates characterization", "year": "2021" }, { "authors": "M Theobald; S M Nowick; T Wu", "journal": "", "ref_id": "b50", "title": "Espresso-HF: a heuristic hazard-free minimizer for two-level logic", "year": "1996" }, { "authors": "C Umans; T Villa; A L Sangiovanni-Vincentelli", "journal": "IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems", "ref_id": "b51", "title": "Complexity of two-level logic minimization", "year": "2006" }, { "authors": "M.-A Williams", "journal": "", "ref_id": "b52", "title": "Two operators for theory base change", "year": "1992" }, { "authors": "M.-A Williams", "journal": "", "ref_id": "b53", "title": "Iterated theory base change: A computational model", "year": "" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b54", "title": "", "year": "1995" }, { "authors": "D Zhang", "journal": "", "ref_id": "b55", "title": "Properties of iterated multiple belief revision", "year": "2004" }, { "authors": "Z Q Zhuang; M Pagnucco; T Meyer", "journal": "Springer", "ref_id": "b56", "title": "Implementing iterated belief change via prime implicates", "year": "2007" } ]
[ { "formula_coordinates": [ 4, 242.4, 141.4, 125.4, 124.34 ], "formula_id": "formula_0", "formula_text": "❅ ❅ ❅ ❅ ❅ ❅ ■ ✒ ✒ ❅ ❅ ❅ ❅ ❅ ❅ ■ ✲ ✛" }, { "formula_coordinates": [ 6, 128.4, 128.79, 353.14, 36.75 ], "formula_id": "formula_1", "formula_text": "• I ≤ S J if and only if I ∈ M i , J ∈ M j and i ≤ j; • S i = {I ∈ M\\(S 1 ∪ • • • ∪ S i-1 ) | ∀J ∈ M\\(S 1 ∪ • • • ∪ S i-1 ) . I ≤ J}" }, { "formula_coordinates": [ 6, 125.16, 492.99, 374.2, 51.27 ], "formula_id": "formula_2", "formula_text": "1. the order [S 1 , . . . , S m ] is the same as [S 1 , S 2 ∧ ¬S 1 , . . . , S m ∧ ¬S m-1 ∧ • • • ∧ S 1 ], which comprises mutually inconsistent formulae; 2. the order [S 1 , . . . , S m ] is the same as [S 1 , . . . , S m , ¬S 1 ∨• • •∨¬S m ]," }, { "formula_coordinates": [ 7, 110.88, 239.31, 388.57, 26.37 ], "formula_id": "formula_3", "formula_text": "F compares I ≤ F J if either I |= F or J |= F ." }, { "formula_coordinates": [ 7, 128.4, 348.51, 244.44, 11.97 ], "formula_id": "formula_4", "formula_text": "• I |= F and J |= F (equivalence, second case)." }, { "formula_coordinates": [ 7, 128.4, 506.67, 295.07, 55.46 ], "formula_id": "formula_5", "formula_text": "• either S = [] or • I ≤ S 1 J and either J ≤ S 1 I or I ≤ R J, where R = [S 2 , . . . , S m ]." }, { "formula_coordinates": [ 7, 128.4, 639.75, 243.12, 36.86 ], "formula_id": "formula_6", "formula_text": "• either I < S 1 J, or • I ≡ S 1 J and I ≤ R J, where R = [S 2 , . . . , S m ]." }, { "formula_coordinates": [ 8, 110.88, 332.31, 334.31, 62.07 ], "formula_id": "formula_7", "formula_text": "• I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ R K, or • I ≤ R J and either J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ R K, where R = [S 2 , . . . , S m ]." }, { "formula_coordinates": [ 10, 110.88, 582.63, 388.57, 93.03 ], "formula_id": "formula_8", "formula_text": "Lemma 3 If S = [S 1 , S 2 , . . . , S m ] is a natural order, Q is a level order equivalent to the natural order [S 2 , . . . , S m ] and Q c is the first formula of Q that is consistent with S 1 , then I ≤ S J is: • true if I |= S 1 ∧ Q c ; • false if I |= S 1 ∧ Q c and J |= S 1 ∧ Q c ; • same as I ≤ Q J otherwise." }, { "formula_coordinates": [ 11, 110.88, 253.71, 388.57, 55.71 ], "formula_id": "formula_9", "formula_text": "Lemma 4 If the natural order [S 2 , . . . , S m ] is equivalent to the level order Q = [Q 1 , . . . , Q k ], then the natural order S = [S 1 , S 2 , . . . , S m ] is equivalent to the level order R = [R 1 , R 2 , . . . , R k+1 ], where R 1 = S 1 ∧Q c , R i = ¬R 1 ∧Q i-1 for every i > 1 and Q c is the first formula of Q such that S 1 ∧Q c is consistent." }, { "formula_coordinates": [ 11, 110.88, 465.27, 388.61, 123.75 ], "formula_id": "formula_10", "formula_text": "Theorem 1 If the natural order [S 2 , . . . , S m ] is equivalent to the level order Q = [Q 1 , . . . , Q k ], then the natural order S = [S 1 , S 2 , . . . , S m ] is equivalent to the following level order R, where Q c is the first formula of Q that is consistent with S 1 . R = [S 1 ∧ Q c , Q 1 , . . . , Q c-1 , ¬S 1 ∧ Q c , Q c+1 , . . . , Q k ] The level order R = [S 1 ∧ Q c , Q 1 , . . . , Q c-1 , ¬S 1 ∧ Q c , Q c+1 , . . . , Q k ] expresses the natural revision of a level order [Q 1 , . . . , Q k ]" }, { "formula_coordinates": [ 12, 110.88, 371.19, 388.48, 26.91 ], "formula_id": "formula_11", "formula_text": "[Q 2 , . . . , Q k ] into [S 1 ∧ Q 2 , . . . , S 1 ∧ Q k , ¬S 1 ∧ Q 2 , . . . , ¬S 1 ∧ Q k ]." 
}, { "formula_coordinates": [ 18, 128.4, 315.63, 73.03, 12.39 ], "formula_id": "formula_12", "formula_text": "• I |= S 1 ∧ Q c" }, { "formula_coordinates": [ 18, 355.56, 368.79, 95.04, 12.39 ], "formula_id": "formula_13", "formula_text": "I satisfies S 1 ∧ Q c ." }, { "formula_coordinates": [ 18, 140.16, 421.95, 108.24, 12.39 ], "formula_id": "formula_14", "formula_text": "I |= Q c implies c = i." }, { "formula_coordinates": [ 18, 206.76, 474.99, 219.72, 12.39 ], "formula_id": "formula_15", "formula_text": "S 1 ∧ Q k . As a result, S 1 ∧ Q k is consistent." }, { "formula_coordinates": [ 18, 128.4, 547.47, 73.03, 12.39 ], "formula_id": "formula_16", "formula_text": "• I |= S 1 ∧ Q c" }, { "formula_coordinates": [ 19, 128.4, 414.75, 205.55, 61.11 ], "formula_id": "formula_17", "formula_text": "• true if I |= S 1 ∧ Q c ; • false if I |= S 1 ∧ Q c and J |= S 1 ∧ Q c ; • same as I ≤ Q J otherwise." }, { "formula_coordinates": [ 19, 174.84, 531.75, 280.42, 47.31 ], "formula_id": "formula_18", "formula_text": "S = [] or (I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ R K) or I ≤ R J and (J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ R K))" }, { "formula_coordinates": [ 20, 110.88, 214.95, 334.71, 12.39 ], "formula_id": "formula_19", "formula_text": "I |= S 1 ∧ Q c Lemma 2 proves that I |= S 1 ∧Q c implies I ∈ Mod(S 1" }, { "formula_coordinates": [ 20, 110.88, 267.27, 388.54, 26.36 ], "formula_id": "formula_20", "formula_text": "I |= S 1 ∧ Q c e J |= S 1 ∧ Q c Lemma 2 proves that I |= S 1 ∧ Q c implies that I ∈ Mod(S 1" }, { "formula_coordinates": [ 20, 181.56, 340.11, 276.34, 12.39 ], "formula_id": "formula_21", "formula_text": "I ≤ Q J and (J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ Q K)" }, { "formula_coordinates": [ 20, 110.88, 448.47, 139.63, 12.39 ], "formula_id": "formula_22", "formula_text": "I |= S 1 ∧ Q c e J |= S 1 ∧ Q c" }, { "formula_coordinates": [ 20, 110.88, 634.35, 388.55, 41.31 ], "formula_id": "formula_23", "formula_text": "Q = [Q 1 , . . . , Q k ], then the natural order S = [S 1 , S 2 , . . . , S m ] is equivalent to the level order R = [R 1 , R 2 , . . . , R k+1 ], where R 1 = S 1 ∧Q c , R i = ¬R 1 ∧Q i-1 for every i > 1 and Q c is the first formula of Q such that S 1 ∧Q c is consistent." }, { "formula_coordinates": [ 21, 128.4, 257.31, 203.88, 61.23 ], "formula_id": "formula_24", "formula_text": "• true if I |= S 1 ∧ Q c ; • false if I |= S 1 ∧ Q c and J |= S 1 ∧ Q c ; • same as I ≤ Q J otherwise." }, { "formula_coordinates": [ 21, 110.88, 360.87, 387.96, 40.89 ], "formula_id": "formula_25", "formula_text": "I |= S 1 ∧ Q c The first formula of R is R 1 = S 1 ∧ Q c by definition. Since I satisfies S 1 ∧ Q c by assumption, it satisfies R 1 . This proves that I |= R i with i = 1." }, { "formula_coordinates": [ 21, 116.76, 462.51, 151.27, 12.39 ], "formula_id": "formula_26", "formula_text": "I |= S 1 ∧ Q c and J |= S 1 ∧ Q c" }, { "formula_coordinates": [ 21, 265.8, 481.95, 205.8, 12.39 ], "formula_id": "formula_27", "formula_text": "J |= S 1 ∧ Q c implies J |= R j with j = 1." }, { "formula_coordinates": [ 21, 116.76, 603.03, 151.27, 12.39 ], "formula_id": "formula_28", "formula_text": "I |= S 1 ∧ Q c and J |= S 1 ∧ Q c" }, { "formula_coordinates": [ 22, 110.88, 295.71, 92.39, 12.39 ], "formula_id": "formula_29", "formula_text": "Q = [Q 1 , . . . , Q k ]," }, { "formula_coordinates": [ 22, 110.88, 310.54, 388.54, 54.56 ], "formula_id": "formula_30", "formula_text": "Q c is the first formula of Q that is consistent with S 1 . R = [S 1 ∧ Q c , Q 1 , . . . , Q c-1 , ¬S 1 ∧ Q c , Q c+1 , . . . 
, Q k ]" }, { "formula_coordinates": [ 22, 110.88, 394.59, 388.54, 26.37 ], "formula_id": "formula_31", "formula_text": "R = [R 1 , R 2 , . . . , R k+1 ], where R 1 = S 1 ∧ Q c and R i = ¬R 1 ∧ Q i-1 for every i > 1." }, { "formula_coordinates": [ 22, 110.88, 466.83, 388.56, 55.71 ], "formula_id": "formula_32", "formula_text": "R 1 is S 1 ∧ Q c , the formula R i = ¬R 1 ∧ Q i-1 for i = c + 1 in the sequence of the lemma is the same as ¬(S 1 ∧ Q c ) ∧ Q c , which is equivalent to (¬S 1 ∨ ¬Q c ) ∧ Q c , in turn equivalent to (¬S 1 ∧ Q c ) ∨ (¬Q c ∧ Q c ) and to ¬S 1 ∧ Q c ," }, { "formula_coordinates": [ 22, 110.88, 553.47, 388.49, 99.03 ], "formula_id": "formula_33", "formula_text": "R i = ¬R 1 ∧ Q i-1 in the sequence of the lemma. Since R 1 is S 1 ∧ Q c , this formula R i = ¬R 1 ∧ Q i-1 is the same as R i = ¬(S 1 ∧ Q c ) ∧ Q i-1 , which is equivalent to (¬S 1 ∧ Q i-1 ) ∨ (¬Q c ∨ Q i-1 ). The two formulae Q c and Q i-1 are mutually inconsistent since i is not equal to c + 1. As a result, Q i-1 implies ¬Q c . This proves ¬Q c ∨ Q i-1 equivalent to Q i-1 . Therefore, R i in the sequence of the lemma is equivalent to (¬S 1 ∧ Q i-1 ) ∨ Q i-1 , which is equivalent to Q i-1 ," }, { "formula_coordinates": [ 23, 174.36, 278.91, 261.6, 12.39 ], "formula_id": "formula_34", "formula_text": "R = [S 1 ∧ Q c , Q 1 , . . . , Q c-1 , ¬S 1 ∧ Q c , Q c+1 , . . . , Q k ]" }, { "formula_coordinates": [ 23, 206.4, 558.39, 226.78, 13.34 ], "formula_id": "formula_35", "formula_text": "S = [] or (I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J))" }, { "formula_coordinates": [ 24, 128.4, 170.67, 148.11, 61.11 ], "formula_id": "formula_36", "formula_text": "• true if I |= S 1 and J |= S 1 • false if I |= S 1 and J |= S 1 • same as I ≤ Q J otherwise" }, { "formula_coordinates": [ 24, 191.76, 275.55, 226.78, 13.34 ], "formula_id": "formula_37", "formula_text": "S = [] or (I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J))" }, { "formula_coordinates": [ 24, 220.44, 345.75, 169.3, 13.34 ], "formula_id": "formula_38", "formula_text": "I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J)" }, { "formula_coordinates": [ 24, 110.88, 568.95, 339.48, 13.34 ], "formula_id": "formula_39", "formula_text": "I |= S 1 and J |= S 1 The first assumption I |= S 1 implies I ≤ S 1 J." }, { "formula_coordinates": [ 25, 140.16, 215.55, 39.72, 12.39 ], "formula_id": "formula_40", "formula_text": "I ≤ R J." }, { "formula_coordinates": [ 25, 110.88, 303.15, 388.55, 69.15 ], "formula_id": "formula_41", "formula_text": "Lemma 12 If the lexicographic order [S 2 , . . . , S m ] is equivalent to the level order Q = [Q 1 , . . . , Q k ], then the lexicographic order [S 1 , S 2 , . . . , S m ] is equiv- alent to the following level order. R = [S 1 ∧ Q 1 , . . . , S 1 ∧ Q k , ¬S 1 ∧ Q 1 , . . . , ¬S 1 ∧ Q k ]" }, { "formula_coordinates": [ 25, 128.4, 484.95, 146.67, 60.75 ], "formula_id": "formula_42", "formula_text": "• true if I |= S 1 and J |= S 1 • false if I |= S 1 and J |= S 1 • same as I ≤ Q J otherwise" }, { "formula_coordinates": [ 27, 374.04, 610.83, 41.14, 12.39 ], "formula_id": "formula_43", "formula_text": "I ≤ R J)" }, { "formula_coordinates": [ 28, 140.16, 418.71, 165.36, 12.39 ], "formula_id": "formula_44", "formula_text": "I ≤ R J, where R = [S 2 , . . . , S m ]." 
}, { "formula_coordinates": [ 29, 174.84, 282.15, 280.42, 47.31 ], "formula_id": "formula_45", "formula_text": "S = [] or (I ∈ Mod(S 1 ) and ∀K ∈ Mod(S 1 ).I ≤ R K) or I ≤ R J and (J ∈ Mod(S 1 ) or ∃K ∈ Mod(S 1 ).J ≤ R K))" }, { "formula_coordinates": [ 31, 191.76, 154.83, 226.78, 13.34 ], "formula_id": "formula_46", "formula_text": "S = [] or (I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J))" }, { "formula_coordinates": [ 31, 220.44, 234.03, 169.3, 13.34 ], "formula_id": "formula_47", "formula_text": "I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J)" }, { "formula_coordinates": [ 32, 225.6, 233.43, 251.52, 13.34 ], "formula_id": "formula_48", "formula_text": "J ≤ S 1 I from J |= S 1 or I |= S 1 into just J |= S 1 ." }, { "formula_coordinates": [ 32, 231.84, 298.59, 175.78, 47.31 ], "formula_id": "formula_49", "formula_text": "I ≤ S 1 J and (J ≤ S 1 I or I ≤ R J) ≡ true and (J |= S 1 or I ≤ R J) ≡ J |= S 1 or I ≤ R J" }, { "formula_coordinates": [ 34, 174.36, 185.31, 261.6, 12.39 ], "formula_id": "formula_50", "formula_text": "R = [S 1 ∧ Q c , Q 1 , . . . , Q c-1 , ¬S 1 ∧ Q c , Q c+1 , . . . , Q k ]" }, { "formula_coordinates": [ 34, 219.84, 580.11, 170.5, 13.34 ], "formula_id": "formula_51", "formula_text": "I ≤ S 1 J and (J ≤ R 1 I or I ≤ R J)" } ]
10.18653/v1/2021.acl-long.224
2023-05-22
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b9", "b3", "b4", "b15", "b17", "b5", "b1", "b0", "b18", "b13", "b10" ], "table_ref": [], "text": "In this global era, it is becoming increasingly important for people from different countries/regions to interact with each other and have a mutual understanding. Recent advancements in machine translation (MT) technologies have enabled us to communicate with people worldwide, especially in text. Chat translation or dialogue machine translation (Liu et al., 2021) supports such communications, which enables people who use different languages to have cross-language chats. Speech translation (ST) has also recently shown success (e.g., Chen et al., 2022), especially in monologue translation (e.g., Di Gangi et al., 2019). However, to the best of our knowledge, no study has focused on ST of dialogues, which is an important aspect of language usage.\nIn this study, we propose a new task: speech dialogue translation (SDT) aiming to mediate speakers of different languages. We consider bilingual dialogues where several people who speak in different languages talk with each other mediated by an ST system.\nIt is important to consider context in SDT because we need to consider context in different languages, which cannot be readily handled by current ST systems that mainly focus on one translation direction. Figure 1 shows an example of an STmediated dialogue between an English speaker and a Japanese speaker. They are discussing some ideas, and the English speaker says, \"What do you think about it?\" The Japanese speaker responds by saying the idea is naive, but without context it can be translated as \"I think it's a bit sweet\" because \"甘い\" has two meanings, sweet and naive. By utilizing dialogue context, the meaning of \"甘い\" becomes clear so that the utterance can be translated properly.\nFor the proposed task, we construct the SpeechBSD dataset1 based on an existing text dialogue corpus, BSD (Bussiness Scene Dialogue) corpus (Rikters et al., 2019). We collect audio of the BSD corpus through crowdsourcing along with speaker attributes.\nWe conduct speech-to-text cascaded ST experiments on the dataset. There are two mainstream methods for ST, the cascade method (Stentiford and Steer, 1988) where automatic speech recognition (ASR) and MT are chained together, and the end-to-end method (Duong et al., 2016;Berard et al., 2016), where translations are directly predicted from speech. Recent study (Bentivogli et al., 2021;Tran et al., 2022) suggests that the two methods are on par. We conduct cascade ST experiments using Whisper (Radford et al., 2022) for ASR and mBART (Liu et al., 2020) for MT.\nWe consider three settings for translation: without context, with monolingual context, and with bilingual context. The monolingual context is composed in the language the utterance to be translated is spoken, whereas the bilingual context is composed in the original language of the spoken utterances (see examples in Figure 1). We show that translation with bilingual context performs better compared to the one without context by up to 1.9 BLEU points in MT and 1.7 BLEU points in cascade ST with our settings. We also conduct a manual evaluation focusing on zero anaphora, a grammatical phenomenon where arguments of verbs are omitted when they are apparent from the context in Japanese. We show that with bilingual context, the MT models can often predict zero pronouns correctly." 
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b9", "b9", "b19", "b8", "b20", "b6", "b9" ], "table_ref": [], "text": "Although neural MT has greatly improved over the past few years, the translation of dialogues remains a challenging task because of its characteristics. Liu et al. (2021) summarizes the recent progress of dialogue MT and categorizes its issue into four categories, coherence, consistency, cohesion, and personality. The main approaches to address these problems include document MT (e.g., Liu et al., 2021), usage of pretrained models (e.g., Wang et al., 2020), and auxiliary task learning utilizing speaker information (e.g., Liang et al., 2021).\nConsidering context in ST is recently studied for the end-to-end approach (Zhang et al., 2021). We point out that although not addressed in this work, considering context for ASR is also an active research area (e.g., Inaguma and Kawahara, 2021).\nIn this work, we focus on the translation of speech dialogue. We use mBART, which performed best in a previous work of chat translation (Liu et al., 2021), and also consider utilizing context." }, { "figure_ref": [], "heading": "Speech Dialogue Translation (SDT)", "publication_ref": [], "table_ref": [], "text": "In SDT, there are several speakers who speak different languages with the help of a translation system. In this work, we consider M speak-\ners {S m | m = 1, 2, • • • , M } and 2 languages {L n | n = 1, 2}. We consider a dialogue with T utterances D = (U 1 , • • • , U T )\n, where an utterance is U t = (S m t , L n t , X t ). Here, S m t is the speaker, L n t is the language spoken, and X t is the speech signal of t-th utterance. Let Y n t (n = 1, 2) be text that has the same meaning as X t in language L n . The task of SDT is to generate translation Y 2 t from speech signal X t when the source language is L 1 (or translation Y 1 t from X t when the source language is L 2 ) for every utterance U t ." }, { "figure_ref": [], "heading": "SpeechBSD Dataset", "publication_ref": [ "b15", "b16" ], "table_ref": [], "text": "We construct the SpeechBSD dataset to study SDT. It is based on the existing dialogue dataset in text, BSD corpus (Rikters et al., 2019(Rikters et al., , 2021)). We collect audio of all the sentences in the dataset along with speaker attributes (gender and homeplace) through crowdsourcing." }, { "figure_ref": [], "heading": "BSD Corpus", "publication_ref": [], "table_ref": [], "text": "BSD corpus is a parallel corpus of English and Japanese composed of manually designed business scene dialogues. Each dialogue called scenario contains 30 sentences on average spoken by 2-5 speakers. The original language the scenarios were written in is half English and half Japanese so that the expressions are not biased toward one language." }, { "figure_ref": [ "fig_0" ], "heading": "Dataset Construction", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "First, we divided each scenario by speaker. For example in Figure 1, the original BSD corpus con- andY 2 3 ). In this way, we can compose two crosslanguage dialogues (Y\ntains text of Y 1 1 , Y 2 1 , Y 1 2 , Y 2 2 , Y\n1 1 → Y 2 2 → Y 1 3 and Y 2 1 → Y 1 2 → Y 2 3\n) from one scenario of the BSD corpus. We collected audio through crowdsourcing so that each part is spoken by a different worker. 2 We designed a web application to record audio and collected English speech from the US using Amazon Mechanical Turk3 and Japanese speech from Japan using Yahoo! crowdsourcing. 
4 We also collected the gender and homeplace (the US state or Japanese prefecture) of the speakers as they may affect translation performance. The instructions given to the workers are shown in Appendix A.1." }, { "figure_ref": [ "fig_4" ], "heading": "Statistics of the SpeechBSD Dataset", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "The collected audio was 24.3 hours for English speech and 30.7 hours for Japanese speech in total. Details are provided in Appendix B Table 2. Regarding speaker gender, English speech was balanced, whereas there were more male speakers in Japanese. As for homeplace, in Japanese, the speakers were distributed roughly according to the population distribution. In English, it was less diverse (Appendix B Figure 3)." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Considering Context for SDT", "publication_ref": [], "table_ref": [], "text": "We propose two ways to consider context in SDT: monolingual context and bilingual context.\nFirst, for every utterance U t , an ASR system is used to obtain transcripts Y n t . The monolingual context is composed in the source language of the utterance to be translated. For example, in Figure 1, when translating the third utterance U 3 from Japanese to English, as the source language of the utterance is Japanese (L 1 ), the context (Y 1 1 and Y 1 2 ) is also composed in Japanese. Let the context composed in this way be Y n <t .\nFor monolingual context experiments, we use two translation models, one for each translation direction. The training objective of the MT model that translates from L 1 to L 2 is to maximize the following log likelihood5 :\nL 1→2 = Σ t log P(Y 2 t , Y 2 <t | Y 1 t , Y 1 <t ). (1)\nA similar objective L 2→1 can be derived when L 2 is the source language and L 1 is the target language.\nPostprocessing is applied to extract Y 2 t from the output, which contains both Y 2 <t and Y 2 t .\nThe bilingual context is composed in the original language of the spoken utterances. For example, in Figure 1, when translating the third utterance U 3 from Japanese to English, the bilingual context on the source side is Y 1 1 and Y 2 2 , which involves both languages. The bilingual context on the target side is Y 2 1 and Y 1 2 . Because there is no concept of source or target language in this case, let the source side utterance be Y t , the source side context be Y <t , the target side utterance be Ȳ t , and the target side context be Ȳ <t . The MT model is trained with the following objective:\nL = Σ t log P(Ȳ t , Ȳ <t | Y t , Y <t ). (2)\nPostprocessing is applied to extract Ȳ t from the output.\nWe consider constrained context with context size c in practice, which is the number of previous utterances used for translation in addition to the utterance to be translated. More formal definitions of monolingual, bilingual, and constrained context are provided in Appendix C." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Automatic Speech Recognition", "publication_ref": [ "b13" ], "table_ref": [], "text": "In SDT, ASR has to handle bilingual inputs. We used the multilingual ASR model Whisper (Radford et al., 2022). The medium model with 12 encoder and decoder layers was used without finetuning. Further details are provided in Appendix D.1. We evaluated the ASR performance on the SpeechBSD test set. For English the word error rate was 8.3 %, and for Japanese the character error rate was 13.2 %."
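As a rough illustration of the ASR step just described, the sketch below transcribes an utterance with the released Whisper medium model and scores word or character error rate with the jiwer package. The whisper and jiwer calls are standard, but the audio paths, language codes, and text normalisation are assumptions of this sketch, not details taken from the paper.

```python
# Hedged sketch of the ASR stage: Whisper medium without finetuning, scored with
# WER for English and CER for Japanese.  Paths and normalisation are assumptions.

import whisper
import jiwer

model = whisper.load_model("medium")

def transcribe(audio_path: str, language: str) -> str:
    # language is "en" or "ja"; the language tag is given instead of being detected
    result = model.transcribe(audio_path, language=language)
    return result["text"].strip()

def error_rate(references, hypotheses, language: str) -> float:
    if language == "ja":
        return jiwer.cer(references, hypotheses)   # character error rate for Japanese
    return jiwer.wer(references, hypotheses)       # word error rate for English
```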
}, { "figure_ref": [], "heading": "Machine Translation", "publication_ref": [ "b10", "b12" ], "table_ref": [], "text": "MT model also needs to handle bilingual inputs in SDT. We used mBART (Liu et al., 2020) and finetuned the model with SpeechBSD for MT. The large model with 12 encoder and decoder layers was used. Although the dialogues are regarded as bilingual ones in this study, the predictions were recomposed to the monolingual dialogue form for evaluation because usually performance of MT models is evaluated on a single language pair. SacreBLEU (Post, 2018) was used for calculating BLEU scores. Further details are provided in Appendix D.2." }, { "figure_ref": [], "heading": "Context Settings", "publication_ref": [], "table_ref": [], "text": "Three settings were considered: translation without context, with monolingual context, and with bilingual context.\nWithout Context Each utterance in a scenario was treated as a separate sentence in this setting. Finetuning was performed separately for each translation direction." }, { "figure_ref": [], "heading": "Monolingual Context", "publication_ref": [], "table_ref": [], "text": "For each utterance in a scenario, monolingual context with context width c = 5 was composed in the way described in section 5. The context utterances and the utterance to translate were concatenated with the end of sentence token </s>. Finetuning was performed separately for each translation direction.\nBilingual Context For each utterance in a scenario, bilingual context with context width c = 5 was composed in the way described in section 5. The context utterances and the utterance to translate were concatenated with the end of sentence token </s>. As there is no concept of source language or target language in this setting, a single model was finetuned in this setting." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Table 1 (upper part) shows the results of the MT experiments. Comparing \"Without\" with \"Monolingual,\" more than 0.9 points of improvement were observed using monolingual context. Comparing \"Monolingual\" with \"Bilingual,\" the latter performed better, especially in Ja-En." }, { "figure_ref": [], "heading": "Manual Evaluation", "publication_ref": [ "b15", "b15" ], "table_ref": [], "text": "To verify how context can help improve translations, we conducted a manual evaluation focusing on a grammatical phenomenon called zero anaphora, as discussed in Rikters et al. (2019). Similarly to Rikters et al. (2019), we counted the number of sentences with pronouns I, you, he, she, it, and they in English6 and observed that 63 % of the test sentences included them. We sampled 50 of those sentences from the test set. First, we checked if the subjects of the Japanese sentences were zero pronouns by comparing Japanese and English gold references. Then we checked if the zero pronouns were translated into English correctly for the predictions of each Ja-En system. Out of the 50 sentences, 29 were sentences with zero pronoun subjects. The number of sentences that the missing pronoun was translated correctly was 19, 20, and 24 for without context, monolingual context, and bilingual context settings, respectively. This shows that context can help disambiguate zero pronouns, and using bilingual context can help generate correct pronouns. Examples of the sentences are shown in Appendix E." 
}, { "figure_ref": [], "heading": "Cascade Speech Translation", "publication_ref": [], "table_ref": [], "text": "Cascade ST experiments were performed by using Whisper recognition results as input to the MT models described in section 6.2.\nTable 1 (lower part) shows the results. Similarly to MT, BLEU score improved by more than 0.7 points by using monolingual context. Further improvements by more than 0.5 points were observed using bilingual context.\nWe also performed manual evaluation as in Section 6.2.3. The number of sentences that the missing pronoun was translated correctly was 16, 18, and 22 for without context, monolingual context, and bilingual context settings, respectively. It showed a similar trend to the results of section 6.2.3 with lower translation accuracy. Examples of the sentences are shown in Appendix E." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We presented a new task, SDT aiming to mediate speakers of different languages. We constructed the SpeechBSD dataset via crowdsourcing. We performed MT experiments utilizing context and showed its effectiveness. In the future, we plan to perform experiments in end-to-end ST settings and SDT utilizing speaker attributes." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The experiments were performed only on Japanese and English bilingual dialogue collected from a limited number of native speakers. Although the methods proposed in this work can work on any language pair, drawing conclusions for other language pairs should be avoided. The experiments were performed using existing pretrained models, Whisper and mBART, and the method used to pretrain those models would have affected the translation performances in this work. The dialogues in the SpeechBSD dataset are the read speech of pre-composed text dialogues, and further research is required for more realistic settings such as spontaneous dialogues." }, { "figure_ref": [], "heading": "A Crowdsourcing Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "A.1 Crowdsourcing Instructions Given to the Workers", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows the instructions given to the crowdsourcing workers and the interface used to record audio. We asked the workers to speak clearly and formally and to check that the audio was properly recorded. With the interface, we made sure that the workers agreed that their voices would be released and that the utterances were properly recorded." }, { "figure_ref": [], "heading": "A.2 Crowdsourcing Payment", "publication_ref": [], "table_ref": [], "text": "The crowdsourcing tasks were divided according to the number of utterances to record. The authors performed preliminary crowdsourcing tasks and estimated how long the tasks would take for each case. We paid the workers according to the estimated time and predefined wage per hour determined for each country." }, { "figure_ref": [ "fig_4" ], "heading": "B Statistics of the SpeechBSD Dataset", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Table 2 shows the statistics of the SpeechBSD dataset. Figure 3 shows the homeplace distribution of the speakers of the SpeechBSD dataset. The Japanese one (3(b)) roughly reflects Japan's demographics (concentrated around Tokyo, Osaka, and Nagoya), whereas the English one (3(a)) is more biased (concentrated too much on California and Virginia). 
We believe these biases are caused by the differences in the crowdsourcing platforms used." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "C Formal Definition of Context", "publication_ref": [], "table_ref": [], "text": "Here, we formally formulate the monolingual, bilingual, and constrained contexts introduced in Section 5.\nFor simplicity, we consider the case where M = 2 and m = n (i.e., speaker S_i speaks in language L_i (i = 1, 2)). In addition, we suppose the speakers speak alternately, and speaker S_1 starts the conversation. (Footnote 7: In the experiments, consecutive utterances by the same speaker are treated as separate utterances. If there are more than three speakers, we number the speakers in the order of appearance and regard speakers with the same parity as speaking in the same language.) In other words, defining a map L from utterances to languages, we have $L(U) = L_i$ for all $U \in \{U_t \mid t \equiv i \pmod{2}\}$.\nThe monolingual context is composed of previous utterances in a single language. In other words, the monolingual context of utterance U_t in language L_i is\n$Y^i_{<t} = \{Y^i_\tau \mid \tau < t\}$.\nFor example, in Figure 1, when translating the third utterance U_3 from Japanese to English, the monolingual context of the source side is "彼は良い考えだと言ってました。あなたはどう思いますか?", and that of the target side is "He said it's a good idea. What do you think?" Using this formulation, we can formally define the training objective of Equation 1. During inference, for the source language of the current utterance, ASR transcripts are used, and for the target language of the current utterance, the translations of the ASR transcripts are used to compose the context. During training, the corresponding gold text is used.\nThe bilingual context is composed of transcripts in the two original languages. ASR transcripts are used during inference, and gold transcripts are used for training. The bilingual context of utterance U_t is\n$Y_{<t} = \tilde{Y}^1_{<t} \cup \tilde{Y}^2_{<t}$, where $\tilde{Y}^i_{<t} = \{Y^i_\tau \mid \tau < t \wedge \tau \equiv i \pmod{2}\}$.\nFor example, in Figure 1, when translating the third utterance U_3 from Japanese to English, the bilingual context of the source side is "彼は良い考えだと言ってました。What do you think about it?", and that of the target side is "He said it's a good idea. あなたはどう思いますか?" For bilingual context experiments, the MT system has to be able to handle two translation directions. Let the translation of $Y_{<t}$ be $\bar{Y}_{<t} = \bar{Y}^1_{<t} \cup \bar{Y}^2_{<t}$, where $\bar{Y}^i_{<t} = \{Y^j_\tau \mid \tau < t \wedge \tau \equiv i \pmod{2}\}$ with $(i, j) = (1, 2), (2, 1)$. $\bar{Y}_t$ is $Y^2_t$ when $L(U_t) = L_1$ and $Y^1_t$ when $L(U_t) = L_2$. By setting $Y_{<t}$ as the source-side context and $\bar{Y}_{<t}$ as the target-side context, we can formally define the training objective of Equation 2.\nIn practice, we consider a context width c for the context $U_{<t} = \{U_\tau \mid \tau < t\}$ because the maximum length the MT models can handle is limited. The constrained context of utterance U_t with context width c is\n$U_{<t} = \{U_\tau \mid \tau = t-1, \dots, t-c \wedge \tau > 0\}$." }, { "figure_ref": [], "heading": "D Experimental Settings D.1 ASR", "publication_ref": [], "table_ref": [], "text": "Whisper is a Transformer-based model that uses 80-channel log-Mel spectrograms, computed from audio sampled at 16,000 Hz, as input. As it is trained on 680,000 hours of data from various domains, the model is robust enough to work well without any finetuning. We used the byte-level BPE vocabulary (size 50,257) of the pretrained model. We assumed the language of the utterances was given beforehand and fed the language tag to the model as a prefix token."
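The preprocessing and language handling described in D.1 roughly correspond to the following openai-whisper calls (a sketch; the audio path is a placeholder, and the known language is passed explicitly instead of being detected).

```python
# Sketch of the Whisper front-end from D.1: 16 kHz audio, 80-channel log-Mel
# features, and the known language passed as a decoding option so that its
# language tag is used as a prefix token. "utterance.wav" is a placeholder.
import whisper

model = whisper.load_model("medium")
audio = whisper.load_audio("utterance.wav")                  # loads and resamples to 16,000 Hz
audio = whisper.pad_or_trim(audio)                           # pad/trim to the 30-second window
mel = whisper.log_mel_spectrogram(audio).to(model.device)    # 80-channel log-Mel spectrogram

options = whisper.DecodingOptions(language="ja", without_timestamps=True)
result = whisper.decode(model, mel, options)
print(result.text)
```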
We evaluated the development set of the SpeechBSD dataset using the base, small, medium, and large models with either greedy decoding or beam search decoding with beam size 5. We observed that the medium model with greedy decoding performed the best for both English and Japanese, which are the settings used for further experiments." }, { "figure_ref": [ "fig_5" ], "heading": "D.2 MT", "publication_ref": [ "b7", "b11", "b10", "b10", "b14", "b10" ], "table_ref": [], "text": "We used mBART trained with 25 languages for the experiments. BPE vocabulary of size 25, 001 was used. As a preprocessing step, BPE was applied to all utterances with the sentencepiece (Kudo and Richardson, 2018) toolkit. Fairseq (Ott et al., 2019) was used for training and inference. The same hyperparameters as in Liu et al. (2020) were used, except that the training epochs were determined according to early stopping with patience 10 on validation loss. We did not use different random seeds for the experiments because Liu et al. (2020) reported that the finetuning process was stable with different seeds. When evaluating the model, the averaged weights of the last 10 checkpoints were used. The SacreBLEU signatures were nrefs:1|case:mixed|eff:no|tok:ja-mecab-0.996-IPA|smooth:exp|version:2.0.0 for En-Ja and nrefs:1|case:mixed|eff:no|tok:13a| smooth:exp|version:2.0.0 for Ja-En. We conducted significance tests with paired approximate randomization (Riezler and Maxwell, 2005) with 10, 000 approximate randomization trials and a p-value threshold of 0.05 to compare the BLEU scores of \"without context\" with the others, and \"monolingual context\" with \"bilingual context.\"\nFor bilingual context MT experiments, in order to match the finetuning style of mBART, language tags like ja_XX or en_XX have to be appended at the last of each translation unit. However, in bilingual context settings, both the source and the target side contain both languages, which does not comply with the finetuning style described in the original mBART paper (Liu et al., 2020). We conducted two kinds of experiments, appending ja_XX to the input and en_XX to the output and the other way around. The statistical significance test showed that they were not significantly different. We report the results of the systems where the language pair of the utterance to be translated matches the language pair specified by the appended language tags.\nAs to the context size c, we changed it from 1 to 8 in the bilingual context setting and evaluated the models with BLEU score on the validation set. The results are shown in Figure 4. In the bilingual context setting, 5 was the best for both En-Ja and Ja-En. For the monolingual context setting, 5 and 6 were the best for En-Ja and 3 for Ja-En. The difference between setting 3 and 5 as context width did not show a statistically significant difference in the BLEU scores for Ja-En. Therefore, for a consistent comparison, we reported the results on the test set with c = 5 in Table 1.\nWe used 4 Tesla V100 or Titan RTX GPUs for the experiments. The total computation hours, including hyperparameter searching, were 278 hours. " }, { "figure_ref": [], "heading": "E Example Sentences from Manual Evaluation", "publication_ref": [], "table_ref": [], "text": "Table 3 shows examples from manual evaluation described in Section 6.2.3. In the first example, it is observed that the zero pronoun (She) is predicted correctly when monolingual or bilingual context is used in both MT and cascade ST experiments. 
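The significance test used in D.2 can be implemented generically; below is one possible sketch of paired approximate randomization on top of SacreBLEU's corpus_bleu (pass tokenize="ja-mecab" for En-Ja), with 10,000 trials as described above. System outputs and references are placeholders.

```python
# Sketch of paired approximate randomization (Riezler & Maxwell, 2005) for
# comparing the corpus BLEU of two systems. 10,000 trials and p < 0.05 mirror
# the setup in D.2; hypotheses/references are placeholders.
import random
import sacrebleu

def corpus_metric(hyps, refs):
    return sacrebleu.corpus_bleu(hyps, [refs]).score  # add tokenize="ja-mecab" for Japanese

def paired_approximate_randomization(hyps_a, hyps_b, refs, trials=10_000, seed=0):
    rng = random.Random(seed)
    observed = abs(corpus_metric(hyps_a, refs) - corpus_metric(hyps_b, refs))
    exceed = 0
    for _ in range(trials):
        swapped_a, swapped_b = [], []
        for a, b in zip(hyps_a, hyps_b):
            if rng.random() < 0.5:   # swap this sentence pair between systems
                a, b = b, a
            swapped_a.append(a)
            swapped_b.append(b)
        delta = abs(corpus_metric(swapped_a, refs) - corpus_metric(swapped_b, refs))
        if delta >= observed:
            exceed += 1
    return (exceed + 1) / (trials + 1)   # p-value with add-one smoothing

# p = paired_approximate_randomization(system_a_outputs, system_b_outputs, references)
# The difference is called significant when p < 0.05.
```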
In the second example, the zero pronoun (They) could not be correctly predicted by any system." }, { "figure_ref": [], "heading": "Acknowledgegments", "publication_ref": [], "table_ref": [], "text": "This work was supported by JSPS KAKENHI Grant Numbers JP23H03454 and JP23KJ1356." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "Consent was obtained from the crowdsourcing workers when collecting audio, gender, and homeplace. The SpeechBSD dataset is made public under the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) 4.0 license, which is the same as the license of the BSD corpus, and shall be used only for research purposes. Caution should be exercised when using gender or homeplace information included in the dataset so that the identities of the speakers are not revealed. (a) An example where the \"monolingual\" and \"bilingual\" context predictions were better than the \"without context\" one. In this scenario, Patrick complains to Gary that he does not want to go to his company's drinking party. Gary asks what Patrick's wife thinks about it, and this is Patrick's response. The pronoun She is omitted in the Japanese utterance. Word-by-word translation of the Japanese utterance with omitted words is: \"(彼女は)-she / もう-already / 諦めて-give up / それが-it's / 仕事-work / な ら-if / (それは)-it / 仕方ない-can't be helped / わね-I think / って(言ってる)-says .\"" }, { "figure_ref": [], "heading": "Context", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Context", "publication_ref": [], "table_ref": [], "text": "Ja reference -いつ在庫が入るか、でしょう? En reference -They all want to know when it will be restocked, don't they?\nMT Without When will the inventory start? Monolingual So when will the inventory be available?\nBilingual I wonder when it will be in stock?\nCascade ST Without When will the inventory arrive? Monolingual I wonder when it will be in stock.\nBilingual I wonder when it will be in stock. (b) An example where all systems failed to predict the correct pronoun. In this scenario, Mr. Ogawa and Ms. Pace are talking about their company's stock of a product. The previous utterances by Mr. Ogawa are, \"We have 28 backorders for this product. I have been receiving many inquiries from the customers lately.\" This is the subsequent Ms. Pace's response. The pronoun They is omitted in the Japanese utterance. Word-by-word translation of the Japanese utterance with omitted words is: \"(彼らは)-they / いつ-when / 在庫-stock / が入るか-becomes avaiable / (を聞くの)-ask / でしょう-don't they.\" The translation is difficult because the word corresponding to \"ask\" is also omitted. " } ]
We present a new task, speech dialogue translation mediating speakers of different languages. We construct the SpeechBSD dataset for the task and conduct baseline experiments. Furthermore, we consider context to be an important aspect that needs to be addressed in this task and propose two ways of utilizing context, namely monolingual context and bilingual context. We conduct cascaded speech translation experiments using Whisper and mBART, and show that bilingual context performs better in our settings.
Towards Speech Dialogue Translation Mediating Speakers of Different Languages
[ { "figure_caption": "Figure 1 :1Figure1: The importance of considering context in SDT. \"甘い\" can be translated into either \"sweet\" or \"naive,\" which can be disambiguated with the context. We consider two types of context for translation, monolingual context and bilingual context.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Crowdsourcing interface used to record audio. The upper part shows the instructions given to the workers.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Homeplace distribution of the speakers of the SpeechBSD dataset by the number of utterances.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: BLEU score on the development set when changing the context size c.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Statistics of the SpeechBSD dataset. The number of sentences is the same as the number of utterances in this dataset as it originally was in the BSD corpus.", "figure_data": "TrainDev.Test# of scenarios6706969# of sentences20,0002,0512,120English speech (h)20.12.12.1Japanese speech (h)25.32.72.7English gender (M / F %) 47.2 / 52.8 50.1 / 49.9 44.4 / 55.6Japanese gender (M / F %) 68.0 / 32.0 62.3 / 37.7 69.0 / 31.0", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
Shuichiro Shimizu; Chenhui Chu; Sheng Li; Sadao Kurohashi
[ { "authors": "Luisa Bentivogli; Mauro Cettolo; Marco Gaido; Alina Karakanta; Alberto Martinelli; Matteo Negri; Marco Turchi", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Cascade versus direct speech translation: Do the differences still make a difference", "year": "2021" }, { "authors": "Alexandre Berard; Olivier Pietquin; Christophe Servan; Laurent Besacier", "journal": "", "ref_id": "b1", "title": "Listen and Translate: A Proof of Concept for End-to-End Speech-to-Text Translation", "year": "2016" }, { "authors": "Steven Bird; Edward Loper; Ewan Klein", "journal": "O'Reilly Media Inc", "ref_id": "b2", "title": "Natural Language Processing with Python", "year": "2009" }, { "authors": "Zhehuai Chen; Yu Zhang; Andrew Rosenberg; Bhuvana Ramabhadran; Pedro J Moreno; Ankur Bapna; Heiga Zen", "journal": "", "ref_id": "b3", "title": "MAESTRO: Matched Speech Text Representations through Modality Matching", "year": "2022" }, { "authors": "A Di Mattia; Roldano Gangi; Luisa Cattoni; Matteo Bentivogli; Marco Negri; Turchi", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "MuST-C: a Multilingual Speech Translation Corpus", "year": "2019" }, { "authors": "Long Duong; Antonios Anastasopoulos; David Chiang; Steven Bird; Trevor Cohn", "journal": "", "ref_id": "b5", "title": "An Attentional Model for Speech Translation Without Transcription", "year": "2016" }, { "authors": "Hirofumi Inaguma; Tatsuya Kawahara", "journal": "", "ref_id": "b6", "title": "VAD-Free Streaming Hybrid CTC/Attention ASR for Unsegmented Recording", "year": "2021" }, { "authors": "Taku Kudo; John Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Yunlong Liang; Chulun Zhou; Fandong Meng; Jinan Xu; Yufeng Chen; Jinsong Su; Jie Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Towards making the most of dialogue characteristics for neural chat translation", "year": "2021" }, { "authors": "Siyou Liu; Yuqi Sun; Longyue Wang", "journal": "formation", "ref_id": "b9", "title": "Recent advances in dialogue machine translation", "year": "2021" }, { "authors": "Yinhan Liu; Jiatao Gu; Naman Goyal; Xian Li; Sergey Edunov; Marjan Ghazvininejad; Mike Lewis; Luke Zettlemoyer", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b10", "title": "Multilingual denoising pre-training for neural machine translation", "year": "2020" }, { "authors": "Myle Ott; Sergey Edunov; Alexei Baevski; Angela Fan; Sam Gross; Nathan Ng; David Grangier; Michael Auli", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "year": "2019" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Alec Radford; Jong Wook Kim; Tao Xu; Greg Brockman; Chiristine Mcleavey; Ilya Sutskever", "journal": "", "ref_id": "b13", "title": "Robust Speech Recognition via Large-Scale Weak Supervision", "year": "2022" }, { "authors": "Stefan Riezler; John T Maxwell", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "On some pitfalls in automatic evaluation and significance testing for MT", "year": "2005" }, { 
"authors": "Matīss Rikters; Ryokan Ri; Tong Li; Toshiaki Nakazawa", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Designing the business conversation corpus", "year": "2019" }, { "authors": "Matīss Rikters; Ryokan Ri; Tong Li; Toshiaki Nakazawa", "journal": "Journal of Natural Language Processing", "ref_id": "b16", "title": "Japanese-english conversation parallel corpus for promoting context-aware machine translation research", "year": "2021" }, { "authors": "F W M Stentiford; M G Steer", "journal": "British Telecom technology journal", "ref_id": "b17", "title": "Machine translation of speech", "year": "1988" }, { "authors": "Anh Khoa Viet; David Tran; Yingbo Thulke; Christian Gao; Hermann Herold; Ney", "journal": "", "ref_id": "b18", "title": "Does Joint Training Really Help Cascaded Speech Translation?", "year": "2022" }, { "authors": "Longyue Wang; Zhaopeng Tu; Xing Wang; Li Ding; Liang Ding; Shuming Shi", "journal": "", "ref_id": "b19", "title": "Tencent AI lab machine translation systems for WMT20 chat translation task", "year": "2020" }, { "authors": "Biao Zhang; Ivan Titov; Barry Haddow; Rico Sennrich", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Beyond sentence-level end-to-end speech translation: Context helps", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 306.14, 285.94, 218.27, 39.74 ], "formula_id": "formula_0", "formula_text": "ers {S m | m = 1, 2, • • • , M } and 2 languages {L n | n = 1, 2}. We consider a dialogue with T utterances D = (U 1 , • • • , U T )" }, { "formula_coordinates": [ 2, 306.14, 720.97, 136.11, 13.87 ], "formula_id": "formula_1", "formula_text": "tains text of Y 1 1 , Y 2 1 , Y 1 2 , Y 2 2 , Y" }, { "formula_coordinates": [ 3, 70.87, 99.52, 218.27, 27.42 ], "formula_id": "formula_2", "formula_text": "1 1 → Y 2 2 → Y 1 3 and Y 2 1 → Y 1 2 → Y 2 3" }, { "formula_coordinates": [ 3, 323.78, 97.9, 200.63, 24.21 ], "formula_id": "formula_3", "formula_text": "L 1→2 = t log P(Y 2 t , Y 2 <t | Y 1 t , Y 1 <t ).(1)" }, { "formula_coordinates": [ 3, 342.46, 364.83, 145.64, 21.75 ], "formula_id": "formula_4", "formula_text": "L = t log P(Y t , Y <t | Y t , Y <t )." }, { "formula_coordinates": [ 7, 70.87, 637.83, 202.63, 37.73 ], "formula_id": "formula_5", "formula_text": "U t → L i , ∀ U ∈ {U t | t ≡ i (mod2)}, L(U ) = L i ." }, { "formula_coordinates": [ 7, 370.64, 112.53, 89.28, 14.19 ], "formula_id": "formula_6", "formula_text": "Y i <t = {Y i τ | τ < t}." }, { "formula_coordinates": [ 7, 306.14, 370.55, 82.2, 14.48 ], "formula_id": "formula_7", "formula_text": "Y <t = Ỹ 1 <t ∪ Ỹ 2 <t ," }, { "formula_coordinates": [ 7, 332.8, 397.66, 167.3, 14.48 ], "formula_id": "formula_8", "formula_text": "Ỹ i <t = {Y i τ | τ < t ∧ τ ≡ i (mod2)}." }, { "formula_coordinates": [ 7, 306.14, 579.62, 218.27, 70.7 ], "formula_id": "formula_9", "formula_text": "Ỹ i <t = {Y j τ | τ < t ∧ τ ≡ i (mod2)}, (i, j) = (1, 2), (2, 1). Y t is Y 2 t when L(U t ) = L 1 and Y 1 t when L(U t ) = L 2" }, { "formula_coordinates": [ 7, 315.41, 763.54, 199.74, 10.67 ], "formula_id": "formula_10", "formula_text": "U <t = {U τ | τ = t -1, • • • , t -c ∧ τ > 0}." } ]
10.1016/j.inffus.2021.05.008
2023-07-08
[ { "figure_ref": [ "fig_7", "fig_7", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b19", "b45", "b21", "b40", "b35", "b9", "b8", "b4", "b44", "b1", "b22" ], "table_ref": [], "text": "Data is the foundation of most science. Recent advances in deep generative modelling have seen a steep rise in methods that aim to replace real data with synthetic data. The general idea is that synthetic data resembles the real data, while guaranteeing privacy (Ho et al., 2021;Yoon et al., 2020;Jordon et al., 2019;van Breugel et al., 2023), improving fairness (Xu et al., 2018;2019a;van Breugel et al., 2021), augmenting the dataset size (Antoniou et al., 2017;Dina et al., 2022;Das et al., 2022;Bing et al., 2022), or simulating distributional shifts (Yoon et al., 2018). Often the aim is to be able to use the synthetic data in place of the real data for Figure 1. Synthetic data is not perfect, which affects downstream ML tasks, e.g. training a prediction model. The naive synthetic data approach generates one synthetic dataset and treats it like it is real. We propose using an ensemble of generative models for capturing the generative uncertainty, implicitly encoded into different synthetic data distributions. some downstream task, e.g. statistical analyses or training an ML supervised model. The hope is that downstream results are equally valid in the real-world-e.g. a prediction model trained on synthetic data will do well on real data. Evidently, whether this is true will rely entirely on how well the synthetic data describes the real data. This brings us to the focus of this work: how do we do good ML on synthetic data, given that the generative process underlying the synthetic data is not perfect. If we are the data publisher, how should we create and publish synthetic data for it to be most useful to downstream users? And if we are the downstream user, can we create models that are more robust to potential errors, evaluate models reliably using synthetic data, and how do we estimate uncertainty for the downstream results? If we envision a future where synthetic data plays a significant role in research, these are pertinent questions.\nLet us first highlight why this is an important topic in practice. First, synthetic data is not perfect. Deep generative models may fail in numerable ways, with consequences such as mode collapse, noisy data, memorisation of training data, and poor coverage (van Breugel & van der Schaar, 2023). Second, in addition to synthetic data being imperfect, even just quantifying the quality of generative models is hard, because this requires comparing distributions as a whole-a notoriously difficult task (Alaa et al., 2022). These two points make it hard to guarantee the data is 'good' or 'bad'. Third, even if we would be able to measure the quality of synthetic data accurately, in general is not at all trivial how we would use this information for estimating the influence of the generative process on the downstream result-e.g. when training a neural network, the influence of individual training samples is highly complex. Since the downstream user usually has no access to real data (see Figure 1), they cannot verify results on real data. Let us give a real example of what could you go wrong when using synthetic data. Example. We take the SEER prostate cancer dataset and generate synthetic data using CTGAN with different numbers of hidden layers. Subsequently, we use a train-test-split on the synthetic data and train a random forest for classification. 
We compare the train-on-synthetictest-on-synthetic (TSTS) accuracy and the trainon-synthetic-test-on-real (TSTR) accuracy, the latter measured on a real hold-out test set (Jordon et al., 2021). Figure 2 displays the results over 10 runs. We see that the real performance of the downstream models is comparable across different generative model depths-an indicator that the utility of the synthetic data is similar. On the other hand, the TSTS performance indicates a large preference for the data generated using the deeper generator, with the corresponding TSTS vastly overestimating the TSTR. Also note the TSTS estimates have an undesirable high variance, due to the added randomness in the generative process.\nContributions. Through this work we explore how-and how notto perform basic ML tasks when using synthetic data. The contributions are as follows.\n1. We investigate how the standard, naive way of using synthetic data-using it like it is real data-yields poor downstream models, poor downstream model evaluation, and poor uncertainty quantification, due to it ignoring errors in the generation process itself.\n2. We introduce Deep Generative Ensemble (DGE) as a simple synthetic data framework for alleviating these concerns through generating multiple synthetic datasets. An advantage of DGE is its flexibility and compatibility with different deep generative base model (e.g. VAE, GAN, diffusion models).\n3. We investigate why and when DGE provides better downstream models, evaluation, selection, and better downstream uncertainty quantification.\n4. Furthermore, we explore how DGE improves upon the naive approach especially in low-density regions. This is important, since these regions may correspond to underrepresented groups in the population.\nSection 3 is mostly targeted at data publishers, focusing on synthetic data issues and how DGE aims to fix these. Section 4 explores experimentally how the naive approach fails, and describes how DGE-generated synthetic data can be used by data users for better downstream ML. In Section 5 we highlight takeaways for both groups." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b33", "b16", "b14", "b20", "b33", "b16", "b14", "b20", "b27", "b14", "b26", "b0", "b24" ], "table_ref": [], "text": "Generative ensembles and dropout. Deep generative models-notoriously GANs (Goodfellow et al., 2014)often lack diversity in their generated samples, i.e. a GAN that aims to generate patient hospital records may produce a lot of old people but hardly any young people. A large body of work (Tolstikhin et al., 2017;Grover & Ermon, 2017;Ghosh et al., 2017;Hoang et al., 2018) aims to fix diversity issues by ensembling GANs. Some of these approaches use boosting (Tolstikhin et al., 2017;Grover & Ermon, 2017), others use multiple generators (Ghosh et al., 2017;Hoang et al., 2018), discriminators (Nguyen et al., 2017;Ghosh et al., 2017), or dropout (Mordido et al., 2018). These approaches are entirely generative performance focused, and do not consider any application for or insight into improving some downstream ML task. Most importantly, they do not consider how data should be published, and these methods result still in a single synthetic dataset. In Section 4 we explore why publishing a single synthetic dataset does not suffice, even if it is generated by an ensemble.\nUncertainty quantification in ML. Uncertainty quantification has gained significant attention in recent deep learning literature, see (Abdar et al., 2021) for an overview. 
One of the more popular methods is Deep Ensembles (Lakshminarayanan et al., 2016), which provides a straightforward approach to supervised model uncertainty estimation: train multiple networks, create predictions using each network, and consider the variance between the different networks. Even though this approach is simple, Deep Ensembles have been shown to perform very positively in comparison to fully Bayesian methods, possibly due to their ability to capture uncertainty at a more global level of the weight space (Fort et al., 2019). Note that the other very popular UQ method of Monte Carlo dropout (Gal & Ghahramani, 2015) can also be seen as an ensemble method, but one where the networks share most of the parameters. To the best of our knowledge, we are the first to apply UQ to generative models and their downstream tasks. We note that there are works that consider the opposite: applying generative models to UQ (Böhm et al., 2019;Phan et al., 2019;Sensoy et al., 2020;Liu et al., 2022). The aim of these methods is to improve UQ using some form of synthetic data, which is entirely tangential to our work, which is interested in the uncertainty in the generative process itself. See Table 1 for an overview." }, { "figure_ref": [], "heading": "Method Works", "publication_ref": [ "b33", "b16", "b14", "b20", "b26", "b24", "b13", "b5", "b28", "b30", "b25", "b12", "b13", "b5", "b28", "b30", "b25" ], "table_ref": [ "tab_0" ], "text": "Method | (i) | (ii) | (iii) | (iv)\nEnsembles of generative models (Tolstikhin et al., 2017;Grover & Ermon, 2017;Ghosh et al., 2017;Hoang et al., 2018) | ✓ | × | × | ×\nDropout-GAN (Mordido et al., 2018) | ✓ | × | × | ×\nDeep Ensembles (Lakshminarayanan et al., 2016) | × | ✓ | × | ×\nMC dropout (Gal & Ghahramani, 2015) | × | ✓ | × | ×\nGenerative models for UQ (Böhm et al., 2019;Phan et al., 2019;Sensoy et al., 2020;Liu et al., 2022) | ✓ | ✓ | × | ×\nDeep Generative Ensemble (DGE) | ✓ | ✓ | ✓ | ✓" }, { "figure_ref": [], "heading": "Modelling Generative Uncertainty", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Set-up", "publication_ref": [], "table_ref": [], "text": "Let X, Y be random variables on spaces $\mathcal{X}, \mathcal{Y}$, denoting features and label respectively, and let us be given real data $D_r = \{(x^{(i)}, y^{(i)})\}_{i=1}^{n_R}$ from distribution p_r(X, Y). Let G_θ be a generator parameterised by θ that outputs samples with distribution p_θ(X, Y). We denote samples from G_θ by $D_s = \{(x^{(i)}, y^{(i)})\}_{i=1}^{n_S}$. In the typical generative modelling setting, the objective is to minimise:\n$\hat{\theta} = \arg\min_{\theta} D(p_\theta, p_r)$, (1)\nfor some divergence metric D (e.g. KL-divergence or Wasserstein distance). Though in the limit of infinite training data and capacity some generative models may be guaranteed to achieve p_θ = p_r (e.g. for GANs (Goodfellow et al., 2014)), in practice p_θ is only an approximation. Evidently, inaccuracies in p_θ imply that D_s has a different distribution than D_r, and hence this affects any downstream task T we may perform using D_s. This task T can depend directly on the synthetic data-e.g. estimating the density at some point-or indirectly-e.g. estimating treatment effects or making predictions by first training some supervised ML model g. Thus, the variable T is a random variable itself, due to it depending on the random D_s, as well as on possible training randomness (e.g. if it is a prediction of a downstream neural network). In any case, we want to take into account the uncertainty in θ when performing this task."
}, { "figure_ref": [], "heading": "Influence of Data on Downstream Task", "publication_ref": [], "table_ref": [], "text": "To account for the synthetic data generation process when computing downstream T , let us consider the distribution of T . Let us denote the distribution of downstream T w.r.t. the real data D r as p(T |D r ), we can write:\np(T |D r ) = p(T |D s )p(D s |θ)p(θ|D r )dD s dθ. (2)\nLet us look at the right-hand-side terms in turn. p(T |D s ) is the distribution of T conditional on the synthetic data, which is entirely dependent on the downstream task and something we have no control over as a data publisher. The term p(D s |θ) is the output of the synthetic data generator for some θ, which we can sample from, but usually have no explicit expression for (in case of deep generative models). At last, p(θ|D r ) is the distribution over the generative modelling parameters given the real data. This is the term we are most interested in; it reflects the uncertainty we have in the generative model parameters themselves.\nComputing the integral in Eq. 2 exactly is intractable for most settings of interest (e.g. if the synthetic data is generated using a GAN). However if a data user would have expressions for all of the terms in Eq. 2, they could use Monte Carlo integration for computing any statistic of interest (e.g. the variance). \n[T ] = 1 K k\nT k and variance\nVar T ∼ p(T |Dr) (T ) = 1 K-1 k ( T k -E T ∼ p(T |Dr) [T ]\n) 2 Evidently, there is a trade-off when choosing K: a larger K will give more accurate estimates, but also larger computational costs. We will study this further in the experiments." }, { "figure_ref": [], "heading": "Modelling the Posterior over θ", "publication_ref": [], "table_ref": [], "text": "So how do we model p(θ|D r )? The Bayesian approach could motivate us to parameterise the forward generative process, giving p(D r |θ) = i p θ (z), and some prior p(θ), which would allow computing the posterior over θ: p(θ|D r ) = p(θ)p (Dr|θ) p(θ)p(Dr|θ)dθ . Computing the denominator is intractable for deep generative models. Consequently, we need to approximate this further. We draw inspiration from the supervised uncertainty quantification (UQ) literature, which aims to estimate p(ϕ|D) for some predictive model parameterised by ϕ trained on data D. We borrow a popular technique: Deep Ensembles. " }, { "figure_ref": [], "heading": "Approximating the", "publication_ref": [ "b39", "b12", "b6", "b31", "b23", "b32", "b24" ], "table_ref": [], "text": "p DGE (θ|D r ) = 1 K k δ(θ = θk )\n, after which we can use this distribution for computing any downstream statistic of interest. This is a strong assumption and indeed a crude Bayesian approximation of the true posterior-see (Wilson & Izmailov, 2021) for an in-depth discussion. Nonetheless, Deep Ensembles have a solid track record in predictive UQ, often outperforming more traditional Bayesian methods (Fort et al., 2019).\nChoosing the Baselines. An advantage of DGE is that it allows for different generative model classes. In this paper we focus on tabular data, because many high-stake applications of synthetic data are predominantly tabular, e.g. credit scoring and medical forecasting (Borisov et al., 2021;Shwartz-Ziv & Armon, 2022). Additionally, nearly 79% of data scientists work with tabular data on a daily basis, compared to only 14% who work with modalities such as images (Kaggle, 2017). 
We choose a GAN architecture, specifically CTGAN (Xu et al., 2019b), for its widespread use, and its high expected diversity between individually trained models-cf. VAEs, which tend to learn fuzzier distributions (Theis et al., 2015). We use (i) random initialization for each generative model, and (ii) the same real data for training each base model, as this has been shown to perform well in the supervised domain (Lakshminarayanan et al., 2016)." }, { "figure_ref": [], "heading": "Empirical Study: the Effect of DGE on Downstream ML", "publication_ref": [], "table_ref": [], "text": "In this section we consider fundamental supervised learning tasks, and how these can be applied in conjunction with synthetic data. We consider using DGE versus the naive synthetic data approach. All experimental details can be found in Appendix A.1 " }, { "figure_ref": [], "heading": "Synthetic Data for Model Training", "publication_ref": [ "b10" ], "table_ref": [], "text": "Let us start by considering how we can train better predictive models on synthetic data. We define \"better\", in terms of a lower generalization error. Choose some predictive loss function L : R 2 → R and predictor g ϕ : X → Y parameterised by ϕ. The generalization error is defined as Err(g ϕ , p r ) = E pr L(g ϕ (X), Y ). In the case of classification and a proper scoring rule L, the optimal solution is\ng ϕ (x) = p r (Y |X = x).\nBecause we do not have data from the real distribution, we cannot minimise the error w.r.t. the real distribution directly. Instead, we aim to choose ϕ that minimises:\nE θ [Err(g ϕ , p θ )] = E θ [E (X,Y )∼p θ (X,Y ) L(g ϕ (X), Y ))].\n(3) The typical synthetic data approach for model training uses a single synthetic dataset. This will yield high variance in the trained model, because we effectively minimise w.r.t. p θ1 (Y |X) for a single θ 1 ∼ p(θ|D r ). Instead, we use an empirical estimate as given by Eq. 2: we train a predictive model on each synthetic dataset individually, and average predictions. Let us show how this improves performance.\nDatasets. We use a range of datasets with different characteristics of interest: Scikit-learn's Two-moons and Circles toy datasets-simple two-dimensional datasets that we will later use for visualising the learnt models; UCI's Breast Cancer and Adult Census Income (Asuncion & Newman, 2007)-the former a very small dataset such that synthetic data generation is hard, the latter a large dataset with mixed categorical and numerical features; and SEER (Duggan et al., 2016) and Kaggle's Covid-19 dataset (Ministry of Health of Mexico, 2020), two large medical datasets with some highly unbalanced categorical features.\nSet-up. We first show that downstream models trained on DGE synthetic data, perform better on real data than baselines. We compare against a classifier trained on a single synthetic dataset (Naive (S)) and a pseudo-oracle trained on the real, generative training data (D r -model). For fair evaluation, we also include the use of an ensemble of classifiers (Naive (E)) and use the same MLP architecture for all predictive models. At last, we also include a naive generative ensemble approach that concatenates all synthetic datasets (Naive (C)).\nWe consider the TSTR AUC performance, computed on a hold-out dataset of real data that has not been used for training the generative model. We use CTGAN (Xu et al., 2019b) with the same hyperparameters and architecture in all experiments. 
In Appendix B we include experiments for other models, showing similar results, and so too do the CelebA and CIFAR-10 results in Appendix D. See Appendix A for experimental details.\nResults. See Table 2. Training the downstream models on an ensemble of synthetic datasets achieves almost D r -model performance on real data. In contrast, the naive baseline is often a few percent lower. This performance increase is not merely due to ensembles generally being more robust, since the Naive (ensemble) method does not perform as well as DGE 20 , despite using 20 base models. Note that the performance of DGE with K = 20 is higher on average, but even for K = 5 we find a significant advantage over the naive (i.e. K = 1) baseline.\nThese results are unsurprising. When the generative model is erroneous-e.g. it overfits or underfits-we expect the naive method to perform poorer than the DGE method, since the different models in the DGE are unlikely to make the same mistakes. Inevitably the effect of generative model overfitting is dependent on the downstream task. A simpler downstream task-or simpler downstream model-is less prone to copying the generative overfitting. We will explore this further in Section 4.2.2.\nTakeaway: By generating multiple synthetic datasets (e.g. K = 5), training a prediction model on each, and averaging predictions, one can achieve better performance on real data compared to the naive approach of training a model on a single synthetic dataset. The largest gains are observed when the generative model tends to overfit." }, { "figure_ref": [ "fig_7" ], "heading": "Synthetic Data for Model Evaluation and Selection", "publication_ref": [ "b17" ], "table_ref": [], "text": "Model evaluation and selection (ME/MS) is key to machine learning. ME aims to estimate a model's generalisation error Err(g ϕ , p r ), while MS aims to choose the model (among a list of models) that minimises this error.\nEstimating the generalization error is usually achieved through train-test splits or cross-validation (Hastie et al., 2001). Unfortunately, this approach is not possible when we are in the synthetic data regime where an ML practitioner has no access to real data-see Figure 1 s,test . We use an MLP as downstream model g, trained on a single synthetic training set. We compare the g performance evaluation of the naive approach, our DGE approach and pseudo-oracle evaluation (Oracle)-the latter being the performance of g on a hold-out real dataset. We report results over 20 runs. " }, { "figure_ref": [ "fig_2" ], "heading": "Results.", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In Table 3 we see that the DGE and naive evaluation approaches perform very differently. We see the naive approach overestimates the model performance. This is due to a synthetic data variant of data leakage: overfitting in the generative model is copied by the trained model, but is also reflected in the synthetic test set. DGE evaluation suffers less from this bias, since the test set is from a different generative model, i.e. different draws from θ k . Conversely, generative errors cause DGE to underestimate the downstream performance often. Figure 3 shows the same story by varying the generator complexity. An underfitted generative model affects both approaches similarly-slightly underestimated performances. On the other hand, an overfitted generative model leads to significantly overestimated performance by the naive approach. 
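One possible implementation of the two evaluation protocols compared here is sketched below: the naive protocol splits a single synthetic dataset into train and test parts, whereas DGE evaluation tests the same model on the remaining synthetic datasets. The data containers and MLP configuration are assumptions following Appendix A.2.

```python
# Sketch of naive vs. DGE model evaluation for a downstream model g trained
# on synthetic dataset D_s^1. The naive protocol evaluates g on a held-out
# part of the same synthetic dataset; DGE evaluation averages the score over
# the other synthetic datasets D_s^2, ..., D_s^K.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def naive_and_dge_evaluation(synthetic_datasets):
    X1, y1 = synthetic_datasets[0]
    X_tr, X_te, y_tr, y_te = train_test_split(X1, y1, test_size=0.3, random_state=0)
    g = MLPClassifier(hidden_layer_sizes=(100, 100), random_state=0).fit(X_tr, y_tr)

    naive_auc = roc_auc_score(y_te, g.predict_proba(X_te)[:, 1])
    dge_auc = np.mean([
        roc_auc_score(y_k, g.predict_proba(X_k)[:, 1])
        for X_k, y_k in synthetic_datasets[1:]
    ])
    return naive_auc, dge_auc
```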
Let us explore how the downstream task plays its part, through considering different downstream models.\nTakeaway: Using train-test-splits on single synthetic datasets can lead to significantly overestimated real-world Table 2. Using an ensemble of synthetic datasets for downstream model training improves real-world performance. AUC performance of different approaches on different datasets, when trained on synthetic data and tested on real data. For the naive methods, we report the median performance across 20 synthetic datasets. Naive (S) uses a single classifier, Naive (E) uses an ensemble of classifiers, though both are trained on a single synthetic dataset. Naive (C) uses all 20 synthetic datasets but naively concatenates them before training a classifier. Note that DGEK gives consistently better performance on average, even for K = 5." }, { "figure_ref": [], "heading": "Moons", "publication_ref": [], "table_ref": [], "text": "Circles Adult Income Breast Cancer SEER Covid-19 Mean D r -model 0.996 ± 0.0 0.868 ± 0.0 0.87 ± 0.0 0.993 ± 0.0 0.907 ± 0.0 0.928 ± 0.001 0.927 Naive (S) 0.981 ± 0.006 0.801 ± 0.054 0.821 ± 0.006 0.975 ± 0.008 0.885 ± 0.006 0.869 ± 0.02 0.889 Naive (E) 0.981 ± 0.006 0.802 ± 0.053 0.837 ± 0.004 0.978 ± 0.009 0.888 ± 0.006 0.895 ± 0.015 0.897 Naive (C) 0.985 ± 0.001 0.862 ± 0.005 0.852 ± 0.007 0.974 ± 0.011 0.906 ± 0.001 0.895 ± 0.005 0. Results. We see that DGE ranks models similar to the oracle, whereas the naive approach favours complex models. The latter is again explained by the naive approach's positive bias to a model that captures the generative model's overfitting. Congeniality plays a big role this time; the naive approach is inclined to choose a predictive model that is similar to the generative model's parameterisation. Like most deep generative models, CTGAN uses a neural network architecture for implicitly encoding the label distribution, so it is predictable that this can be best replicated by a deep MLP as downstream model. 2 The naive approach becomes 2 This argument is not entirely straightforward. Since con-even worse when the amount of synthetic data increases, since for more complex models this allows better learning of the generative label distribution-see Appendix C for experiments.\nTakeaway: The naive approach consistently selects more complex models that can copy the generative model's p θ (Y |X), but which generalise poorly to real data. DGE has a more accurate estimate of real-world performance due to evaluating on other synthetic datasets, which leads to it selecting simpler downstream models that generalise better to real-world data." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Model Uncertainty", "publication_ref": [ "b9", "b8", "b4" ], "table_ref": [ "tab_8", "tab_8", "tab_0" ], "text": "Going one step further than evaluation, we look at downstream predictive uncertainty quantification. Using synthetic data like it is real does not account for uncertainty in the generative process itself, which leads to underestimated downstream uncertainty. We focus on classification and define uncertainty in terms of the estimated probability of the predicted label. We show the difference between generative 104 5 7 6 1 2 3 4 DGE 5 0.797 ± 0.189 0.819 ± 0.184 0.831 ± 0.132 0.859 ± 0.07 0.87 ± 0.072 0.873 ± 0.137 0.884 ± 0.088 7 6 5 4 3 2 1 DGE 10 0.798 ± 0.144 0.811 ± 0.171 0.826 ± 0.12 0.85 ± 0.064 0.861 ± 0.063 0.871 ± 0.101 0.878 ± 0. 065 7 6 5 4 3 2 1 DGE 20 0 and predictive uncertainty.\nSet-up. 
To separate generative and predictive uncertainty, we include a Deep Ensembles predictive model (Lakshminarayanan et al., 2016) that provides uncertainty on the predictive level. Effectively, we compare the sample mean and variance in P (Y = 1|x) of (i) a DGE approach, in which each synthetic dataset is used for training a predictive model, (ii) a naive approach, in which one single synthetic dataset is used to train K predictive models in a Deep Ensembles fashion, (iii) Naive (C), in which all synthetic datasets are concatenated and a Deep Ensembles is trained on this dataset. We add toy dataset Gaussian for visualization-see Appendix A for details-and remove the Breast cancer dataset due to insufficient number of real test samples for accurate assessment.\nResults. First, let us draw the confidence-accuracy curves for different methods on the real-world datasets, see Figure 4. We see that DGE is able to make consistently more confident predictions more successfully than the naive approach. DGE performs en par with the D r -model, and in some cases outperforms it; this is likely due to the generative models effectively augmenting the data, which has been shown to increase downstream robustness (Antoniou et al., 2017;Dina et al., 2022;Das et al., 2022;Bing et al., 2022).\nLet us try to understand why uncertainty quantification on the single synthetic dataset does not suffice, by separating generative and predictive uncertainty. Specifically, let us plot the sample standard deviation of the different classifiers in the naive Deep Ensembles approach versus the DGE 20 approach, see Figure 5 and nota bene the different colorbar scales. We include the Naive (C) baseline, which is a naive approach that simply concatenates all synthetic datasets and runs a Deep Ensembles on this, but does not explicitly take into account generative uncertainty. We include this baseline to show that typical generative ensembles that result in a single dataset (see Table 1), fail for UQ. We see that the naive approaches lead to poor diversity between different models within the ensemble. Since they cannot capture the generative uncertainty, these approaches provide little insight into how the model may do on real data. On the other hand, the diversity between DGE 20 classifiers is much higher and arguably provides more intuitive explanations. We also see that the naive approaches overestimate confidence in low-density regions-even if it is on a decision boundary-whereas DGE does not. This is unsurprising, since generative uncertainty is also highest in these regions.\nTakeaway: DGE provides uncertainty on the generative level, which the naive approaches cannot. It is thus essential to ensure individual synthetic datasets in the DGE are published separately (cf. concatenated and shuffled)" }, { "figure_ref": [ "fig_6" ], "heading": "Underrepresented Groups", "publication_ref": [ "b7", "b46", "b4" ], "table_ref": [], "text": "The generative process is expected to be most inaccurate for regions with few samples. Since low-density regions can correspond to minority groups in the population, this would be disconcerting for the reliability of our synthetic data. In this section we explore the quality of downstream models on underrepresented groups in the dataset.\nSet-up. We investigate the Covid-19 dataset, because it consists of mostly categorical data with unbalanced features. Let us define \"underrepresented groups\" in terms of minority categories of individual features-see Appendix A. 
We rerun the experiment from 4.1 and evaluate performance on the minority groups, see Figure 6. We plot the performance relative to a D r -model, which is trained on D r and also evaluated on underrepresented groups.\nResults. Note the distinctly different behaviour between the naive and DGE approach. The naive approach performs worse than the D r -model for most underrepresented groups, even though it performs comparably overall. On the other hand, the DGE approach consistently outperforms the D rmodel. The latter is explained by interpreting DGE as a data augmentation method, in which the synthetic datasets (in this case 20 times the size of the real data) replace the real data. Data augmentation can effectively regularize trained model (Chawla et al., 2002;Zhang et al., 2017;Antoniou et al., 2017), and lead to better performance on underrepresented groups (Bing et al., 2022).\nTakeaway: Closer inspection shows that naive downstream model training leads to particularly poor performance on small subgroups in the population. On the other hand, DGE has a regularization effect on the downstream predictor, consistently outperforming the D r -model on minority groups. " }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "DGE. We have aimed to highlight a gap in our understanding of synthetic data: how to do ML on synthetic data in practice, and the validity of downstream results. We have shown that the standard synthetic data methodologytreating the synthetic data like it is real-leads to poor downstream trained models, evaluation, and uncertainty, with a tendency to do worst for underrepresented groups. We have shown that DGE, which provides multiple synthetic datasets, is a simple and scalable way of avoiding these problems partly. We hope this work will result in a significant change in how synthetic datasets are published, create more interest in synthetic data's use and limitations, and contribute to the trustworthiness of synthetic data. Table 5 highlights the takeaways for practitioners.\nPractical Considerations. Let us highlight practical considerations. First, DGE is not a perfect approximation for the true posterior of generative model parameters. Further research is required into systemic errors between synthetic datasets, e.g. if the true model is not well approximated by the generator class. Second, the use of ensembles requires extra compute at generation, downstream training, and downstream inference stage. In practice, we have seen that even for K = 5 we can get significant gains compared to the naive approach. Additionally, cost may be further reduced by sharing parameters across models or using MC drop-out in the generative model. Third, each synthetic dataset is derived from the same real data, hence there is some data leakage between train and test sets. At last, if privacy is key, the ensemble approach of DGE requires scaling the privacy budget for each synthetic data generator to account for the multiple generators. Please see Appendix E for a longer discussion. Synthetic data publishers (1) Generate multiple synthetic datasets with different random seeds. This allows better downstream ML, even if the number of generated datasets is small (e.g. K = 5).\n(2) These datasets need to be distinguishable-i.e. publishing each set in separate files or create a variable that denotes the set-since for some downstream tasks (e.g. UQ) it does not suffice to simply concatenate datasets without specifying the source of each sample. 
(3) Include meta-data on the generative model, e.g. the amount of training data. Publishing the model class itself is also advisable, since the generative class can have implication on downstream model selection through congeniality.\nSynthetic data users (1) Do not treat synthetic data like it is real. (2) Train models an ensemble of synthetic datasets, to improve model performance on real data. (3) For model evaluation and selection to reflect real-world performance, use train-test splits on the synthetic dataset level: train on some synthetic datasets, and evaluate on others. (4) Be careful with UQ in the synthetic data regime, as naive UQ on a single dataset does not consider uncertainty in the generative process itself. By ensembling models trained on different synthetic datasets, one can quantify generative uncertainty. (5) All of the above are extra relevant for ML on underrepresented groups, since the generative model is more likely to be inaccurate in these regions.\nExtensions to Other Downstream Tasks. We have explored using DGE for downstream prediction tasks, but future work could consider other domains (e.g. unsupervised learning, reinforcement learning, statistical analyses)." }, { "figure_ref": [], "heading": "A. Experimental Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1. Data", "publication_ref": [ "b3", "b10" ], "table_ref": [], "text": "Throughout the paper we have used Scikit-learn's Twomoons and Circles toy datasets, UCI's Breast Cancer and Adult Census Income (Asuncion & Newman, 2007), and SEER (Duggan et al., 2016) and Mexican Covid-19 dataset (Ministry of Health of Mexico, 2020), two large medical datasets with some highly unbalanced categorical features. For the uncertainty experiment, we have added a third toy dataset Gaussian. The dataset was generated using:\nX 1 , X 2 iid ∼ N (0, 1) Y ∼ Bern(t(X 1 + 1)) t(x) =      0 , x < 0 1 , x > 2 1 2 x , otherwise\nThis dataset was chosen because the optimal decision boundary is exact, ŷ = 1(x 1 > 0)." }, { "figure_ref": [], "heading": "A.2. Experimental Settings", "publication_ref": [], "table_ref": [], "text": "Throughout, we have used CTGAN (Xu et al., 2019c) as the generative model, with hidden layer size 500 for both discriminator and generator. Unless otherwise stated, as downstream model we use Scikit-learn's MLP with 2 hidden layers of size 100, cross-entropy loss. We use learning rate 10 -3 , L 2 regularization 10 -4 , and optimization using Adam for all neural networks. 5. We take a grid of test data and create predictions for each approach.\n6. For each approach, we take the mean and standard deviation between members of the ensemble.\n7. We plot the contour where the mean equals 0.5 (i.e. the decision boundary), and plot the standard deviations over the full grid.\nSection 4.4. We use the same set-up as Section 4.1-2000 generative training samples with two hidden layers for both the CTGAN and the downstream MLP-however we use a different evaluation set (Step 5 in the pipeline). We take an agnostic approach in defining minorities: we loop through all features and for each feature we choose a minority subset based on this feature alone. For categorical features, we choose the smallest category with less than 20% and more than 0.5% of the population (and skip the feature if no such category exists). For continuous variable age, we choose the 10% oldest patients. Categories are given in the figure behind the feature name." 
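Returning to the Gaussian toy dataset defined in A.1, a direct implementation of its sampling procedure (a sketch; variable names are ours) is:

```python
# Gaussian toy dataset from A.1: X1, X2 ~ N(0, 1) i.i.d. and
# Y ~ Bern(t(X1 + 1)), where t(x) = 0 for x < 0, 1 for x > 2, and x/2 otherwise.
import numpy as np

def sample_gaussian_toy(n, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(size=(n, 2))
    t = np.clip((X[:, 0] + 1) / 2.0, 0.0, 1.0)   # equivalent to the piecewise t(X1 + 1)
    y = rng.binomial(1, t)
    return X, y

X, y = sample_gaussian_toy(2000)
# The optimal decision rule is y_hat = 1(X1 > 0).
print(np.mean((X[:, 0] > 0).astype(int) == y))
```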
}, { "figure_ref": [], "heading": "B. Type of Generative Model", "publication_ref": [ "b36", "b45" ], "table_ref": [ "tab_8" ], "text": "In the main paper we have used CTGAN for all experiments, but in this section we show main results generalise to other deep generative models. We use the Synthcity library (Qian et al., 2023) and generate data use different architectures, TVAE, a normalizing flow and ADS-GAN (Yoon et al., 2020)-a privacy GAN. We use default settings for each. We repeat experiment 4.1-see Table 6. We see the DGE K approaches perform significantly better on average, even for K = 5." }, { "figure_ref": [], "heading": "C. Effect of Synthetic Dataset Size", "publication_ref": [], "table_ref": [], "text": "In the model selection experiments, we have seen that the naive approach's failure lies not only in the generative process; it is also dependent on the downstream model having the capacity to pick up on generative errors. We have seen that this implies that larger models are more likely to be chosen, since these may learn and copy overfitted correlations in the generated distribution best. In this appendix we explore another actor in the naive approach's failure: the synthetic dataset size.\nIn Figure C we repeat experiment 4.2.2, but vary the synthetic dataset size from 500 to 20000. The oracle evaluation shows that the downstream trained models become slightly more stable (e.g. deep MLP) when more data is used, but their ranking does not change. The naive approach starts to significantly overestimate the complex models (random forest and deep MLP), giving a very poor and highly datasetsize variable ranking. The DGE approach to model evaluation and selection is much more stable, and follows the oracle closely." }, { "figure_ref": [], "heading": "D. Image Experiment", "publication_ref": [ "b18" ], "table_ref": [], "text": "In the main paper, we focused on tabular data-see Section 3.4 for a motivation-and a GAN-based generator. Evidently, errors in the generative process are not constrained to tabular data. We include preliminary results for CIFAR-10 and CelebA (64× 64), generated using a small vanilla conditional Denoising Diffusion Probabilistic Model (Ho et al., 2020). For both cases, we follow the same set-up as in Section 4. " }, { "figure_ref": [], "heading": "E. Limitations", "publication_ref": [ "b13", "b38", "b13", "b12", "b37", "b11", "b24", "b24" ], "table_ref": [], "text": "Ensembles. A limitation of DGE is the cost of training and inference scaling linearly with the number of models. In practice, this need not always be a problem. First, we have shown that even for K = 5, synthetic datasets in the ensemble vastly outperform the currently-standard naive baseline. Second, generative models can be trained in parallel and so can downstream models. Third, the most expensive part of our experimentation pipeline is the generative stage, which in practice would be performed by a data publisher who runs DGE only once. Fourth, many datasets are not as big as high-resolution image datasets, and are thus cheaper to generate and train models on; training a CTGAN on the Adult dataset and generating data takes less than four minutes on a consumer PC with an RTX 3080 GPU. Fifth, though we propose a deep generative ensemble with unshared weights across generative models, this can be extended to partly shared weights--e.g. 
future work could consider a single network with MC-dropout as UQ method (Gal & Ghahramani, 2015).\nBesides cost, a deep ensembles are a crude approximation for the true posterior of generative model parameters. The quality of DGE relies partly on how well the base model can approximate the true distribution. In the supervised learning literature, Wenzel et al. (2020) propose an ensemble method based on this idea, that ensembles not just over model parameters, but hyperparameters too. Other methods can be used for approximating the posterior over θ. One can draw inspiration from UQ literature, e.g. use MC dropout UQ (Gal & Ghahramani, 2015), which often performs poorer than DE (Fort et al., 2019), but scales better. Wen et al. (2020) aim to achieve the best of both worlds by sharing parameters across ensemble members, but promoting diversity in model outcome. Future work can extend these methods to the synthetic data generation regime. At last, generating multiple synthetic datasets has privacy implications. If differential privacy guarantees are required, one needs to scale the privacy budget of each synthetic dataset (Dwork et al., 2006).\nData Leakage. In this work we follow the usual Deep Ensembles approach and train each generative model on all real data. Though the original paper (Lakshminarayanan et al., 2016) finds that this performs better for uncertainty estimation, one may wonder whether this does not lead to an underestimate of generalisation error, due to the synthetic test and training sets both being derived from the same real data. This bias can be reduce by training the generators on disjoint subsets of the real data, such that one can train downstream models on some synthetic dataset, and evaluate on another. The downside of this naive approach is that the amount of training data for each generator is just 1/K the total amount, and that that by itself could lead to weaker (or more overfitted) generators. In this work we have decided to use all data for training the generators, following (Lakshminarayanan et al., 2016). As seen in Section 4.2.1, there are no signs for a positive bias in model evaluation when using the DGE method, despite all generative models using the same training data. This is possibly explained the hope that generators may overfit to some data, but that different generators overfit in different ways-hence the variance across downstream results from different synthetic datasets is a correct measure of uncertainty. This could motivate increasing independence of generators, e.g. through different model choices and hyperparameters, but we leave this for further work." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We would like to thank the ICML reviewers and area chairs for their time and feedback, as well as Nabeel Seedat who reviewed an early draft of the paper. Additionally, we would like to acknowledge the Office of Navel Research UK, who funded this research." } ]
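As a rough illustration of the disjoint-subset alternative raised in the Data Leakage paragraph above, the sketch below trains each generator on a different 1/K fold of the real data, so downstream models trained on one synthetic set can be evaluated on sets derived from held-out real data. A per-class GaussianMixture is used as a hypothetical stand-in for the deep generative model; all names are illustrative rather than taken from the paper's code.

```python
# Sketch: one synthetic (X, y) set per generator, where generator k only sees fold k
# of the real data (reduces train/test leakage at the cost of 1/K training data each).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.mixture import GaussianMixture

def disjoint_synthetic_sets(X, y, K=5, n_per_class=1000, seed=0):
    synthetic_sets = []
    kf = KFold(n_splits=K, shuffle=True, random_state=seed)
    for k, (_, fold_idx) in enumerate(kf.split(X)):
        Xf, yf = X[fold_idx], y[fold_idx]
        Xs, ys = [], []
        for c in np.unique(yf):  # assumes every class occurs in every fold
            Xc = Xf[yf == c]
            gm = GaussianMixture(n_components=min(5, len(Xc)),
                                 random_state=seed + k).fit(Xc)
            Xs.append(gm.sample(n_per_class)[0])
            ys.append(np.full(n_per_class, c))
        synthetic_sets.append((np.vstack(Xs), np.concatenate(ys)))
    return synthetic_sets
```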
Generating synthetic data through generative models is gaining interest in the ML community and beyond, promising a future where datasets can be tailored to individual needs. Unfortunately, synthetic data is usually not perfect, resulting in potential errors in downstream tasks. In this work we explore how the generative process affects the downstream ML task. We show that the naive synthetic data approach-using synthetic data as if it is real-leads to downstream models and analyses that do not generalize well to real data. As a first step towards better ML in the synthetic data regime, we introduce Deep Generative Ensemble (DGE)-a framework inspired by Deep Ensembles that aims to implicitly approximate the posterior distribution over the generative process model parameters. DGE improves downstream model training, evaluation, and uncertainty quantification, vastly outperforming the naive approach on average. The largest improvements are achieved for minority classes and low-density regions of the original data, for which the generative uncertainty is largest.
Synthetic Data, Real Errors: How (Not) to Publish and Use Synthetic Data
[ { "figure_caption": "Figure 2 .2Figure 2. Conclusions drawn from synthetic data do not always transfer to real data.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "4.2.1. MODEL EVALUATION Set-up. We split the real data into a training and a test set, and as before train K generative models on the training set, providing {D k s } K k=1 . Subsequently, we split up each D k s into a training and a test set for the downstream model, D k s,train and D k", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Varying generative size for SEER dataset, shows model evaluation becomes overestimated for the naive approach when the generative model starts overfitting. DGE is more robust to this.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "ditional generators like CTGAN usually model the conditional feature distribution p(X|Y = y) using an NN, it is not necessarily true that the output p(Y |X) itself falls in the same model class as the generator. Nonetheless, we do expect the generator output to show similar behaviour (e.g. ReLu artifacts) as the underlying NN, since p(Y |X) = p(X|Y )p(Y )/( y p(X|Y = y)p(Y = y)).", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Confidence accuracy curves. Given threshold τ (x-axis), the y-axis shows the accuracy on test sample with confidence P (Y = ŷ|x) > τ . DGE achieves consistently higher accuracy for different confidence thresholds.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Comparison of predictive versus generative uncertainty. We plot the sample std of different ensembles, where columns denote datasets and rows approaches. The Dr decision boundary ( P (Y = 1|x) = 0.5) is drawn in dotted white and other decision boundaries in dashed red.In almost all cases, these decision boundaries are significantly different. Meanwhile, the std of the naive approaches does not reflect this deviation, hence it underestimates the uncertainty-N.B. the different color scales. This is caused by these methods not considering the generative uncertainty. DGE20 is preferred, as it does reflect generative uncertainty.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Accuracy of downstream model relative to Dr-model, evaluated on minority subsets. The naive approach tends to underperform the Dr-model on minority sets, whereas DGE outperforms the Dr-model.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Sections 4. 1 .1For CTGAN we use 2 hidden layers of sizes 500 for all datasets, and we use 2000 examples for training the generative model (except for breast cancer, for which we use 80% of data) and generate 2000 samples. The pipeline consists of the following. 1. Real data is split into D r,train and D r,test 2. Generative model G is trained on D r,train and used to generate. This is repeated K = 20 times, for different random seeds. 3. Each generative model is used to generate a respective dataset D k s . 4. {D k s } K k=1 is used for training the different downstream approaches. 
(a) Naive single (S): the first synthetic dataset is used to train the downstream classifier. (b) Naive ensemble (E): the first synthetic dataset is used to train 20 classifiers with different seeds. Outputted probabilities are averaged. (c) Naive concatenated (C): synthetic sets {D k s } 2 k=1 0 are concatenated and the full dataset is used for training downstream classifier. (d) DGE K : a classifier is trained on each of the synthetic datasets, and the resulting model is the ensemble that averages the outputted probabilities of the first K classifiers. (e) D r -model: this model is trained on the real data, D r,train , and not the synthetic data.", "figure_data": "", "figure_id": "fig_7", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "(a) Naive ensemble (E): the first synthetic dataset is used to train 20 classifiers with different seeds. (b) Naive concatenated (C): synthetic sets {D k s } 2 k=1 0 are concatenated and the full dataset is used for training 20 downstream classifiers with different seeds. (c) DGE 20 : a classifier is trained on each of the synthetic datasets. (d) D r -model: 29 downstream models are trained on the real data, D r,train .", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure7. Varying synthetic dataset size has a large effect on the naive model evaluation and selection. The naive approach starts overestimating complex models when the dataset size increases, while DGE20 follows the oracle much more closely (±1 difference). Ranking: lower is better", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Comparison to related work. (i) Focuses on generative models, (ii) Considers downstream ML tasks, (iii) Considers error in the generative process, (iv) Provides guidelines to synthetic data publishers and users.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": ". The naive approach is to have a single synthetic dataset and treat it like real data-i.e. one trains a predictive model f on D k This induces bias, since the estimate is taken w.r.t. the same generative model. It also has a high variance, because it is an estimate w.r.t. a single draw of θ k . Can we do better? As a closer alternative to using an independent real test setwhich we do not have-we evaluate w.r.t. other test sets, i.e. ∪ i̸ =k D i s,test . This reduces the bias and variance, due to us not using the same model parameters θ k for training and evaluation. Let us explore in turn how this influences model evaluation and selection.", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Naive synthetic data model evaluation overestimates real-world performance. Performance of a fixed model, evaluated using different approaches. The naive approach overestimates performance, whereas the DGE approach slightly underestimates it. In this part, we ask the question: can we decide on which predictive model to use? We use the same set-up as before, but repeat the experiment for the following models: logistic regression, random forest, 5-nearest neighbour, XGBoost, SVM, and a deep MLP-see Appendix A for experimental details. 
We consider the ranking of models by different approaches (naive, DGE, and oracle).", "figure_data": "912", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Naive evaluation does not preserve real-world model ranking, whereas DGE does. Model selection on SEER, though evaluating different model classes trained on a downstream synthetic dataset. We see that ranking of models is preserved in all three DGE approaches, in contrast to to the naive approach. Results show mean AUC and standard deviation, taken across 20 runs.", "figure_data": "deep MLPSVMkNNXGBoostRFMLPLog. reg.RankingOracle 0.767 ± 0.158 0.781 ± 0.160.8 ± 0.1020.838 ± 0.054 0.844 ± 0.053 0.849 ± 0.116 0.869 ± 0.062 7 6 5 4 3 2 1Naive0.849 ± 0.113 0.804 ± 0.187 0.823 ± 0.118 0.885 ± 0.085 0.873 ± 0.075 0.867 ± 0.143 0.866 ± 0.", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Guidelines for practitioners.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Section 4.3. We use a similar settings as in Section 4.1-2000 generative training samples, two CTGAN hidden layers and one MLP hidden layer. The MLP uses cross-entropy, which is a proper scoring rule, hence this warrants its use for uncertainty quantification(Lakshminarayanan et al., 2016).", "figure_data": "The pipeline is as follows.1. Real data is split into D r,train and D r,test2. Generative model G is trained on D r,train and used to generate. This is repeated K = 20 times, for different5. Each approach's model is tested on D r,test .random seeds.6. Steps 2-5 are repeated for 10 runs with different seeds.3. Each generative model is used to generate a respectiveSection 4.2.1. We increase the generative model and in-dataset D k s .crease the number of training samples. Specifically, we use 5000 samples for training the generative model and gener-4. {D k s } K k=1 is used for training the different downstream Deep Ensembles approaches.ate 5000 samples. We use 3 hidden layers for CTGAN's discriminator and generator, and 2 hidden layers for the downstream model. In the Model Selection experiment, weuse default settings for Scikit-learn classifiers-random for-est has 100 estimators, kNN uses 5 neighbours, SVM usesan RBF kernel-and use 3 hidden layers for deep MLP andjust 1 hiden layer for MLP.The pipeline is similar as before, with the same steps 1-3.1. Real data is split into D r,train and D r,test2. Generative model G is trained on D r,train and used togenerate. This is repeated K = 20 times, for differentrandom seeds.3. Each generative model is used to generate a respectivedataset D k s .4. Each synthetic dataset is split into a train and test splitp train = 0.8, D k s,train and D k s,test .5. Each set D train sis used for training a downstreammodel, g k .6. {D k s,test } K k=1 are used for evaluating the models g k ,using different approaches.(a) Naive: g k is evaluated on D k S,test .(b) DGE K : g k is evaluated on ∪ i̸ =k D i s,test (limitedto K = 5, 10 or 20 other datasets)(c) Oracle: this is a pseudo-oracle that evaluates g kon real test set D test r.7. The previous step is repeated for k = 1, ..., 20 andresults are averaged per approach.8. (Model selection) For the model selection experiment,steps 4-7 are repeated for the aforementioned modelclasses (XGBoost, SVM, etc). 
The target ranking isgiven by the Oracle model.", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "1, though we use the full training split of the original data for training. For CelebA, we use annotation male as label, and generate 10000 samples per class. For CIFAR-10, we generate 6000 samples per class. For the model trained on real data (D r ), we use the full training split. We evaluate on the real test split. See results in Table7Repeating experiment 4.1 for different generative model classes; AUC of downstream model trained on synthetic data, evaluated on real data. DGEK consistently outperforms the naive approaches on all datasets and generative model classes (except Moons for which it performs similarly).", "figure_data": "(a) TVAEMoonsCirclesAdult Income Breast Cancer SEERCOVID-19MeanOracle0.996 ± 0.00.868 ± 0.00.871 ± 0.00.993 ± 0.00.907 ± 0.00.928 ± 0.001 0.927Naive (S) 0.989 ± 0.003 0.856 ± 0.006 0.826 ± 0.012 0.97 ± 0.0210.891 ± 0.005 0.867 ± 0.019 0.9Naive (E) 0.989 ± 0.003 0.856 ± 0.006 0.84 ± 0.0090.979 ± 0.016 0.893 ± 0.005 0.892 ± 0.013 0.908Naive (C) 0.991 ± 0.001 0.867 ± 0.002 0.854 ± 0.004 0.975 ± 0.011 0.907 ± 0.001 0.886 ± 0.005 0.913DGE 50.991 ± 0.001 0.866 ± 0.002 0.873 ± 0.003 0.984 ± 0.006 0.906 ± 0.001 0.917 ± 0.004 0.923DGE 100.991 ± 0.001 0.867 ± 0.001 0.883 ± 0.002 0.986 ± 0.002 0.908 ± 0.001 0.927 ± 0.002 0.927DGE 200.991 ± 0.00.868 ± 0.001 0.89 ± 0.0010.987 ± 0.003 0.909 ± 0.00.934 ± 0.002 0.93(b) ADS-GANMoonsCirclesAdult Income Breast Cancer SEERCOVID-19MeanOracle0.996 ± 0.00.868 ± 0.00.871 ± 0.001 0.993 ± 0.00.907 ± 0.00.928 ± 0.001 0.927Naive (S) 0.981 ± 0.011 0.811 ± 0.042 0.811 ± 0.012 0.961 ± 0.020.886 ± 0.007 0.881 ± 0.021 0.888Naive (E) 0.981 ± 0.012 0.81 ± 0.0430.829 ± 0.009 0.972 ± 0.012 0.888 ± 0.006 0.904 ± 0.012 0.897Naive (C) 0.989 ± 0.001 0.861 ± 0.006 0.846 ± 0.003 0.965 ± 0.013 0.904 ± 0.001 0.897 ± 0.007 0.91DGE 50.988 ± 0.003 0.852 ± 0.011 0.872 ± 0.003 0.985 ± 0.006 0.902 ± 0.003 0.928 ± 0.006 0.921DGE 100.988 ± 0.002 0.863 ± 0.004 0.883 ± 0.002 0.986 ± 0.005 0.905 ± 0.001 0.937 ± 0.003 0.927DGE 200.988 ± 0.001 0.865 ± 0.002 0.889 ± 0.001 0.986 ± 0.003 0.906 ± 0.001 0.943 ± 0.002 0.93(c) Normalizing flowMoonsCirclesAdult Income Breast Cancer SEERCOVID-19MeanOracle0.996 ± 0.00.868 ± 0.00.871 ± 0.00.993 ± 0.00.907 ± 0.00.928 ± 0.001 0.927Naive (S) 0.984 ± 0.007 0.778 ± 0.077 0.719 ± 0.041 0.966 ± 0.013 0.871 ± 0.020.595 ± 0.116 0.819Naive (E) 0.984 ± 0.007 0.781 ± 0.078 0.743 ± 0.038 0.971 ± 0.010.875 ± 0.018 0.678 ± 0.123 0.839Naive (C) 0.986 ± 0.002 0.863 ± 0.002 0.773 ± 0.020.934 ± 0.020.898 ± 0.002 0.786 ± 0.039 0.873DGE 50.984 ± 0.005 0.857 ± 0.005 0.81 ± 0.0120.978 ± 0.009 0.899 ± 0.003 0.852 ± 0.035 0.897DGE 100.984 ± 0.004 0.863 ± 0.003 0.845 ± 0.008 0.983 ± 0.005 0.903 ± 0.002 0.896 ± 0.019 0.912DGE 200.985 ± 0.001 0.865 ± 0.001 0.865 ± 0.006 0.984 ± 0.003 0.905 ± 0.001 0.923 ± 0.007 0.921", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" } ]
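The captions above outline the DGE_K pipeline (train K generators with different seeds, fit one downstream classifier per synthetic dataset, average the predicted probabilities). The sketch below illustrates that pipeline with a per-class GaussianMixture as a stand-in for CTGAN and scikit-learn's MLP as the downstream model; it is a simplified illustration under those assumptions, not the authors' implementation.

```python
# Minimal DGE_K sketch: K generators -> K synthetic sets -> K downstream models,
# with predicted probabilities averaged across the ensemble.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

def sample_synthetic(X, y, n_per_class, seed):
    """Stand-in generator: one GaussianMixture per class (the paper uses CTGAN)."""
    Xs, ys = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        gm = GaussianMixture(n_components=min(5, len(Xc)), random_state=seed).fit(Xc)
        Xs.append(gm.sample(n_per_class)[0])
        ys.append(np.full(n_per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

def dge_predict_proba(X_train, y_train, X_test, K=20, n_per_class=1000):
    probas = []
    for k in range(K):
        Xs, ys = sample_synthetic(X_train, y_train, n_per_class, seed=k)  # steps 2-3
        clf = MLPClassifier(hidden_layer_sizes=(100, 100), max_iter=500,
                            random_state=k).fit(Xs, ys)                   # step 4
        probas.append(clf.predict_proba(X_test))
    probas = np.stack(probas)                  # shape (K, n_test, n_classes)
    return probas.mean(axis=0), probas.std(axis=0)
```

The per-sample standard deviation returned alongside the mean is the quantity DGE adds on top of naive UQ: it reflects disagreement between models trained on different synthetic datasets, i.e. uncertainty in the generative process itself.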
Boris Van Breugel; Zhaozhi Qian; Mihaela Van Der Schaar
[ { "authors": "M Abdar; F Pourpanah; S Hussain; D Rezazadegan; L Liu; M Ghavamzadeh; P Fieguth; X Cao; A Khosravi; U R Acharya; V Makarenkov; S Nahavandi", "journal": "Information Fusion", "ref_id": "b0", "title": "A review of uncertainty quantification in deep learning: Techniques, applications and challenges", "year": "2021" }, { "authors": "A M Alaa; B Van Breugel; E Saveliev; M Van Der Schaar", "journal": "", "ref_id": "b1", "title": "How Faithful is your Synthetic Data? Samplelevel Metrics for Evaluating and Auditing Generative Models", "year": "2022-02" }, { "authors": "A Antoniou; A Storkey; H Edwards", "journal": "", "ref_id": "b2", "title": "Data Augmentation Generative Adversarial Networks", "year": "" }, { "authors": "Asuncion ; A Newman; D ", "journal": "", "ref_id": "b3", "title": "UCI machine learning repository", "year": "2007" }, { "authors": "S Bing; A Dittadi; S Bauer; P Schwab", "journal": "", "ref_id": "b4", "title": "Conditional Generation of Medical Time Series for Extrapolation to Underrepresented Populations", "year": "2022" }, { "authors": "V Böhm; F Lanusse; U Seljak", "journal": "", "ref_id": "b5", "title": "Uncertainty Quantification with Generative Models", "year": "2019" }, { "authors": "V Borisov; T Leemann; K Seßler; J Haug; M Pawelczyk; G Kasneci", "journal": "", "ref_id": "b6", "title": "Deep neural networks and tabular data: A survey", "year": "2021" }, { "authors": "N V Chawla; K W Bowyer; L O Hall; W P Kegelmeyer; Smote", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b7", "title": "Synthetic Minority Over-sampling Technique", "year": "2002" }, { "authors": "H P Das; R Tran; J Singh; X Yue; G Tison; A Sangiovanni-Vincentelli; C J Spanos", "journal": "", "ref_id": "b8", "title": "Conditional synthetic data generation for robust machine learning applications with limited pandemic data", "year": "2022" }, { "authors": "A S Dina; A Siddique; D Manivannan", "journal": "", "ref_id": "b9", "title": "Effect of balancing data using synthetic data on the performance of machine learning classifiers for intrusion detection in computer networks", "year": "2022" }, { "authors": "M A Duggan; W F Anderson; S Altekruse; L Penberthy; M E Sherman", "journal": "The American journal of surgical pathology", "ref_id": "b10", "title": "The surveillance, epidemiology and end results (SEER) program and pathology: towards strengthening the critical relationship", "year": "2016" }, { "authors": "C Dwork; K Kenthapadi; F Mcsherry; I Mironov; M Naor", "journal": "", "ref_id": "b11", "title": "Our data, ourselves: Privacy via distributed noise generation", "year": "2006" }, { "authors": "S Fort; H Hu; B Lakshminarayanan", "journal": "", "ref_id": "b12", "title": "Deep Ensembles: A Loss Landscape Perspective", "year": "2019" }, { "authors": "Y Gal; Z Ghahramani", "journal": "", "ref_id": "b13", "title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "year": "2015" }, { "authors": "A Ghosh; V Kulharia; V Namboodiri; P H Torr; P K Dokania", "journal": "", "ref_id": "b14", "title": "Multi-Agent Diverse Generative Adversarial Networks", "year": "2017" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b15", "title": "Generative Adversarial Networks", "year": "" }, { "authors": "A Grover; S Ermon", "journal": "", "ref_id": "b16", "title": "Boosted Generative Models", "year": "2017-02" }, { 
"authors": "T Hastie; J Friedman; R Tibshirani", "journal": "Springer", "ref_id": "b17", "title": "The Elements of Statistical Learning", "year": "2001" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "", "ref_id": "b18", "title": "Denoising Diffusion Probabilistic Models", "year": "2020-12-06" }, { "authors": "S Ho; Y Qu; B Gu; L Gao; J Li; Y Xiang; Dp-Gan", "journal": "Journal of Network and Computer Applications", "ref_id": "b19", "title": "Differentially private consecutive data publishing using generative adversarial nets", "year": "2021" }, { "authors": "Q Hoang; T D Nguyen; T Le; D Phung; Mgan", "journal": "", "ref_id": "b20", "title": "Training Generative Adversarial Nets with Multiple Generators", "year": "2018" }, { "authors": "J Jordon; J Yoon; M Schaar; Pate-Gan", "journal": "", "ref_id": "b21", "title": "Generating Synthetic Data with Differential Privacy Guarantees", "year": "2019" }, { "authors": "J Jordon; D Jarrett; E Saveliev; J Yoon; P Elbers; P Thoral; A Ercole; C Zhang; D Belgrave; M Van Der Schaar", "journal": "PMLR", "ref_id": "b22", "title": "Hide-and-Seek Privacy Challenge: Synthetic Data Generation vs. Patient Re-identification", "year": "2021-02" }, { "authors": " Kaggle", "journal": "", "ref_id": "b23", "title": "Kaggle machine learning and data science survey", "year": "2017" }, { "authors": "B Lakshminarayanan; A Pritzel; C Blundell", "journal": "", "ref_id": "b24", "title": "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles", "year": "2016" }, { "authors": "D Liu; M Jain; B Dossou; Q Shen; S Lahlou; A Goyal; N Malkin; C Emezue; D Zhang; N Hassen; X Ji; K Kawaguchi; Y Bengio; Gflowout", "journal": "", "ref_id": "b25", "title": "Dropout with Generative Flow Networks", "year": "2020" }, { "authors": "G Mordido; H Yang; C Meinel", "journal": "", "ref_id": "b26", "title": "Dropout-GAN: Learning from a Dynamic Ensemble of Discriminators", "year": "2018" }, { "authors": "T D Nguyen; T Le; H Vu; D Phung", "journal": "", "ref_id": "b27", "title": "Dual Discriminator Generative Adversarial Nets", "year": "2017-09" }, { "authors": "B Phan; S Khan; R Salay; K Czarnecki", "journal": "LNCS", "ref_id": "b28", "title": "Bayesian Uncertainty Quantification with Synthetic Data", "year": "2019" }, { "authors": "Z Qian; B.-C Cebere; M Van Der Schaar", "journal": "", "ref_id": "b29", "title": "Synthcity: facilitating innovative use cases of synthetic data in different data modalities", "year": "" }, { "authors": "M Sensoy; L Kaplan; F Cerutti; M Saleki", "journal": "", "ref_id": "b30", "title": "Uncertainty-Aware Deep Classifiers Using Generative Models", "year": "2020" }, { "authors": "R Shwartz-Ziv; A Armon", "journal": "Information Fusion", "ref_id": "b31", "title": "Tabular data: Deep learning is not all you need", "year": "2022" }, { "authors": "L Theis; A Van Den Oord; M Bethge", "journal": "", "ref_id": "b32", "title": "A note on the evaluation of generative models", "year": "2015" }, { "authors": "I Tolstikhin; S Gelly; O Bousquet; C J Simon-Gabriel; B Schölkopf; Adagan", "journal": "", "ref_id": "b33", "title": "Boosting Generative Models", "year": "2017-01" }, { "authors": "B Van Breugel; M Van Der Schaar", "journal": "", "ref_id": "b34", "title": "Beyond privacy: Navigating the opportunities and challenges of synthetic data", "year": "2023" }, { "authors": "B Van Breugel; T Kyono; J Berrevoets; M Van Der Schaar; Decaf", "journal": "", "ref_id": "b35", "title": "Generating Fair Synthetic Data Using Causally-Aware Generative Networks", 
"year": "2021" }, { "authors": "B Van Breugel; H Sun; Z Qian; M Van Der Schaar", "journal": "PMLR", "ref_id": "b36", "title": "Membership inference attacks against synthetic data through overfitting detection", "year": "2023-04-27" }, { "authors": "Y Wen; D Tran; J Ba", "journal": "", "ref_id": "b37", "title": "BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning", "year": "2020" }, { "authors": "F Wenzel; J Snoek; D Tran; R Jenatton", "journal": "", "ref_id": "b38", "title": "Hyperparameter Ensembles for Robustness and Uncertainty Quantification", "year": "2020-12-06" }, { "authors": "A G Wilson; P Izmailov", "journal": "", "ref_id": "b39", "title": "Deep Ensembles as Approximate Bayesian Inference", "year": "2021" }, { "authors": "D Xu; S Yuan; L Zhang; X Wu", "journal": "", "ref_id": "b40", "title": "Fairgan: Fairnessaware generative adversarial networks", "year": "2018" }, { "authors": "D Xu; Y Wu; S Yuan; L Zhang; X Wu", "journal": "International Joint Conferences on Artificial Intelligence", "ref_id": "b41", "title": "Achieving causal fairness through generative adversarial networks", "year": "2019-08" }, { "authors": "L Xu; M Skoularidou; A Cuesta-Infante; K Veeramachaneni", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b42", "title": "Modeling Tabular data using Conditional GAN", "year": "2019" }, { "authors": "L Xu; M Skoularidou; A Cuesta-Infante; K Veeramachaneni", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b43", "title": "Modeling tabular data using conditional gan", "year": "2019" }, { "authors": "J Yoon; J Jordon; M Van Der Schaar", "journal": "", "ref_id": "b44", "title": "Radial-GAN: Leveraging multiple datasets to improve targetspecific predictive models using Generative Adversarial Networks", "year": "2018-02" }, { "authors": "J Yoon; L N Drumright; M Van Der Schaar", "journal": "IEEE Journal of Biomedical and Health Informatics", "ref_id": "b45", "title": "Anonymization through data synthesis using generative adversarial networks (ADS-GAN)", "year": "2020" }, { "authors": "H Zhang; M Cisse; Y N Dauphin; D Lopez-Paz; Mixup", "journal": "", "ref_id": "b46", "title": "Beyond Empirical Risk Minimization. 6th International Conference on Learning Representations", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 451.42, 102.5, 86.52, 8.64 ], "formula_id": "formula_0", "formula_text": "(i) (ii) (iii) (iv)" }, { "formula_coordinates": [ 3, 55.11, 178.92, 479.93, 69.25 ], "formula_id": "formula_1", "formula_text": "✓ ✓ × × Deep Generative Ensemble (DGE) ✓ ✓ ✓ ✓(" }, { "formula_coordinates": [ 3, 126.12, 552.21, 163.99, 16.6 ], "formula_id": "formula_2", "formula_text": "θ = arg min θ D(p θ , p r )(1)" }, { "formula_coordinates": [ 3, 329.71, 384.03, 212.39, 9.65 ], "formula_id": "formula_3", "formula_text": "p(T |D r ) = p(T |D s )p(D s |θ)p(θ|D r )dD s dθ. (2)" }, { "formula_coordinates": [ 3, 420.38, 656.53, 52.53, 13.47 ], "formula_id": "formula_4", "formula_text": "[T ] = 1 K k" }, { "formula_coordinates": [ 3, 307.08, 669.87, 208.85, 14.11 ], "formula_id": "formula_5", "formula_text": "Var T ∼ p(T |Dr) (T ) = 1 K-1 k ( T k -E T ∼ p(T |Dr) [T ]" }, { "formula_coordinates": [ 4, 55.44, 348.51, 134.73, 14.22 ], "formula_id": "formula_6", "formula_text": "p DGE (θ|D r ) = 1 K k δ(θ = θk )" }, { "formula_coordinates": [ 4, 307.44, 278.83, 95.21, 9.65 ], "formula_id": "formula_7", "formula_text": "g ϕ (x) = p r (Y |X = x)." }, { "formula_coordinates": [ 4, 317.11, 346.92, 214.66, 10.63 ], "formula_id": "formula_8", "formula_text": "E θ [Err(g ϕ , p θ )] = E θ [E (X,Y )∼p θ (X,Y ) L(g ϕ (X), Y ))]." }, { "formula_coordinates": [ 13, 110.75, 213.65, 122.17, 76.45 ], "formula_id": "formula_9", "formula_text": "X 1 , X 2 iid ∼ N (0, 1) Y ∼ Bern(t(X 1 + 1)) t(x) =      0 , x < 0 1 , x > 2 1 2 x , otherwise" } ]
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b12", "b27", "b8", "b1", "b19", "b20", "b4", "b10", "b27", "b26", "b24", "b13", "b6", "b9", "b10", "b4", "b16", "b18", "b5", "b22", "b17", "b29", "b28", "b21", "b23" ], "table_ref": [], "text": "Given a Boolean formula F , the problem of model counting is to compute the number of models of F . Model counting is a fundamental problem in computer science with a wide range of applications, such as control improvisation [13], network reliability [28,9], neural network verification [2], probabilistic reasoning [20,21,5,11], and the like. In addition to myriad applications, the problem of model counting is a fundamental problem in theoretical computer science. In his seminal paper, Valiant showed that #SAT is #P-complete, where #P is the set of counting problems whose decision versions lie in NP [28]. Subsequently, Toda demonstrated the theoretical hardness of the problem by showing that every problem in the entire polynomial hierarchy can be solved by just one call to a #P oracle; more formally, PH ⊆ P #P [27].\nGiven the computational intractability of #SAT, there has been sustained interest in the development of approximate techniques from theoreticians and practitioners alike. Stockmeyer introduced a randomized hashing-based technique that provides (ε, δ)-guarantees (formally defined in Section 2) given access to an NP oracle [25]. Given the lack of practical solvers that could handle problems in NP satisfactorily, there were no practical implementations of Stockmeyere's hashing-based techniques until the 2000s [14]. Building on the unprecedented advancements in the development of SAT solvers, Chakraborty, Meel, and Vardi extended Stockmeyer's framework to a scalable (ε, δ)-counting algorithm, ApproxMC [7]. The subsequent years have witnessed a sustained interest in further optimizations of the hashing-based techniques for approximate counting [10,11,5,17,19,6,23,18,30,29]. The current state-of-the-art technique for approximate counting is a hashing-based framework called ApproxMC, which is in its fourth version, called ApproxMC4 [22,24].\nThe core theoretical idea behind the hashing-based framework is to use 2universal hash functions to partition the solution space, denoted by sol(F) for a formula F , into roughly equal small cells, wherein a cell is considered small if it contains solutions less than or equal to a pre-computed threshold, thresh. An NP oracle (in practice, an SAT solver) is employed to check if a cell is small by enumerating solutions one-by-one until either there are no more solutions or we have already enumerated thresh + 1 solutions. Then, we randomly pick a cell, enumerate solutions within the cell (if the cell is small), and scale the obtained count by the number of cells to obtain an estimate for |sol(F)|. To amplify the confidence, we rely on the standard median technique: repeat the above process, called ApproxMCCore, multiple times and return the median. Computing the median amplifies the confidence as for the median of t repetitions to be outside the desired range (i.e., |sol(F)| 1+ε , (1 + ε)|sol(F)| ), it should be the case that at least half of the repetitions of ApproxMCCore returned a wrong estimate.\nIn practice, every subsequent repetition of ApproxMCCore takes a similar time, and the overall runtime increases linearly with the number of invocations. The number of repetitions depends logarithmically on δ -1 . 
As a particular example, for ε = 0.8, the number of repetitions of ApproxMCCore to attain δ = 0.1 is 21, which increases to 117 for δ = 0.001: a significant increase in the number of repetitions (and, accordingly, the time taken). It is therefore no surprise that empirical analyses of tools such as ApproxMC have been presented with a high δ (such as δ = 0.1). On the other hand, for several applications, such as network reliability and quantitative verification, the end users desire estimates with high confidence. Therefore, the design of efficient counting techniques for small δ is a major challenge that one needs to address to enable the adoption of approximate counting techniques in practice.\nThe primary contribution of our work is to address the above challenge. We introduce a new technique called rounding that enables dramatic reductions in the number of repetitions required to attain a desired value of confidence. The core technical idea behind the design of the rounding technique is based on the following observation: Let L (resp. U) refer to the event that a given invocation of ApproxMCCore under (resp. over)-estimates |sol(F)|. For a median estimate to be wrong, either the event L happens in at least half of the invocations of ApproxMCCore or the event U happens in at least half of the invocations of ApproxMCCore. Consequently, the error probability of the median is governed by max{Pr[L], Pr[U]} rather than Pr[L ∪ U], and rounding the estimates returned by the core procedure allows us to make this maximum substantially smaller. The resulting algorithm, called RoundMC, follows a similar structure to that of ApproxMC: it repeatedly invokes the underlying core procedure RoundMCCore and returns the median of the estimates. Since a single invocation of RoundMCCore takes as much time as ApproxMCCore, the reduction in the number of repetitions is primarily responsible for the ensuing speedup. As an example, for ε = 0.8, the number of repetitions of RoundMCCore to attain δ = 0.1 and δ = 0.001 is just 5 and 19, respectively; the corresponding numbers for ApproxMC were 21 and 117. An extensive experimental evaluation on 1890 benchmarks shows that the rounding technique provided a 4× speedup over the state-of-the-art approximate model counter, ApproxMC4. Furthermore, for a given timeout of 5000 seconds, RoundMC solves 204 more instances than ApproxMC4 and achieves a reduction of 1063 seconds in the PAR-2 score.\nThe rest of the paper is organized as follows. We introduce notation and preliminaries in Section 2. To place our contribution in context, we review related work in Section 3. We identify the weakness of the current technique in Section 4 and present the rounding technique in Section 5 to address this issue. Then, we present our experimental evaluation in Section 6. Finally, we conclude in Section 7." }, { "figure_ref": [], "heading": "Notation and Preliminaries", "publication_ref": [], "table_ref": [], "text": "Let F be a Boolean formula in conjunctive normal form (CNF), and let Vars(F) be the set of variables appearing in F. The set Vars(F) is also called the support of F. An assignment σ of truth values to the variables in Vars(F) is called a satisfying assignment or witness of F if it makes F evaluate to true. We denote the set of all witnesses of F by sol(F). Throughout the paper, we will use n to denote |Vars(F)|.\nThe propositional model counting problem is to compute |sol(F)| for a given CNF formula F.
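To make the definition of |sol(F)| concrete, here is a tiny brute-force illustration (not part of any counter discussed in the paper) that enumerates all 2^n assignments of a CNF formula given as DIMACS-style clause lists; it is only feasible for very small n.

```python
# Brute-force |sol(F)|: clauses are lists of signed integers, variable i is literal i,
# its negation is -i. Purely illustrative; real counters never enumerate like this.
from itertools import product

def sol_count(clauses, n):
    count = 0
    for bits in product([False, True], repeat=n):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            count += 1
    return count

# F = (x1 v x2) and (not x1 v x3): 8 assignments in total, 4 of them are witnesses.
print(sol_count([[1, 2], [-1, 3]], n=3))  # -> 4
```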
A probably approximately correct (or PAC) counter is a probabilistic algorithm ApproxCount(•, •, •) that takes as inputs a formula F , a tolerance parameter ε > 0, and a confidence parameter δ ∈ (0, 1], and returns an (ε, δ)-\nestimate c, i.e., Pr |sol(F)| 1+ε ≤ c ≤ (1 + ε)|sol(F)| ≥ 1 -δ.\nPAC guarantees are also sometimes referred to as (ε, δ)-guarantees.\nA closely related notion is projected model counting, where we are interested in computing the cardinality of sol(F) projected on a subset of variables P ⊆ Vars(F ). While for clarity of exposition, we describe our algorithm in the context of model counting, the techniques developed in this paper are applicable to projected model counting as well. Our empirical evaluation indeed considers such benchmarks." }, { "figure_ref": [], "heading": "Universal Hash Functions", "publication_ref": [], "table_ref": [], "text": "Let n, m ∈ N and H(n, m) = {h : {0, 1} n → {0, 1} m } be a family of hash functions mapping {0, 1} n to {0, 1} m . We use h R ← H(n, m) to denote the probability space obtained by choosing a function h uniformly at random from H(n, m). To measure the quality of a hash function we are interested in the set of elements of sol(F) mapped to α by h, denoted Cell F,h,α and its cardinality, i.e., |Cell F,h,α |. We write Pr[Z : Ω] to denote the probability of outcome Z when sampling from a probability space Ω. For brevity, we omit Ω when it is clear from the context. The expected value of Z is denoted E [Z] and its variance is denoted\nσ 2 [Z]. Definition 1. A family of hash functions H(n, m) is strongly 2-universal if ∀x, y ∈ {0, 1} n , α ∈ {0, 1} m , h R ← H(n, m), Pr [h(x) = α] = 1 2 m = Pr [h(x) = h(y)] For h R ← H(n, n) and ∀m ∈ {1, ..., n}, the m th prefix-slice of h, denoted h (m) , is a map from {0, 1} n to {0, 1} m , such that h (m) (y)[i] = h(y)[i],\nfor all y ∈ {0, 1} n and for all i ∈ {1, ..., m}. Similarly, the m th prefix-slice of α ∈ {0, 1} n , denoted α (m) , is an element of {0, 1} m such that α (m) [i] = α[i] for all i ∈ {1, ..., m}.\nTo avoid cumbersome terminology, we abuse notation and write Cell F,m (resp. Cnt F,m ) as a short-hand for Cell F,h (m) ,α (m) (resp. |Cell F,h (m) ,α (m) |). The following proposition presents two results that are frequently used throughout this paper. The proof is deferred to Appendix A. Proposition 1. For every 1 ≤ m ≤ n, the following holds:\nE Cnt F,m = |sol(F)| 2 m(1)\nσ 2 Cnt F,m ≤ E Cnt F,m(2)\nThe usage of prefix-slice of h ensures monotonicity of the random variable, Cnt F,m , since from the definition of prefix-slice, we have that for every \n1 ≤ m < n, h (m+1) (y) = α (m+1) ⇒ h (m) (y) = α (m) . Formally, Proposition 2. For every 1 ≤ m < n, Cell F,m+1 ⊆ Cell F,m2\np k (1-p) t-k ≤ t t/2 t k= t 2 p k (1-p) t-k ≤ t t/2 •(p(1-p)) t 2 • 1 1-2p ≤ 1 √ 2π • t ( t 2 -0.5)( t 2 +0.5) • t t-1 t •e 1 12t -1 6t+6 -1 6t-6 •t -1 2 2 t •(p(1-p)) t 2 •(p(1-p)) 1 2 • 1 1-2p\np k (1 -p) t-k ≥ t t/2 p t 2 (1 -p) t-t 2 ≥ 1 √ 2π • t ( t 2 -0.5)( t 2 +0.5) • t t+1 t •e 1 12t -1 6t+6 -1 6t-6 •t -1 2 2 t •(p(1-p)) t 2 •p 1 2 (1-p) -1 2 • 1 1-2p" }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b27", "b26", "b3", "b24", "b6", "b7", "b22", "b21", "b23", "b23" ], "table_ref": [], "text": "The seminal work of Valiant established that #SAT is #P-complete [28]. Toda later showed that every problem in the polynomial hierarchy could be solved by just a polynomial number of calls to a #P oracle [27]. 
Based on Carter and Wegman's seminal work on universal hash functions [4], Stockmeyer proposed a probabilistic polynomial time procedure, with access to an NP oracle, to obtain an (ε, δ)-approximation of F [25]. Built on top of Stockmeyer's work, the core theoretical idea behind the hashing-based approximate solution counting framework, as presented in Algorithm 1 (ApproxMC [7]), is to use 2-universal hash functions to partition the solution space (denoted by sol(F) for a given formula F ) into small cells of roughly equal size. A cell is considered small if the number of solutions it contains is less than or equal to a pre-determined threshold, thresh. An NP oracle is used to determine if a cell is small by iteratively enumerating its solutions until either there are no more solutions or thresh + 1 solutions have been found. In practice, an SAT solver is used to implement the NP oracle. To ensure a polynomial number of calls to the oracle, the threshold, thresh, is set to be polynomial in the input parameter ε at Line 1. The subroutine ApproxMCCore takes the formula F and thresh as inputs and estimates the number of solutions at Line 7. To determine the appropriate number of cells, i.e., the value of m for H(n, m), ApproxMCCore uses a search procedure at Line 3 of Algorithm 2. The estimate is calculated as the number of solutions in a randomly chosen cell, scaled by the number of cells, i.e., 2 m at Line 5. To improve confidence in the estimate, ApproxMC performs multiple runs of the ApproxMCCore subroutine at Lines 5-9 of Algorithm 1. The final count is computed as the median of the estimates obtained at Line 10.\nAlgorithm 1 ApproxMC(F, ε, δ)\n1: thresh ← 9.84 1 + ε 1+ε 1 + 1 ε 2 ; 2: Y ← BoundedSAT(F, thresh); 3: if (|Y | < thresh) then return |Y |; 4: t ← 17 log 2 (3/δ) ; C ← emptyList; iter ← 0; 5: repeat 6:\niter ← iter + 1; 7:\nnSols ← ApproxMCCore(F, thresh); 8:\nAddToList(C, nSols); 9: until (iter ≥ t); 10: finalEstimate ← FindMedian(C); 11: return finalEstimate;\nAlgorithm 2 ApproxMCCore(F, thresh)\n1: Choose h at random from H(n, n); 2: Choose α at random from {0, 1} n ; 3: m ← LogSATSearch(F, h, α, thresh); 4: Cnt F,m ← BoundedSAT F ∧ h (m) -1 α (m) , thresh ; 5: return (2 m × Cnt F,m );\nIn the second version of ApproxMC [8], two key algorithmic improvements are proposed to improve the practical performance by reducing the number of calls to the SAT solver. The first improvement is using galloping search to more efficiently find the correct number of cells, i.e., LogSATSearch at Line 3 of Algorithm 2. The second is using linear search over a small interval around the previous value of m before resorting to the galloping search. Additionally, the third and fourth versions [23,22] enhance the algorithm's performance by effectively dealing with CNF formulas conjuncted with XOR constraints, commonly used in the hashing-based counting framework. Moreover, an effective preprocessor named Arjun [24] is proposed to enhance ApproxMC's performance by constructing shorter XOR constraints. As a result, the combination of Arjun and ApproxMC solved almost all existing benchmarks [24], making it the current state of the art in this field.\nIn this work, we aim to address the main limitation of the ApproxMC algorithm by focusing on an aspect that still needs to be improved upon by previous developments. Specifically, we aim to improve the core algorithm of ApproxMC, which has remained unchanged." 
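To make the hashing-and-median recipe of Algorithms 1 and 2 concrete, the following toy sketch replaces the SAT solver with an explicitly given solution set and uses random XOR-based hash functions. It redraws a fresh hash for every m and does a plain linear search instead of using prefix slices and LogSATSearch, so it only illustrates the "partition, count one small cell, scale by 2^m, take the median" idea and is not a reimplementation of ApproxMC; the thresh and t values are placeholders (t = 21 matches the introduction's example for ε = 0.8, δ = 0.1).

```python
# Toy hashing-based counter over an explicit solution set (bit tuples of length n).
import random
import statistics

def xor_hash(m, n, rng):
    """Random h: {0,1}^n -> {0,1}^m with h(y)[i] = <a_i, y> + b_i (mod 2)."""
    A = [[rng.randrange(2) for _ in range(n)] for _ in range(m)]
    b = [rng.randrange(2) for _ in range(m)]
    return lambda y: tuple((sum(a & yj for a, yj in zip(row, y)) + bi) % 2
                           for row, bi in zip(A, b))

def core_estimate(solutions, n, thresh, rng):
    """One ApproxMCCore-like pass: find an m whose cell is small, return 2^m * |cell|."""
    alpha = tuple(rng.randrange(2) for _ in range(n))
    for m in range(1, n + 1):          # simplified linear search, fresh hash per m
        h = xor_hash(m, n, rng)
        cell = [y for y in solutions if h(y) == alpha[:m]]
        if len(cell) <= thresh:
            return (2 ** m) * len(cell)
    return len(solutions)

def approx_count(solutions, n, thresh=72, t=21, seed=0):
    """Median of t core estimates, mirroring Lines 5-10 of Algorithm 1."""
    rng = random.Random(seed)
    return statistics.median(core_estimate(solutions, n, thresh, rng) for _ in range(t))
```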
}, { "figure_ref": [], "heading": "Weakness of ApproxMC", "publication_ref": [ "b6" ], "table_ref": [], "text": "As noted above, the core algorithm of ApproxMC has not changed since 2016, and in this work, we aim to address the core limitation of ApproxMC. To put our contribution in context, we first review ApproxMC and its core algorithm, called ApproxMCCore. We present the pseudocode of ApproxMC and ApproxMCCore in Algorithm 1 and 2, respectively. ApproxMCCore may return an estimate that falls outside the PAC range |sol(F)| 1+ε , (1 + ε)|sol(F)| with a certain probability of error. Therefore, ApproxMC repeatedly invokes ApproxMCCore (Lines 5-9) and returns the median of the estimates returned by ApproxMCCore (Line 10), which reduces the error probability to the user-provided parameter δ.\nLet Error t denote the event that the median of t estimates falls outside\n|sol(F)| 1+ε , (1 + ε)|sol(F)| .\nLet L denote the event that an invocation ApproxMCCore returns an estimate less than |sol(F)| 1+ε . Similarly, let U denote the event that an individual estimate of |sol(F)| is greater than (1 + ε)|sol(F)|. For simplicity of exposition, we assume t is odd; the current implementation of t indeed ensures that t is odd by choosing the smallest odd t for which Pr[Error t ] ≤ δ.\nIn the remainder of the section, we will demonstrate that reducing max {Pr [L] , Pr [U ]} can effectively reduce the number of repetitions t, making the small-δ scenarios practical. To this end, we will first demonstrate the existing analysis technique of ApproxMC leads to loose bounds on Pr[Error t ]. We then present a new analysis that leads to tighter bounds on Pr[Error t ].\nThe existing combinatorial analysis in [7] derives the following proposition:\nProposition 3. Pr [Error t ] ≤ η(t, t/2 , Pr [L ∪ U ])\nwhere η(t, m, p) = t k=m t k p k (1 -p) t-k . Proposition 3 follows from the observation that if the median falls outside the PAC range, at least t/2 of the results must also be outside the range. Let η(t, t/2 , Pr [L ∪ U ]) ≤ δ, and we can compute a valid t at Line 4 of ApproxMC.\nProposition 3 raises a question: can we derive a tight upper bound for Pr [Error t ]?\nThe following lemma provides an affirmative answer to this question. Lemma 2. Assuming t is odd, we have:\nPr [Error t ] = η(t, t/2 , Pr [L]) + η(t, t/2 , Pr [U ]) Proof. Let I L\ni be an indicator variable that is 1 when ApproxMCCore returns a nSols less than |sol(F)| 1+ε , indicating the occurrence of event L in the i-th repetition. Let I U i be an indicator variable that is 1 when ApproxMCCore returns a nSols greater than (1+ε)|sol(F)|, indicating the occurrence of event U in the i-th repetition. We aim first to prove that Error t ⇔\nt i=1 I L i ≥ t 2 ∨ t i=1 I U i ≥ t 2 .\nWe will begin by proving the right (⇒) implication. 
If the median of t estimates violates the PAC guarantee, the median is either less than \nI L i ≥ t 2 ∨ t i=1 I U i ≥ t 2 .\nThen we obtain: \nPr [Error t ] = Pr t i=1 I L i ≥ t/2 ∨ t i=1 I U i ≥ t/2 = Pr t i=1 I L i ≥ t/2 + Pr t i=1 I U i ≥ t/2 -Pr t i=1 I L i ≥ t/2 ∧ t i=1 I U i ≥ t/2 Given I L i + I U i ≤ 1 for i = 1, 2, ..., t, t i=1 (I L i + I U i ) ≤ t is there, but if t i=1 I L i ≥ t/2 ∧ t i=1 I U i ≥ t/2 is also given, we obtain t i=1 (I L i + I U i ) ≥ t + 1 contradicting t i=1 (I L i + I U i ) ≤ t;\n[Error t ] ∈ Θ t -1 2 2 Pr [L] (1 -Pr [L]) t + 2 Pr [U ] (1 -Pr [U ]) t = Θ t -1 2 2 p max (1 -p max ) t\nIn summary, Lemma 3 provides a way to tighten the bound on Pr[Error t ] by designing an algorithm such that we can obtain a tighter bound on p max in contrast to previous approaches that relied on obtaining a tighter bound on Pr[L ∪ U ]." }, { "figure_ref": [], "heading": "Rounding Model Counting", "publication_ref": [], "table_ref": [], "text": "In this section, we present a rounding-based technique that allows us to obtain a tighter bound on p max . On a high-level, instead of returning the estimate from one iteration of the underlying core algorithm as the number of solutions in a randomly chosen cell multiplied by the number of cells, we round each estimate of the model count to a value that is more likely to be within (1 + ε)-bound. While counter-intuitive at first glance, we show that rounding the estimate reduces max {Pr [L] , Pr [U ]}, thereby resulting in a smaller number of repetitions of the underlying algorithm.\nWe present RoundMC, a rounding-based approximate model counting algorithm, in Section 5.1. Section 5.2 will demonstrate how RoundMC decreases max {Pr [L] , Pr [U ]} and the number of estimates. Lastly, in Section 5.3, we will provide proof of the theoretical correctness of the algorithm." }, { "figure_ref": [], "heading": "Algorithm", "publication_ref": [], "table_ref": [], "text": "Algorithm 3 presents the procedure of RoundMC. RoundMC takes as input a formula F , a tolerance parameter ε, and a confidence parameter δ. RoundMC returns an\n(ε, δ)-estimate c of |sol(F)| such that Pr |sol(F)| 1+ε ≤ c ≤ (1 + ε)|sol(F)| ≥ 1 -δ.\nRoundMC is identical to ApproxMC in its initialization of data structures and handling of base cases (Lines 1-4).\nIn Line 5, we pre-compute the rounding type and rounding value to be used in RoundMCCore. configRound is implemented in Algorithm 5; the precise choices arise due to technical analysis, as presented in Section 5.2. Note that, in configRound, Cnt F,m is rounded up to roundValue for ε < 3 (roundUp = 1) but rounded to roundValue for ε ≥ 3 (roundUp = 0). Rounding up means Cnt F,m = roundValue only if Cnt F,m < roundValue. Rounding means Cnt F,m = roundValue in all cases. RoundMC computes the number of repetitions necessary to lower error probability down to δ at Line 6. The implementation of computeIter is presented in Algorithm 6 following Lemma 2. The iterator keeps increasing until the tight error bound is no more than δ. As we will show in Section 5.2, Pr [L] and Pr [U ] depend on ε. In the loop of Lines 7-11, RoundMCCore repeatedly estimates |sol(F)|. Each estimate nSols is stored in List C, and the median of C serves as the final estimate satisfying the (ε, δ)-guarantee.\nAlgorithm 4 shows the pseudo-code of RoundMCCore. A random hash function is chosen at Line 1 to partition sol(F) into roughly equal cells. A random hash value is chosen at Line 2 to randomly pick a cell for estimation. 
In Line 3, Algorithm 3 RoundMC(F, ε, δ) Algorithm 4 RoundMCCore(F, thresh, roundUp, roundValue)\n1: thresh ← 9.84 1 + ε 1+ε 1 + 1 ε 2 ; 2: Y ← BoundedSAT(F,\n1: Choose h at random from H(n, n); 2: Choose α at random from {0, 1} n ; 3: m ← LogSATSearch(F, h, α, thresh); 4: Cnt F,m ← BoundedSAT F ∧ h (m) -1 α (m)\n, thresh ;\n5: if roundUp = 1 then 6: return (2 m × max{Cnt F,m , roundValue}); 7: else 8:\nreturn (2 m × roundValue);\nwe search for a value m such that the cell picked from 2 m available cells is small enough to enumerate solutions one by one while providing a good estimate of |sol(F)|. In Line 4, a bounded model counting is invoked to compute the size of the picked cell, i.e., Cnt F,m . Finally, if roundUp equals 1, Cnt F,m is rounded up to roundValue at Line 6. Otherwise, roundUp equals 0, and Cnt F,m is rounded to roundValue at Line 8. Note that rounding up returns roundValue only if Cnt F,m is less than roundValue. However, in the case of rounding, roundValue is always returned no matter what value Cnt F,m is." }, { "figure_ref": [ "fig_2" ], "heading": "Repetition Reduction", "publication_ref": [], "table_ref": [], "text": "We will now show that RoundMCCore allows us to obtain a smaller max {Pr [L] , Pr [U ]}. Furthermore, we show the large gap between the error probability of RoundMC and that of ApproxMC both analytically and visually.\nThe following lemma presents the upper bounds of Pr [L] and Pr [U ] for RoundMCCore. Let pivot = 9.84 1 + 1 ε 2 for simplicity.\nAlgorithm 5 configRound(ε) \n1: if (ε < √ 2 -1) then return (1, √ 1+2ε 2 pivot); 2: else if (ε < 1) then return (1, pivot √ 2 ); 3: else if (ε < 3) then return (1, pivot); 4: else if (ε < 4 √ 2-\nPr [L] ≤                0.262 if ε < √ 2 -1 0.157 if √ 2 -1 ≤ ε < 1 0.085 if 1 ≤ ε < 3 0.055 if 3 ≤ ε < 4 √ 2 -1 0.023 if ε ≥ 4 √ 2 -1 Pr [U ] ≤ 0.169 if ε < 3 0.044 if ε ≥ 3\nThe proof of Lemma 4 is deferred to Section 5. The following theorem analytically presents the gap between the error probability of RoundMC and that of ApproxMC1 . \nTheorem 1. For √ 2 -1 ≤ ε < 1, Pr [Error t ] ∈    O t -\nfor √ 2 -1 ≤ ε < 1.\nThe smaller error probability enables RoundMC to repeat fewer repetitions while providing the same level of theoretical guarantee. For example, given δ = 0.001 to ApproxMC, i.e., y = 0.001 in Figure 1, ApproxMC requests 117 repetitions to obtain the given error probability. However, RoundMC claims that 37 repetitions for ε < \n√ 2 -1, 19 repetitions for √ 2 -1 ≤ ε < 1, 17 repetitions for 1 ≤ ε < 3, 7 repetitions for 3 ≤ ε < 4 √ 2 -1," }, { "figure_ref": [], "heading": "Proof of Lemma 4 for case", "publication_ref": [ "b7", "b7" ], "table_ref": [], "text": "√ 2 -1 ≤ ε < 1\nWe provide full proof of Lemma 4 for case √ 2 -1 ≤ ε < 1. We defer the proof of other cases to Appendix D.\nLet T m denote the event Cnt F,m < thresh , and let L m and U m denote the\nevents Cnt F,m < E[Cnt F,m ] 1+ε\nand Cnt F,m > E Cnt F,m (1 + ε) , respectively. To ease the proof, let\nU m denote Cnt F,m > E Cnt F,m (1 + ε 1+ε ) , and thereby U m ⊆ U m . Let m * = log 2 |sol(F)| -log 2 (pivot) + 1 such that m * is the smallest m satisfying |sol(F)| 2 m (1 + ε 1+ε ) ≤ thresh -1.\nLet us first prove the lemmas used in the proof of Lemma 4.\nLemma 5. For every 0 < β < 1, γ > 1, and 1 ≤ m ≤ n, the following holds:\n1. Pr Cnt F,m ≤ βE Cnt F,m ≤ 1 1+(1-β) 2 E[Cnt F,m ] 2. Pr Cnt F,m ≥ γE Cnt F,m ≤ 1 1+(γ-1) 2 E[Cnt F,m ]\nProof. Statement 1 can be proved following the proof of Lemma 1 in [8]. 
For statement 2, we rewrite the left-hand side and apply Cantelli's inequality:\nPr Cnt F,m -E Cnt F,m ≥ (γ -1)E Cnt F,m ≤ σ 2 [Cnt F,m ] σ 2 [Cnt F,m ]+((γ-1)E[Cnt F,m ]) 2 .\nFinally, applying Equation 2 completes the proof. Lemma 6. Given √ 2 -1 ≤ ε < 1, the following bounds hold: \n1. Pr [T m * -3 ] ≤ 1 62.5 2. Pr [L m * -2 ] ≤ 1\nPr [L] ≤                0.262 if ε < √ 2 -1 0.157 if √ 2 -1 ≤ ε < 1 0.085 if 1 ≤ ε < 3 0.055 if 3 ≤ ε < 4 √ 2 -1 0.023 if ε ≥ 4 √ 2 -1 Pr [U ] ≤ 0.169 if ε < 3 0.044 if ε ≥ 3\nProof. We prove the case of √ 2 -1 ≤ ε < 1. The proof for other ε is deferred to Appendix D. Let us first bound Pr [L]. Following LogSATSearch in [8], we have\nPr [L] =   i∈{1,...,n} T i-1 ∩ T i ∩ L i  (3)\nEquation 3 can be simplified by three observations labeled O1, O2 and O3 below.\nO1 : ∀i ≤ m * -3, T i ⊆ T i+1 . Therefore, i∈{1,...,m * -3} (T i-1 ∩ T i ∩ L i ) ⊆ i∈{1,...,m * -3} T i ⊆ T m * -3 O2 : For i ∈ {m * -2, m * -1}, we have i∈{m * -2,m * -1} (T i-1 ∩ T i ∩ L i ) ⊆ L m * -2 ∪ L m * -1 O3 : ∀i ≥ m * , since rounding Cnt F,i up to pivot √ 2 and m * ≥ log 2 |sol(F)| - log 2 (pivot), we have 2 i × Cnt F,i ≥ 2 m * × pivot √ 2 ≥ |sol(F)| √ 2 ≥ |sol(F)| 1+ε . The last inequality follows from ε ≥ √ 2 -1. Then we have Cnt F,i ≥ E[Cnt F,i ] 1+ε\n. Therefore, L i = ∅ for i ≥ m * and we have i∈{m * ,...,n} \n(T i-1 ∩ T i ∩ L i ) = ∅\nT i-1 ∩ T i ∩ U i  (4)\nWe derive the following observations O4 and O5.\nO4\n: ∀i ≤ m * -1, since m * ≤ log 2 |sol(F)| -log 2 (pivot) + 1, we have 2 i × Cnt F,i ≤ 2 m * -1 × thresh ≤ |sol(F)| 1 + ε 1+ε . Then we obtain Cnt F,i ≤ E Cnt F,i 1 + ε 1+ε . Therefore, T i ∩ U i = ∅ for i ≤ m * -1 and we have i∈{1,...,m * -1} T i-1 ∩ T i ∩ U i ⊆ i∈{1,...,m * -1} T i-1 ∩ T i ∩ U i = ∅\nO5 : ∀i ≥ m * , T i implies Cnt F,i > thresh, and then we have \n2 i × Cnt F,i > 2 m * × thresh ≥ |sol(F)| 1 + ε 1+ε . The second inequality follows from m * ≥ log 2 |sol(F)| -log 2 (pivot). Then we obtain Cnt F,i > E Cnt F,i 1 + ε 1+ε . Therefore, T i ⊆ U i for i ≥ m * . Since ∀i, T i ⊆ T i-1 , we have i∈{m * ,...,n} T i-1 ∩ T i ∩ U i ⊆ i∈{m * +1,...,n} T i-1 ∪ (T m * -1 ∩ T m * ∩ U m * ) ⊆ T m * ∪ (T m * -1 ∩ T m * ∩ U m * ) ⊆ T m * ∪ U m * ⊆ U m * (5) Remark that for √ 2 -1 ≤ ε < 1, we round Cnt F,m * up to pivot √ 2 ," }, { "figure_ref": [], "heading": "Experimental Evaluation", "publication_ref": [ "b21", "b11", "b14", "b15", "b0", "b12", "b25", "b2" ], "table_ref": [], "text": "It is perhaps worth highlighting that both ApproxMCCore and RoundMCCore invoke the underlying SAT solver on identical queries; the only difference between RoundMC and ApproxMC lies in what estimate to return and how often ApproxMCCore and RoundMCCore are invoked. From this viewpoint, one would expect that theoretical improvements would also lead to improved runtime performance. To provide further evidence, we perform extensive empirical evaluation and compare RoundMC's performance against the current state-of-the-art model counter, ApproxMC [22]. We use Arjun as a pre-processing tool. We used the latest version of ApproxMC, called ApproxMC4; an entry based on ApproxMC4 won the Model Counting Competition 2022.\nPrevious comparisons of ApproxMC have been performed on a set of 1896 instances, but the latest version of ApproxMC is able to solve almost all the instances when these instances are pre-processed by Arjun. 
Therefore, we sought to construct a new comprehensive set of 1890 instances derived from various sources, including Model Counting Competitions 2020-2022 [12,15,16], program synthesis [1], quantitative control improvisation [13], quantification of software properties [26], and adaptive chosen ciphertext attacks [3]. As noted earlier, our technique extends to projected model counting, and our benchmark suite indeed comprises 772 projected model counting instances.\nExperiments were conducted on a high-performance computer cluster, with each node consisting of 2xE5-2690v3 CPUs featuring 2x12 real cores and 96GB of RAM. For each instance, a counter was run on a single core, with a time limit of 5000 seconds and a memory limit of 4GB. To compare runtime performance, we use the PAR-2 score, a standard metric in the SAT community. Each instance is assigned a score that is the number of seconds it takes the corresponding tool to complete execution successfully. In the event of a timeout or memory out, the score is the doubled time limit in seconds. The PAR-2 score is then calculated as the average of all the instance scores. We also report the speedup of RoundMC over ApproxMC4, calculated as the ratio of the runtime of ApproxMC4 to that of RoundMC on instances solved by both counters. We set δ to 0.001 and ε to 0.8.\nSpecifically, we aim to address the following research questions:\nRQ 1 How does the runtime performance of RoundMC compare to that of ApproxMC4? RQ 2 How does the accuracy of the counts computed by RoundMC compare to that of the exact count?\nSummary In summary, RoundMC consistently outperforms ApproxMC4. Specifically, it solved 204 additional instances and reduced the PAR-2 score by 1063 seconds in comparison to ApproxMC4. The average speedup of RoundMC over ApproxMC4 was 4.68. In addition, RoundMC provided a high-quality approximation with an average observed error of 0.1, much smaller than the theoretical error tolerance of 0.8." }, { "figure_ref": [ "fig_5" ], "heading": "RQ1. Overall Performance", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "Figure 2 compares the counting time of RoundMC and ApproxMC4. The x-axis represents the index of the instances, sorted in ascending order of runtime, and the y-axis represents the runtime for each instance. A point (x, y) indicates that a counter can solve x instances within y seconds. Thus, for a given time limit y, a counter whose curve is on the right has solved more instances than a counter on the left. It can be seen in the figure that RoundMC consistently outperforms ApproxMC4. In total, RoundMC solved 204 more instances than ApproxMC4.\nTable 1 provides a detailed comparison between RoundMC and ApproxMC4. The first column lists three measures of interest: the number of solved instances, the PAR-2 score, and the speedup of RoundMC over ApproxMC4. The second and third columns show the results for ApproxMC4 and RoundMC, respectively. The second column indicates that ApproxMC4 solved 998 of the 1890 instances and achieved a PAR-2 score of 4934. The third column shows that RoundMC solved 1202 instances and achieved a PAR-2 score of 3871. In comparison, RoundMC solved 204 more instances and reduced the PAR-2 score by 1063 seconds in comparison to ApproxMC4. The geometric mean of the speedup for RoundMC over ApproxMC4 is 4.68. This speedup was calculated only for instances solved by both counters." }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "RQ2. 
Approximation Quality", "publication_ref": [], "table_ref": [], "text": "We used the state-of-the-art probabilistic exact model counter Ganak to compute the exact model count and compare it to the results of RoundMC. We collected statistics on instances solved by both Ganak and RoundMC. Figure 3 presents results for a subset of instances. The x-axis represents the index of instances sorted in ascending order by the number of solutions, and the y-axis represents the number of solutions in a log scale. Theoretically, the approximate count from RoundMC should be within the range of |sol(F)| • 1.8 and |sol(F)|/1.8 with probability 0.999, where |sol(F)| denotes the exact count returned by Ganak. The range is indicated by the upper and lower bounds, represented by the curves y = |sol(F)| • 1.8 and y = |sol(F)|/1.8, respectively. Figure 3 shows that the approximate counts from RoundMC fall within the expected range [|sol(F)|/1.8, |sol(F)| • 1.8] for all instances except for four points slightly above the upper bound. These four outliers are due to a bug in the preprocessor Arjun that probably depends on the version of the C++ compiler and will be fixed in the future. We also calculated the observed error, which is the mean relative difference between the approximate and exact counts in our experiments, i.e., max{finalEstimate/|sol(F)| -1, |sol(F)|/finalEstimate -1}. The overall observed error was 0.1, which is significantly smaller than the theoretical error tolerance of 0.8. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we addressed the scalability challenges faced by ApproxMC in the smaller δ range. To this end, we proposed a rounding-based algorithm, RoundMC, which reduces the number of estimations required by 84% while providing the same (ε, δ)-guarantees. Our empirical evaluation on 1890 instances shows that RoundMC solved 204 more instances and achieved a reduction in PAR-2 score of 1063 seconds. Furthermore, RoundMC achieved a 4× speedup ApproxMC on the instances that both RoundMC and ApproxMC could solve." }, { "figure_ref": [], "heading": "A Proof of Proposition 1", "publication_ref": [], "table_ref": [], "text": "Proof. For ∀y ∈ {0, 1} n , α (m) ∈ {0, 1} m , let γ y,α (m) be an indicator variable that is 1 when h (m) (y) = α (m) . According to the definition of strongly 2-universal function, we obtain ∀x, y ∈ {0, 1} n , E γ y,α (m) = 1 2 m and E γ x,α (m) • γ y,α (m) = 1 2 2m . To prove Equation 1, we obtain\nE Cnt F,m = E   y∈sol(F) γ y,α (m)   = y∈sol(F) E γ y,α (m) = |sol(F)| 2 m\nTo prove Equation 2, we derive\nE Cnt 2 F,m = E   y∈sol(F) γ 2 y,α (m) + x =y∈sol(F) γ x,α (m) • γ y,α (m)   = E   y∈sol(F) γ y,α (m)   + x =y∈sol(F) E γ x,α (m) • γ y,α (m) = E Cnt F,m + |sol(F)|(|sol(F)| -1) 2 2m\nThen, we obtain\nσ 2 Cnt F,m = E Cnt 2 F,m -E Cnt F,m 2 = E Cnt F,m + |sol(F)|(|sol(F)| -1) 2 2m - |sol(F)| 2 m 2 = E Cnt F,m - |sol(F)| 2 2m ≤ E Cnt F,m" }, { "figure_ref": [], "heading": "B Weakness of Proposition 3", "publication_ref": [], "table_ref": [], "text": "The following proposition states that Proposition 3 provides a loose upper bound for Pr [Error t ]. Proposition 4. Assuming t is odd, we have:\nPr [Error t ] < η(t, t/2 , Pr [L ∪ U ])\nProof. We will now construct a case counted by η(t, t/2 , Pr [L ∪ U ]) but not contained within the event Error t . 
Let I L i be an indicator variable that is 1 when ApproxMCCore returns a nSols less than |sol(F)| 1+ε , indicating the occurrence of event L in the i-th repetition. Let I U i be an indicator variable that is 1 when ApproxMCCore returns a nSols greater than (1 + ε)|sol(F)|, indicating the occurrence of event U in the i-th repetition. Consider a scenario where C Proof of p max ≤ 0.36 for ApproxMC\nI L i = 1 for i = 1,\nProof. We prove the case of √ 2-1 ≤ ε < 1. Similarly to the proof in Section 5.3, we aim to bound Pr [L] by the following equation:\nPr [L] =   i∈{1,...,n} T i-1 ∩ T i ∩ L i   (3 revisited)\nwhich can be simplified by three observations labeled O1, O2 and O3 below.\nO1 : ∀i ≤ m * -3, T i ⊆ T i+1 . Therefore, i∈{1,...,m * -3} (T i-1 ∩ T i ∩ L i ) ⊆ i∈{1,...,m * -3} T i ⊆ T m * -3 O2 : For i ∈ {m * -2, m * -1}, we have i∈{m * -2,m * -1} (T i-1 ∩ T i ∩ L i ) ⊆ L m * -2 ∪ L m * -1\nO3 : ∀i ≥ m * , T i implies Cnt F,i > thresh and then we have\n2 i × Cnt F,i > 2 m * × thresh ≥ |sol(F)| 1 + ε 1+ε . The second inequality follows from m * ≥ log 2 |sol(F)|-log 2 (pivot). Then we obtain Cnt F,i > E Cnt F,i 1 + ε 1+ε . Case 1: E Cnt F,m * < 1+ε 2 thresh Lemma 7. Given ε < √ 2 -1, the following bounds hold: 1. Pr [T m * -2 ] ≤ 1 29.67 2. Pr [L m * -1 ] ≤ 1 10.84\nProof. Let's first prove the statement 1. For ε <\n√ 2 -1, we have thresh < (2 - √2\n2 )pivot and\nE Cnt F,m * -2 ≥ 2pivot. Therefore, Pr [T m * -2 ] ≤ Pr Cnt F,m * -2 ≤ (1 - √2\n4 )E Cnt F,m * -2 . Finally, employing Lemma 5 with β = 1-\n√ 2 4 , we obtain Pr [T m * -2 ] ≤ 1 1+( √ 2 4 ) 2 •2pivot ≤ 1 1+( √2\n4 ) 2 •2•9.84•(1+ 1 √ 2-1 ) 2 ≤ 1\n29.67 . To prove the statement 2, we employ Lemma 5 with\nβ = 1 1+ε and E Cnt F,m * -1 ≥ pivot to obtain Pr [L m * -1 ] ≤ 1 1+(1-1 1+ε ) 2 •E[Cnt F,m * -1 ] ≤ 1 1+(1-1 1+ε ) 2 •9.84•(1+ 1 ε ) 2 = 1 10.84 .\nThen, we prove that Pr [L] ≤ 0.126 for E Cnt F,m * < 1+ε 2 thresh. Proof. We aim to bound Pr [L] by the following equation:\nPr [L] =   i∈{1,...,n} T i-1 ∩ T i ∩ L i   (3 revisited)\nwhich can be simplified by the three observations labeled O1, O2 and O3 below.\nO1 : ∀i ≤ m * -2, T i ⊆ T i+1 . Therefore, i∈{1,...,m * -2} (T i-1 ∩ T i ∩ L i ) ⊆ i∈{1,...,m * -2} T i ⊆ T m * -2 O2 : For i = m * -1, we have T m * -2 ∩ T m * -1 ∩ L m * -1 ⊆ L m * -1 O3 : ∀i ≥ m * , since rounding Cnt F,i up to √ 1+2ε 2 pivot, we have Cnt F,i ≥ √ 1+2ε 2 pivot ≥ thresh 2 > E[Cnt F,m * ] 1+ε ≥ E[Cnt F,i ] 1+ε\n. The second last inequality follows from E Cnt F,m * < 1+ε 2 thresh. Therefore, L i = ∅ for i ≥ m * and we have i∈{m * ,...,n}\n(T i-1 ∩ T i ∩ L i ) = ∅\nFollowing the observations O1, O2 and O3, we simplify Equation 3 and obtain\nPr [L] ≤ Pr [T m * -2 ] + Pr [L m * -1 ] Employing Lemma 7 gives Pr [L] ≤ 0.126. Case 2: E Cnt F,m * ≥ 1+ε 2 thresh Lemma 8. Given E Cnt F,m * ≥ 1+ε\n2 thresh, the following bounds hold:\n1. Pr [T m * -1 ] ≤ 1 10.84 2. Pr [L m * ] ≤ 1 5.92 Proof. Let's first prove the statement 1. From E Cnt F,m * ≥ 1+ε 2 thresh, we can derive E Cnt F,m * -1 ≥ (1+ε)thresh. Therefore, Pr [T m * -1 ] ≤ Pr Cnt F,m * -1 ≤ 1 1+ε E Cnt F,m * -1 . Finally, employing Lemma 5 with β = 1 1+ε , we obtain Pr [T m * -1 ] ≤ 1 1+(1-1 1+ε ) 2 •E[Cnt F,m * -1 ] ≤ 1 1+(1-1 1+ε ) 2 •(1+ε)thresh = 1 1+9.84(1+2ε) ≤ 1\n10.84 . To prove the statement 2, we employ Lemma 5 with\nβ = 1 1+ε and E Cnt F,m * ≥ 1+ε 2 thresh to obtain Pr [L m * ] ≤ 1 1+(1-1 1+ε ) 2 •E[Cnt F,m * ] ≤ 1 1+(1-1 1+ε ) 2 • 1+ε 2 thresh = 1 1+4.92(1+2ε) ≤ 1 5.92 .\nThen, we prove that Pr [L] ≤ 0.262 for E Cnt F,m * ≥ 1+ε 2 thresh. Proof. 
We aim to bound Pr [L] by the following equation:\nPr [L] =   i∈{1,...,n} T i-1 ∩ T i ∩ L i   (3 revisited)\nwhich can be simplified by the three observations labeled O1, O2 and O3 below.\nO1 : ∀i ≤ m * -1, T i ⊆ T i+1 . Therefore, i∈{1,...,m * -1} (T i-1 ∩ T i ∩ L i ) ⊆ i∈{1,...,m * -1} T i ⊆ T m * -1\nO2 : For i = m * , we have Proof. Let's first prove the statement 1. For ε < 3, we have thresh < \nT m * -1 ∩ T m * ∩ L m * ⊆ L m * O3 : ∀i ≥ m * +1, since rounding Cnt F,i up to √1+2ε" }, { "figure_ref": [], "heading": "", "publication_ref": [ "b7", "b7" ], "table_ref": [], "text": "Therefore, T i ⊆ U i for i ≥ m * . Since ∀i, T i ⊆ T i-1 , we have i∈{m * ,...,n}\nFollowing the observations O1, O2 and O3, we simplify Equation 3 and obtain\nEmploying Lemma 2 in [8] gives Pr [L] ≤ 0.36. Note that U in [8] represents U of our definition.\nThen, following the O4 and O5 in Section 5.3, we obtain\nEmploying Lemma 6 gives Pr [U ] ≤ 0.169. As a result, p max ≤ 0.36." }, { "figure_ref": [], "heading": "D Proof of Lemma 4", "publication_ref": [], "table_ref": [], "text": "We restate the lemma below and prove the statements section by section. The proof for √ 2 -1 ≤ ε < 1 has been shown in Section 5.3." }, { "figure_ref": [], "heading": "Lemma 4.", "publication_ref": [], "table_ref": [], "text": "The following bounds hold for RoundMC:\nWe first consider two cases: E Cnt F,m * < 1+ε 2 thresh and E Cnt F,m * ≥ 1+ε 2 thresh, and then merge the results to complete the proof.\nProof. We have thresh < 2pivot and\nNow let us prove the statement for RoundMC:\nProof. We aim to bound Pr [L] by the following equation:\nwhich can be simplified by the two observations labeled O1 and O2 below.\nTherefore, L i = ∅ for i ≥ m * -3 and we have i∈{m * -3,...,n} Proof. We aim to bound Pr [U ] by the following equation:\nWe derive the following observations O1 and O2.\nO1\n> thresh and then we have\nRemark that for ε < √ 2 -1, we round Cnt F,m * up to\npivot and we\nThe analysis means rounding doesn't affect the event U m * and therefore Inequality 6 still holds. Proof. We aim to bound Pr [U ] by the following equation:\nWe derive the following observations O1 and O2.\nO1\nThen, we obtain Cnt F,i ≤ E Cnt F,i (1 + ε). Therefore, U i = ∅ for i ≤ m * + 1 and we have " } ]
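As a concrete companion to Algorithm 6 (computeIter) and the repetition counts quoted in the Repetition Reduction discussion, the sketch below recomputes the number of repetitions for δ = 0.001. It assumes the usual closed form η(t, m, p) = Σ_{k=m}^{t} C(t,k) p^k (1-p)^{t-k} implied by the analysis above, and hard-codes the Pr[L]/Pr[U] upper bounds from Lemma 4; function names are illustrative and do not come from the RoundMC implementation.

```python
from math import comb, sqrt

def eta(t, m, p):
    # Binomial tail: probability that at least m of t independent
    # Bernoulli(p) trials succeed (assumed closed form of eta(t, m, p)).
    return sum(comb(t, k) * p**k * (1 - p) ** (t - k) for k in range(m, t + 1))

def lemma4_bounds(eps):
    # Upper bounds on Pr[L] and Pr[U] taken from Lemma 4.
    if eps < sqrt(2) - 1:
        pr_l = 0.262
    elif eps < 1:
        pr_l = 0.157
    elif eps < 3:
        pr_l = 0.085
    elif eps < 4 * sqrt(2) - 1:
        pr_l = 0.055
    else:
        pr_l = 0.023
    pr_u = 0.169 if eps < 3 else 0.044
    return pr_l, pr_u

def compute_iter(eps, delta):
    # Mirrors Algorithm 6 (computeIter): the smallest odd t such that
    # eta(t, ceil(t/2), Pr[L]) + eta(t, ceil(t/2), Pr[U]) <= delta.
    pr_l, pr_u = lemma4_bounds(eps)
    t = 1
    while eta(t, (t + 1) // 2, pr_l) + eta(t, (t + 1) // 2, pr_u) > delta:
        t += 2
    return t

if __name__ == "__main__":
    print(compute_iter(0.8, 0.001))
```

At ε = 0.8 and δ = 0.001 this yields t = 19, matching the 19 repetitions claimed for √2 - 1 ≤ ε < 1, versus the 117 repetitions ApproxMC needs at the same δ.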
The problem of model counting, also known as #SAT, is to compute the number of models or satisfying assignments of a given Boolean formula F. Model counting is a fundamental problem in computer science with a wide range of applications. In recent years, there has been a growing interest in using hashing-based techniques for approximate model counting that provide (ε, δ)-guarantees: i.e., the count returned is within a (1 + ε)-factor of the exact count with confidence at least 1 - δ. While hashing-based techniques attain reasonable scalability for large enough values of δ, their scalability is severely impacted for smaller values of δ, thereby preventing their adoption in application domains that require estimates with high confidence. The primary contribution of this paper is to address the Achilles heel of hashing-based techniques: we propose a novel approach based on rounding that allows us to achieve a significant reduction in runtime for smaller values of δ. The resulting counter, called RoundMC, achieves a substantial runtime performance improvement over the current state-of-the-art counter, ApproxMC. In particular, our extensive evaluation over a benchmark suite consisting of 1890 instances shows that RoundMC solves 204 more instances than ApproxMC, and achieves a 4× speedup over ApproxMC.
Rounding Meets Approximate Model Counting
[ { "figure_caption": "The number of repetitions depends on max(Pr[L], Pr[U ]). The current algorithmic design (and ensuing analysis) of ApproxMCCore provides a weak upper bound on max{Pr[L], Pr[U ]}: in particular, the bounds on max{Pr[L], Pr[U ]} and Pr[L∪U ] are almost identical. Our key technical contribution is to design a new procedure, RoundMCCore, based on the rounding technique that allows us to obtain significantly better bounds on max{Pr[L], Pr[U ]}.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "3 . 2 - 1 ,321Observe that Lemma 4 influences the choices in the design of configRound (Algorithm 5). Recall that max {Pr [L] , Pr [U ]} ≤ 0.36 for ApproxMC (Appendix C), but Lemma 4 ensures max {Pr [L] , Pr [U ]} ≤ 0.262 for RoundMC. For ε ≥ 4 √ Lemma 4 even delivers max {Pr [L] , Pr [U ]} ≤ 0.044.", "figure_data": "", "figure_id": "fig_1", "figure_label": "321", "figure_type": "figure" }, { "figure_caption": "Fig. 1 .1Fig. 1. Comparison of error bounds for RoundMC and ApproxMC.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Following the proof of Lemma 2 in [8], we can prove statements 1, 2, and 3. To prove statement 4, replacing γ with (1 + ε 1+ε ) in Lemma 5 and employing E Cnt F,m * ≥ pivot/2, we obtain Pr [U m * ] the upper bounds of Pr [L] and Pr [U ] in Lemma 4 for √ 2-1 ≤ ε < 1. The proof for other ε is deferred to Appendix D due to the page limit. Lemma 4. The following bounds hold for RoundMC:", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Followingthe observations O1, O2, and O3, we simplify Equation 3 and obtainPr [L] ≤ Pr [T m * -3 ] + Pr [L m * -2 ] + Pr [L m * -1 ] Employing Lemma 6 gives Pr [L] ≤ 0.157. Now let us bound Pr [U ]. Similarly, following LogSATSearch in [8], we have Pr [", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Comparison of counting times for RoundMC and ApproxMC4.", "figure_data": "", "figure_id": "fig_5", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Comparison of approximate counts from RoundMC to exact counts from Ganak.", "figure_data": "", "figure_id": "fig_6", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "2 pivot- 1 .D. 2 3 Lemma 9 .21239and m * ≥ log 2 |sol(F)|log 2 (pivot), we have2 i × Cnt F,i ≥ 2 m * +1 × (F)| ≥ |sol(F)| 1+ε . Then we have Cnt F,i ≥ E[Cnt F,i ] 1+ε . Therefore, L i = ∅ for i ≥ m * + 1 and we have i∈{m * +1,...,n} (T i-1 ∩ T i ∩ L i ) = ∅Following the observations O1, O2 and O3, we simplify Equation 3 and obtain Pr [L] ≤ Pr [T m * -1 ] + Pr [L m * ] Employing Lemma 8 gives Pr [L] ≤ 0.262. Combining the Case 1 and 2, we obtain Pr [L] ≤ max{0.126, 0.262} = 0.262. Therefore, we prove the statement for RoundMC: Pr [L] ≤ 0.262 for ε < √ 2 Proof of Pr [L] ≤ 0.085 for 1 ≤ ε < Given 1 ≤ ε < 3, the following bounds hold: 1. Pr [T m * -4 ] ≤ 1 86.41 2. Pr [L m * -3 ] ≤ 1 40.36 3. Pr [L m * -2 ] ≤ 1 20.68", "figure_data": "", "figure_id": "fig_7", "figure_label": "21239", "figure_type": "figure" }, { "figure_caption": "7 4 4 O2: 2 ≥ 2 - 1 Lemma 10 . 2 - 1 , 19 Proof. For ε < 4 √ 2 - 1 , 2 8- √ 2 32 2 - 1 ) 2 ≤ 1 18. 19 . 2 - 1 . 4 ≥ 4 √ 2 - 1 Lemma 11 . 4 √ 2 - 1 ,442211021194212221211921442111421pivot and E Cnt F,m * -4 ≥ 8pivot. Therefore, Pr [T m * -4 ] ≤ Pr Cnt F,m * -4 ≤ 7 32 E Cnt F,m * -4 . 
Finally, employing Lemma 5 with β = 7 32 , we obtain Pr [T m * -4 ] . To prove the statement 2, we employ Lemma 5 with β = 11+ε andE Cnt F,m * -3 ≥ 4pivot to obtain Pr [L m * -3 ] ≤ 1 1+(1-1 1+ε ) 2 •E[Cnt F,m * -3 ] ≤ 1 1+(1-1 1+ε ) 2 •4•9.84•(1+ 1 ε ) 2 = 1 40.36. Following the proof of Lemma 2 in[8] we can prove the statement 3. Now let us prove the statement for RoundMC: Pr [L] ≤ 0.085 for 1 ≤ ε < 3 .Proof. We aim to bound Pr [L] by the following equation:Pr [L] =   i∈{1,...,n} T i-1 ∩ T i ∩ L i   (3 revisited)which can be simplified by the three observations labeled O1, O2 and O3 below.O1 : ∀i ≤ m * -4, T i ⊆ T i+1 . Therefore, i∈{1,...,m * -4} (T i-1 ∩ T i ∩ L i ) ⊆ i∈{1,...,m * -4} T i ⊆ T m * -For i ∈ {m * -3, m * -2}, we have i∈{m * -3,m * -2} (T i-1 ∩ T i ∩ L i ) ⊆ L m * -3 ∪ L m * -2 O3 : ∀i ≥ m * -1, since rounding Cnt F,i up to pivot and m * ≥ log 2 |sol(F)|log 2 (pivot), we have 2 i × Cnt F,i ≥ 2 m * -1 × pivot ≥ |sol(F)| |sol(F)| 1+ε . The last inequality follows from ε ≥ 1. Then we have Cnt F,i ≥ E[Cnt F,i ] 1+ε .Therefore, L i = ∅ for i ≥ m * -1 and we have i∈{m * -1,...,n}(T i-1 ∩ T i ∩ L i ) = ∅Following the observations O1, O2 and O3, we simplify Equation 3 and obtainPr [L] ≤ Pr [T m * -4 ] + Pr [L m * -3 ] + Pr [L m * -2 ]Employing Lemma 9 gives Pr [L] ≤ 0.085.D.3 Proof of Pr[L] ≤ 0.055 for 3 ≤ ε < 4 √ Given 3 ≤ ε < 4 √ the following bound hold: Pr [T m * -3 ] ≤ 1 18.we have thresh < (2 -√ )pivot and E Cnt F,m * -3 ≥ 4pivot. Therefore, Pr [T m * -3 ] ≤ Pr Cnt F,m * -3 ≤ ( Cnt F,m * -3 . Finally, employing Lemma 5 with β = 1 2 , we obtain Pr [T m * -3 ]Now let us prove the statement for RoundMC:Pr [L] ≤ 0.055 for 3 ≤ ε < 4 √Proof. We aim to bound Pr [L] by the following equation:Pr [L] =   i∈{1,...,n} T i-1 ∩ T i ∩ L i   (3 revisited)which can be simplified by the two observations labeled O1 and O2 below.O1 : ∀i ≤ m * -3, T i ⊆ T i+1 . Therefore, i∈{1,...,m * -3} (T i-1 ∩ T i ∩ L i ) ⊆ i∈{1,...,m * -3} T i ⊆ T m * -3O2 : ∀i ≥ m * -2, since rounding Cnt F,i to pivot and m * ≥ log 2 |sol(F)|log 2 (pivot), we have2 i × Cnt F,i ≥ 2 m * -2 × pivot ≥ |sol(F)| |sol(F)| 1+ε . The last inequality follows from ε ≥ 3. Then we have Cnt F,i ≥ E[Cnt F,i ] 1+ε .Therefore, L i = ∅ for i ≥ m * -2 and we have i∈{m * -2,...,n}(T i-1 ∩ T i ∩ L i ) = ∅Following the observations O1 and O2, we simplify Equation 3 and obtain Pr [L] ≤ Pr [T m * -3 ] Employing Lemma 10 gives Pr [L] ≤ 0.055. D.4 Proof of Pr [L] ≤ 0.023 for ε ≥ Given ε ≥ the following bound hold: Pr [T m * -4 ] ≤ 1 45.28", "figure_data": "", "figure_id": "fig_8", "figure_label": "442211021194212221211921442111421", "figure_type": "figure" }, { "figure_caption": "Hence, we can conclude that Pr can decrease the error probability, it is still uncertain to what extent Pr [L] and Pr [U ] affect the error probability. To further understand this impact, the following lemma is presented to establish a correlation between the error probability and t depending on Pr [L] and Pr [U ]. Lemma 3. Let p max = max {Pr [L] , Pr [U ]} and p max < 0.5, we have Pr [Error t ] ∈ Θ t -1 2 2 p max (1 -p max )", "figure_data": "t i=1 I L i ≥ t/2 ∧t i=1 I U i ≥ t/2= 0. From this, we can deduce:ttPr [Error t ] = PrI L i ≥ t/2+ PrI U i ≥ t/2i=1i=1tProof. Applying Lemma 1 and 2, we havePr", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Lemma 4. 
The following bounds hold for RoundMC:", "figure_data": "1) then return (0, pivot);5: else 6: return (0,√ 2pivot);Algorithm 6 computeIter(ε, δ)1: iter ← 1;2: while (η(iter, iter/2 , Prε[L]) + η(iter, iter/2 , Prε[U ]) > δ) do3:iter ← iter + 2;4: return iter;", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Figure1visualizes the large gap between the error probability of RoundMC and that of ApproxMC. The x-axis represents the number of repetitions (t) in RoundMC or ApproxMC. The y-axis represents the upper bound of error probability in the log scale. For example, as t = 117, ApproxMC guarantees that with a probability of 10 -3 , the median over 117 estimates violates the PAC guarantee. However, RoundMC allows a much smaller error probability that is at most 10 -15", "figure_data": "For ApproxMC, combining p max ≤ 0.36 (Appendix C) and Lemma 3, we obtainPr [Error t ] ∈ O t -1 22 0.36(1 -0.36)t= O t -1 2 0.96 t1 2 0.75 tfor RoundMCO t -1 2 0.96 tfor ApproxMCProof. From Lemma 4, we obtain p max ≤ 0.169 for RoundMC. Applying Lemma 3,we havePr [Error t ] ∈ O t -1 22 0.169(1 -0.169)t⊆ O t -1 2 0.75 t", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "and we have 2 m * × pivot √ 2 ≤ |sol(F)|(1 + ε), which means rounding doesn't affect the event U m * ; therefore, Inequality 5 still holds. Following the observations O4 and O5, we simplify Equation 4 and obtain Pr [U ] ≤ Pr [U m", "figure_data": "", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The number of solved instances and PAR-2 score for RoundMC versus ApproxMC4 on 1890 instances. The geometric mean of the speedup of RoundMC over ApproxMC4 is also reported.", "figure_data": "ApproxMC4 RoundMC#Solved9981202PAR-2 score49343871Speedup-4.68", "figure_id": "tab_10", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "2, ..., t Pr [L ∪ U ]) since there are t 2 estimates outside the PAC range. However, this case means that t 4 estimates fall within the range less than |sol(F)| RoundMC returns a correct estimate since the median falls within the PAC range |sol(F)| 1+ε , (1 + ε)|sol(F)| . In other words, this case is out of the event Error t . In conclusion, there is a scenario that is out of the event Error t , undesirably included in expression", "figure_data": "4 2 . η(t, t/2 , Pr [L ∪ U ]) represents , I U j = 1 for j = t 4 + 1, ..., t 2 for k > t t i=1 (I L i ∨ I U , and I L k = I U k = 0 i ) ≥ t 2 . We can see that this case is included in t i=1 (I L i ∨ I U i ) ≥ t 2 and therefore countedby η(t, t/2 , 1+εand t 2 -t 4 estimates fall within the range greater than (1+ε)|sol(F)|, while theremaining t 2 estimates correctly fall within the range |sol(F)| 1+ε , (1 + ε)|sol(F)| .Therefore, after sorting all the estimates, t i=1 (I L i ∨I U i ) ≥ t", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" } ]
Jiong Yang; Kuldeep S Meel
[ { "authors": "R Alur; R Bodik; G Juniwal; M M K Martin; M Raghothaman; S A Seshia; R Singh; A Solar-Lezama; E Torlak; A Udupa", "journal": "", "ref_id": "b0", "title": "Syntax-guided synthesis", "year": "2013" }, { "authors": "T Baluta; S Shen; S Shine; K S Meel; P Saxena", "journal": "", "ref_id": "b1", "title": "Quantitative verification of neural networks and its security applications", "year": "2019" }, { "authors": "G Beck; M Zinkus; M Green", "journal": "", "ref_id": "b2", "title": "Automating the development of chosen ciphertext attacks", "year": "2020" }, { "authors": "J L Carter; M N Wegman", "journal": "", "ref_id": "b3", "title": "Universal classes of hash functions", "year": "1977" }, { "authors": "S Chakraborty; D J Fremont; K S Meel; S A Seshia; M Y Vardi", "journal": "", "ref_id": "b4", "title": "Distributionaware sampling and weighted model counting for SAT", "year": "2014" }, { "authors": "S Chakraborty; K S Meel; R Mistry; M Y Vardi", "journal": "", "ref_id": "b5", "title": "Approximate probabilistic inference via word-level counting", "year": "2016" }, { "authors": "S Chakraborty; K S Meel; M Y Vardi", "journal": "", "ref_id": "b6", "title": "A scalable approximate model counter", "year": "2013" }, { "authors": "S Chakraborty; K S Meel; M Y Vardi", "journal": "", "ref_id": "b7", "title": "Algorithmic improvements in approximate counting for probabilistic inference: From linear to logarithmic SAT calls", "year": "2016" }, { "authors": "L Duenas-Osorio; K S Meel; R Paredes; M Y Vardi", "journal": "", "ref_id": "b8", "title": "Counting-based reliability estimation for power-transmission grids", "year": "2017" }, { "authors": "S Ermon; C P Gomes; A Sabharwal; B Selman", "journal": "", "ref_id": "b9", "title": "Embed and project: Discrete sampling with universal hashing", "year": "2013" }, { "authors": "S Ermon; C P Gomes; A Sabharwal; B Selman", "journal": "", "ref_id": "b10", "title": "Taming the curse of dimensionality: Discrete integration by hashing and optimization", "year": "2013" }, { "authors": "J K Fichte; M Hecher; F Hamiti", "journal": "ACM J. Exp. 
Algorithmics", "ref_id": "b11", "title": "The model counting competition", "year": "2020" }, { "authors": "A Gittis; E Vin; D J Fremont", "journal": "", "ref_id": "b12", "title": "Randomized synthesis for diversity and cost constraints with control improvisation", "year": "2022" }, { "authors": "C P Gomes; A Sabharwal; B Selman", "journal": "", "ref_id": "b13", "title": "Model counting: A new strategy for obtaining good bounds", "year": "2006" }, { "authors": "M Hecher; J K Fichte", "journal": "", "ref_id": "b14", "title": "Model counting competition 2021", "year": "2021" }, { "authors": "M Hecher; J K Fichte", "journal": "", "ref_id": "b15", "title": "Model counting competition 2022", "year": "2022" }, { "authors": "A Ivrii; S Malik; K S Meel; M Y Vardi", "journal": "Constraints", "ref_id": "b16", "title": "On computing minimal independent support and its applications to sampling and counting", "year": "2016" }, { "authors": "K S Meel; S Akshay", "journal": "", "ref_id": "b17", "title": "Sparse hashing for scalable approximate model counting: Theory and practice", "year": "2020" }, { "authors": "K S Meel; M Y Vardi; S Chakraborty; D J Fremont; S A Seshia; D Fried; A Ivrii; S Malik", "journal": "", "ref_id": "b18", "title": "Constrained sampling and counting: Universal hashing meets sat solving", "year": "2016" }, { "authors": "D Roth", "journal": "Artificial Intelligence", "ref_id": "b19", "title": "On the hardness of approximate reasoning", "year": "1996" }, { "authors": "T Sang; P Bearne; H Kautz", "journal": "", "ref_id": "b20", "title": "Performing bayesian inference by weighted model counting", "year": "2005" }, { "authors": "M Soos; S Gocht; K S Meel", "journal": "", "ref_id": "b21", "title": "Tinted, detached, and lazy cnf-xor solving and its applications to counting and sampling", "year": "2020" }, { "authors": "M Soos; K S Meel", "journal": "", "ref_id": "b22", "title": "Bird: Engineering an efficient cnf-xor sat solver and its applications to approximate model counting", "year": "2019" }, { "authors": "M Soos; K S Meel", "journal": "", "ref_id": "b23", "title": "Arjun: An efficient independent support computation technique and its applications to counting and sampling", "year": "2022" }, { "authors": "L Stockmeyer", "journal": "", "ref_id": "b24", "title": "The complexity of approximate counting", "year": "1983" }, { "authors": "S Teuber; A Weigl", "journal": "", "ref_id": "b25", "title": "Quantifying software reliability via model-counting", "year": "2021" }, { "authors": "S Toda", "journal": "", "ref_id": "b26", "title": "On the computational power of pp and (+)p", "year": "1989" }, { "authors": "L G Valiant", "journal": "SIAM Journal on Computing", "ref_id": "b27", "title": "The complexity of enumeration and reliability problems", "year": "1979" }, { "authors": "J Yang; S Chakraborty; K S Meel", "journal": "", "ref_id": "b28", "title": "Projected model counting: Beyond independent support", "year": "2022" }, { "authors": "J Yang; K S Meel", "journal": "", "ref_id": "b29", "title": "Engineering an efficient pb-xor solver", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 134.77, 564.29, 240.41, 14.38 ], "formula_id": "formula_0", "formula_text": "estimate c, i.e., Pr |sol(F)| 1+ε ≤ c ≤ (1 + ε)|sol(F)| ≥ 1 -δ." }, { "formula_coordinates": [ 4, 134.77, 227.53, 345.83, 119.01 ], "formula_id": "formula_1", "formula_text": "σ 2 [Z]. Definition 1. A family of hash functions H(n, m) is strongly 2-universal if ∀x, y ∈ {0, 1} n , α ∈ {0, 1} m , h R ← H(n, m), Pr [h(x) = α] = 1 2 m = Pr [h(x) = h(y)] For h R ← H(n, n) and ∀m ∈ {1, ..., n}, the m th prefix-slice of h, denoted h (m) , is a map from {0, 1} n to {0, 1} m , such that h (m) (y)[i] = h(y)[i]," }, { "formula_coordinates": [ 4, 258.44, 451.89, 222.16, 22.31 ], "formula_id": "formula_2", "formula_text": "E Cnt F,m = |sol(F)| 2 m(1)" }, { "formula_coordinates": [ 4, 246.08, 494.96, 234.51, 12.03 ], "formula_id": "formula_3", "formula_text": "σ 2 Cnt F,m ≤ E Cnt F,m(2)" }, { "formula_coordinates": [ 4, 134.77, 532.44, 345.83, 75.54 ], "formula_id": "formula_4", "formula_text": "1 ≤ m < n, h (m+1) (y) = α (m+1) ⇒ h (m) (y) = α (m) . Formally, Proposition 2. For every 1 ≤ m < n, Cell F,m+1 ⊆ Cell F,m2" }, { "formula_coordinates": [ 5, 141.91, 129.37, 338.68, 54.19 ], "formula_id": "formula_5", "formula_text": "p k (1-p) t-k ≤ t t/2 t k= t 2 p k (1-p) t-k ≤ t t/2 •(p(1-p)) t 2 • 1 1-2p ≤ 1 √ 2π • t ( t 2 -0.5)( t 2 +0.5) • t t-1 t •e 1 12t -1 6t+6 -1 6t-6 •t -1 2 2 t •(p(1-p)) t 2 •(p(1-p)) 1 2 • 1 1-2p" }, { "formula_coordinates": [ 5, 141.91, 220.89, 338.68, 39.76 ], "formula_id": "formula_6", "formula_text": "p k (1 -p) t-k ≥ t t/2 p t 2 (1 -p) t-t 2 ≥ 1 √ 2π • t ( t 2 -0.5)( t 2 +0.5) • t t+1 t •e 1 12t -1 6t+6 -1 6t-6 •t -1 2 2 t •(p(1-p)) t 2 •p 1 2 (1-p) -1 2 • 1 1-2p" }, { "formula_coordinates": [ 6, 138.66, 134.04, 190.03, 70.61 ], "formula_id": "formula_7", "formula_text": "1: thresh ← 9.84 1 + ε 1+ε 1 + 1 ε 2 ; 2: Y ← BoundedSAT(F, thresh); 3: if (|Y | < thresh) then return |Y |; 4: t ← 17 log 2 (3/δ) ; C ← emptyList; iter ← 0; 5: repeat 6:" }, { "formula_coordinates": [ 6, 138.66, 293.07, 247.12, 63.13 ], "formula_id": "formula_8", "formula_text": "1: Choose h at random from H(n, n); 2: Choose α at random from {0, 1} n ; 3: m ← LogSATSearch(F, h, α, thresh); 4: Cnt F,m ← BoundedSAT F ∧ h (m) -1 α (m) , thresh ; 5: return (2 m × Cnt F,m );" }, { "formula_coordinates": [ 7, 140.67, 207.43, 96.29, 14.38 ], "formula_id": "formula_9", "formula_text": "|sol(F)| 1+ε , (1 + ε)|sol(F)| ." }, { "formula_coordinates": [ 7, 134.77, 354.94, 246.79, 31.88 ], "formula_id": "formula_10", "formula_text": "Proposition 3. Pr [Error t ] ≤ η(t, t/2 , Pr [L ∪ U ])" }, { "formula_coordinates": [ 7, 134.77, 510.02, 280.38, 32.09 ], "formula_id": "formula_11", "formula_text": "Pr [Error t ] = η(t, t/2 , Pr [L]) + η(t, t/2 , Pr [U ]) Proof. Let I L" }, { "formula_coordinates": [ 7, 329.01, 584.59, 151.59, 14.56 ], "formula_id": "formula_12", "formula_text": "t i=1 I L i ≥ t 2 ∨ t i=1 I U i ≥ t 2 ." }, { "formula_coordinates": [ 8, 304.06, 212.31, 136.68, 14.56 ], "formula_id": "formula_13", "formula_text": "I L i ≥ t 2 ∨ t i=1 I U i ≥ t 2 ." 
}, { "formula_coordinates": [ 8, 134.77, 243.06, 345.83, 150.89 ], "formula_id": "formula_14", "formula_text": "Pr [Error t ] = Pr t i=1 I L i ≥ t/2 ∨ t i=1 I U i ≥ t/2 = Pr t i=1 I L i ≥ t/2 + Pr t i=1 I U i ≥ t/2 -Pr t i=1 I L i ≥ t/2 ∧ t i=1 I U i ≥ t/2 Given I L i + I U i ≤ 1 for i = 1, 2, ..., t, t i=1 (I L i + I U i ) ≤ t is there, but if t i=1 I L i ≥ t/2 ∧ t i=1 I U i ≥ t/2 is also given, we obtain t i=1 (I L i + I U i ) ≥ t + 1 contradicting t i=1 (I L i + I U i ) ≤ t;" }, { "formula_coordinates": [ 8, 150.99, 611.8, 310.54, 44.5 ], "formula_id": "formula_15", "formula_text": "[Error t ] ∈ Θ t -1 2 2 Pr [L] (1 -Pr [L]) t + 2 Pr [U ] (1 -Pr [U ]) t = Θ t -1 2 2 p max (1 -p max ) t" }, { "formula_coordinates": [ 9, 148.16, 435.7, 332.43, 14.38 ], "formula_id": "formula_16", "formula_text": "(ε, δ)-estimate c of |sol(F)| such that Pr |sol(F)| 1+ε ≤ c ≤ (1 + ε)|sol(F)| ≥ 1 -δ." }, { "formula_coordinates": [ 10, 138.66, 134.04, 149.81, 24.53 ], "formula_id": "formula_17", "formula_text": "1: thresh ← 9.84 1 + ε 1+ε 1 + 1 ε 2 ; 2: Y ← BoundedSAT(F," }, { "formula_coordinates": [ 10, 138.66, 317.66, 200.99, 46.89 ], "formula_id": "formula_18", "formula_text": "1: Choose h at random from H(n, n); 2: Choose α at random from {0, 1} n ; 3: m ← LogSATSearch(F, h, α, thresh); 4: Cnt F,m ← BoundedSAT F ∧ h (m) -1 α (m)" }, { "formula_coordinates": [ 10, 138.66, 371.51, 198.9, 40.84 ], "formula_id": "formula_19", "formula_text": "5: if roundUp = 1 then 6: return (2 m × max{Cnt F,m , roundValue}); 7: else 8:" }, { "formula_coordinates": [ 11, 138.66, 128.52, 195.8, 50.68 ], "formula_id": "formula_20", "formula_text": "1: if (ε < √ 2 -1) then return (1, √ 1+2ε 2 pivot); 2: else if (ε < 1) then return (1, pivot √ 2 ); 3: else if (ε < 3) then return (1, pivot); 4: else if (ε < 4 √ 2-" }, { "formula_coordinates": [ 11, 226.46, 313.57, 161.23, 116.73 ], "formula_id": "formula_21", "formula_text": "Pr [L] ≤                0.262 if ε < √ 2 -1 0.157 if √ 2 -1 ≤ ε < 1 0.085 if 1 ≤ ε < 3 0.055 if 3 ≤ ε < 4 √ 2 -1 0.023 if ε ≥ 4 √ 2 -1 Pr [U ] ≤ 0.169 if ε < 3 0.044 if ε ≥ 3" }, { "formula_coordinates": [ 11, 134.77, 519.65, 164.99, 56.9 ], "formula_id": "formula_22", "formula_text": "Theorem 1. For √ 2 -1 ≤ ε < 1, Pr [Error t ] ∈    O t -" }, { "formula_coordinates": [ 12, 134.77, 263.53, 88.81, 16.98 ], "formula_id": "formula_23", "formula_text": "for √ 2 -1 ≤ ε < 1." }, { "formula_coordinates": [ 12, 134.77, 311.35, 345.83, 28.93 ], "formula_id": "formula_24", "formula_text": "√ 2 -1, 19 repetitions for √ 2 -1 ≤ ε < 1, 17 repetitions for 1 ≤ ε < 3, 7 repetitions for 3 ≤ ε < 4 √ 2 -1," }, { "formula_coordinates": [ 12, 297.71, 612.15, 76.56, 16.91 ], "formula_id": "formula_25", "formula_text": "√ 2 -1 ≤ ε < 1" }, { "formula_coordinates": [ 13, 134.77, 132.4, 130.81, 17.17 ], "formula_id": "formula_26", "formula_text": "events Cnt F,m < E[Cnt F,m ] 1+ε" }, { "formula_coordinates": [ 13, 134.77, 158.01, 345.83, 41.99 ], "formula_id": "formula_27", "formula_text": "U m denote Cnt F,m > E Cnt F,m (1 + ε 1+ε ) , and thereby U m ⊆ U m . Let m * = log 2 |sol(F)| -log 2 (pivot) + 1 such that m * is the smallest m satisfying |sol(F)| 2 m (1 + ε 1+ε ) ≤ thresh -1." }, { "formula_coordinates": [ 13, 138.58, 242.34, 233.23, 34.03 ], "formula_id": "formula_28", "formula_text": "1. Pr Cnt F,m ≤ βE Cnt F,m ≤ 1 1+(1-β) 2 E[Cnt F,m ] 2. 
Pr Cnt F,m ≥ γE Cnt F,m ≤ 1 1+(γ-1) 2 E[Cnt F,m ]" }, { "formula_coordinates": [ 13, 134.77, 310.78, 359.96, 20.23 ], "formula_id": "formula_29", "formula_text": "Pr Cnt F,m -E Cnt F,m ≥ (γ -1)E Cnt F,m ≤ σ 2 [Cnt F,m ] σ 2 [Cnt F,m ]+((γ-1)E[Cnt F,m ]) 2 ." }, { "formula_coordinates": [ 13, 138.58, 374, 87.79, 23.79 ], "formula_id": "formula_30", "formula_text": "1. Pr [T m * -3 ] ≤ 1 62.5 2. Pr [L m * -2 ] ≤ 1" }, { "formula_coordinates": [ 13, 226.46, 539.16, 161.23, 125.78 ], "formula_id": "formula_31", "formula_text": "Pr [L] ≤                0.262 if ε < √ 2 -1 0.157 if √ 2 -1 ≤ ε < 1 0.085 if 1 ≤ ε < 3 0.055 if 3 ≤ ε < 4 √ 2 -1 0.023 if ε ≥ 4 √ 2 -1 Pr [U ] ≤ 0.169 if ε < 3 0.044 if ε ≥ 3" }, { "formula_coordinates": [ 14, 227.78, 151.34, 252.82, 34.15 ], "formula_id": "formula_32", "formula_text": "Pr [L] =   i∈{1,...,n} T i-1 ∩ T i ∩ L i  (3)" }, { "formula_coordinates": [ 14, 134.77, 217, 345.83, 166.31 ], "formula_id": "formula_33", "formula_text": "O1 : ∀i ≤ m * -3, T i ⊆ T i+1 . Therefore, i∈{1,...,m * -3} (T i-1 ∩ T i ∩ L i ) ⊆ i∈{1,...,m * -3} T i ⊆ T m * -3 O2 : For i ∈ {m * -2, m * -1}, we have i∈{m * -2,m * -1} (T i-1 ∩ T i ∩ L i ) ⊆ L m * -2 ∪ L m * -1 O3 : ∀i ≥ m * , since rounding Cnt F,i up to pivot √ 2 and m * ≥ log 2 |sol(F)| - log 2 (pivot), we have 2 i × Cnt F,i ≥ 2 m * × pivot √ 2 ≥ |sol(F)| √ 2 ≥ |sol(F)| 1+ε . The last inequality follows from ε ≥ √ 2 -1. Then we have Cnt F,i ≥ E[Cnt F,i ] 1+ε" }, { "formula_coordinates": [ 14, 294.95, 407.17, 86.73, 9.65 ], "formula_id": "formula_34", "formula_text": "(T i-1 ∩ T i ∩ L i ) = ∅" }, { "formula_coordinates": [ 14, 316.2, 516.85, 164.39, 24.31 ], "formula_id": "formula_35", "formula_text": "T i-1 ∩ T i ∩ U i  (4)" }, { "formula_coordinates": [ 14, 150.39, 582.5, 330.2, 84.01 ], "formula_id": "formula_36", "formula_text": ": ∀i ≤ m * -1, since m * ≤ log 2 |sol(F)| -log 2 (pivot) + 1, we have 2 i × Cnt F,i ≤ 2 m * -1 × thresh ≤ |sol(F)| 1 + ε 1+ε . Then we obtain Cnt F,i ≤ E Cnt F,i 1 + ε 1+ε . Therefore, T i ∩ U i = ∅ for i ≤ m * -1 and we have i∈{1,...,m * -1} T i-1 ∩ T i ∩ U i ⊆ i∈{1,...,m * -1} T i-1 ∩ T i ∩ U i = ∅" }, { "formula_coordinates": [ 15, 151.7, 117.42, 328.89, 169.78 ], "formula_id": "formula_37", "formula_text": "2 i × Cnt F,i > 2 m * × thresh ≥ |sol(F)| 1 + ε 1+ε . The second inequality follows from m * ≥ log 2 |sol(F)| -log 2 (pivot). Then we obtain Cnt F,i > E Cnt F,i 1 + ε 1+ε . Therefore, T i ⊆ U i for i ≥ m * . 
Since ∀i, T i ⊆ T i-1 , we have i∈{m * ,...,n} T i-1 ∩ T i ∩ U i ⊆ i∈{m * +1,...,n} T i-1 ∪ (T m * -1 ∩ T m * ∩ U m * ) ⊆ T m * ∪ (T m * -1 ∩ T m * ∩ U m * ) ⊆ T m * ∪ U m * ⊆ U m * (5) Remark that for √ 2 -1 ≤ ε < 1, we round Cnt F,m * up to pivot √ 2 ," }, { "formula_coordinates": [ 20, 167.15, 202.44, 279.86, 34.15 ], "formula_id": "formula_38", "formula_text": "E Cnt F,m = E   y∈sol(F) γ y,α (m)   = y∈sol(F) E γ y,α (m) = |sol(F)| 2 m" }, { "formula_coordinates": [ 20, 166.27, 268.83, 277.66, 102.68 ], "formula_id": "formula_39", "formula_text": "E Cnt 2 F,m = E   y∈sol(F) γ 2 y,α (m) + x =y∈sol(F) γ x,α (m) • γ y,α (m)   = E   y∈sol(F) γ y,α (m)   + x =y∈sol(F) E γ x,α (m) • γ y,α (m) = E Cnt F,m + |sol(F)|(|sol(F)| -1) 2 2m" }, { "formula_coordinates": [ 20, 164.29, 400.78, 286.29, 87.81 ], "formula_id": "formula_40", "formula_text": "σ 2 Cnt F,m = E Cnt 2 F,m -E Cnt F,m 2 = E Cnt F,m + |sol(F)|(|sol(F)| -1) 2 2m - |sol(F)| 2 m 2 = E Cnt F,m - |sol(F)| 2 2m ≤ E Cnt F,m" }, { "formula_coordinates": [ 20, 233.8, 608.63, 147.76, 9.71 ], "formula_id": "formula_41", "formula_text": "Pr [Error t ] < η(t, t/2 , Pr [L ∪ U ])" }, { "formula_coordinates": [ 21, 134.77, 141.33, 345.83, 22.27 ], "formula_id": "formula_42", "formula_text": "I L i = 1 for i = 1," }, { "formula_coordinates": [ 21, 201.31, 419.26, 279.28, 34.15 ], "formula_id": "formula_43", "formula_text": "Pr [L] =   i∈{1,...,n} T i-1 ∩ T i ∩ L i   (3 revisited)" }, { "formula_coordinates": [ 21, 134.77, 491.08, 301.26, 110.98 ], "formula_id": "formula_44", "formula_text": "O1 : ∀i ≤ m * -3, T i ⊆ T i+1 . Therefore, i∈{1,...,m * -3} (T i-1 ∩ T i ∩ L i ) ⊆ i∈{1,...,m * -3} T i ⊆ T m * -3 O2 : For i ∈ {m * -2, m * -1}, we have i∈{m * -2,m * -1} (T i-1 ∩ T i ∩ L i ) ⊆ L m * -2 ∪ L m * -1" }, { "formula_coordinates": [ 21, 151.7, 618.12, 333.49, 48.09 ], "formula_id": "formula_45", "formula_text": "2 i × Cnt F,i > 2 m * × thresh ≥ |sol(F)| 1 + ε 1+ε . The second inequality follows from m * ≥ log 2 |sol(F)|-log 2 (pivot). Then we obtain Cnt F,i > E Cnt F,i 1 + ε 1+ε . Case 1: E Cnt F,m * < 1+ε 2 thresh Lemma 7. Given ε < √ 2 -1, the following bounds hold: 1. Pr [T m * -2 ] ≤ 1 29.67 2. Pr [L m * -1 ] ≤ 1 10.84" }, { "formula_coordinates": [ 23, 135.96, 185.42, 344.63, 27.4 ], "formula_id": "formula_46", "formula_text": "√ 2 -1, we have thresh < (2 - √2" }, { "formula_coordinates": [ 23, 192.72, 200.92, 330.9, 17.64 ], "formula_id": "formula_47", "formula_text": "E Cnt F,m * -2 ≥ 2pivot. Therefore, Pr [T m * -2 ] ≤ Pr Cnt F,m * -2 ≤ (1 - √2" }, { "formula_coordinates": [ 23, 135.96, 218.8, 361.72, 38.89 ], "formula_id": "formula_48", "formula_text": "√ 2 4 , we obtain Pr [T m * -2 ] ≤ 1 1+( √ 2 4 ) 2 •2pivot ≤ 1 1+( √2" }, { "formula_coordinates": [ 23, 153.09, 242.03, 106.93, 18.7 ], "formula_id": "formula_49", "formula_text": "4 ) 2 •2•9.84•(1+ 1 √ 2-1 ) 2 ≤ 1" }, { "formula_coordinates": [ 23, 135.96, 261.37, 394.04, 33.53 ], "formula_id": "formula_50", "formula_text": "β = 1 1+ε and E Cnt F,m * -1 ≥ pivot to obtain Pr [L m * -1 ] ≤ 1 1+(1-1 1+ε ) 2 •E[Cnt F,m * -1 ] ≤ 1 1+(1-1 1+ε ) 2 •9.84•(1+ 1 ε ) 2 = 1 10.84 ." }, { "formula_coordinates": [ 23, 201.31, 345.21, 279.28, 34.15 ], "formula_id": "formula_51", "formula_text": "Pr [L] =   i∈{1,...,n} T i-1 ∩ T i ∩ L i   (3 revisited)" }, { "formula_coordinates": [ 23, 134.77, 410.18, 345.83, 133.72 ], "formula_id": "formula_52", "formula_text": "O1 : ∀i ≤ m * -2, T i ⊆ T i+1 . 
Therefore, i∈{1,...,m * -2} (T i-1 ∩ T i ∩ L i ) ⊆ i∈{1,...,m * -2} T i ⊆ T m * -2 O2 : For i = m * -1, we have T m * -2 ∩ T m * -1 ∩ L m * -1 ⊆ L m * -1 O3 : ∀i ≥ m * , since rounding Cnt F,i up to √ 1+2ε 2 pivot, we have Cnt F,i ≥ √ 1+2ε 2 pivot ≥ thresh 2 > E[Cnt F,m * ] 1+ε ≥ E[Cnt F,i ] 1+ε" }, { "formula_coordinates": [ 23, 294.95, 579.77, 86.73, 9.65 ], "formula_id": "formula_53", "formula_text": "(T i-1 ∩ T i ∩ L i ) = ∅" }, { "formula_coordinates": [ 23, 134.77, 634.27, 244.08, 30.58 ], "formula_id": "formula_54", "formula_text": "Pr [L] ≤ Pr [T m * -2 ] + Pr [L m * -1 ] Employing Lemma 7 gives Pr [L] ≤ 0.126. Case 2: E Cnt F,m * ≥ 1+ε 2 thresh Lemma 8. Given E Cnt F,m * ≥ 1+ε" }, { "formula_coordinates": [ 24, 134.77, 154.71, 442.13, 95.96 ], "formula_id": "formula_55", "formula_text": "1. Pr [T m * -1 ] ≤ 1 10.84 2. Pr [L m * ] ≤ 1 5.92 Proof. Let's first prove the statement 1. From E Cnt F,m * ≥ 1+ε 2 thresh, we can derive E Cnt F,m * -1 ≥ (1+ε)thresh. Therefore, Pr [T m * -1 ] ≤ Pr Cnt F,m * -1 ≤ 1 1+ε E Cnt F,m * -1 . Finally, employing Lemma 5 with β = 1 1+ε , we obtain Pr [T m * -1 ] ≤ 1 1+(1-1 1+ε ) 2 •E[Cnt F,m * -1 ] ≤ 1 1+(1-1 1+ε ) 2 •(1+ε)thresh = 1 1+9.84(1+2ε) ≤ 1" }, { "formula_coordinates": [ 24, 135.96, 251.32, 344.63, 30.56 ], "formula_id": "formula_56", "formula_text": "β = 1 1+ε and E Cnt F,m * ≥ 1+ε 2 thresh to obtain Pr [L m * ] ≤ 1 1+(1-1 1+ε ) 2 •E[Cnt F,m * ] ≤ 1 1+(1-1 1+ε ) 2 • 1+ε 2 thresh = 1 1+4.92(1+2ε) ≤ 1 5.92 ." }, { "formula_coordinates": [ 24, 201.31, 327.12, 279.28, 34.15 ], "formula_id": "formula_57", "formula_text": "Pr [L] =   i∈{1,...,n} T i-1 ∩ T i ∩ L i   (3 revisited)" }, { "formula_coordinates": [ 24, 134.77, 388.46, 301.26, 43.9 ], "formula_id": "formula_58", "formula_text": "O1 : ∀i ≤ m * -1, T i ⊆ T i+1 . Therefore, i∈{1,...,m * -1} (T i-1 ∩ T i ∩ L i ) ⊆ i∈{1,...,m * -1} T i ⊆ T m * -1" }, { "formula_coordinates": [ 24, 134.77, 462.41, 239.26, 32.57 ], "formula_id": "formula_59", "formula_text": "T m * -1 ∩ T m * ∩ L m * ⊆ L m * O3 : ∀i ≥ m * +1, since rounding Cnt F,i up to √1+2ε" } ]
10.18653/v1/2020.acl-main.421
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Product question answering (PQA) is a key technology in e-commerce applications. Given a question about a product, a PQA system searches the product webpage and provides an instant answer, so that customers do not need to traverse the page by या कपड़ा 'सला ह ु आ 'मले गा (Will the cloth be available stitched) oomph! women's unstitched georgette salwar suit dupatta material -navy blue नह1ं । यह कपडा 'सला ह ु आ नह1ं 'मले गा। (No. This cloth will not be available stitched.)" }, { "figure_ref": [], "heading": "Answer:", "publication_ref": [ "b22", "b5" ], "table_ref": [], "text": "Figure 1: Cross-lingual PQA: The user asks questions about a product in their language (such as Hindi), then the system searches for product information in English and generates an answer in the same language as the question.\nthemselves or seek help from humans (Li et al., 2017;Carmel et al., 2018). In our globalized world, it is essential to enable this technology for customers from different backgrounds. However, existing research focuses predominantly on English and leaves aside other language users. One of the biggest obstacles is the lack of datasets, which prevents us from training, evaluating and developing non-English PQA systems. Despite the growing number of multilingual QA datasets, their main focus is on general domains such as Wikipedia, which generalize poorly when applied to the PQA task, as we show in our experiments.\nTo address this, we present xPQA, the first largescale dataset for cross-lingual PQA enabling non-English questions to be answered from English content. Most comprehensive product information is usually available in a majority language such as English. Therefore, searching for relevant information in English often has a better chance of finding an answer. 2 This paper explores how to effectively train systems that retrieve information from English and generate answers in the question language to allow users to ask questions in any language. Fig 1 shows an example.\nMost existing multilingual QA datasets are cre-ated by translating English questions, introducing translation artifacts and discrepencies from native speakers' real information-seeking behaviors (Clark et al., 2020a). Instead, we collect questions from the original market places as written by native speakers, hire bilingual annotators to check the relevant product information and write the final answers in their target languages. This eliminates the need for translations and ensures that the information-seeking behaviors of native speakers are accurately represented.\nBased on the collected dataset, we report baseline results on two subtasks: (a) candidate ranking, which selects the best English candidate that contains the information to answer the non-English question; (b) answer generation, which generates a natural-sounding non-English answer to present to the user based on the selected English candidate. We find that applying a cross-lingual ranker trained on a Wikipedia-based QA dataset generalizes poorly to the product domain. The performance is even worse than training a multilingual ranker on the English in-domain data, suggesting that domain transferability is even more crucial than language transferability. The translation-based approach is the most effective for candidate ranking while the multilingual-finetuning works the best for answer generation. 
Nonetheless, on both tasks, there is a substantial gap between the English-based and cross-lingual performances. In the following, we first elaborate on the problem formulation for the cross-lingual PQA task ( §2), then explain the xPQA data collection process ( §3), and present experiment results ( §5.2) and conclusions ( §6)." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [ "b33" ], "table_ref": [], "text": "Task There are two important tasks for a crosslingual PQA system: candidate ranking and answer generation. In candidate ranking, given a question in a target language and a list of candidates in English, the ranker predicts a relevance score for every candidate and selects the top one. Candidate ranking is necessary because a given product webpage may contain hundreds of information pieces about the product, so as a practical matter we select the top candidate to use in generation. After getting the top candidate, an answer generator takes it as input together with the question and produces an answer in the question language. This step is crucial in order to deploy a user-friendly PQA system since the candidate is neither in the user language nor written specifically to answer the question.\nScenario We consider two scenarios for both tasks: zero-shot and fine-tuned. Zero-shot assumes that we do not have any labeled data and must rely on transfer learning from the English-based PQA dataset3 . Fine-tuned assumes that we can further finetune models on a limited number of crosslingual PQA annotations. Both are realistic scenarios as annotations are usually more abundant in English than in other languages (Shen et al., 2023).\nIn our experiments, we use ePQA as the Englishbased PQA dataset, which is an extension of the dataset in Shen et al. (2022a) with coverage and quality improvements. Details are in Appendix A." }, { "figure_ref": [], "heading": "xPQA Dataset Collection", "publication_ref": [], "table_ref": [], "text": "To train and evaluate our two tasks, the xPQA dataset contains annotations for (1) questioncandidate relevance to label whether every candidate is relevant to the question or not, and (2) answers where a natural-sounding answer is manually written if the candidate contains enough information to address the question. The collection process follows the steps below:\n1. Question Collection For our question set, we crawl publicly-available community questions from Amazon.com product pages in 11 markets, obtaining questions in 12 different languages. For each language, we choose the corresponding market, then sample 2,500 unique questions. From these sampled questions, we select 1500 questions for each language that are manually verified by our annotators as being in the target language, information seeking, and containing no offensive content." }, { "figure_ref": [], "heading": "Candidate Collection", "publication_ref": [], "table_ref": [], "text": "For every valid question, we link its corresponding product page in the US market (except for Hindi and Tamil which directly use the India market) and extract all English candidates from product information sources (details in Appendix B.2). Then, we translate every question into English with AWS translate,4 feed the translated question into an English-based ranker5 and obtain top-5 candidates from its candidate set." 
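To make the candidate-collection step concrete, here is a minimal sketch of the translate-then-rank pipeline described above. The translation and scoring callables are placeholders for whatever MT service (e.g., AWS Translate) and English-based ranker are available; they are not the exact models used to build xPQA.

```python
from typing import Callable, List, Tuple

def collect_top_candidates(
    question: str,                                # question in the source language, e.g. Hindi
    candidates: List[str],                        # English candidates from the linked product page
    translate_to_en: Callable[[str], str],        # placeholder for an MT service such as AWS Translate
    rank_score: Callable[[str, str], float],      # English-based ranker: (question, candidate) -> relevance
    k: int = 5,
) -> List[Tuple[float, str]]:
    """Translate the question into English, score every English candidate,
    and keep the top-k for relevance annotation."""
    q_en = translate_to_en(question)
    scored = [(rank_score(q_en, c), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:k]
```

The resulting top-k list is what is passed on to annotators in the relevance-annotation step that follows.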
}, { "figure_ref": [], "heading": "Relevance Annotation", "publication_ref": [], "table_ref": [], "text": "The top-5 English candidates and the non-English original questions are passed to annotators to judge their relevance. Each candidate is marked with one of three labels: \"fully answering\" (contains enough information to address the question), \"partially answering\" (contains useful information to partially address the question), and \"irrelevant\" (does not provide any helpful information). Guidelines are available in Appendix B.3." }, { "figure_ref": [], "heading": "Answer Search", "publication_ref": [], "table_ref": [], "text": "To increase the answer coverage, questions for which none of the top-5 candidates are marked as \"fully answering\" are given to annotators who are asked to actively search for the answer on the Amazon product page. If they find candidates fully answering the question, these are included with the label \"fully answering\"." }, { "figure_ref": [], "heading": "Answer Generation", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "For candidates marked as \"fully answering\", annotators are then asked to write natural, direct answers based on them. All annotators are bilingual, hired through the centific platform6 . The constructed xPQA dataset is split into 1000/400/100 questions as the test/train/dev sets for each language. Table 1 shows all languages included in the xPQA dataset. The detailed annotation process, payment, and statistics are explained in Appendix B." }, { "figure_ref": [], "heading": "Approaches", "publication_ref": [ "b16" ], "table_ref": [], "text": "For each task, we experiment with three types of baseline approaches: translate-test, translatetrain, and multilingual (Hu et al., 2020). Fig 2 provides a summary of these approaches." }, { "figure_ref": [], "heading": "Translate-test", "publication_ref": [ "b7", "b40" ], "table_ref": [], "text": "The essential idea here is to rely exclusively on English-centric models and datasets. In the zero-shot scenario, models are trained on the ePQA dataset. In the fine-tuned scenario, we must translate questions and answers in the xPQA dataset into English as this is an English-centric model. This translated dataset, termed xPQA_MT is used to further fine-tune the zero-shot models. At runtime, we use an external machine translation model to translate the question into English and apply the ranker to select the best candidate. Afterwards, an English-based generator produces an answer in English, which is then post-translated to the target language. Translate-test is a common approach in industry as it uses well-trained English-based models and off-the-shelf translation tools without further modifications. However, such a pipelined process introduces runtime latency and can lead to error propagation if translation quality is not perfect.\nTranslate-train In contrast to the above, here we apply all translation processes in training, or offline, so that no additional latency is added at runtime. In the zero-shot scenario, we machine-translate all questions and answers in the ePQA dataset into each of the 12 languages we consider. The resulting dataset, termed ePQA_MT, is used to train a multilingual model. In the fine-tuned scenario, we further finetune the model on the xPQA dataset. 
As the model is defined to be multilingual, it can directly take input questions in their original languages and output answers in the target languages without any translation process.\nMultilingual Finally, this approach is similar to the translate-train one in that both use multilingual models rather than an English-only model, but the difference is that the multilingual approach requires no translations at training time. In the zeroshot scenario, it trains a multilingual pretrained model directly on the English-only ePQA dataset and relies only on its own pretrained multilingual knowledge to adapt to other languages. In the finetuned scenario, we further fine-tune the model on the xPQA dataset. Note that this approach still requires runtime post-translation of the generated English answer into the target language. This is because we find that multilingual models can only generate English answers when trained only on English datasets. Although special decoding constraints could be use to restrict output vocabulary to that of the target language, zero-shot multilingual adaptation in generation tasks is still an open challenge (Chen et al., 2022;Zhang et al., 2022). It is worth mentioning that the three types of approaches can be combined. For example, we could follow the translate-train approach to train the candidate ranker and follow the multilingual approach to train the answer generator. Details of the model implementation are in Appendix C." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b26" ], "table_ref": [], "text": "Although many QA works report end-to-end performances, we chose not to report them because (1) Most product questions, as well as the information sources such as reviews and customer answers, are subjective. The correctness of answers depends on the specific candidates for which there is no universal ground truth (McAuley and Yang, 2016); (2) Only providing answers grounded on references is a critical requirement for an online PQA deployment. When candidate ranking fails to provide suitable candidates in the first stage, even if the answer generator manages to make a good guess, 7 it is still considered a failure. Therefore, end-to-end evaluations are not suitable and the evaluation of answer generation has to be candidate-dependent.\nWe evaluate the ranker with Precision of the top-1 candidate, P@1, as the generated answer is based on the top-1 candidate. To remove the effects of language-specific answering ratios, we report P@1 scores only on the answerable questions where at least one candidate is marked as \"fully answering\". The generator is evaluated with the sacreBLEU 7 As the answer depends on the information of the specific product, the chance of guessing the correct answer without proper candidates is close to random and fully unreliable. score8 . The generations are produced and evaluated only from candidates marked as \"fully answering\" since otherwise, the ground truth is undefined." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b20", "b0" ], "table_ref": [ "tab_1", "tab_2" ], "text": "Task 1: Candidate Ranking Table 2 shows P@1 of different candidate ranking approaches and their average scores. Translate-test performs the best, and its advantage is particularly prominent in the zero-shot scenario. In the fine-tuned scenario, however, the other two approaches can also perform similarly. 
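As a concrete companion to the evaluation protocol above, the sketch below computes P@1 restricted to answerable questions and a corpus-level sacreBLEU score for generation. The data structures and names are illustrative, and the sacrebleu call assumes that package's standard corpus_bleu interface rather than the exact evaluation scripts used here.

```python
from typing import Dict, List
import sacrebleu  # assumes the sacreBLEU package used for the generation metric

def precision_at_1(ranked_labels: Dict[str, List[str]]) -> float:
    """P@1 over answerable questions only: a question is answerable if at least one
    candidate is labelled 'fully answering'; it counts as a hit if the top-ranked
    candidate carries that label."""
    answerable = {q: labels for q, labels in ranked_labels.items()
                  if "fully answering" in labels}
    hits = sum(labels[0] == "fully answering" for labels in answerable.values())
    return hits / max(len(answerable), 1)

def generation_bleu(hypotheses: List[str], references: List[str]) -> float:
    # Corpus-level BLEU between generated answers and human-written answers,
    # computed only for candidates marked as 'fully answering'.
    return sacrebleu.corpus_bleu(hypotheses, [references]).score

# Toy example: labels are ordered by ranker score, best first.
labels = {"q1": ["fully answering", "irrelevant"],
          "q2": ["partially answering", "fully answering"],
          "q3": ["irrelevant", "irrelevant"]}      # unanswerable, excluded from P@1
print(precision_at_1(labels))                      # 0.5
```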
The translate-train approach outperforms the multilingual approach mainly for languages that do not use Latin scripts. Even for low-resource languages, such as Tamil whose translation quality is far from satisfactory, translating the training corpus still helps the multilingual model adapt to the target language. This implies existing pre-trained multilingual models are already good at adapting to new languages with Latin scripts. Translating the training corpus is mainly helpful to adapt the model into new scripts (Lauscher et al., 2020). Fine-tuning an English BERT on the ePQA training set leads to a P@1 of 70.7% on the monolingual English test set, which is significantly higher than all other languages except Polish, suggesting scope for substantial improvement.9 \nTask 2: Answer Generation Table 3 shows the BLEU score of different answer generation approaches and their average scores. In the zero- shot scenario, the translate-test approach often performs the best on languages with non-Latin scripts and the translate-train approach performs the best on languages with Latin scripts. The translate-train approach outperforms the multilingual approach with a few exceptions. Interestingly, all the exceptions happen in languages using non-Latin scripts, which contradicts the findings in candidate ranking. We hypothesize that the used pre-trained multilingual model is better at understanding non-Latin scripts than actually generating them because generating the text requires more advanced knowledge of grammar, which cannot be easily distilled from imperfect machine translators (Adelani et al., 2022). Fine-tuning models on the xPQA training data leads to big improvements across approaches, especially for multilingual and translate-train which do not rely on machine translators at runtime. The translate-test approach, due to the error propagation from two machine translation steps, significantly underperforms the other two. Fine-tuning an English T5 model on the ePQA training set leads to a BLEU score of 49.7%; although BLEU scores are related to language-specific tokenizers and questions, we believe this consistent gap implies large opportunities for improvement." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Analysis Domain vs Language Transferability", "publication_ref": [], "table_ref": [], "text": "There are cross-lingual QA datasets in other domains. When building a system for xPQA, is it better to use an English-only in-domain QA dataset or a crosslingual out-of-domain QA dataset? To answer this question, we train a new multilingual ranker on the XOR-TyDi dataset (Asai et al., 2021a), which is a representative cross-lingual QA dataset with real questions from the Wikipedia domain. We treat the gold passage containing the correct answer span as positive and randomly sample 5 other passages as negative. The comparison with our existing multilingual approach trained on the ePQA dataset is shown in Figure 3. We can see that fine-tuning mod- els on the ePQA dataset leads to significantly better performance on all languages with few exceptions, suggesting domain differences are even more crucial than language differences for the candidate ranking task in xPQA. It is necessary to collect in-domain annotations for good performance.\nAnswerability Prediction As the amount of information differs among products, it is very likely that many questions are not answerable with existing candidates and the model should not attempt to answer given the available information. 
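One simple way to operationalize such abstention, discussed in the next paragraph, is to threshold a confidence score and trade precision against recall; the sketch below is purely illustrative, and the exact definitions behind Figure 4 may differ.

```python
import numpy as np

def answerability_curve(confidences, top1_is_full, thresholds):
    """Precision/recall of answering as the confidence threshold varies.
    confidences: top-1 'fully answering' probability per question;
    top1_is_full: whether that top-1 candidate truly fully answers the question."""
    conf = np.asarray(confidences, dtype=float)
    gold = np.asarray(top1_is_full, dtype=bool)
    curve = []
    for t in thresholds:
        answered = conf >= t                       # questions the system chooses to answer
        precision = gold[answered].mean() if answered.any() else 1.0
        recall = (gold & answered).sum() / max(int(gold.sum()), 1)
        curve.append((float(t), float(precision), float(recall)))
    return curve
```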
A common practice is to use the model score as a predictor for the answerability confidence. To see how effective this is, we visualize the change of precision and recall with varying model score thresholds in Figure 4. We can see that in the zero-shot scenario, there is a larger performance variance across languages, especially for the multilingual approach, which solely relies on the knowledge from the pretrained model. The multilingual approach is also more sensitive to the threshold and its recall drops much faster than the other two approaches. Fine-tuning on the xPQA training data reduces the gaps between the three approaches. The English model, as expected, consistently performs better, especially in the low-confidence region." }, { "figure_ref": [], "heading": "Effects of Translation Quality", "publication_ref": [ "b34", "b35", "b23" ], "table_ref": [ "tab_4" ], "text": "To investigate the effects of the translation quality in the translate-test approach, we select German and Tamil as two languages with very different translation qualities and obtain manual translations of their questions. Comparisons to machine-translated questions are shown in Table 4. Apart from P@1, we also show AUPC (Area Under Perturbation Curve), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) scores. We can see that the improvement from using human translations is negligible in German but substantial in Tamil. Even with human translations, we can still see a big gap between performances on the English monolingual (70.7%) and xPQA test sets (48.8% and 54.2%), suggesting that question-shape shifts can be an even bigger challenge than language shifts for the candidate ranking task. The problem of language shifts might be crucial only for low-resource languages without decent MT systems such as Tamil.\nRuntime Latency Table 5 shows the runtime latency of every component tested in one AWS P3.16 instance. We feed questions in all languages one by one to simulate an online environment. As seen, the candidate ranker is fast and the computation over multiple candidates can be easily parallelized. The pre/post-translation costs more time, but the main bottleneck is the answer generation step, which is 25× slower than the ranking. This is clearly more than the latency budget of most online applications and can be the focus of future research. Potential improvements could be in non-autoregressive decoding, efficient attention, or distillation into a smaller model (Tang et al., 2021, 2022; Li et al., 2022)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper presents xPQA, a dataset for cross-lingual PQA supporting non-English questions to be answered from English product content. We report baseline results and findings for three approaches: translate-test, multilingual, and translate-train. Experiments show that the translate-test approach performs the best for the candidate ranking task while the translate-train approach performs the best for the answer generation task. However, there remains significant room for improvement relative to an English-based monolingual PQA system. We hope that future research can benefit from our work to improve cross-lingual PQA systems." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b28" ], "table_ref": [], "text": "While the xPQA dataset is created to be as close to the real-world scenario as possible, it has two major drawbacks. Firstly, the candidate set in the dataset does not include all candidates for a given product because annotating all candidates is prohibitively expensive. The subjectivity of product questions and candidates also makes it hard to get ground-truth short-span answers, which prevents a straightforward end-to-end evaluation over the full candidate set.
A potential fix is to run human evaluations on the top-1 candidate over the full candidate set from each model, but it'd be costly to do so. A more realistic solution is to have an online evaluation for the best model only, which we leave for future work. Secondly, the answer annotation is based only on a single candidate because handling information from multiple candidates requires careful instructions on conflicting information and summarization skills. This might limit the model in answering complex questions that require inference over multiple candidates. However, we find this case to be very rare in real customer questions. Furthermore, as we do not summarize multiple candidates, the returned answer can be biased toward the opinion of a single customer. Our evaluation also has potential limitations in that (1) We did not extensively evaluate the quality of generated answers with manual annotation. It is known that BLEU scores might not correlate well with human evaluations on generation tasks, and they can be misleading in certain cases;\n(2) We only compared major types of baseline algorithms and did not explore the effects of leveraging existing larger, more powerful pre-trained language models such as mT0 (Muennighoff et al., 2022) and Flan-T5 (Chung et al., 2022). Conclusions might change if we hire annotators to perform more human evaluations or change the model architecture." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "E-commerce has been increasingly popular these years. Nonetheless, a big amount of people cannot benefit much from it because most E-commerce websites only support a few major languages. Deploying an xPQA system can have a broad impact across a wide range of non-English speakers to assist them in their shopping experience. With a welldeveloped xPQA system, we only need to maintain comprehensive product information in one majority language, but allow non-English speakers easily get access to the product information. This can significantly reduce the maintenance cost and benefit the democratization of AI. Nevertheless, there are two major caveats before deploying a safe, reliable xPQA system: (1) The answer generator needs to be fairly evaluated by humans terms of faithfulness. While answer generation can greatly improve user-friendliness, it also brings potential risks of providing false information;\n(2) The users should be well noticed that the provided answer is drawn from the opinion of a single customer or other sources. It cannot reflect the opinion of the vendor, or seller nor imply any trend from the public." }, { "figure_ref": [], "heading": "A Difference with Previous Datasets", "publication_ref": [ "b27", "b39", "b6", "b36", "b14", "b12", "b19", "b37", "b4", "b21", "b1", "b15", "b24", "b25" ], "table_ref": [], "text": "Product Question Answering Product question answering (PQA) differs from general-knowledge QAs in that questions often seek subjective opinions on specific products, so earlier research usually treated it as an opinion mining problem (Moghaddam and Ester, 2011;Yu et al., 2012). Recent advances in neural networks propagated the use of dense retrieval and generation models to provide direct answers. Many relevant datasets are curated to facilitate this study (Chen et al., 2019;Xu et al., 2020;Gao et al., 2021;Deng et al., 2022;Shen et al., 2022b,c). 
However, they are either based on simulated questions, or community questionanswers where the answers are noisy and have no direct connection with product information candidates (Lai et al., 2018;Xu et al., 2019;Barlacchi et al., 2022). The only exception is Shen et al. (2022a) where exact annotations are provided for both candidate relevance and answer generation, but it focuses only on one product category and the annotation quality is not good enough. Specifically, we sample about 2000 question-candidate pairs then perform an in-house annotation and find around 20% of the annotations are incorrect. As a result, we construct the ePQA dataset with the following main differences from the dataset in Shen et al. ( 2022a): (1) It has higher annotation quality with rounds of verifications. In our in-house annotation, the error rate is less than 5%;\n(2) It does not restrict the product categories, while the original dataset focuses only on the toys and games products;\n(3) It defines finer-grained 3-class labels for each candidate, while the original dataset contains only binary labels; (4) Every candidate is checked with its context (surrounding sentences) to make sure the label is correct.\nTo the best of our knowledge, all existing PQA datasets are monolingual and questions are usually in high-resource languages such as English or Chinese, which leads to our motivation of building a cross-lingual PQA dataset.\nCross-Lingual Question Answering Recently, many non-English question answering (QA) datasets in the general Wikipedia domain have been proposed (Lewis et al., 2020;Artetxe et al., 2020;Clark et al., 2020b;Hardalov et al., 2020). Several datasets focus on the open-retrieval (open-domain) setting, where a gold document or paragraph is not pre-given and a system needs to search documents to answer questions (Liu et al., 2019;Asai et al., 2021a;Longpre et al., 2021). Importantly, all of those prior datasets are created based on Wikipedia or school exams, and there is no prior work on cross-lingual product QA.\nNotably, ePQA contains 131,52/1,000/2,000 questions in the train/dev/test sets respectively, which is significantly larger than xPQA (as in realistic scenarios). It can be used to analyze the performance gap between mono-lingual PQA and cross-lingual PQA." }, { "figure_ref": [], "heading": "B Dataset Collection B.1 Question Collection", "publication_ref": [], "table_ref": [], "text": "In the question collection phase, questions are kept if they fulfill the following criteria: (1) It is identified as the target language through Amazon Comprehend10 ; (2) It contains no URL links; (3) It contains at most one question mark so as to avoid multiple questions; (4) It contains at least 3 words and less than 20 words; (5) Its corresponding product is also available in the US market.11 " }, { "figure_ref": [], "heading": "B.2 Candidate Processing", "publication_ref": [], "table_ref": [], "text": "Our candidates come from 6 information sources:\n(1) product title, (2) semi-structured attributes, (3) product bullet points, (4) product description, (5) community answers (excluding the answer that directly replies to the question); (6) user reviews. Every product title and attribute is treated as a single candidate. For the other product information, we split them into sentences and treat each sentence as the candidate. For candidates from community answers, We further concatenate them with the corresponding community questions to provide more context. 
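An illustrative sketch of assembling the per-product candidate pool from these six sources follows; sent_split() stands in for any sentence splitter, the field names are hypothetical, and the lower-casing and number normalization described next are omitted. Whether community answers are themselves sentence-split is not fully specified, so each answer is kept as a single candidate here.

```python
def build_candidate_pool(product, community_qas, reviews, sent_split):
    """Assemble candidates: title and each attribute as single candidates,
    longer texts split into sentence-level candidates."""
    candidates = [product["title"]]
    candidates += [f"{name}: {value}" for name, value in product.get("attributes", {}).items()]
    for block in product.get("bullet_points", []) + [product.get("description", "")]:
        candidates += sent_split(block)
    for qa in community_qas:
        # community answers are concatenated with their community question for context
        candidates.append(f"{qa['question']} {qa['answer']}")
    for review in reviews:
        candidates += sent_split(review)
    return [c for c in candidates if c and c.strip()]
```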
All candidates are lower cases and emojis are removed. Numbers from the semi-structured attributes are further normalized to keep at most 2 decimals." }, { "figure_ref": [], "heading": "B.3 Relevance Annotation", "publication_ref": [], "table_ref": [], "text": "Each candidate is marked with one of three labels: \"fully answering\" (it contains enough information to address the question), \"partially answering\" (it contains useful information to partially address the question), and \"irrelevant\" (it's not useful in answering the question at all). To make sure the candidate is properly understood, we also provide its context (surrounding sentences) to the annotators. The exact definitions for the three labels and guidelines used are:\n• Fully answering. Meaning that the response contains clear information to tell about the answer. It can take some inference step to get the answer, but it must contain enough information to help come to the answer. • Partially answering (relevant but not fully answering). Meaning that the response contains useful information that help one understand more, and narrow down the range of the answer, yet not enough to get the exact answer from it.\n• Irrelevant. Meaning that the response does not provide useful relevant information at all, and a customer will not get anything new about their question after reading it.\nNote that in this step, annotators do NOT need to consider factual correctness. For the question \"what color is it?\", it does not matter if the response is saying it is blue or red. Annotators should focus on the content only but not the factual correctness.\nBesides, even if it contains other extra information or the response is not natural, as long as the proper information is included, then it is considered as fully answering. Specifically, Fully answering means the response contains enough information to let one draw the answer. The criteria of fully answering should NOT be overly strict. Annotators can be lenient with the word choice, as long as the response conveys the proper meaning. For example: Question: is it an awesome gift for my girl friend? Response: it is a nice valentine gift for your partner.\nIn this case, the difference between \"awesome\" and \"nice\" is not relevant, as the response is either way saying that it is a good gift for your girl friend or partner, and thereby should be judged as \"fully answering\".\nAnother example: Question: is it comfortable to sleep on for a 6\" tall man? Response: It is comfortable to lie down for tall people.\nAnnotators should not be overly strict about whether 6\" can be considered as \"tall\" and whether \"lie down\" is equivalent to \"sleep on\", etc. Based on common sense, if the immediate impression after reading the response provides the needed information, one should NOT overthink other ways of interpreting this response.\nHelpful but not fully answering means the response contains helpful information, but is not enough to answer the question, or it can fully answers the question but the information is uncertain. \"Helpful\" means it provides useful information to help you know more about the question or narrow down the scope of the answer.\nFor example: -question: Is it good for my 3year-old kid? -response: my 5-year-old son likes it.\nIt cannot fully tell whether a 3-year-old will like it, but knowing that a 5-year-old likes it is helpful information. 
It helps you narrow down the range of the answer -You know it is for kids but not adults, just not sure if it works exactly for 3-year-old.\n\"irrelevant\" means the response provides zero useful information about the question, and is totally useless. Imagine you are a customer that raises this question, you should only select this option when you cannot find any useful information from the response." }, { "figure_ref": [], "heading": "B.4 Answer Generation", "publication_ref": [], "table_ref": [], "text": "During the answer annotation, annotators are instructed to provide a natural, informative, and complete sentence to directly answer the user questions given the provided information in the response. The provided answer is required to be:\n• natural. It should be a fluent, natural-sounding sentence.\n• informative. It should provide key information or explanations for users to better understand the question. It cannot be a single word like \"Yes\" or \"No\" without further content.\n• complete. It should be a complete sentence that provides more context instead of a short span.\nThere is also a caveat to avoid copying the candidate exactly. Annotators should always extract useful information from it and show the reasoning step in the answer to make it a natural reply. If the candidate is from a customer-provided content, they are further instructed to write from a thirdparty viewpoint. For user-provided contents, the answer will be in the form of \"A customer says he feels ...\" instead of \"I feel ...\"." }, { "figure_ref": [ "fig_5" ], "heading": "B.5 Quality control and annotation cost", "publication_ref": [], "table_ref": [], "text": "Annotations are done through the centific platform12 . The whole annotation process is summarized in Figure 6. From the Home of the webapp, we can see the status of the task (how many hits have been done and how many hits remain to be annotated). In the Quality Assessment mode, the assessor could search and select annotators and then check the completed hits at any time. When the assessor checks the hits, they can correct them directly and give feedback to the annotators, to improve annotation quality. The annotation cost differs among languages and tasks. " }, { "figure_ref": [], "heading": "B.6 Dataset Statistics", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "To increase the number of negative samples, for every question we further randomly sample 5 candidates from the candidate set of corresponding products. These negative candidates, together with the annotated candidates, will form a closed-pool candidate set to evaluate the candidate ranker. Table 7 shows the statistics of the ePQA and xPQA datasets." }, { "figure_ref": [], "heading": "C Experiments", "publication_ref": [ "b13", "b29", "b38", "b18" ], "table_ref": [], "text": "For the candidate ranking task, we initialize our model with Bert-base (Devlin et al., 2019) in translate-test and mBert-base in the other two approaches. Following the common practice, we concatenate the question and candidate (split by the <SEP> token) and then feed it into the encoder. An MLP layer is added on top of the first <CLS> token to output three logits. These logits go through the softmax layer to represent the probability of three labels. 
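A minimal sketch of this ranker using the Hugging Face transformers API is given below. It is a sketch rather than the released implementation: a single linear head stands in for the unspecified MLP, the label-index order is assumed, and for the multilingual and translate-train variants the model name would be swapped to an mBERT checkpoint such as bert-base-multilingual-cased.

```python
import torch
from transformers import AutoModel, AutoTokenizer

class CandidateRanker(torch.nn.Module):
    """Cross-encoder sketch: [CLS] question [SEP] candidate -> 3-way label probabilities."""
    def __init__(self, model_name="bert-base-uncased", num_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = torch.nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        logits = self.head(hidden[:, 0])        # hidden state of the first ([CLS]) token
        return torch.softmax(logits, dim=-1)    # assumed order: irrelevant / partial / full

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ranker = CandidateRanker()
batch = tokenizer(["is it waterproof?"], ["the case is rated ip68."],
                  truncation=True, max_length=128, padding=True, return_tensors="pt")
probs = ranker(batch["input_ids"], batch["attention_mask"])   # shape (1, 3)
```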
At runtime, we use the probability of \"fully answering\" as the score for each candidate.\nFor the answer generation task, we initialize our model with T5-base (Raffel et al., 2020) for the translate-test approach and mT5-base (Xue et al., 2021) for the other two approaches. The input is the question concatenated with the candidate and the output is the ground-truth answer. At runtime, we generate the output with beam search (beam size as 5). Both the ranker and generator are trained with standard cross entropy loss.\nWe implement all models based on the Huggingface Transformers library 13 with PyTorch 14 . Models are optimized with the Adam optimizer (Kingma and Ba, 2014). We truncate the total input length to 128 subword tokens and select the learning rate from [5e-6, 1e-5, 3e-5, 5e-5, 1e-4]. The warm-up step is selected from [5%, 10%, 20%, 50%] of the whole training steps. For the ranker, we choose the best configuration based on the accuracy of the validation set. For the generative model, we choose the best configuration based on the perplexity of the validation set. In the end, we set the learning rate of the ranker as 3e-5 and that of the generator as 1e-5. The warm-up steps are set to 20% for both. The batch size is set as 64. We evaluate the model performance every 1% of the whole training step to select the best checkpoint. All models are trained on one AWS P3.16 instance which includes 8 Nvidia V100 GPUs. The random seed is set as 42.\n13 https://huggingface.co./ 14 https://pytorch.org/" } ]
Product Question Answering (PQA) systems are key in e-commerce applications to provide responses to customers' questions as they shop for products. While existing work on PQA focuses mainly on English, in practice there is need to support multiple customer languages while leveraging product information available in English. To study this practical industrial task, we present xPQA, a large-scale annotated cross-lingual PQA dataset in 12 languages across 9 branches, and report results in (1) candidate ranking, to select the best English candidate containing the information to answer a non-English question; and (2) answer generation, to generate a natural-sounding non-English answer based on the selected English candidate. We evaluate various approaches involving machine translation at runtime or offline, leveraging multilingual pre-trained LMs, and including or excluding xPQA training data. We find that (1) In-domain data is essential as cross-lingual rankers trained on other domains perform poorly on the PQA task; (2) Candidate ranking often prefers runtime-translation approaches while answer generation prefers multilingual approaches; (3) Translating offline to augment multilingual models helps candidate ranking mainly on languages with non-Latin scripts; and helps answer generation mainly on languages with Latin scripts. Still, there remains a significant performance gap between the English and the cross-lingual test sets. 1
xPQA: Cross-Lingual Product Question Answering across 12 Languages
[ { "figure_caption": "Figure 2 :2Figure 2: Summary of experimented approaches. The ePQA_MT (and xPQA_MT) set is the translated version of ePQA (and xPQA) into all non-English languages (and English). **indicates that post-translate is only required for the zero-shot model.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "de it fr es pt pl ar hi ta zh ja ko with XOR-TyDi v.s. ePQA ePQA-Zeroshot XQR-TyDi-Zeroshot ePQA-Finetuned XOR-TyDi-Finetuned", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Comparison of transfer learning datasets. The English-only in-domain ePQA data is more useful than the cross-lingual out-of-domain XOR-Tydi dataset.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Precision and recall with varying thresholds. The red line is for the English model and the other lines are for the average score of the cross-lingual model. Vertical bars are the standard deviations across all languages.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: UI of the annotation task. Annotators will be shown a question in one of the 13 languages we considered and a candidate extracted from product information. Annotators can also see the title, and picture of the product, as well as context (surrounding sentences of the candidate with the actual candidate being highlighted), to provide a more accurate annotation.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Annotation process and quality control of the task.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Languages in the xPQA dataset.", "figure_data": "LanguageBranchScriptMarketGerman (DE)GermanicLatinGermanyItalian (IT)RomanceLatinItalyFrench (FR)RomanceLatinFranceSpanish (ES)RomanceLatinSpainPortuguese (PT) RomanceLatinBrazilPolish (PL)Balto-Slavic LatinPolandArabic (AR)SemiticArabicSAHindi (HI)Indo-AryanDevanagari IndiaTamil (TA)DravidianTamilIndiaChinese (ZH)SiniticChineseChinaJapanese (JA)JaponicKanji;Kana JapanKorean (KO)HanHangulUS", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "P@1 of candidate ranking for each language and the averaged score (AVG) on answerable questions in xPQA testset.", "figure_data": "ModelDEITFRESPTPLARHITAZHJAKOAVGZero-shot ScenarioTranslate-test48.7 48.6 59.7 63.8 56.9 63.6 49.2 60.2 44.6 56.1 50.7 48.9 54.2Multilingual48.4 46.2 59.1 59.8 55.5 60.0 45.1 42.7 40.4 53.0 45.0 45.4 50.1Translate-train 47.7 47.8 57.4 60.8 57.0 58.7 48.7 50.9 44.1 55.8 47.8 49.8 52.2Fine-tuned ScenarioTranslate-test51.7 55.1 64.8 66.8 64.0 68.0 57.3 68.4 50.0 61.9 57.9 60.2 60.5Multilingual52.7 53.5 64.8 65.7 63.5 70.6 54.7 67.6 49.0 60.3 51.6 57.8 59.3Translate-train 52.1 54.0 63.4 67.1 62.1 71.6 55.1 67.4 51.3 64.2 54.7 60.6 60.3ModelDEITFRESPTPLARHITAZHJAKOAVGZero-shot ScenarioTranslate-test7.017.1 14.3 11.5 19.4 11.7 18.58.95.119.8 12.98.512.9Multilingual6.014.2 11.6 10.1 18.39.916.37.04.817.8 11.75.911.1Translate-train 16.9 17.1 20.5 14.1 19.5 18.8 15.9 15.84.416.6 12.87.415.0Fine-tuned ScenarioTranslate-test8.925.3 15.4 14.2 21.0 16.6 17.3 16.37.121.7 12.38.815.4Multilingual27.2 27.1 22.5 31.3 20.0 32.3 13.4 26.0 16.7 26.2 31.6 
44.0 26.5Translate-train 32.9 31.6 26.6 36.6 24.4 40.1 16.0 28.5 18.5 30.3 33.7 51.6 30.9", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "BLEU score of answer generation for each language and the averaged score (AVG) on the xPQA test set.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Latency of each component. Generating and trans-", "figure_data": "lating cost much more time than ranking.questions are shown in Table 4. Apart from P@1,we also show AUPC (Area Under PerturbationCurve), MAP (Mean Average Precision) and MRR(Mean Reciprocal Rank) scores. We can see thatthe improvement from using human translationsis negligible in German but substantial in Tamil.Even with human translations, we can still see a biggap between performances on English monolingual(70.7%) and xPQA test sets (48.8% and 54.2%),", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": "pro-", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Annotation cost per unit for each task (in US dollars). The answer search task for English questions is annotated in-house so there is no external cost. The translation annotation is only conducted for German and Tamil.", "figure_data": "LanguageBranchScriptMarketTrain + Dev #Inst #Ans#InstTest #Ans %Full %RelEnglish (EN)GermanicLatinUS131,520 24,928 20,142 4,39284.195.2German (DE)GermanicLatinGermany5,11080610,201 1,50473.486.8Italian (IT)RomanceLatinItaly5,08157110,168 1,31660.679.9French (FR)RomanceLatinFrance5,04783810,135 1,68471.196.7Spanish (ES)RomanceLatinSpain5,0551,00310,112 1,96178.591.5Portuguese (PT) RomanceLatinBrazil5,06489610,120 1,77578.998.4Polish (PL)Balto-Slavic LatinPoland5,05392510,101 1,87376.790.1Arabic (AR)SemiticArabicSA5,09775210,178 1,54471.384.6Hindi (HI)Indo-AryanDevanagari India5,17592210,319 1,67091.795.3Tamil (TA)DravidianTamilIndia5,07689210,166 1,58473.481.7Chinese (ZH)SiniticChineseChina5,0951,02810,148 1,86581.291.5Japanese (JA)JaponicKanji;Kana Japan5,11193910,201 1,74881.288.5Korean (KO)HanHangulUS5,06064210,116 1,27759.670.5", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Statistics of the ePQA and xPQA Datasets. #Inst/#Ans is the number of question-candidate pairs with relevance labels/manually written answers. %Full/%Rel is the percentage of questions that can be fully/partially answered.", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
Xiaoyu Shen; Akari Asai; Bill Byrne; Adrià De Gispert
[ { "authors": "David Adelani; Jesujoba Alabi; Angela Fan; Julia Kreutzer; Xiaoyu Shen; Machel Reid; Dana Ruiter; Dietrich Klakow; Peter Nabende; Ernie Chang", "journal": "", "ref_id": "b0", "title": "A few thousand translations go a long way! leveraging pre-trained models for african news translation", "year": "2022" }, { "authors": "Mikel Artetxe; Sebastian Ruder; Dani Yogatama", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "On the cross-lingual transferability of monolingual representations", "year": "2020" }, { "authors": "Akari Asai; Jungo Kasai; Jonathan Clark; Kenton Lee; Eunsol Choi; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "XOR QA: Cross-lingual open-retrieval question answering", "year": "2021" }, { "authors": "Akari Asai; Jungo Kasai; Jonathan H Clark; Kenton Lee; Eunsol Choi; Hannaneh Hajishirzi", "journal": "", "ref_id": "b3", "title": "Xor qa: Cross-lingual open-retrieval question answering", "year": "2021" }, { "authors": "Gianni Barlacchi; Ivano Lauriola; Alessandro Moschitti; Marco Del Tredici; Xiaoyu Shen; Thuy Vu; Bill Byrne; Adrià De; Gispert ", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "FocusQA: Open-domain question answering with a context in focus", "year": "2022" }, { "authors": "David Carmel; Liane Lewin-Eytan; Yoelle Maarek", "journal": "", "ref_id": "b5", "title": "Product question answering using customer generated content-research challenges", "year": "2018" }, { "authors": "Shiqian Chen; Chenliang Li; Feng Ji; Wei Zhou; Haiqing Chen", "journal": "", "ref_id": "b6", "title": "Driven answer generation for product-related questions in e-commerce", "year": "2019" }, { "authors": "Yiran Chen; Zhenqiao Song; Xianze Wu; Danqing Wang; Jingjing Xu; Jiaze Chen; Hao Zhou; Lei Li", "journal": "", "ref_id": "b7", "title": "Mtg: A benchmark suite for multilingual text generation", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b8", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Jonathan H Clark; Eunsol Choi; Michael Collins; Dan Garrette; Tom Kwiatkowski; Vitaly Nikolaev; Jennimaria Palomaki; ; ", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b9", "title": "Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages", "year": "2020" }, { "authors": "Jonathan H Clark; Eunsol Choi; Michael Collins; Dan Garrette; Tom Kwiatkowski; Vitaly Nikolaev; Jennimaria Palomaki", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b10", "title": "TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages", "year": "2020" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b11", "title": "Electra: Pretraining text encoders as discriminators rather than generators", "year": "2020" }, { "authors": "Yang Deng; Yaliang Li; Wenxuan Zhang; Bolin Ding; Wai Lam", "journal": "ACM Transactions on Information Systems (TOIS)", "ref_id": "b12", "title": "Toward personalized answer generation in e-commerce via multi-perspective preference modeling", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b13", 
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Shen Gao; Xiuying Chen; Zhaochun Ren; Dongyan Zhao; Rui Yan", "journal": "ACM Transactions on Information Systems (TOIS)", "ref_id": "b14", "title": "Meaningful answer generation of e-commerce question-answering", "year": "2021" }, { "authors": "Momchil Hardalov; Todor Mihaylov; Dimitrina Zlatkova; Yoan Dinkov; Ivan Koychev; Preslav Nakov", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "EXAMS: A multi-subject high school examinations dataset for cross-lingual and multilingual question answering", "year": "2020" }, { "authors": "Junjie Hu; Sebastian Ruder; Aditya Siddhant; Graham Neubig; Orhan Firat; Melvin Johnson", "journal": "", "ref_id": "b16", "title": "Xtreme: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b17", "title": "", "year": "" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b18", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Tuan Lai; Trung Bui; Sheng Li; Nedim Lipka", "journal": "", "ref_id": "b19", "title": "A simple end-to-end question answering model for product information", "year": "2018" }, { "authors": "Anne Lauscher; Vinit Ravishankar; Ivan Vulić; Goran Glavaš", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers", "year": "2020" }, { "authors": "Patrick Lewis; Barlas Oguz; Ruty Rinott; Sebastian Riedel; Holger Schwenk", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "MLQA: Evaluating cross-lingual extractive question answering", "year": "2020" }, { "authors": "Feng-Lin Li; Minghui Qiu; Haiqing Chen; Xiongwei Wang; Xing Gao; Jun Huang; Juwei Ren; Zhongzhou Zhao; Weipeng Zhao; Lei Wang", "journal": "", "ref_id": "b22", "title": "Alime assist: An intelligent assistant for creating an innovative e-commerce experience", "year": "2017" }, { "authors": "Zheng Li; Zijian Wang; Ming Tan; Ramesh Nallapati; Parminder Bhatia; Andrew Arnold; Bing Xiang; Dan Roth", "journal": "", "ref_id": "b23", "title": "Dq-bart: Efficient sequence-tosequence model via joint distillation and quantization", "year": "2022" }, { "authors": "Jiahua Liu; Yankai Lin; Zhiyuan Liu; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "XQA: A cross-lingual open-domain question answering dataset", "year": "2019" }, { "authors": "Shayne Longpre; Yi Lu; Joachim Daiber", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b25", "title": "MKQA: A linguistically diverse benchmark for multilingual open domain question answering", "year": "2021" }, { "authors": "Julian Mcauley; Alex Yang", "journal": "", "ref_id": "b26", "title": "Addressing complex and subjective product-related queries with customer reviews", "year": "2016" }, { "authors": "Samaneh Moghaddam; Martin Ester", "journal": "IEEE", "ref_id": "b27", "title": "Aqa: aspect-based opinion question answering", "year": "2011" }, { "authors": "Niklas Muennighoff; Thomas Wang; Lintang Sutawika; Adam Roberts; Stella Biderman; Teven Le Scao; M Saiful Bari; Sheng Shen; Zheng-Xin Yong; Hailey Schoelkopf", "journal": "", "ref_id": "b28", "title": "Crosslingual generalization through 
multitask finetuning", "year": "2022" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b29", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Xiaoyu Shen; Gianni Barlacchi; Marco Del Tredici; Weiwei Cheng; Bill Byrne; Adrià De; Gispert ", "journal": "", "ref_id": "b30", "title": "a. Product answer generation from heterogeneous sources: A new benchmark and best practices", "year": "2022" }, { "authors": "Xiaoyu Shen; Gianni Barlacchi; Marco Del Tredici; Weiwei Cheng; Adrià De; Gispert ", "journal": "", "ref_id": "b31", "title": "semipqa: A study on product question answering over semi-structured data", "year": "2022" }, { "authors": "Xiaoyu Shen; Svitlana Vakulenko; Marco Del Tredici; Gianni Barlacchi; Bill Byrne; Adrià De; Gispert ", "journal": "", "ref_id": "b32", "title": "Low-resource dense retrieval for opendomain question answering: A comprehensive survey", "year": "2022" }, { "authors": "Xiaoyu Shen; Svitlana Vakulenko; Marco Del Tredici; Gianni Barlacchi; Bill Byrne; Adrià De; Gispert ", "journal": "", "ref_id": "b33", "title": "Neural ranking with weak supervision for open-domain question answering: A survey", "year": "2023" }, { "authors": "Ze Tang; Chuanyi Li; Jidong Ge; Xiaoyu Shen; Zheling Zhu; Bin Luo", "journal": "IEEE", "ref_id": "b34", "title": "Ast-transformer: Encoding abstract syntax trees efficiently for code summarization", "year": "2021" }, { "authors": "Ze Tang; Xiaoyu Shen; Chuanyi Li; Jidong Ge; Liguo Huang; Zhelin Zhu; Bin Luo", "journal": "", "ref_id": "b35", "title": "Ast-trans: code summarization with efficient tree-structured attention", "year": "2022" }, { "authors": "Binxia Xu; Siyuan Qiu; Jie Zhang; Yafang Wang; Xiaoyu Shen; Gerard De; Melo ", "journal": "", "ref_id": "b36", "title": "Data augmentation for multiclass utterance classification-a systematic study", "year": "2020" }, { "authors": "Hu Xu; Bing Liu; Lei Shu; Philip S Yu", "journal": "", "ref_id": "b37", "title": "Review conversational reading comprehension", "year": "2019" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "", "ref_id": "b38", "title": "mt5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" }, { "authors": "Jianxing Yu; Zheng-Jun Zha; Tat-Seng Chua", "journal": "", "ref_id": "b39", "title": "Answering opinion questions on products by exploiting hierarchical organization of consumer reviews", "year": "2012" }, { "authors": "Qingyu Zhang; Xiaoyu Shen; Ernie Chang; Jidong Ge; Pengke Chen", "journal": "", "ref_id": "b40", "title": "Mdia: A benchmark for multilingual dialogue generation in 46 languages", "year": "2022" } ]
[]
2023-05-16
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b37", "b31", "b8", "b33", "b17", "b35", "b24", "b41", "b14", "b13", "b1", "b15", "b31", "b4", "b29", "b24", "b39", "b7", "b39", "b7" ], "table_ref": [], "text": "Building a human-like learning system that has the ability to quickly learn new concepts from scarce experience is one of the targets in modern Artificial Intelligence (AI) communities. Meta-learning or referred to as learning to learn is such a few-shot learning paradigm that aims to mimics human abilities to learn from different small tasks (or episodes) of source classes in the training set and generalize to unseen tasks of target classes in the test set. Meta-learning has been extensively studied in image classification and achieve remarkable successes (Vinyals et al. 2016;Snell, Swersky, and Zemel 2017;Finn, Abbeel, and Levine 2017;Sung et al. 2018;Hou et al. 2019;Tseng et al. 2020;Liu et al. 2021;Gao et al. 2021). The effectiveness in image classification motivates the recent application of meta-learning to few-shot text classification (Yu et al. 2018;Geng et al. 2019Geng et al. , 2020;;Bao et al. 2020;Han et al. 2021).\nOne of the metric-based meta-learning methods that has been widely studied and shown effectiveness in few-shot learning is Prototypical Networks (Snell, Swersky, and Zemel 2017). As shown in Figure 1 (a), at each episode, Prototypical Networks first compute the prototype for each class using the text representations in the support set, then align each text representation in the query set to the prototypes under some measurement, e.g., Euclidean distance. This learning strategy allows the meta-learner to perform few-shot text classification by simply learning the representations of the texts. However, as the model design in Prototypical Networks ignores the relationships among the texts in the query set, the discrimination among the query-text representations is not guaranteed, which may lead to difficulty in prediction when two text representations in the query set are very similar but they belong to different classes. Such similar texts with different classes are common because real-world few-shot text classification tasks may involve fine-grained classes with very similar semantics. For example, in intent classification, the sentences \"who covered the song one more cup of coffee\" with intent music-query and \"play the song one more cup of coffee\" with intent music-play may produce similar text representations but they belong to dif-ferent intents. When these two sentences are sampled in the same query set, they are hard to distinguish from each other and bring about contradiction in prediction because they will obtain similar measurements aligning to each prototype, thus may lead to misclassification. To tackle the above issue caused by similar text representations of similar classes, we propose a few-shot text classification framework ContrastNet that encourages learning discriminative text representations via contrastive learning, motivated by its successful application in few-shot image classification (Gao et al. 2021;Luo et al. 2021b;Chen and Zhang 2021;Majumder et al. 2021;Liu et al. 2021). As shown in Figure 1 (b), in ContrastNet, the text representations are learned free from the prototypes by pulling closer a text representation with text representations belonging to the same class and push away text representations with different classes from both query and support set. 
In this way, when two texts with similar semantics from different classes are sampled in the same query set, they are forced to produce discriminative representations by the contrastive loss, thus alleviate the contradictions during prediction.\nAnother challenge in few-shot text classification is that the models are prone to overfit the source classes based on the biased distribution formed by a few training examples (Yang, Liu, and Xu 2021;Dopierre, Gravier, and Logerais 2021). The authors of (Yang, Liu, and Xu 2021) propose to tackle the overfitting problem in few-shot image classification by training with additional instances generated from calibrated distributions. In few-shot text classification, PRO-TAUGMENT (Dopierre, Gravier, and Logerais 2021) introduce an unsupervised cross-entropy loss with unlabeled instances to prevent the model from overfitting the source classes. Although successful, these approaches only tackle the instance-level overfitting. In this paper, we argue that the overfitting may also occur at task-level because not only the text instances from target classes but also the way they are combined as tasks are unavailable during training.\nWe incorporate two unsupervised contrastive losses as the regularizers upon the basic supervised contrastive learning model to alleviate the instance-level and task-level overfitting problems. Specifically, the representations of randomly sampled tasks from source classes and the representations of randomly sampled unlabeled texts with their augmentations are taken to form a task-level contrastive loss and an instance-level contrastive loss in an unsupervised manner, respectively. The unsupervised task-level and instance-level contrastive losses force the representations of different tasks and different unlabeled texts to be separated from each other in their representation space. We hope this separation to pull the task and instance representations of target classes away from the task and instance representations of source classes, thus alleviate the overfitting problems.\nTo summarize, our work makes the following contributions. ( 1 For convenience, we use a pair (x s i , y s i ) to denote the i th item of total n × k items in the support set S and x q j denotes the j th text instance of total n × m instances in the query set Q. For the text instance x q j , we denote its class label as y q j . A meta-learner is trained on such small tasks that attempts to classify the texts in the query set Q on the basis of few labeled texts in the support set S." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "Our ContrastNet combines BERT text encoder and supervised contrastive learning to learn discriminative text representations and incorporates the task-level and instance-level unsupervised contrastive regularization to alleviate the overfitting problems. The overall model structure of ContrastNet is shown in Figure 2. All notations in Figure 2 will be defined in the rest of this section." }, { "figure_ref": [], "heading": "Supervised Contrastive Text Representation", "publication_ref": [ "b0", "b7", "b5", "b19" ], "table_ref": [], "text": "Text Encoder In metric-based few-shot text classification, a text encoder is needed to map the raw text onto a vector space where the metrics (or measurements) between texts can be computed. 
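As context for the losses defined below, the episode construction from the problem setup above can be sketched as follows; this is illustrative only and ignores details such as the class re-splitting across runs.

```python
import random

def sample_episode(texts_by_class, n_way=5, k_shot=1, m_query=5):
    """Sample one n-way episode: k support and m query texts per class.
    texts_by_class maps a class label to its list of texts (source classes during training)."""
    classes = random.sample(sorted(texts_by_class), n_way)
    support, query = [], []
    for label in classes:
        picked = random.sample(texts_by_class[label], k_shot + m_query)
        support += [(text, label) for text in picked[:k_shot]]
        query += [(text, label) for text in picked[k_shot:]]
    return support, query
```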
The pre-trained language models, such as BERT, have recently been employed as text encoders to obtain text representations and achieve promising results. Following previous works in few-shot text classification (Bansal, Jha, and McCallum 2020;Luo et al. 2021a;Dopierre, Gravier, and Logerais 2021), we also utilize BERT to represent the texts. Specifically, BERT takes a text x composed of a list of tokens as input, and outputs a hidden-state vector for each of the tokens; we take the hidden-state vector corresponding to the CLS token as the text representation of x. For later use, we denote the BERT text representation module as $f(\cdot)$ and denote all of its parameters as $\theta$.\nSupervised Contrastive Learning Our few-shot learning framework is also a metric-based approach, but different from Prototypical Networks, which align query texts with prototypes, we optimize the measurement free of prototypes by learning to align two text representations using supervised contrastive learning. It pulls closer the text representations belonging to the same class and pushes away text representations belonging to different classes among texts from both query and support sets.\nThe model design of our supervised contrastive learning is based on the \"batch contrastive learning\" framework (Chen et al. 2020) and the supervised contrastive learning strategy in (Khosla et al. 2020). Specifically, given the support set S and query set Q in an episode, we combine the $n \times k$ text instances $\{x^s_i\}$ in S and the $n \times m$ text instances $\{x^q_j\}$ in Q as a training batch $B = \{x_1, x_2, \cdots, x_{n(k+m)}\}$, where\n$$x_t = \begin{cases} x^s_t, & t \le nk \\ x^q_{t-nk}, & t > nk \end{cases} \quad (1)$$\nFor each $x_t \in B$, we denote its label as $y_t$ and denote its representation transformed by $f(\cdot)$ as $z_t$. The matched text-instance pairs and unmatched text-instance pairs in the batch are identified based on their labels. Let $c = k + m - 1$ be the number of text instances in B that have the same label as $x_t$. The text representations can then be optimized by the following supervised contrastive loss\n$$L_{con} = -\sum_{x_t \in B} \frac{1}{c} \log \frac{\sum_{y_r = y_t} \exp(z_t \cdot z_r/\tau)}{\sum_{y_r = y_t} \exp(z_t \cdot z_r/\tau) + \sum_{y_r \ne y_t} \exp(z_t \cdot z_r/\tau)} \quad (2)$$\nwhere the inner product is used as the similarity measurement of two text representations, and $\tau$ is a temperature factor that scales the inner products.\nThe supervised contrastive loss in Equation (2) encourages each representation $z^q$ of query-text $x^q \in Q$ to locate near the query-text representations that have the same class label as $x^q$ and distant from the query-text representations that have different class labels from $x^q$, thus increasing the discrimination of query-text representations between different classes and alleviating the contradictions in label prediction." }, { "figure_ref": [], "heading": "Unsupervised Contrastive Regularization", "publication_ref": [], "table_ref": [], "text": "To tackle the overfitting problems caused by a few training examples in few-shot text classification, we propose to train the supervised contrastive representation model under the regularization of a task-level unsupervised contrastive loss and an instance-level unsupervised contrastive loss.\nData Augmentation Data augmentation has been shown to be essential in boosting contrastive learning (Chen et al. 2020;Tian et al. 2020;Kalantidis et al. 2020;You et al. 2020;Cai et al. 2020;Gao, Yao, and Chen 2021). However, data augmentation of text is still an open challenge.
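Before turning to augmentation, the supervised loss in Equation (2) can be sketched in PyTorch as follows; this is a sketch rather than the authors' released code, and it excludes self-pairs from the positive set, matching c = k + m - 1.

```python
import torch

def supervised_contrastive_loss(z, labels, tau=5.0):
    """Sketch of Equation (2). z: (B, d) representations of one episode's support+query
    batch; labels: (B,) class ids as a LongTensor."""
    sim = torch.exp(z @ z.t() / tau)                       # exp(z_t . z_r / tau) for all pairs
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    pos = (same & ~eye).float()                            # y_r = y_t, r != t
    neg = (~same).float()                                  # y_r != y_t
    c = pos.sum(dim=1).clamp(min=1.0)                      # k + m - 1 positives per anchor
    pos_sum = (sim * pos).sum(dim=1)
    neg_sum = (sim * neg).sum(dim=1)
    return -(torch.log(pos_sum / (pos_sum + neg_sum)) / c).sum()
```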
Among the directions of textual data augmentation, EDA (Wei and Zou 2019) may alter the text's purport (Sun et al. 2021) and back translation fails to provide diverse augmentations (Dopierre, Gravier, and Logerais 2021). The recent work PROTAUGMENT (Dopierre, Gravier, and Logerais 2021) proposes a short-text paraphrasing model that produces diverse paraphrases of the original text as data augmentations. As the data augmentations of PROTAUGMENT have been shown to be effective in few-shot text classification, we apply PROTAUGMENT to generate data augmentations of the texts in our unsupervised contrastive learning." }, { "figure_ref": [], "heading": "Task-level Contrastive Regularization", "publication_ref": [], "table_ref": [], "text": "In few-shot text classification, the seen tasks are sampled from the source classes $Y_{train}$, while the unseen tasks sampled from the target classes $Y_{test}$ are unavailable during training. Therefore, the models tend to overfit the seen tasks if trained without constraint and degrade in performance when they generalize to unseen tasks. Our solution to this problem is to constrain the model with an unsupervised contrastive loss built upon randomly sampled tasks and their data augmentations.\nSpecifically, at each episode, we randomly sample $N_{task}$ tasks $\{(Q_1, S_1), (Q_2, S_2), \cdots, (Q_{N_{task}}, S_{N_{task}})\}$ from the source classes $Y_{train}$, and we use $x^s_{(u,v)}$, $\bar{x}^s_{(u,v)}$ and $z^s_{(u,v)}$ to respectively denote the v-th text instance, its text augmentation and its text representation in support set $S_u$ of the u-th task. The representation $\hat{z}_u$ of the u-th task can simply be calculated as the mean embedding of all text instances in $S_u$. To obtain the data augmentation of the u-th task, we replace the text instances in $S_u$ with their corresponding text augmentations and similarly compute the mean embedding $\hat{z}'_u$ of these text augmentations as the data augmentation of the u-th task. We combine all $\hat{z}_u$ and $\hat{z}'_u$ as a training batch $\{\tilde{z}_u\}$ of $2N_{task}$ elements and use $\tilde{z}'_u$ to denote the matched element of $\tilde{z}_u$ in $\{\tilde{z}_u\}$. The task-level contrastive regularization loss is\n$$L_{task} = -\sum_{u=1}^{2N_{task}} \log \frac{\exp(\tilde{z}_u \cdot \tilde{z}'_u/\tau)}{\exp(\tilde{z}_u \cdot \tilde{z}'_u/\tau) + \sum_{\tilde{z} \ne \tilde{z}'_u} \exp(\tilde{z}_u \cdot \tilde{z}/\tau)} \quad (3)$$\nThe unsupervised contrastive loss in Equation (3) forces the representations of different tasks (or compositions of classes) to be separated from each other. Separation of tasks encourages the separation of classes between tasks. This separation urges the representations of the unseen tasks to locate distant from the seen tasks, thus alleviating the task-level overfitting problem." }, { "figure_ref": [], "heading": "Instance-level Contrastive Regularization", "publication_ref": [], "table_ref": [], "text": "The instance-level overfitting in few-shot text classification is not entirely unknown to the research community. PROTAUGMENT introduces an unsupervised cross-entropy loss upon Prototypical Networks, which encourages the representation of each unlabeled text to be closer to its augmentations' prototype and distant from the prototypes of other unlabeled texts. In this work, we build a different instance-level unsupervised loss that serves as a regularizer of the supervised contrastive text representation model. Our objective is to prevent instance-level overfitting by learning separable text representations between source and target classes.
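This instance-level loss, introduced next, shares the same paired contrastive form as the task-level loss in Equation (3), so a single PyTorch sketch covers both (a sketch, not the released implementation).

```python
import torch

def paired_contrastive_loss(emb, emb_aug, tau=7.0):
    """Sketch of the unsupervised losses in Equations (3)/(4). emb holds the N task
    (or unlabeled-text) representations and emb_aug their augmentations, row-aligned;
    each of the 2N batch elements is contrasted against its augmented counterpart."""
    z = torch.cat([emb, emb_aug], dim=0)                   # 2N representations
    sim = z @ z.t() / tau
    n2 = z.size(0)
    eye = torch.eye(n2, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))              # drop self-similarity terms
    pos_index = torch.arange(n2, device=z.device).roll(n2 // 2)   # i <-> i + N pairing
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -log_prob[torch.arange(n2, device=z.device), pos_index].sum()
```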
To that end, we introduce the instance-level unsupervised contrastive regularization.\nSpecifically, at each training episode, we randomly sample $N_{inst}$ unlabeled text instances $\{x_1, x_2, \cdots, x_{N_{inst}}\}$. Let $\bar{x}_w$ denote the data augmentation of text instance $x_w$; $z_w$ and $\bar{z}_w$ denote the text representations of $x_w$ and $\bar{x}_w$, respectively. We combine all $z_w$ and $\bar{z}_w$ as a training batch $\{\tilde{z}_w\}$ of $2N_{inst}$ elements and use $\tilde{z}'_w$ to denote the matched element of $\tilde{z}_w$ in $\{\tilde{z}_w\}$. The instance-level contrastive regularization loss is\n$$L_{inst} = -\sum_{w=1}^{2N_{inst}} \log \frac{\exp(\tilde{z}_w \cdot \tilde{z}'_w/\tau)}{\exp(\tilde{z}_w \cdot \tilde{z}'_w/\tau) + \sum_{\tilde{z} \ne \tilde{z}'_w} \exp(\tilde{z}_w \cdot \tilde{z}/\tau)} \quad (4)$$\nThe unsupervised contrastive loss in Equation (4) encourages different text representations to locate distant from each other, which prevents the text representations of target classes from being too close to the text representations of source classes, thus alleviating the instance-level overfitting." }, { "figure_ref": [], "heading": "Objective and Prediction", "publication_ref": [], "table_ref": [], "text": "Overall Objective During training, we combine the loss $L_{con}$ of the supervised contrastive text representation learning model with the unsupervised regularization losses $L_{inst}$ at the instance level and $L_{task}$ at the task level. The overall objective is\n$$L = \alpha L_{con} + (1 - \alpha) L_{inst} + \beta L_{task} \quad (5)$$\nwhere $\alpha$ and $\beta$ are hyper-parameters that indicate the weights on the loss of supervised contrastive learning and the task-level unsupervised regularization loss, respectively. The overall model can be optimized using stochastic gradient descent (SGD) methods." }, { "figure_ref": [], "heading": "Label prediction", "publication_ref": [], "table_ref": [], "text": "As the text representations in ContrastNet are learned free of prototypes, the label prediction setup in Prototypical Networks, which aligns the query text to the prototype with the maximum measurement, is no longer appropriate for ContrastNet. A natural label prediction setup for ContrastNet is to infer the label of a query text by comparing its representation with text representations from the support set. In this work, we adopt the Nearest Neighbor classifier as such a label prediction setup. Specifically, given a query text $x^q \in Q$, we first obtain its representation $f(x^q)$ and the representations of all texts in the support set $\{f(x^s_i)\}$; then the label of query text $x^q$ is determined as the label $y^s_i$ of the support text whose representation $f(x^s_i)$ has the maximum inner product with $f(x^q)$. Let $y^s_{i^*}$ be the predicted label; then the process to find $i^*$ can be formulated as\n$$i^* = \arg\max_i f(x^q) \cdot f(x^s_i) \quad (6)$$\nExperiments Datasets " }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b15", "b7", "b7", "b20" ], "table_ref": [], "text": "We evaluate our models on typical 5-way 1-shot and 5-way 5-shot settings. Following the setup in (Dopierre, Gravier, and Logerais 2021), we report the average accuracy over 600 episodes sampled from the test set for intent classification datasets; and following (Han et al. 2021), we report the average accuracy over 1000 episodes sampled from the test set for news or review classification datasets. We run each experimental setting 5 times. For each run, the training, validation, and testing classes are randomly re-split.\nWe implement the proposed models using the PyTorch deep learning framework.
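A minimal PyTorch sketch of the overall objective in Equation (5) and the nearest-neighbor prediction in Equation (6), reusing the two loss sketches above and the temperature and weight values reported in the experimental settings, is shown below. Only predict_label is needed at test time; the regularization terms are used during training alone.

```python
def contrastnet_loss(z_batch, labels, task_emb, task_emb_aug,
                     inst_emb, inst_emb_aug, alpha=0.95, beta=0.1):
    """Equation (5), combining the supervised loss with the two regularizers."""
    l_con = supervised_contrastive_loss(z_batch, labels, tau=5.0)
    l_task = paired_contrastive_loss(task_emb, task_emb_aug, tau=7.0)
    l_inst = paired_contrastive_loss(inst_emb, inst_emb_aug, tau=7.0)
    return alpha * l_con + (1.0 - alpha) * l_inst + beta * l_task

def predict_label(query_repr, support_reprs, support_labels):
    """Equation (6): the query takes the label of the support text with the
    largest inner product."""
    scores = support_reprs @ query_repr        # (n*k,) inner products
    return support_labels[int(scores.argmax())]
```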
On the 4 intent classification datasets, we use their respective pre-trained BERT-based language model provided in (Dopierre, Gravier, and Logerais 2021) as the encoders for text representation. For the news or review classification datasets, we use the pure pre-trained bert-base-uncased model as the encoder for text representation. We use EDA to augment texts in Amazon, Reuters and 20News because they are long sequences unsuitable for PROTAUGMENT. For each episode during training, we randomly sample 10 tasks and 10 unlabeled texts to calculate the task-level contrastive regularization loss and instance-level contrastive regularization loss. The temperature factors of loss L con , L task and L inst are set to 5.0, 7.0 and 7.0, respectively. The loss weight α is initialized to 0.95 and decrease during training using the loss annealing strategy (Dopierre, Gravier, and Logerais 2021), and the loss weight β is set to 0.1. We optimize the models using Adam (Kingma and Ba 2015) with an initialized learning rate of 1 × e -6 . All the hyper-parameters are selected by greedy search on the validation set. All experiments are run on a single NVIDIA Tesla V100 PCIe 32GB GPU.\n1 Our code and data are available at: https://github.com/BDBC-KG-NLP/AAAI2022 ContrastNet." }, { "figure_ref": [], "heading": "Baseline Models", "publication_ref": [ "b31", "b8", "b14", "b10", "b1", "b15", "b7" ], "table_ref": [], "text": "We compare the proposed few-shot text classification models with following baselines: Prototypical Networks This model is a metric-based metalearning method for few-shot classification proposed in (Snell, Swersky, and Zemel 2017), which learns to align query instances with class prototypes. MAML This model is proposed in (Finn, Abbeel, and Levine 2017), which learns to rapidly adapt to new tasks by only few gradient steps. Induction Networks This model is proposed in (Geng et al. 2019), which introduces dynamic routing algorithm to learn the class-level representation. HATT This model is proposed in (Gao et al. 2019), which extends the prototypical networks by incorporating a hybrid attention mechanism. DS-FSL This model is proposed in (Bao et al. 2020), which aims to extract more transferable features by mapping the distribution signatures to attention scores. MLADA This model is proposed in (Han et al. 2021), which adopts adversarial networks to improve the domain adaptation ability of meta-learning. PROTAUGMENT This model is proposed in (Dopierre, Gravier, and Logerais 2021), which utilizes a short-texts paraphrasing model to generate data augmentation of texts and builds an instance-level unsupervised loss upon the prototypical networks. We also report its two improved versions with different word masking strategies, i.e., PROTAUG-MENT (unigram) and PROTAUGMENT (bigram)." }, { "figure_ref": [], "heading": "Few-shot Text Classification Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b7", "b15" ], "table_ref": [ "tab_2", "tab_2" ], "text": "The few-shot text classification results in 5way 1-shot and 5-way 5-shot settings are shown in Table 2 and Table 3. We take the results of baseline models from (Dopierre, Gravier, and Logerais 2021) for the 4 intent classification datasets and from (Han et al. 2021) for the 4 news and review classification datasets. 
The current state-of-the-art (SOTA) models on the 4 intent classification datasets and the 4 news and review classification datasets are PROTAUGMENT (unigram) and MLADA, respectively. From Table 2 and Table 3, we observe that ContrastNet achieves the best average results in both the 5-way 1-shot setting and the 5-way 5-shot setting on all datasets. ContrastNet establishes itself as the new SOTA in both the 5-way 1-shot and 5-way 5-shot settings on all datasets, except in the 5-way 1-shot setting of Liu and the 5-way 5-shot settings of Clinic150, Amazon, and Reuters.

Table 3: The 5-way 1-shot and 5-way 5-shot text classification results on the HuffPost, Amazon, Reuters and 20News datasets.

ContrastNet also achieves significantly higher accuracy than the current SOTA models on most of the few-shot text classification datasets in the 5-way 1-shot setting. These significant improvements suggest that learning discriminative text representations using supervised contrastive learning with task-level and instance-level regularization can efficiently raise few-shot text classification performance.

We generate 100 episodes in the 5-way 1-shot setting from the test set of HWU64, in which the text instances of the query set are sampled from 5 selected similar classes that all belong to the play domain and may provide texts with similar semantics. From Figure 3 (a), we observe that the text representations of similar classes produced by Prototypical Networks are prone to mix with each other, which may make them hard to distinguish for the prediction model. The text representations produced by ContrastNet in Figure 3 (b) are also not clearly separated, but they are much more discriminative than the query-text representations produced by Prototypical Networks. This visualization result demonstrates the power of ContrastNet in learning discriminative representations compared to Prototypical Networks.

Error Analysis on Similar Classes

To study whether improving the discrimination of text representations helps improve few-shot text classification performance on similar classes, we make an error analysis of the prediction results on selected similar classes in the test set of HWU64. Each value in the heat-maps of Figure 4 denotes the pro-

Conclusion

We propose a contrastive learning framework, ContrastNet, for few-shot text classification, which learns discriminative text representations of similar classes and tackles the task-level and instance-level overfitting problems. ContrastNet learns discriminative text representations belonging to different classes via supervised contrastive learning, while simultaneously introducing unsupervised contrastive regularization at both the task and instance level to prevent overfitting. As the discriminative representation and overfitting problems are shared challenges in few-shot learning, we hope ContrastNet will extend to a broad spectrum of other applications.

Acknowledgments

This work is supported partly by the National Natural Science Foundation of China (No. 61772059), by the Fundamental Research Funds for the Central Universities, and by the State Key Laboratory of Software Development Environment (No. SKLSDE-2020ZX-14).
Few-shot text classification has recently been promoted by the meta-learning paradigm, which aims to identify target classes with knowledge transferred from source classes by means of sets of small tasks named episodes. Despite their success, existing works building their meta-learner based on Prototypical Networks are unsatisfactory in learning discriminative text representations between similar classes, which may lead to contradictions during label prediction. In addition, the task-level and instance-level overfitting problems in few-shot text classification caused by a few training examples are not sufficiently tackled. In this work, we propose a contrastive learning framework named ContrastNet to tackle both the discriminative representation and overfitting problems in few-shot text classification. ContrastNet learns to pull closer text representations belonging to the same class and push away text representations belonging to different classes, while simultaneously introducing unsupervised contrastive regularization at both task-level and instance-level to prevent overfitting. Experiments on 8 few-shot text classification datasets show that ContrastNet outperforms the current state-of-the-art models.
ContrastNet: A Contrastive Learning Framework for Few-Shot Text Classification
[ { "figure_caption": "Figure 1 :1Figure 1: The learning strategies of Prototypical Network and proposed ContrastNet. Q and S respectively denote the query set and support set. The rectangles with different colors denote text representations from different classes. The green and red dashed arrow lines respectively indicate pulling closer and pushing away the representations. Picture (a) shows that Prototypical Networks learn to align a given query-text representations to prototypes computed by support-text representations. Picture (b) shows that Con-trastNet learns to pull closer the given query-text representation with text representations belonging to the same class and push away text representations with different classes.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The overall model structure of ContrastNet. The DA blocks represent data augmentation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Visualization of query text representations sampled from similar target classes on HWU64.", "figure_data": "", "figure_id": "fig_2", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Task-representation visualization of Prototypical Networks and ContrastNet on Banking77.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Text-representation of Prototypical Network and ContrastNet on Banking77.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "train , Y val and Y test denote the disjoint set of training classes, validation classes and test classes, i.e., they have no overlapping classes. At each episode, a task composed of a support set S and a query set Q is drawn from the dataset of either Y train , Y val and Y test during training, validation or test. In an episode of a n-way k-shot text classification problem, n classes are sampled from corresponding class set; for each of the n classes, k labeled texts are sampled to compose the support set, and m unlabeled texts are sampled to compose the query set.", "figure_data": "representation model, which alleviate the task-level andinstance-level overfitting in few-shot text classification bylearning separable task representations and instance repre-sentations. (3) We conduct experiments on 8 text classifi-cation datasets and show that ContrastNet outperforms thestart-of-the-arts. Additional analysis on the results compar-ing to Prototypical Networks shows that ContrastNet effec-tively learns discriminative text representations and allevi-ates the task-level and instance-level overfitting problems.Problem FormulationThe meta-learning paradigm of few-shot text classificationaims to transfer knowledge learned from sets of small tasks(or episodes) of source classes to target classes which areunseen during training.Formally, let Y) We propose a few-shot text classification frame-work ContrastNet that learns discriminative text representa-tions via contrastive learning to reduce contradictions dur-ing prediction caused by similar text representations of sim-ilar classes. 
(2) We introduce two unsupervised contrastivelosses as regularizers upon the basic supervised contrastive", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The statistics of few-shot text classification datasets.to 2018. The Amazon dataset is a product review classification dataset including 142.8 million reviews with 24 product categories from the year 1996 to 2014. We use the subset provided by(Han et al. 2021), in which each class contains 1000 sentences. The Reuters dataset is collected from Reuters newswire in 1987. Following(Bao et al. 2020), we only use 31 classes and remove the multi-labeled articles. The 20News dataset is a news classification dataset, which contains 18, 820 news documents from 20 news groups.", "figure_data": "Intent Classification Datasets The Banking77 dataset isa fine-grained intent classification dataset specific to a sin-gle banking domain, which includes 13, 083 user utterancesdivided into 77 different intents. The HWU64 dataset is alsoa fine-grained intent classification dataset but the classesare across multi-domain, which contains 11, 036 user ut-terances with 64 user intents from 21 different domains.The Clinic150 intent classification dataset contains 22, 500user utterances equally distributed in 150 intents. Following(Mehri, Eric, and Hakkani-Tür 2020; Dopierre, Gravier, andLogerais 2021), we only keep the 150 intent labels and dis-card the out-of-scope intent labels in our experiment. Liu57is a highly imbalanced intent classification dataset collectedon Amazon Mechanical Turk, which is composed of 25, 478user utterances from 54 classes.News or Review Classification Datasets The HuffPostdataset is a news classification dataset with 36, 900 HuffPostnews headlines with 41 classes collected from the year 2012", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": ".94 77.09 89.02 82.76 91.37 96.05 98.61 85.55±2.20 93.24±1.22 PROTAUGMENT 86.94 94.50 82.35 91.68 84.42 92.62 94.85 98.41 87.14±1.36 94.30±0.60 PROTAUGMENT (bigram) 88.14 94.70 84.05 92.14 85.29 93.23 95.77 98.50 88.31±1.43 94.64±0.59 PROTAUGMENT (unigram) 89.56 94.71 84.34 92.55 86.11 93.70 96.49 98.74 89.13±1.13 94.92±0.57 ContrastNet (L task &L inst /o) 88.53 95.22 84.62 91.93 80.53 93.47 94.29 98.09 86.99±1.57 94.68±0.74 The 5-way 1-shot and 5-way 5-shot text classification results on the Banking77, HWU64, Liu and Clinic150 intent classification datasets. The ContrastNet (L task &L inst /o) model denote the ContrastNet only using supervised contrastive text representation without any unsupervised regularization and the ContrastNet (L inst /o) model denotes the ContrastNet with only task-level unsupervised regularization. We compute the mean and the standard deviation over 5 runs with different class splitting. The Average denotes the averaged mean and standard deviation over all datasets for each setting of each model. 
ContrastNet (L task &L inst /o) 52.74 63.59 74.70 84.47 83.74 93.28 70.61 80.04 70.45±3.28 80.35±3.32 ContrastNet (L inst /o) 52.85 64.88 75.33 84.21 85.10 93.65 70.35 80.19 70.91±3.00 80.73±2.79 ContrastNet 53.06 65.32 76.13 85.17 86.42 95.33 71.74 81.57 71.84±2.81 81.85±2.03", "figure_data": "MethodBanking77HWU64LiuClinic150Average1-shot 5-shot 1-shot 5-shot 1-shot 5-shot 1-shot 5-shot1-shot5-shotPrototypical Networks 86.28 93ContrastNet (L inst /o) 89.75 95.36 85.14 91.69 86.79 93.28 96.32 98.25 89.50±1.30 94.65±0.64ContrastNet91.18 96.40 86.56 92.57 85.89 93.72 96.59 98.46 90.06±1.02 95.29±0.53MethodHuffPostAmazonReuters20NewsAverage1-shot 5-shot 1-shot 5-shot 1-shot 5-shot 1-shot 5-shot1-shot5-shotMAML35.949.339.647.154.662.933.843.740.950.8Prototypical Networks35.741.337.652.159.666.937.845.342.751.4Induction Networks38.749.134.941.359.467.928.733.340.447.9HATT41.156.349.166.043.256.244.255.044.458.4DS-FSL43.063.562.681.181.896.052.168.359.977.2MLADA45.064.968.486.082.396.759.677.863.981.4", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation Study We consider two ablated models of Con-trastNet: ContrastNet (L inst /o) that removes the instancelevel regularization loss from ContrastNet and ContrastNet (L task &L inst /o) that removes both instance-level and tasklevel regularization losses from ContrastNet. From the ablation results in Table2 and Table 3, we observe that Contrast-", "figure_data": "Results Analysis Based on Similar ClassesVisualizing Text Representations of Similar Classes Toinvestigate models' ability in learning discriminative textrepresentations of similar classes, we visualize the query-text representations produced by Prototypical Networks andContrastNet using t-SNE (van der Maaten and Hinton 2008)in FigureNet (L inst /o) improves few-shot text classification perfor-mance upon ContrastNet (L task &L inst /o); ContrastNet fur-ther promotes ContrastNet (L inst /o). These results demon-strate the effectiveness of task-level and instance-level reg-ularization in promoting the basic supervised contrastiverepresentation model. The ContrastNet (L task &L inst /o)with the pure supervised contrastive loss already outper-forms Prototypical Networks on all datasets except Liu andClinic150, which suggests the power of supervised con-trastive learning in producing discriminative text represen-tations tand improving the accuracy.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" } ]
Junfan Chen; Richong Zhang; Yongyi Mao; Jie Xu
[ { "authors": "T Bansal; R Jha; A Mccallum", "journal": "", "ref_id": "b0", "title": "Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks", "year": "2020" }, { "authors": "Y Bao; M Wu; S Chang; R Barzilay", "journal": "", "ref_id": "b1", "title": "Few-shot Text Classification with Distributional Signatures", "year": "2020" }, { "authors": "Q Cai; Y Wang; Y Pan; T Yao; T Mei", "journal": "", "ref_id": "b2", "title": "Joint Contrastive Learning with Infinite Possibilities", "year": "2020" }, { "authors": "I Casanueva; T Temcinas; D Gerz; M Henderson; I Vulic", "journal": "", "ref_id": "b3", "title": "Efficient Intent Detection with Dual Sentence Encoders", "year": "2020" }, { "authors": "Q Chen; J Zhang", "journal": "", "ref_id": "b4", "title": "Multi-Level Contrastive Learning for Few-Shot Problems", "year": "2021" }, { "authors": "T Chen; S Kornblith; M Norouzi; G E Hinton", "journal": "", "ref_id": "b5", "title": "A Simple Framework for Contrastive Learning of Visual Representations", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b6", "title": "", "year": "" }, { "authors": "T Dopierre; C Gravier; W Logerais", "journal": "", "ref_id": "b7", "title": "ProtAugment: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning", "year": "2021" }, { "authors": "C Finn; P Abbeel; S Levine", "journal": "", "ref_id": "b8", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks", "year": "2017" }, { "authors": " Pmlr", "journal": "", "ref_id": "b9", "title": "", "year": "" }, { "authors": "T Gao; X Han; Z Liu; M Sun", "journal": "AAAI Press", "ref_id": "b10", "title": "Hybrid Attention-Based Prototypical Networks for Noisy Few-Shot Relation Classification", "year": "2019" }, { "authors": "T Gao; X Yao; D Chen", "journal": "", "ref_id": "b11", "title": "SimCSE: Simple Contrastive Learning of Sentence Embeddings", "year": "2021" }, { "authors": "Y Gao; N Fei; G Liu; Z Lu; T Xiang; S Huang", "journal": "", "ref_id": "b12", "title": "Contrastive Prototype Learning with Augmented Embeddings for Few-Shot Learning", "year": "2021" }, { "authors": "R Geng; B Li; Y Li; J Sun; X Zhu", "journal": "", "ref_id": "b13", "title": "Dynamic Memory Induction Networks for Few-Shot Text Classification", "year": "2020" }, { "authors": "R Geng; B Li; Y Li; X Zhu; P Jian; J Sun", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Induction Networks for Few-Shot Text Classification", "year": "2019" }, { "authors": "C Han; Z Fan; D Zhang; M Qiu; M Gao; A Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Meta-Learning Adversarial Domain Adaptation Network for Few-Shot Text Classification", "year": "2021" }, { "authors": "R He; J J Mcauley", "journal": "WWW", "ref_id": "b16", "title": "Ups and Downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering", "year": "2016" }, { "authors": "R Hou; H Chang; B Ma; S Shan; X Chen", "journal": "NeurIPS", "ref_id": "b17", "title": "Cross Attention Network for Few-shot Classification", "year": "2019" }, { "authors": "Y Kalantidis; M B Sariyildiz; N Pion; P Weinzaepfel; D Larlus", "journal": "", "ref_id": "b18", "title": "Hard Negative Mixing for Contrastive Learning", "year": "2020" }, { "authors": "P Khosla; P Teterwak; C Wang; A Sarna; Y Tian; P Isola; A Maschinot; C Liu; D Krishnan", "journal": "", "ref_id": "b19", "title": "Supervised Contrastive Learning", "year": "2020" }, { "authors": 
"D P Kingma; J Ba", "journal": "", "ref_id": "b20", "title": "Adam: A Method for Stochastic Optimization", "year": "2015" }, { "authors": "K Lang", "journal": "", "ref_id": "b21", "title": "NewsWeeder: Learning to Filter Netnews", "year": "1995" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b22", "title": "", "year": "" }, { "authors": "S Larson; A Mahendran; J J Peper; C Clarke; A Lee; P Hill; J K Kummerfeld; K Leach; M A Laurenzano; L Tang; J Mars", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "EMNLP-IJCNLP", "year": "2019" }, { "authors": "C Liu; Y Fu; C Xu; S Yang; J Li; C Wang; L Zhang", "journal": "AAAI Press", "ref_id": "b24", "title": "Learning a Few-shot Embedding Model with Contrastive Learning", "year": "2021" }, { "authors": "X Liu; A Eshghi; P Swietojanski; V Rieser", "journal": "Springer", "ref_id": "b25", "title": "Benchmarking Natural Language Understanding Services for Building Conversational Agents", "year": "2019" }, { "authors": "X Liu; A Eshghi; P Swietojanski; V Rieser", "journal": "Springer", "ref_id": "b26", "title": "Benchmarking Natural Language Understanding Services for Building Conversational Agents", "year": "2019" }, { "authors": "Q Luo; L Liu; Y Lin; W Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Don't Miss the Labels: Label-semantic Augmented Meta-Learner for Few-Shot Text Classification", "year": "2021" }, { "authors": "X Luo; Y Chen; L Wen; L Pan; Z Xu", "journal": "", "ref_id": "b28", "title": "Boosting Few-Shot Classification with View-Learnable Contrastive Learning", "year": "2021" }, { "authors": "O Majumder; A Ravichandran; S Maji; M Polito; R Bhotika; S Soatto", "journal": "", "ref_id": "b29", "title": "Revisiting Contrastive Learning for Few-Shot Classification", "year": "2021" }, { "authors": "S Mehri; M Eric; D Hakkani-Tür", "journal": "", "ref_id": "b30", "title": "DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue", "year": "2020" }, { "authors": "J Snell; K Swersky; R S Zemel", "journal": "", "ref_id": "b31", "title": "Prototypical Networks for Few-shot Learning", "year": "2017" }, { "authors": "P Sun; Y Ouyang; Zhang; X Dai", "journal": "", "ref_id": "b32", "title": "MEDA: Meta-Learning with Data Augmentation for Few-Shot Text Classification", "year": "2021" }, { "authors": "F Sung; Y Yang; L Zhang; T Xiang; P H S Torr; T M Hospedales", "journal": "IEEE Computer Society", "ref_id": "b33", "title": "Learning to Compare: Relation Network for Few-Shot Learning", "year": "2018-06-18" }, { "authors": "Y Tian; C Sun; B Poole; D Krishnan; C Schmid; P Isola", "journal": "", "ref_id": "b34", "title": "What Makes for Good Views for Contrastive Learning? 
In NeurIPS", "year": "2020" }, { "authors": "H Tseng; H Lee; J Huang; M Yang", "journal": "", "ref_id": "b35", "title": "Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation", "year": "2020" }, { "authors": "L Van Der Maaten; G Hinton", "journal": "Journal of machine learning research", "ref_id": "b36", "title": "Visualizing data using t-SNE", "year": "2008-11" }, { "authors": "O Vinyals; C Blundell; T Lillicrap; K Kavukcuoglu; D Wierstra", "journal": "", "ref_id": "b37", "title": "Matching Networks for One Shot Learning", "year": "2016" }, { "authors": "J W Wei; K Zou", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks", "year": "2019" }, { "authors": "S Yang; L Liu; M Xu", "journal": "", "ref_id": "b39", "title": "Free Lunch for Fewshot Learning: Distribution Calibration", "year": "2021" }, { "authors": "Y You; T Chen; Y Sui; T Chen; Z Wang; Y Shen", "journal": "", "ref_id": "b40", "title": "Graph Contrastive Learning with Augmentations", "year": "2020" }, { "authors": "M Yu; X Guo; J Yi; S Chang; S Potdar; Y Cheng; G Tesauro; H Wang; B Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Diverse Few-Shot Text Classification with Multiple Metrics", "year": "2018" } ]
[ { "formula_coordinates": [ 3, 124.75, 449.25, 167.75, 23.81 ], "formula_id": "formula_0", "formula_text": "x t = x s t , t nk x q t-nk , t > nk(1)" }, { "formula_coordinates": [ 3, 58.92, 567.55, 233.58, 50.99 ], "formula_id": "formula_1", "formula_text": "L con = - xt∈B 1 c log yr=yt exp(z t •z r /τ ) yr=yt exp(z t •z r /τ )+ y r =yt exp(z t •z r /τ ) (2)" }, { "formula_coordinates": [ 3, 319.5, 656.89, 237.83, 21.28 ], "formula_id": "formula_2", "formula_text": "N task tasks {(Q 1 , S 1 ), (Q 2 , S 2 ), • • • , (Q N task , S N task )} from" }, { "formula_coordinates": [ 4, 58.99, 181.53, 233.52, 36.18 ], "formula_id": "formula_3", "formula_text": "L task = - 2N task u=1 log exp(z u • z u /τ ) exp(z u • z u /τ ) + zu =z u exp(z u • zu /τ ) (3)" }, { "formula_coordinates": [ 4, 59.7, 551.99, 232.8, 36.11 ], "formula_id": "formula_4", "formula_text": "L inst = - 2Ninst w=1 log exp(z w • z w /τ ) exp(z w • z w /τ ) + zw =z w exp(z w • zw /τ ) (4)" }, { "formula_coordinates": [ 4, 362.29, 88.36, 195.72, 9.65 ], "formula_id": "formula_5", "formula_text": "L = αL con + (1 -α)L inst + βL task (5)" }, { "formula_coordinates": [ 4, 382.85, 341.94, 175.15, 16.5 ], "formula_id": "formula_6", "formula_text": "i * = arg max i f (x q ) • f (x s i )(6)" } ]
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1" ], "table_ref": [], "text": "Crowd counting has many applications in video surveillance, social safety and crowd analysis and is an active area of research in the literature [1]. Since most crowd counting applications and datasets use surveillance footage, the input to crowd counting models are high-resolution images, typically Full HD (1,920×1,080 pixels) or even higher. Gigapixel resolutions can capture and process much more detail than previously possible, and are recently becoming more widespread [2]. However, working with gigapixel resolutions presents several unique challenges. Modern high-end GPUs are not capable of fitting gigapixel images in memory or processing such high resolutions in reasonable time. Furthermore, the architectures of deep neural networks are not designed to receive such massive images as input.\nRecently, several methods have been proposed for crowd counting on gigapixel images. However, these methods either use the simplest solution, which is to downsample the input gigapixel to a manageable resolution before processing, or borrow from gigapixel literature in other deep learning tasks. The issue with the latter approach is that gigapixel methods for other deep learning tasks such as object detection or cancer detection do not tackle unique challenges present in crowd counting, such as reliance on global information and sensitivity to perspective. On the other hand, the proposed method called GigaZoom is tailored to crowd counting and is thus able to obtain significantly more accurate results compared to previous methods. GigaZoom works by iteratively zooming into the densest areas of the image and refining the coarser density map with finer details. Our code is publicly available 1 .\nThe rest of this paper is organized as follows. Section II summarizes the related work in crowd counting and gigapixel deep learning literature. Section III presents the proposed method. Section IV describes the experimental setup and provides experimental results as well as ablation studies. Finally, section V concludes the paper by summarizing contribution and results, and providing directions for future research." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Crowd Counting", "publication_ref": [ "b0", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b6", "b7" ], "table_ref": [], "text": "The goal of crowd counting is to count the total number of people present in a given input image [1]. The input to crowd counting models is an image or a video frame, and the output is a density map showing the crowd density at each location of the image. The values in the density map can be summed up to obtain a single number representing the total number of people in the image. Widely used crowd counting datasets contain high resolution images, for instance, images in Shanghai Tech Part A and Part B datasets [3] have average resolutions of 868×589 and 1,024×768 pixels, respectively, and images in the UCF-QNRF dataset [4] have an average resolution of 2902×2013 pixels. However, these resolutions are much lower than gigapixel resolutions. At the time of this writing, PANDA [5] is the only publicly available dataset for gigapixel crowd counting. PANDA contains 45 images with resolutions up to 26,908×15,024 pixels taken from three different scenes: an airport terminal, a graduation ceremony, and a marathon. 
Images in the PANDA dataset are extremely densely populated with crowd sizes of up to 4,302 people, and ground truth annotations are available in the form of bounding boxes for each person's head. PANDA offers no predefined training or test splits.\nVarious crowd counting methods exist in the literature. CSRNet [6] uses the first ten layers of VGG-16 [7], pretrained on ImageNet [8], as a feature extractor, which is followed by six dilated convolution layers to produce the output density map. Gigapixel CSRNet [9] utilizes CSRNet to process gigapixel images. During the training phase, CSRNet is trained on image patches of size 1,920×1,200 pixels, taken across three scales: the original gigapixel image, as well as the gigapixel image downsampled to 1 16 and 1 64 of the original size. In the inference phase, image patches of the same size are passed on to the trained CSRNet in non-overlapping sliding windows to produce a density map for each scale. The three density maps are then averaged to obtain a single aggregated density map.\nPromptMix [10] downsamples gigapixel images to 2,560×1,440 pixels, then processes them using CSRNet. It improves the accuracy of CSRNet by mixing artificially generated data with real data during training. SASNet [11] is a high-performing crowd counting method on various popular datasets such as Shanghai Tech and UCF-QNRF. Similar to CSRNet, SASNet also uses the first ten layers of VGG-16 [7], pre-trained on ImageNet [8], as feature extractor, and fuses features extracted by these layers across multiple scales to obtain an accurate density map." }, { "figure_ref": [ "fig_0" ], "heading": "B. Gigapixel Deep Learning", "publication_ref": [ "b1", "b1", "b11", "b12", "b13" ], "table_ref": [], "text": "The term \"gigapixel\" suggests an image containing one billion pixels. However, images with resolutions ranging from 100 megapixels up to hundreds of gigapixels are considered to be \"gigapixel images\" in the literature [2]. Using gigapixel images and videos reveals much more detail about the scene and has the potential to significantly improve the accuracy of deep learning tasks. However, as previously mentioned, processing gigapixel images with deep learning is challenging due to GPU memory and computation limits. Even without considering GPU limits, existing deep learning architectures and methods are not capable of properly training the massive number of parameters that would result from using gigapixel images directly as input. Moreover, gigapixel datasets typically contain a very low number of images, since manually labelling such large images is a difficult task. For instance, the PANDA dataset contains only 45 examples compared to the 1,535 examples UCF-QNRF.\nThe most common approach for dealing with very high resolutions in deep learning is to downsample the images to a manageable resolution. However, this obscures details and negates the benefits of capturing gigapixel images. For instance, as shown in Figure 1, there are locations in the downsampled gigapixel image where several people are represented by a single pixel, making it impossible for a deep learning model to accurately predict crowd density.\nProcessing gigapixel whole-slide images (WSIs) is common in histopathology for cancer detection, detecting metastatis (the spread of cancer), neuropathology and detecting tissue components [2]. 
For instance, HIPT [12] processes gigapixel WSIs using a hierarchy of Vision Transformers, and [13] uses neural image compression on WSIs so that they can be processed with a CNN on a single GPU. However, a key difference between histopathology and crowd counting is the lack of perspective in the former. This means that in WSIs, cells and tissues always have roughly the same size, whereas in gigapixel crowd counting, the bounding box for a person near the camera can be up to 1 million times larger than that of a person far away.

Several methods exist for gigapixel object detection. For instance, GigaDet [14] is a near real-time object detection method for gigapixel videos. GigaDet counts the number of objects in regions of a downsampled version of the image across multiple scales, then processes the top candidate regions to detect objects. However, as explained in section III, gigapixel object detection methods cannot be directly used for crowd counting.

III. GIGAZOOM

GigaZoom is inspired by how people act when they are asked to count the number of people in gigapixel images: they zoom into the dense regions of the crowd until they can distinguish individuals. Similarly, GigaZoom iteratively zooms into multiple dense regions to refine the coarse density map. Section III-A provides the details of the zooming and refinement process, and section III-B describes how the multiple regions are detected.

A. Iterative Zooming and Replacing

Iterative zooming and replacing consists of two steps: a forward pass that iteratively zooms into the densest area of the image, and a backward pass that combines the density maps obtained during the forward pass to construct the final density map. Figure 2 shows an overview of the forward pass. Given a gigapixel image I_0 of resolution w_0 × h_0, we perform L zoom-in operations until we reach a resolution within GPU memory limits. Note that L is a hyper-parameter of the method. The location of the zoomed-in image I_{t+1} depends on the density map obtained from the previous image I_t. Since the resolution of I_t is beyond the GPU memory limit for t < L, we are not able to use I_t directly as input to the crowd counting model. Therefore, we first need to downsample I_t to w_max × h_max, defined as the maximum image resolution that can fit into the available GPU memory.

The width and height of I_t are determined by the zoom formula. Linear zoom is defined as

h_t = h_0 - \frac{h_0 - h_{max}}{L} t, \quad w_t = w_0 - \frac{w_0 - w_{max}}{L} t;    (1)

whereas exponential zoom is defined as

h_t = h_0 \left( \frac{h_{max}}{h_0} \right)^{t/L}, \quad w_t = w_0 \left( \frac{w_{max}}{w_0} \right)^{t/L}.    (2)

Suppose that we have performed t zoom-ins so far and obtained image I_t within I_0, where (O^w_t, O^h_t) is the top left corner of I_t inside I_0. Since the width and height of I_{t+1} are already known from the zoom formula, our goal is to determine (O^w_{t+1}, O^h_{t+1}), the top left corner of I_{t+1} inside I_0. We start by uniformly downsampling I_t to I_t^small with a resolution of w_max × h_max. We then pass I_t^small to a crowd counting model to obtain density map D_t.
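For concreteness, a small sketch of the zoom-size schedules in Equations (1) and (2) is given below; the function name and the rounding to integer pixel sizes are our own assumptions.

```python
def zoom_size(h0, w0, h_max, w_max, t, L, mode="exponential"):
    """Width and height of the zoomed-in image I_t (a sketch of Equations 1-2).

    h0, w0:        resolution of the original gigapixel image I_0.
    h_max, w_max:  the largest resolution that fits into GPU memory.
    t:             current zoom step, 0 <= t <= L.
    L:             total number of zoom-in operations.
    """
    if mode == "linear":                      # Equation (1)
        h_t = h0 - (h0 - h_max) / L * t
        w_t = w0 - (w0 - w_max) / L * t
    else:                                     # Equation (2): exponential zoom
        h_t = h0 * (h_max / h0) ** (t / L)
        w_t = w0 * (w_max / w0) ** (t / L)
    return int(round(h_t)), int(round(w_t))

# Both schedules interpolate from (h0, w0) at t = 0 down to (h_max, w_max) at t = L.
```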
Note that the density map size w^D_max × h^D_max might be smaller than the resolution of I_t^small due to pooling operations in the crowd counting model.

The density of all sub-images within I_t that are candidates for I_{t+1} can be calculated using a simple convolution of D_t with a uniform kernel of size k_w × k_h, where

k_w = \frac{w_{t+1}}{w_t} w^D_{max}, \quad k_h = \frac{h_{t+1}}{h_t} h^D_{max}.    (3)

In the resulting matrix S_t, the point (O^w_{t,D}, O^h_{t,D}) with the maximum value corresponds to the sub-image with the highest density. The top left corner of I_{t+1} can then be determined based on

\frac{O^w_{t+1} - O^w_t}{w_t} = \frac{O^w_{t,D}}{w^D_{max}}, \quad \frac{O^h_{t+1} - O^h_t}{h_t} = \frac{O^h_{t,D}}{h^D_{max}}.    (4)

Figure 3 shows an overview of the backward pass. During the forward pass, the density maps D_t, t = 0, ..., L, along with the region of D_t that corresponds to D_{t+1}, are saved to be used in the backward pass. The backward pass starts by resizing the finest density map D_L and replacing the region of D_{L-1} that corresponds to D_L, obtaining an improved density map D_{L-1}. Subsequently, the improved D_{L-1} is resized and placed in the corresponding region of D_{L-2}, and this process is repeated until the final improved density map is obtained. We tested more complex merging operations than simply replacing regions of density maps; for instance, we trained a CNN to combine D_t and the resized D_{t+1} to obtain an estimation closer to the corresponding part of the ground truth density map. However, replacement always resulted in the highest accuracy. Another simple merging operation is averaging, which is used in Gigapixel CSRNet. However, taking the average of density maps across multiple scales is not sensible, since the more zoomed-in density maps are almost always more accurate.

Crowd counting models are designed and trained for a specific range of crowd density; therefore, if the density goes above or falls below that range, their error increases. Another advantage of GigaZoom over Gigapixel CSRNet is that, by zooming into dense areas, it ensures that low-density areas are not processed separately. In contrast, Gigapixel CSRNet always detects a small crowd of people even if the image patch is completely empty. This is exacerbated by the fact that in gigapixel images many locations of the image are empty, resulting in a massive error. Note that empty regions are not an issue for gigapixel object detection methods, since they would simply be ignored. However, since the density maps are added together in crowd counting, the errors accumulate.

B. Multiple Zoom Regions

Iterative zooming and replacing only zooms into a single region. However, multiple dense regions might be present in a given image. Therefore, we specify several regions to apply iterative zooming and replacing. We start by smoothing the coarsest density map D_0 using a Gaussian filter to remove small spikes in density. Peaks in the smoothed density map are then detected using a local maximum filter [15]. The detected peaks are then filtered based on a threshold λ, and the remaining peaks are clustered into k clusters using the k-means algorithm. Finally, we apply iterative zooming and replacing on sub-images centered at the cluster centers. The overall process is depicted in Figure 4.
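The region-selection step just described could be sketched as follows, assuming the coarsest density map is a NumPy array; the use of SciPy's Gaussian and maximum filters, scikit-learn's k-means, and the mapping of the reported radius of 7 to the local-maximum window size are our own assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter
from sklearn.cluster import KMeans

def select_zoom_regions(d0, sigma=4.0, peak_window=7, lam=0.1, k=2):
    """Return k (row, col) centres in the coarsest density map D_0 around which
    iterative zooming and replacing is applied."""
    smooth = gaussian_filter(d0, sigma=sigma)                  # remove small density spikes
    local_max = maximum_filter(smooth, size=peak_window) == smooth
    peaks = np.argwhere(local_max & (smooth > lam))            # keep peaks above threshold lambda
    if len(peaks) == 0:                                        # degenerate case: fall back
        return [tuple(s // 2 for s in d0.shape)]               # to the centre of the map
    k = min(k, len(peaks))
    centres = KMeans(n_clusters=k, n_init=10).fit(peaks).cluster_centers_
    return [tuple(int(round(c)) for c in centre) for centre in centres]
```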
Note that using multiple zoom regions may lead to conflicts, since some areas might be processed during several iterative zooming and replacing operations. To resolve these conflicts, we tested several aggregation strategies such as averaging or using the maximum value. However, we found that all strategies obtain similar results. Therefore, we opted for the simplest strategy, which is to use the latest result in case of a conflict.

GigaZoom performs k × L CSRNet inferences per input gigapixel image. With the hyper-parameters specified in section IV, this translates to 20 CSRNet inferences, which is more than 10× faster than Gigapixel CSRNet, which performs an average of 204 CSRNet inferences per input gigapixel image. Note that iterative zooming and replacing cannot be parallelized; however, multiple iterative zooming and replacing operations can be performed in parallel.

IV. EXPERIMENTS

A. Setup and Results

Since PANDA [5] does not specify training and test splits, we selected 30 images for training, 6 images for validation, and 9 images for test.

TABLE I: Comparison of crowd counting performance for various methods on the PANDA gigapixel dataset.
Method                Year  MAE↓
Gigapixel CSRNet [9]  2019  2680.20
SASNet [11]           2021  263.88
PromptMix [10]        2023  110.34
GigaZoom (ours)       2023  63.51

The hyper-parameters used in GigaZoom are as follows. We use exponential zoom with L = 10. The maximum resolution w_max × h_max that fits our GPU memory is 2,560×1,440. To determine multiple zoom regions, a Gaussian filter with σ = 4 and a radius of 7 is used for smoothing, a threshold λ = 0.1 is used for filtering, and k = 2 clusters are used for clustering. We use two separate crowd counting models: a PromptMix model [10] to obtain D_0, and for all other density maps D_1, ..., D_L we use a CSRNet model [9] trained on patches of different scales. The first model is trained with the procedure outlined in [10], and the second model is trained by initializing with pre-trained weights from the PromptMix model and fine-tuning for 100 epochs with a weight decay of 10^-4, a batch size of 12, and a learning rate of 10^-4 that is multiplied by 0.99 each epoch. All experiments were conducted on 3×Nvidia A6000 GPUs, each with 48 GB of video memory.

Crowd counting methods are typically evaluated using the mean absolute error (MAE) or the mean squared error (MSE), defined as

MAE = \frac{1}{N} \sum_{i=1}^{N} |\hat{y}_i - y_i|, \quad MSE = \frac{1}{N} \sum_{i=1}^{N} (\hat{y}_i - y_i)^2,    (5)

where \hat{y}_i is the prediction for the i-th image, y_i is the ground truth label, and N is the total number of examples in the dataset. In crowd counting, MAE is typically used as a measure of accuracy, whereas MSE is a measure of robustness [16]. Since our primary objective is accuracy, we use MAE to evaluate crowd counting methods in this work.
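A minimal sketch of the metrics in Equation (5), assuming the predicted and ground-truth counts are given as arrays, is shown below.

```python
import numpy as np

def mae_mse(pred_counts, true_counts):
    """MAE and MSE of predicted crowd counts, following Equation (5)."""
    pred = np.asarray(pred_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    mae = np.abs(pred - true).mean()      # accuracy-oriented metric
    mse = ((pred - true) ** 2).mean()     # robustness-oriented metric
    return mae, mse
```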
The original SASNet paper does not include experiments on the PANDA dataset [11]; therefore, we initialize the training with pre-trained weights for Shanghai Tech Part A and fine-tune on the PANDA dataset downsampled to 2,560×1,440 pixels. Although Gigapixel CSRNet uses the PANDA dataset, the authors do not report accuracy metrics, so we reproduce the method to measure its accuracy. Since PromptMix includes experiments on PANDA, we use the number from the original paper.

B. Ablation Studies

Table II compares the accuracy obtained by the two different zoom methods defined in Equations 1 and 2, which shows that exponential zoom leads to a higher accuracy. Table III shows the effect of the number of zoom levels L on the accuracy. Based on these results, using too few or too many zoom levels can lead to sub-optimal accuracy. Table IV shows that using multiple zoom regions can slightly boost the accuracy. However, similar to the number of zoom levels, using too few or too many clusters can degrade the accuracy. Even though the accuracy improvement is slight in these experiments, using multiple zoom regions makes GigaZoom more robust and might lead to more significant improvements in other scenarios and scenes. We also investigated the effect of overzooming in Table V. Overzooming is defined as zooming beyond a 1-to-1 pixel ratio, where several pixels in the resulting image correspond to a single pixel in the original image, effectively upsampling a region of the original gigapixel image. However, these results show that the method does not benefit from overzooming.

V. CONCLUSION

We showed that our proposed method significantly outperforms existing methods for crowd counting on gigapixel images. Through ablation studies, we showed that exponential zoom performs better than linear zoom, that a moderate number of zoom levels achieves the best accuracy, and that using multiple zoom regions provides robustness for inputs with multiple dense crowds. Although GigaZoom is much more efficient than Gigapixel CSRNet, it still performs multiple CSRNet inferences, which can result in a long inference time overall.

Currently, PANDA is the only publicly available dataset for this task, and it contains only 45 images taken from three scenes. In order to further compare and validate methods, it is crucial that more gigapixel crowd counting datasets are created and published, with more examples taken from more diverse scenes.

This work was funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 957337, and by the Danish Council for Independent Research under Grant No. 9131-00119B.
The increasing prevalence of gigapixel resolutions has presented new challenges for crowd counting. Such resolutions are far beyond the memory and computation limits of current GPUs, and available deep neural network architectures and training procedures are not designed for such massive inputs. Although several methods have been proposed to address these challenges, they are either limited to downsampling the input image to a small size, or borrowing from other gigapixel tasks, which are not tailored for crowd counting. In this paper, we propose a novel method called GigaZoom, which iteratively zooms into the densest areas of the image and refines coarser density maps with finer details. Through experiments, we show that GigaZoom obtains the state-of-the-art for gigapixel crowd counting and improves the accuracy of the next best method by 42%.
Accurate Gigapixel Crowd Counting by Iterative Zooming and Refinement
[ { "figure_caption": "Fig. 1 :1Fig. 1: (left) Example gigapixel image from the PANDA dataset, with a resolution of 26,908×15,024 downsampled to 2,688×1,412; and (right) zoomed into the region specified by the rectangle in the original image, with a resolution of 2,880×1,410 pixels.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :Fig. 3 :Fig. 4 :234Fig. 2: Overview of the forward pass in iterative zooming and replacing.", "figure_data": "", "figure_id": "fig_1", "figure_label": "234", "figure_type": "figure" }, { "figure_caption": "Comparison of crowd counting performance for various methods on the PANDA gigapixel dataset. and 9 images for test. The selection procedure can be viewed in our code. To obtain a ground truth density map from the bounding box annotations available in the PANDA dataset, for each bounding box, we apply a 2D Gaussian filter with σ = 4 and filter size the same as the bounding box.", "figure_data": "", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Table I compares the accuracy of GigaZoom with previous methods on the PANDA dataset. Observe that GigaZoom significantly outperforms other methods.", "figure_data": "Zoom Method MAE↓Linear81.02Exponential63.51", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Effect of linear and exponential zoom on the accuracy of GigaZoom.", "figure_data": "Zoom Levels (L) MAE↓580.041064.4920105.45", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Effect of the number of zoom levels on the accuracy of GigaZoom. Clustering was not used in these experiments, and only a single pass of iterative zooming and replacing was performed on the densest sub-image of the input.", "figure_data": "", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Effect of multiple zoom regions on the accuracy of GigaZoom.", "figure_data": "Zoom Levels (L) Overzoom Levels MAE↓10064.4910167.9710293.26", "figure_id": "tab_5", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "Effect of overzooming on the accuracy of Giga-Zoom. Clustering was not used in these experiments, and only a single pass of iterative zooming and replacing was performed on the densest sub-image of the input.", "figure_data": "", "figure_id": "tab_6", "figure_label": "V", "figure_type": "table" } ]
Arian Bakhtiarnia; Qi Zhang; Alexandros Iosifidis
[ { "authors": "G Gao; J Gao", "journal": "", "ref_id": "b0", "title": "Cnn-based density estimation and crowd counting: A survey", "year": "2020" }, { "authors": "A Bakhtiarnia; Q Zhang; A Iosifidis", "journal": "", "ref_id": "b1", "title": "Efficient high-resolution deep learning: A survey", "year": "2022" }, { "authors": "Y Zhang; D Zhou", "journal": "", "ref_id": "b2", "title": "Single-image crowd counting via multicolumn convolutional neural network", "year": "2016" }, { "authors": "H Idrees; M Tayyab", "journal": "", "ref_id": "b3", "title": "Composition loss for counting, density map estimation and localization in dense crowds", "year": "2018" }, { "authors": "X Wang; X Zhang", "journal": "", "ref_id": "b4", "title": "Panda: A gigapixel-level human-centric video dataset", "year": "2020" }, { "authors": "Y Li; X Zhang; D Chen", "journal": "CVPR", "ref_id": "b5", "title": "Csrnet: Dilated convolutional neural networks for understanding the highly congested scenes", "year": "2018" }, { "authors": "K Simonyan; A Zisserman", "journal": "ICLR", "ref_id": "b6", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2015" }, { "authors": "J Deng; W Dong", "journal": "", "ref_id": "b7", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Z Cao; R Yan", "journal": "ICMEW", "ref_id": "b8", "title": "Gigapixel-level image crowd counting using csrnet", "year": "2019" }, { "authors": "A Bakhtiarnia; Q Zhang; A Iosifidis", "journal": "", "ref_id": "b9", "title": "Promptmix: Text-to-image diffusion models enhance the performance of lightweight networks", "year": "2023" }, { "authors": "Q Song; C Wang", "journal": "AAAI", "ref_id": "b10", "title": "To choose or to fuse? scale selection for crowd counting", "year": "2021" }, { "authors": "R J Chen; C Chen", "journal": "", "ref_id": "b11", "title": "Scaling vision transformers to gigapixel images via hierarchical self-supervised learning", "year": "2022" }, { "authors": "D Tellez; G Litjens", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b12", "title": "Neural image compression for gigapixel histopathology image analysis", "year": "2021" }, { "authors": "K Chen; Z Wang", "journal": "Neurocomputing", "ref_id": "b13", "title": "Towards real-time object detection in gigapixel-level video", "year": "2022" }, { "authors": "M Wulder; K Niemann; D G Goodenough", "journal": "Remote Sensing of Environment", "ref_id": "b14", "title": "Local maximum filtering for the extraction of tree locations and basal area from high spatial resolution imagery", "year": "2000" }, { "authors": "F Dai; H Liu", "journal": "ICMR", "ref_id": "b15", "title": "Dense scale network for crowd counting", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 311.98, 478.66, 251.06, 22.31 ], "formula_id": "formula_0", "formula_text": "h t = h 0 - h 0 -h max L t, w t = w 0 - w 0 -w max L t;" }, { "formula_coordinates": [ 2, 338.63, 530.12, 224.4, 27.74 ], "formula_id": "formula_1", "formula_text": "h t = h 0 h max h 0 t L , w t = w 0 w max w 0 t L .(2)" }, { "formula_coordinates": [ 3, 74.75, 283.63, 225.27, 23.22 ], "formula_id": "formula_2", "formula_text": "k w = w t+1 w t w D max , k h = h t+1 h t h D max .(3)" }, { "formula_coordinates": [ 3, 66.79, 365.62, 233.23, 27.17 ], "formula_id": "formula_3", "formula_text": "O w t+1 -O w t w t = O w t,D w D max , O h t+1 -O h t h t = O h t,D h D max .(4)" }, { "formula_coordinates": [ 5, 57.53, 480.34, 242.49, 32.43 ], "formula_id": "formula_4", "formula_text": "MAE = N i=1 |ŷ i -y i | N , MSE = N i=1 (ŷ i -y i ) 2 N ,(5)" } ]
10.18653/v1/2022.findings-acl.176
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b3", "b5", "b25" ], "table_ref": [], "text": "In Race After Technology, Benjamin (2019) coins the term \"The New Jim Code\", which she describes as :\n\"The employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than discriminatory systems of a previous era.\"\nWhile the Jim Code is a spin on, \"Jim Crow\", a derogatory epithet for African-Americans, the same concept can be generalized to the bias and unfairness in artificial intelligence (AI) systems against all marginalised groups. It is crucial to study bias and fairness in machine learning (ML) and natural language processing (NLP) models to understand how existing social biases and stereotyping are being encoded in the data used to train them, as well as to compare (1) the fairness of the decisions made by NLP models due to biases in the datasets, with (2) biased choices made by the developers of those models as a result of unintended bias or to maximize profit. Studying bias and unfairness in NLP models is one way to pierce a hole in the black box and shed a little light on the limitations of widely used models. However, it is not possible to understand the roots of algorithmic bias, without incorporating relevant studies from social sciences, critical race theory, gender studies, LGBTQ studies, and digital humanities studies, as recommended by Benjamin (2019).\nIn this paper, we study the various origins of bias in NLP models from two perspectives: (1) the NLP pipeline perspective, where we review the sources of bias in models from the NLP literature; and (2) the Jim Code perspective, in which we review the origins of bias from the literature on social science, critical race theory, gender, LGBTQ, and digital studies. We argue that, in fact, the sources of bias found in the NLP pipeline are rooted in those uncovered in the social sciences. Then, we discuss how the lack of inclusion of social sciences in attempts at eliminating social issues like bias in NLP models has resulted in problematic quantitative measures of bias (Blodgett et al., 2021) and superficial mitigation techniques (Gonen and Goldberg, 2019). Finally, we propose recommendations to NLP researchers to mitigate biases and improve the fairness of the models that they develop by addressing their underlying social causes." }, { "figure_ref": [], "heading": "Background: History of discrimination", "publication_ref": [ "b40", "b11", "b3", "b37", "b15", "b44", "b54", "b33", "b14", "b61", "b55", "b49", "b4" ], "table_ref": [], "text": "In Western societies, the biases and inequalities towards marginalised groups based on ethnicity, sex, class, religion, sexual orientation, age, or disability that we see today are direct results of centuries of racism, sexism, and homophobia, as has been discussed by many scholars.\nIn The Myth of Race: The Troubling Persistence of an Unscientific Idea, Sussman (2014a) reviews the history of 500 years of racism in Western Eu-rope to answer the question of why the invalid concept of race still prevails. 
He argues that the ideology of race developed from multiple historical events and movements ranging from the Spanish Inquisition to social Darwinism, eugenics, and modern IQ tests, starting as early as the fifteenth century, when the Catholic Church in Spain persecuted the Jewish population for \"impurity of blood\" (Sussman, 2014a).\nHe goes on to explain that some Enlightenment scholars like David Hume and Immanuel Kant believed that, based on skin colours, there are more than one race of humans, and that white men are the most civilized people (Sussman, 2014b). In the nineteenth century, drawing from evolution theory, social Darwinists like Herbert Spencer argued that helping the poor and the weak was an interference with natural selection, coining the term \"survival of the fittest\". This led to sterilization and ultimately the extermination camps of the eugenics movement (Sussman, 2014b).\nMoving to the 1970s, Sussman (2014c) shows that Arthur Jensen, a professor of Educational Psychology at the University of California, argued that Black people are intellectually inferior to white people. This argument was reasserted in the 1990s with the publication of Richard Herrnstein and Charles Murray's The Bell Curve.\nSussman (2014d) goes on to show that in the 2000s, racism took on a disguise of \"Culturism\", as coined by the anthropologist Franz-Boas to explain the difference in human behaviour and social organizations. Culturism paved the way to modern-day anti-immigration agendas with immigrants, like Arabs or Muslims, not claimed to be genetically inferior to Europeans, but to have a cultural burden that prevents them from integrating in the West.\nHomophobia is intertwined with racism, as argued by Morris (2010) in their research on the history of the LGBTQ community social movement. Morris explains that homosexuality and transgender identity were accepted in many ancient societies like those of ancient Greece, Native Americans, North Africa, and the Pacific Islands. These accepting cultures oppose the Western culture of heterosexuality and binary genders, who regarded homosexuality and transgender as foreign, savage, and evidence of inferior races. When Europeans started colonization campaigns, they imposed their moral codes and persecuted LGBTQ communities. The first known case of punishing homosexuality by death was in North America in 1566. Later, in the era of sexology studies in 1882 and 1897, European doctors and scientists labelled homosexuality as degenerate and abnormal, and as recently as the 1980s and 1990s, AIDS was widely rationalised as being god's punishment for gay people.\nAs argued by Criado Perez (2019) in Invisible Women: Data Bias in a World Designed for Men, Sexism can be tracked back to the fourth century B.C. when Aristotle articulated that the male form is the default form as an inarguable fact. This concept still persists today, as we can see in the one-size-fits-men approach to designing supposedly gender-neutral products like piano keyboards and smartphones. Overall, as Manne (2017) describes it in Down Girl: The Logic of Misogyny, sexism consists of \"assumptions, beliefs, theories, stereotypes, and broader cultural narratives that . . . make rational people more inclined to support and participate in patriarchal social arrangements\".\nMarginalization has been studied in social sciences by many scholars in critical race theory (Benjamin, 2019), gender studies (McIntosh, 2001;Davis, 1982), andLGBTQ studies (Fausto-Sterling, 2008). 
However, negative stereotyping, stigma, and unintended bias continue against marginalised people based on ethnicity, religion, disability, sexual orientation, or gender. These stigmas and unintended biases have led to different forms of discrimination in education, job opportunities, health care, housing, incarceration, and other areas, as Nordell (2021) details in The End of Bias.\nThey can also have a negative impact on the cognitive ability and the mental and physical health of the people who carry their load. As Steele (2011) shows in Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do, based on experiments in behavioural psychology, carrying stigma made women underperform in maths tests, and African-American students underperform in academia. Hence, stereotypes become self-fulfilling prophecies, eventually leading to their perpetuation and the continuation of prejudice and discrimination.\nIn the age of knowledge, computing, and big data, prejudice and discrimination have found their way into machine learning models. These models now dictate every aspect of our lives, from online advertising to employment and judicial systems, relying on black boxes that discriminate against marginalised groups while benefitting privileged elites, as O'Neil (2017) explains in Weapons of Math Destruction. One of the most well-known examples of discriminative decisions made by a machine learning model is the COMPAS algorithm, a risk assessment tool that measures the likelihood that a criminal becomes a recidivist, a term used in legal systems to describe a criminal who reoffends. Although Northpoint, the company that produced the COMPAS tool, did not share how the model measures recidivism scores, the algorithm was deployed by the state of New York in 2010. In 2016, ProPublica found that Black defendants are more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while the latter were more likely than Black defendants to be incorrectly flagged as low risk (Larson et al., 2016).\nOne example of algorithmic gender discrimination is the CV screening model used by Amazon, which, according to a Reuters report in 2018, favoured CVs of male over female candidates even when both had the same skills and qualifications (Dastin, 2018). Similar examples of algorithmic discrimination can be found against the LGBTQ community (Tomasev et al., 2021), older people (Stypinska, 2022), Muslims (Samuel, 2021), and people with disabilities (Binns and Kirkham, 2021)." }, { "figure_ref": [], "heading": "Bias and fairness: Definitions", "publication_ref": [ "b45", "b23", "b45", "b8", "b22", "b42", "b41", "b9", "b31", "b53", "b8", "b8", "b22", "b42", "b41", "b29", "b32", "b10", "b32", "b0", "b47" ], "table_ref": [], "text": "The term bias is defined and used in many ways (Olteanu et al., 2019). The normative definition of bias in cognitive science is: \"behaving according to some cognitive priors and presumed realities that might not be true at all\" (Garrido-Muñoz et al., 2021). The statistical definition of bias is \"systematic distortion in the sampled data that compromises its representativeness\" (Olteanu et al., 2019).\nIn NLP, while bias and fairness have been described in several ways, the statistical definition is most dominant (Elsafoury et al., 2022a; Caliskan et al., 2017; Garg et al., 2018; Nangia et al., 2020; Nadeem et al., 2021). 
In the last two years or so, there has been a trend to distinguish two types of bias in NLP systems: intrinsic bias and extrinsic bias (Cao et al., 2022; Kaneko et al., 2022; Steed et al., 2022). Intrinsic bias is used to describe the biased representations of pre-trained models. As far as we know, there is no formal definition of intrinsic bias in the literature. However, from the research done to study bias in word embeddings (Elsafoury et al., 2022a), we can infer the following definition: intrinsic bias is the stereotypical representation of certain groups of people learned during pre-training, for example, when a model associates women with jobs like caregiver and men with jobs like doctor (Caliskan et al., 2017). This type of bias exists in both static (Caliskan et al., 2017; Garg et al., 2018) and contextual word embeddings (Nangia et al., 2020; Nadeem et al., 2021).\nOn the other hand, extrinsic bias, also known as model fairness, has many formal definitions built on those from the literature on the fairness of exam testing from the 1960s, 70s and 80s (Hutchinson and Mitchell, 2019). The most recent fairness definitions are broadly categorized into two groups. The first is individual fairness, which is defined as \"An algorithm is fair if it gives similar predictions to similar individuals\" (Kusner et al., 2017).\nFor a given model Ŷ : X → Y with features X, sensitive attributes A, and prediction Ŷ, and for two similar individuals i and j, the model achieves individual fairness if\nŶ(X_i, A_i) ≈ Ŷ(X_j, A_j) (1)\nThe second type of fairness definition is group fairness, which can be defined as \"An algorithm is fair if the model prediction Ŷ and sensitive attribute A are independent\" (Caton and Haas, 2020; Kusner et al., 2017). Based on group fairness, the model is fair if\nŶ(X | A = 0) = Ŷ(X | A = 1) (2)\nGroup fairness is the most common definition used in NLP. There are different ways to measure it, like equality of odds (Baldini et al., 2022). However, other metrics have been proposed in the NLP literature to measure individual fairness, like counterfactual fairness methods (Prabhakaran et al., 2019)." }, { "figure_ref": [ "fig_0" ], "heading": "Origins of bias", "publication_ref": [ "b3" ], "table_ref": [], "text": "While much literature proposes methods to measure bias and fairness in NLP models, there are far fewer papers that discuss their origins. Those that do so tend to neglect literature from social science or critical race theory that has examined topics directly related to bias like racism, sexism, or homophobia. This short-sightedness has, so far, led to cosmetic changes in the proposed NLP models to fix the problem of bias rather than fixing the racist, sexist, homophobic status quo (Benjamin, 2019). In this section, we review the different origins of bias in NLP systems from the Jim Code perspective of social science, using tools like critical race theory, digital studies, gender studies, LGBTQ studies, and internet and data activism. Then we review the sources of bias from a purely NLP perspective, while trying to connect these two strands to gain a more profound understanding of the origins of bias. Figure 1 shows an overview summary of the origins of bias from the different perspectives, and how the biases in the NLP pipeline originate in the Jim Code." 
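To make the group fairness definition in Eq. (2) concrete, the following is a minimal illustrative sketch, not taken from the paper, of how a demographic parity gap and an equality-of-odds gap could be computed for a binary classifier; the function names and the NumPy-based interface are assumptions made purely for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Group fairness (Eq. 2): |P(Y_hat=1 | A=0) - P(Y_hat=1 | A=1)|.
    A gap of 0 means the prediction is independent of the sensitive attribute."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a0 = y_pred[group == 0].mean()
    rate_a1 = y_pred[group == 1].mean()
    return abs(rate_a0 - rate_a1)

def equalized_odds_gap(y_true, y_pred, group):
    """Equality of odds: compare false-positive and true-positive rates across groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for label in (0, 1):  # FPR when label == 0, TPR when label == 1
        mask0 = (group == 0) & (y_true == label)
        mask1 = (group == 1) & (y_true == label)
        gaps.append(abs(y_pred[mask0].mean() - y_pred[mask1].mean()))
    return max(gaps)

# Toy usage: predictions of a hypothetical hate-speech classifier
# and a binary sensitive attribute (e.g. membership of an identity group).
y_true = [0, 1, 1, 0, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group), equalized_odds_gap(y_true, y_pred, group))
```

A gap of zero under either measure corresponds to the independence criterion in Eq. (2); as discussed later in the paper, satisfying such statistical criteria does not by itself guarantee equitable outcomes.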
}, { "figure_ref": [], "heading": "The Jim Code perspective", "publication_ref": [ "b3", "b3", "b3" ], "table_ref": [], "text": "As previously described, Jim Code is a term that refers to the new forms of systematic discrimination found in new technologies that build on older discriminatory systems. This is one of the main origins of bias and unfairness that we find in most NLP systems. This can be broken down into the following sources of bias:\n1. Lack of context: In More than a Glitch, Broussard (2023) explains that, like computers, the data used to train NLP and ML models are produced without a specific human context. A similar point is made by Benjamin (2019), who discusses how social and historical contexts are not taken into consideration when data is collected to train NLP models. But it is not only the data. With the NLP models being developed in isolation from social science perspectives, how these systems impact the people of different identity groups gets overlooked. For example, models output decisions on who is eligible to get a loan or get a job without consideration of the fact that this might increase the wealth gap between marginalised and privileged groups.\nMoreover, it is because of the lack of context that researchers in NLP do not think about the harmful ways that their proposed systems could be used. For example, when models are used to detect race from last names and zip codes, their developers have probably failed to consider how they will be employed by certain businesses to illegally collect information on ethnicity (Benjamin, 2019). Even greater harm is caused when a model categorises people as criminals or terrorists due to their inferred ethnicity.\n2. Lack of creativity: Because of the lack of context, many developers of ML and NLP models tend to build their systems on top of existing racist, sexist, homophobic, ageist, and ableist systems. An example is when recommender systems used \"cultural segregation\" to infer information about a person's ethnicity to personalise their recommendations, using ethnicity as a proxy for individuality (Benjamin, 2019). Hence, those systems perpetuate the racist view that people who belong to a specific group must have similar preferences. Researchers need to be more creative and find other ways to recommend content that do not rely on social bias shortcuts." }, { "figure_ref": [], "heading": "Lack of accountability:", "publication_ref": [ "b7", "b43", "b3", "b46", "b3" ], "table_ref": [], "text": "There is a lack of accountability that allows tech companies to maximize profits and get away with creating oppressive systems that are not just \"glitches\" as explained by critical race and digital humanities studies activists (Broussard, 2023;Nobel, 2018;Benjamin, 2019). A lack of accountability enables companies to sell their systems as black boxes without ex-plaining how their models make decisions (O'Neil, 2017). We also see that in the scientific community, where big tech companies publish papers emphasising their models' excellent results without sharing those models or the data that were used to train them, precluding reproducibility. Moreover, when, the Justice League, a group of AI ethicists and activists, launched the Safe Face pledge to ensure that computer vision models don't discriminate between people based on their skin colour, no major tech company was willing to sign it (Benjamin, 2019). 
With the lack of accountability and legislation, big tech companies, which are one of the main drivers of the field, have no reason to revise and change the way they build their ML and NLP systems, or to include the social and historical context into their research in a way that profoundly changes the systems instead of just covering it up and fixing the \"glitches\"." }, { "figure_ref": [], "heading": "Lack of diversity:", "publication_ref": [ "b3", "b7", "b14", "b3" ], "table_ref": [], "text": "The majority of ML and NLP technologies are developed in companies or research institutes in Western societies and by researchers who are mostly white, able-bodied, heterosexual men. They develop and test systems that work well for them, without considering how functional these systems are for people from different backgrounds. Examples are facial recognition systems that only work with people with light skin (Benjamin, 2019;Broussard, 2023) and CV recommendation systems that favour applicants with male names (Dastin, 2018). There is also a lack of diversity when it comes to the targeted customers of the systems. Since most of these technologies are expensive to buy, the developers of these systems focus on the customers who can afford it and who are also predominantly white, able-bodied, heterosexual men (Benjamin, 2019). This lack of diversity, in addition to the lack of social and historical contexts, leads to the development of discriminatory systems." }, { "figure_ref": [], "heading": "Lack of public awareness:", "publication_ref": [ "b7", "b3", "b7" ], "table_ref": [], "text": "In addition to the previously discussed origins of bias in NLP, another factor that allows the biases to spread is the lack of public awareness. This is a result of using mathematical and statistical terminology and jargon that most non-specialists can't understand. This lack of understanding of how ML and NLP models work and their limitations led people to over-trust AI systems and leads to \"Technochauvinism\", described by Broussard (2023) as:\n\"the kind of bias that considers computational solutions to be superior to all other solutions. Embedded in this bias is a priori assumption that computers are better than humans which is actually a claim that the people who make and program computers are better than other humans.\"\nThe lack of public awareness and Technochauvinism are the reasons why banks, schools, hospitals, universities, and other institutions that are supposed to deal with people and society and make social decisions adopting NLP systems that are poorly understood, with the false notion that they are unbiased, and their decisions are faultless and objective (Benjamin, 2019;Broussard, 2023)." }, { "figure_ref": [], "heading": "The NLP pipeline perspective", "publication_ref": [ "b52", "b27", "b3", "b60", "b50", "b8" ], "table_ref": [], "text": "We now turn to the sources of bias in the NLP pipeline described in the literature. Shah et al. (2020) introduce four sources of bias in the NLP pipeline that might impact the model's fairness. Hovy and Prabhumoye (2021) also discuss these, adding a fifth source related to the overarching design of NLP research projects.\nHere, we outline these pipeline sources of bias and also show how they, in fact, originate in the Jim Code perspective.\n1. Research design: According to Hovy and Prabhumoye (2021), research design bias is manifested in the skewness of NLP research towards Indo-European languages, especially English. 
This skew leads to a self-fulfilling prophecy, since most of the research focuses on text in English, more data in English becomes available, which in turn makes it easier for NLP researchers to work on English text. This has further ramifications as Hovy and Prabhumoye (2021) also question whether, if English was not the \"default\" language, the n -gram would have been the focus of NLP models. The authors argue that the lack of diversity in the makeup of NLP research groups, is one of the reasons behind the linguistic and cultural skewness in NLP research.\nIn addition to these skews, there are further sources of bias reflected in research design that originate from the Jim Code perspective. Lack of social context is clearly manifested in NLP research design. For example, NLP researchers deal with language as a number of word occurrences and cooccurence probabilities rather than dealing with language as a diverse social component that reflects societal relationships and biases (Holmes, 2013). Another example, is lack of historical context, with most of the data that NLP models are trained on generated by white middle-class men, resulting in speech recognition models not recognizing African American dialects (Benjamin, 2019;Tatman, 2017) and hate speech detection models falsely flagging African American dialect as hateful (Sap et al., 2019). Lack of creativity is also reflected in research design. For example, with NLP models relying on the n -gram models and words co-occurrences, they incorporate biases such that they associate gendered words,\"woman\" and \"man\", with certain jobs, \"nurse\" and \"doctor\" (Caliskan et al., 2017). As Hovy and Prabhumoye (2021) contend, lack of diversity is also reflected in the research design bias, as evident in the skewness towards Indo-European languages. Because of the lack of accountability and the lack of public awareness, NLP research design bias has been going on for decades, largely unnoticed and unconsidered." }, { "figure_ref": [], "heading": "Selection bias:", "publication_ref": [ "b52", "b28", "b52", "b50", "b17", "b39", "b28", "b50", "b21", "b38", "b8", "b22", "b42", "b41", "b28", "b52", "b52", "b28" ], "table_ref": [], "text": "Selection bias is a result of nonrepresentative observations in the datasets used to train NLP models (Shah et al., 2020;Hovy and Prabhumoye, 2021). This bias could manifest when a model is trained on text data that has been generated by one group of people, but is subsequently deployed in the real world and used by more diverse groups. For example, the syntactic parsers and part-of-speech taggers that were trained on data generated by white middle-aged men, which then impacted the accuracy of these models when tested on text generated by different groups of people (Shah et al., 2020). Another example in hate speech detection models, where the models were trained on data with over-representation of terms associated with marginalised identity groups with the positive class (hateful) resulting in the models falsely labelling content as hateful just because it includes mentions of those identities (Sap et al., 2019;Dixon et al., 2018).\nSelection bias is also a result of lack of context, since the NLP researchers used datasets with over-representation of one group and underrepresentation of many other groups due to their lack of social and historical context of who generated that data and which identity groups are underrepresented in the chosen data. 
Lack of diversity is also a prominent reason behind selection bias in NLP, as most of the researchers come from non-marginalised backgrounds (Michael et al., 2022) with blind spots for the under-represented groups of people. Finally, lack of creativity is another reason behind selection bias. As NLP researchers build their models on biased systems that generated biased data, instead of being more creative and using more diverse representative data that work for everyone.\n3. Label bias: Label bias, also known as annotator bias, is a result of a mismatch between the annotators and the authors of the data. There are many reasons behind label bias. It can result from spamming annotators who are uninterested in the task and assign labels randomly to get the task done, as can happen on crowdsourcing platforms. It also happens due to confusion or ill-designed annotation tasks. Another reason is due to the individual annotator's perception and interpretation of the task or the label (Hovy and Prabhumoye, 2021). Moreover, there could be a mismatch between the authors' and annotators' linguistic and social norms. For example, annotators are prone to mislabel content as hateful for including the N-word, despite its often benign in-group use by African Americans. Finally, labels might carry the annotators societal perspectives and social biases (Sap et al., 2019).\nOn the other hand, we can argue that some of these biases result from unfairness in the crowdsourcing systems. Since the pay that annotators receive is often extremely low they are incentivised to complete as many tasks as they can as fast as possible to make ends meet, which in turn impacts the quality of the labels (Fort et al., 2011). Moreover, Miceli et al. (2022) argue that the bias in the labels is not only due to the biased perceptions of the annotators, but also due to a certain format the annotators have to follow for their annotation tasks and if that format falls short on diversity, the annotators lack the means to communicate that to the designers of the task. An example is when an annotator is presented with a binary gender choice even if the data contains information about non-binary or transgender people. Hence, label bias could be seen as a result of the lack of context. As the NLP researchers who mismatch the demographics of their data's authors and annotators do that due to lack of social context of the author of the data. Label bias is also a result of the lack of accountability, as big tech and NLP research groups hire annotators with unfair pay in addition to the lack of means for those annotators to communicate problems in the annotation task with the task designer due to power dynamics. 4. Representation bias: Representation bias, also known as intrinsic bias or semantic bias, describes the societal stereotypes that language models encode during pre-training. The bias exists in the training dataset that then gets encoded in the language models static (Caliskan et al., 2017;Elsafoury et al., 2022a;Garg et al., 2018), or contextual (Nangia et al., 2020;Nadeem et al., 2021). Hovy and Prabhumoye (2021) argue that one of the main reasons behind representation bias is the objective function that trains the language models. 
These objective functions aim to predict the most probable next term given the previous context, which in turn makes these models reflect the biases of our societies that are captured in the data.\nAgain, representation bias is a result of the lack of social and historical context, which is why NLP researchers tend to use biased data to train these language models. It is also a result of the lack of creativity: instead of using objective functions that aim to reproduce the biased world that we live in, NLP researchers could have used different objective functions that optimize fairness and equality in addition to performance." }, { "figure_ref": [], "heading": "Model overamplification bias", "publication_ref": [], "table_ref": [], "text": "According to Shah et al. (2020), overamplification bias happens because, during training, the models rely on small differences between sensitive attributes regarding an objective function and amplify these differences to be more pronounced in the predicted outcome. For example, in the imSitu image captioning dataset, 58% of the captions involving a person in a kitchen mention women, resulting in models trained on such data predicting people depicted in kitchens to be women 63% of the time (Shah et al., 2020). For the task of hate speech detection, overamplification bias could happen because certain identity groups could exist within different semantic contexts, for example, when an identity group like \"Muslims\" co-occurs with the word \"terrorism\". Even if the sentence does not contain any hate, e.g. \"Anyone could be a terrorist not just muslims\", the model will learn to pick this information up about Muslims and amplify it, leading to these models predicting future sentences that contain the word \"Muslim\" as hateful. According to Hovy and Prabhumoye (2021), one of the sources of overamplification bias is the choice of objective function used in training the model. Since these objective functions mainly aim to improve precision, the models tend to exploit spurious correlations or statistical irregularities in the data to achieve high performance by that metric.\nOveramplification bias is again a result of the lack of social and historical context, which results in using data that has an over-representation of certain identities in a certain social or semantic context. These over-representations are then picked up by the models during training. Another reason is the lack of creativity that results in choosing objective functions that exacerbate the differences found in the datasets between different identity groups and prioritising overall performance over fairness." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "It is clear that the sources of bias that we find in the NLP pipeline do not come out of nowhere, but have their origins in those that have been outlined in social science, critical race theory and digital humanities studies, that is, the Jim Code perspective. Despite this, the bias metrics that have been proposed in the NLP literature measure only pipeline bias, which has led to limitations in the currently proposed methods to measure and mitigate bias.\nIn this section, we outline these limitations and recommend measures to mitigate them." 
}, { "figure_ref": [], "heading": "Limitations of bias research in NLP", "publication_ref": [ "b42", "b41", "b5", "b36", "b24", "b31", "b9", "b13", "b48", "b26", "b1", "b26", "b7", "b6", "b34", "b51", "b31", "b25", "b7", "b16" ], "table_ref": [], "text": "The lack of scrutiny of the social background behind biases, has led approaches to bias measurement to incorporate the same methods that introduced bias in the first place. For example, crowdsourcing the data used in measuring bias in language models (Nangia et al., 2020;Nadeem et al., 2021) reintroduces label bias into the metric that is supposed to measure bias. Moreover, studies that propose bias metrics in NLP don't incorporate the social science literature on bias and fairness, which results in a lack of articulation of what these metrics actually measure, and ambiguities and unstated assumptions, as discussed in (Blodgett et al., 2021).\nThis results in limitations to the current bias metrics proposed and used in the NLP literature. One of these is that different bias metrics produce different bias scores, which makes it difficult to come to any conclusion on how biased the different NLP models are (Elsafoury et al., 2022b). There is also the limitation that current bias metrics claim to measure the existence of bias and not its absence, mean-ing that lower bias scores do not necessarily mean the absence of bias (May et al., 2019), leading to lack of conclusive information about the NLP models. Another consequence of the lack of understanding what the bias metrics in NLP actually measure, is that most of the research done on investigating the impact of social bias in NLP models on the downstream tasks could not find an impact on the performance of the downstream tasks (Goldfarb-Tarrant et al., 2021;Elsafoury et al., 2022a) or the fairness of the downstream tasks (Kaneko et al., 2022;Cao et al., 2022).\nSimilarly, one of the main limitations of the proposed methods to measure individual fairness metrics is that the motivation behind the proposed metrics and what the metrics actually measure are not disclosed. For example, Prabhakaran et al. ( 2019); Czarnowska et al. (2021); Qian et al. (2022) propose metrics to measure individual fairness using counterfactuals without explaining the intuition behind their proposed methods and how these metrics meet the criteria for individual fairness.\nAs for group fairness metrics, they are all based on statistical measures that have come in for criticism. For example, Hedden (2021) argues that group fairness metrics are based on criteria that cannot be satisfied unless the models make perfect predictions or that the base rates are equal across all the identity groups in the datase. Base rate here refers to the class of probability that is unconditioned on the featural evidence (Bar-Hillel, 1980). Hedden (2021) goes on to ask if the statistical criteria of fairness cannot be jointly satisfied except in marginal cases, which criteria then are conditions of fairness.\nIn the same direction of questioning the whole notion of using statistical methods to measure fairness, Broussard (2023) argues that some of the founders of the field of statistics were white supremacists, which resulted in skewed statistical methods and suggests that to measure fairness, maybe we should use non-statistical methods. 
Approaching the bias and fairness problem in NLP as a purely quantitative problem led the community to develop quantitative methods to remove the bias from NLP models like (Bolukbasi et al., 2016;Liang et al., 2020;Schick et al., 2021) which resulted in only a superficial fix of the problem while the models are still biased (Kaneko et al., 2022;Gonen and Goldberg, 2019). As shown above, Section 4.2, bias and fairness in NLP models are the results of deeper sources of bias, and removing the NLP pipeline sources of bias would not lead to any real change unless the more profound issues from the social science perspective are addressed.\nSimilarly, current efforts to improve the model's fairness have relied on quantitative fairness measures that aim to achieve equality between different identity groups, when equality does not necessarily mean equity (Broussard, 2023). Achieving equality would mean that the NLP models give similar performances to different groups of people. However, in some cases, fairness or equity would require treating people of certain backgrounds differently. For example , Dias Oliva et al. (2021) demonstrate that Facebook's hate speech detection models restrict the use of certain words considered offensive without taking into consideration the context in which they are being used. This leads to the censoring of some of the comments written by members of the LGBTQ community, who claim some of these restricted words as self-expression. In this case, equality did not lead to equity." }, { "figure_ref": [], "heading": "How to mitigate those limitations?", "publication_ref": [ "b7" ], "table_ref": [], "text": "Addressing the Jim Code sources of bias, is not a simple task. However, by doing so, we can take steps towards developing more effective ways to make NLP systems more inclusive, fairer and safer for everyone. Here, we outline actionable recommendations for the NLP community:\n1. Lack of context can be addressed by incorporating social sciences as part of the effort of mitigating bias in NLP models. This is only possible through:\n(a) Interdisciplinary research where scientists with backgrounds in fields such as critical race theory, gender studies and digital humanities studies are included in NLP project teams, so they can point out the social impact of the choices made by the NLP researchers. (b) It can also be addressed by further integration of the teaching of data and machine learning ethics into NLP curricula, whereby students gain an understanding of the societal implications of the choices they make. Currently, they are typically only exposed to minimal and tokenistic treatment of the topics of bias and fairness in NLP models, which is insufficient to understand the origins of bias from a social science perspective. This should also include training in AI auditing, enabling students to as-sess the limitations and societal impact of the NLP systems they develop (Broussard, 2023).\n2. Lack of creativity is a direct result of lack of context. We can address the lack of creativity by: (a) Raising awareness of the social and historical context and the social impact of development choices among NLP researchers. This will encourage more creative methods to achieve their goals, instead of the reproduction of oppressive systems in shiny new packaging. 
Online competition and code sharing platforms could be a place to start, for example, creating shared tasks in which participants develop new NLP models that do not rely on n-grams or on objective functions that amplify societal biases.\n(b) Another way to encourage NLP researchers to re-investigate NLP fundamentals is specialized conferences and workshops on reimagining NLP models with an emphasis on fairness and impact on society. This effort is already underway with conferences like the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) * . The outcomes of these endeavours should be open for auditing, evaluation and reproducibility. One way to achieve that, without the controversy of open-source, is for NLP conferences to adopt the ACM artifact evaluation measures † and give reproducibility badges to published papers. This could be developed further to give social responsibility badges to the papers that were audited by a special responsible NLP committee.\n(c) Specialized interdisciplinary seminars in major NLP conferences could encourage NLP researchers to collaborate with social scientists. For example, organizing events like the Dagstuhl seminars ‡ that invite social scientists to discuss their work on bias and fairness, which might lead to the exchange and development of ideas between social and NLP researchers.\n3. Lack of diversity can be addressed with: " }, { "figure_ref": [], "heading": "Lack of accountability", "publication_ref": [ "b7" ], "table_ref": [], "text": "The suggested measures should be enforced with:\n(a) State-level regulation to make sure that research is not conducted in a way that may harm society, which is only possible by holding universities and big tech companies accountable for the systems they produce. One step taken in this direction is the EU AI Act ||, which is a legislative proposal that assigns AI applications to three risk categories: \"First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.\" (b) There should also be an AI regulation team that works for governments and employs AI auditing teams and social scientists to approve newly developed NLP systems before they are released to the public (Broussard, 2023)." }, { "figure_ref": [], "heading": "Lack of awareness and Technochauvinism", "publication_ref": [ "b2", "b46", "b3", "b7", "b43" ], "table_ref": [], "text": "The suggested regulations can only happen by democratically electing people who are willing to put these regulations in place.\n(Footnotes: § http://nlpprogress.com/ ¶ https://www.winlp.org/ || https://artificialintelligenceact.eu/)\nThis comes with raising awareness of the limitations of the current ML and NLP systems. It is important that the public is aware that the likely doomsday scenario is not an AI system that outsmarts humans and controls them, but one that behaves like a Stochastic Parrot (Bender et al., 2021) that keeps reproducing our discriminative systems on a wider scale under the mask of objectivity (O'Neil, 2017; Benjamin, 2019; Broussard, 2023; Nobel, 2018). NLP researchers can help to raise public awareness through:\n(a) Journalism is an important resource to inform the public of the limitations and ethical issues in the current AI systems. 
Muckraking journalists at ProPublica and The New York Times investigate AI technologies, sharing their investigations with the public (Broussard, 2023). For example, the journalists' investigation of the COMPAS system and its unfairness was published by ProPublica. NLP researchers should be encouraged to accept interview invitations from journalists to share their worries about the limitations of the current NLP systems. (b) Published books are another way to raise public awareness of issues related to discrimination in AI systems, especially books that are targeted at non-specialists, for example, Race after Technology, More than a Glitch, and Algorithms of Oppression. NLP researchers can participate in those efforts by writing about current NLP systems and their limitations for non-specialists. (c) Talks: NLP researchers should be encouraged to share their views on AI in non-academic venues. For example, participating in documentaries like Coded Bias ** could bring awareness to the public. (d) Museums of technology and art could also raise public awareness of the limitations and potential dangers of AI. For example, in 2022, the Museum of Modern Art had an exhibition called \"Systems\" ††, showing how AI systems work, their inequalities, and how many natural resources are used to build them. NLP researchers and universities can help organize exhibitions on the limitations of NLP systems. (e) Social media awareness campaigns could be a way to reach more people, especially younger people. Currently, individual NLP researchers share on social media their views and worries about NLP systems. However, an organized campaign can be more effective.\n(Footnotes: ** https://www.imdb.com/title/tt11394170/ †† https://www.moma.org/collection/works/401279?sov_referrer=theme&theme_id=5472)" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we have reviewed the literature on historic forms of sexism, racism, and other types of discrimination that are being reproduced in the new age of technology on a larger scale and under the cover of supposed objectivity in NLP models. We reviewed the origins of bias from the NLP literature in addition to the social science, critical race theory, and digital humanities studies literature. We argue that the sources of bias in NLP originate in those identified in the social sciences, and that they are direct results of the sources of bias from the \"Jim Code\" perspective. We also demonstrate that neglecting the social science literature in attempting to build unbiased and fair NLP models has led to unreliable bias metrics and ineffective debiasing methods. We argue that the way forward is to incorporate knowledge from social sciences and further collaborate with social scientists to make sure that these goals are achieved effectively without negative impacts on society and its diverse groups. Finally, we share a list of actionable suggestions and recommendations with the NLP community on how to mitigate the discussed Jim Code origins of bias in NLP research." }, { "figure_ref": [], "heading": "Ethical Statement", "publication_ref": [], "table_ref": [], "text": "Our critical review of the origins of bias in NLP systems should not produce any direct negative impacts or harms. However, our work does not come without risks. 
One of these could be in discouraging quantitative research on bias and fairness in NLP by making such work seem daunting, requiring collaborations and more effort than other research disciplines in NLP. However, our aim is rather to encourage researchers to be more cautious and take a more inclusive approach to their research, incorporating social scientists and their knowledge into efforts at understanding bias in NLP." } ]
In this paper, we trace the biases in current natural language processing (NLP) models back to their origins in racism, sexism, and homophobia over the last 500 years. We review literature from critical race theory, gender studies, data ethics, and digital humanities studies, and summarize the origins of bias in NLP models from these social science perspectives. We show how the causes of the biases in the NLP pipeline are rooted in social issues. Finally, we argue that the only way to fix the bias and unfairness in NLP is by addressing the social problems that caused them in the first place and by incorporating social sciences and social scientists in efforts to mitigate bias in NLP models. We provide actionable recommendations for the NLP research community to do so.
On the Origins of Bias in NLP through the Lens of the Jim Code
[ { "figure_caption": "Figure 1 :1Figure 1: The origins of bias in supervised NLP models", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" } ]
Fatma Elsafoury; Gavin Abercrombie
[ { "authors": "Ioana Baldini; Dennis Wei; Karthikeyan Natesan Ramamurthy; Moninder Singh; Mikhail Yurochkin", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Your fairness may vary: Pretrained language model fairness in toxic text classification", "year": "2022" }, { "authors": "Maya Bar-Hillel", "journal": "Acta Psychologica", "ref_id": "b1", "title": "The base-rate fallacy in probability judgments", "year": "1980" }, { "authors": "Emily M Bender; Timnit Gebru; Angelina Mcmillan-Major; Shmargaret Shmitchell", "journal": "Association for Computing Machinery", "ref_id": "b2", "title": "On the dangers of stochastic parrots: Can language models be too big?", "year": "2021" }, { "authors": "Ruha Benjamin", "journal": "Polity", "ref_id": "b3", "title": "Race after Technology: Abolitionist Tools for the New Jim Code", "year": "2019" }, { "authors": "Reuben Binns; Reuben Kirkham", "journal": "ACM Trans. Access. Comput", "ref_id": "b4", "title": "How could equality and data protection law shape AI fairness for people with disabilities?", "year": "2021" }, { "authors": "Lin Su; Gilsinia Blodgett; Alexandra Lopez; Robert Olteanu; Hanna M Sim; Wallach", "journal": "", "ref_id": "b5", "title": "Stereotyping norwegian salmon: An inventory of pitfalls in fairness benchmark datasets", "year": "2021-08-01" }, { "authors": "Tolga Bolukbasi; Kai-Wei Chang; James Zou; Venkatesh Saligrama; Adam Kalai", "journal": "", "ref_id": "b6", "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", "year": "2016" }, { "authors": "Meredith Broussard", "journal": "MIT Press", "ref_id": "b7", "title": "More than a glitch: Confronting race, gender, and ability bias in tech", "year": "2023" }, { "authors": "Aylin Caliskan; Joanna J Bryson; Arvind Narayanan", "journal": "Science", "ref_id": "b8", "title": "Semantics derived automatically from language corpora contain human-like biases", "year": "2017" }, { "authors": "Yang Cao; Yada Pruksachatkun; Kai-Wei Chang; Rahul Gupta; Varun Kumar; Jwala Dhamala; Aram Galstyan", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "On the intrinsic and extrinsic fairness evaluation metrics for contextualized language representations", "year": "2022" }, { "authors": "Simon Caton; Christian Haas", "journal": "", "ref_id": "b10", "title": "Fairness in machine learning: A survey", "year": "2020" }, { "authors": "Caroline Criado; Perez ", "journal": "", "ref_id": "b11", "title": "Invisible women: Data bias in a world designed for men", "year": "2019" }, { "authors": " Abrams", "journal": "", "ref_id": "b12", "title": "", "year": "" }, { "authors": "Paula Czarnowska; Yogarshi Vyas; Kashif Shah", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b13", "title": "Quantifying social biases in NLP: A generalization and empirical comparison of extrinsic fairness metrics", "year": "2021" }, { "authors": "Jeddery Dastin", "journal": "", "ref_id": "b14", "title": "Amazon scraps secret AI recruiting tool that showed bias against women", "year": "2018" }, { "authors": "Angela Davis", "journal": "Women's Studies Quarterly", "ref_id": "b15", "title": "Women, race and class: An activist perspective", "year": "1982" }, { "authors": "Thiago Dias; Oliva ; Dennys ; Marcelo Antonialli; Alessandra Gomes", "journal": "Sexuality & Culture", "ref_id": "b16", "title": "Fighting hate speech, silencing drag queens? 
Artificial intelligence in content moderation and risks to lgbtq voices online", "year": "2021" }, { "authors": "Lucas Dixon; John Li; Jeffrey Sorensen; Nithum Thain; Lucy Vasserman", "journal": "Association for Computing Machinery", "ref_id": "b17", "title": "Measuring and mitigating unintended bias in text classification", "year": "2018" }, { "authors": "Fatma Elsafoury; Steve R Wilson; Stamos Katsigiannis; Naeem Ramzan", "journal": "International Committee on Computational Linguistics", "ref_id": "b18", "title": "SOS: Systematic offensive stereotyping bias in word embeddings", "year": "2022" }, { "authors": "Fatma Elsafoury; Steven R Wilson; Naeem Ramzan", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "A comparative study on word embeddings and social NLP tasks", "year": "2022" }, { "authors": "Anne Fausto-Sterling", "journal": "Basic Books", "ref_id": "b20", "title": "Myths of gender: Biological theories about women and men", "year": "2008" }, { "authors": "Karën Fort; Gilles Adda; K Bretonnel Cohen", "journal": "Computational Linguistics", "ref_id": "b21", "title": "Last words: Amazon Mechanical Turk: Gold mine or coal mine?", "year": "2011" }, { "authors": "Nikhil Garg; Londa Schiebinger; Dan Jurafsky; James Zou", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b22", "title": "Word embeddings quantify 100 years of gender and ethnic stereotypes", "year": "2018" }, { "authors": "Ismael Garrido-Muñoz; Arturo Montejo-Ráez; Fernando Martínez-Santiago; L Alfonso Ureña-López", "journal": "Applied Sciences", "ref_id": "b23", "title": "A survey on bias in deep nlp", "year": "2021" }, { "authors": "Seraphina Goldfarb-Tarrant; Rebecca Marchant; Ricardo Muñoz Sánchez; Mugdha Pandya; Adam Lopez", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Intrinsic bias metrics do not correlate with application bias", "year": "2021" }, { "authors": "Hila Gonen; Yoav Goldberg", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them", "year": "2019" }, { "authors": "Brian Hedden", "journal": "Philosophy & Public Affairs", "ref_id": "b26", "title": "On statistical criteria of algorithmic fairness", "year": "2021" }, { "authors": "Janet Holmes", "journal": "Routledge", "ref_id": "b27", "title": "An Introduction to Sociolinguistics", "year": "2013" }, { "authors": "Dirk Hovy; Shrimai Prabhumoye", "journal": "Language and Linguistics Compass", "ref_id": "b28", "title": "Five sources of bias in natural language processing", "year": "2021" }, { "authors": "Ben Hutchinson; Margaret Mitchell", "journal": "", "ref_id": "b29", "title": "", "year": "2019" }, { "authors": "", "journal": "Association for Computing Machinery", "ref_id": "b30", "title": "years of test (un)fairness: Lessons for machine learning", "year": "" }, { "authors": "Masahiro Kaneko; Danushka Bollegala; Naoaki Okazaki", "journal": "International Committee on Computational Linguistics", "ref_id": "b31", "title": "Debiasing isn't enough! -on the effectiveness of debiasing MLMs and their social biases in downstream tasks", "year": "2022" }, { "authors": "Matt J Kusner; Joshua Loftus; Chris Russell; Ricardo Silva", "journal": "", "ref_id": "b32", "title": "Counterfactual fairness. 
Advances in neural information processing systems", "year": "2017" }, { "authors": "Jeff Larson; Surya Mattu; Lauren Kirchner; Julia Angwin", "journal": "", "ref_id": "b33", "title": "How we analyzed the compas recidivism algorithm", "year": "2016" }, { "authors": "Paul Pu Liang; Irene Mengze Li; Emily Zheng; Chong Yao; Ruslan Lim; Louis-Philippe Salakhutdinov; Morency", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Towards debiasing sentence representations", "year": "2020" }, { "authors": "Kate Manne", "journal": "Oxford University Press", "ref_id": "b35", "title": "Down Girl: The Logic of Misogyny", "year": "2017" }, { "authors": "Chandler May; Alex Wang; Shikha Bordia; R Samuel; Rachel Bowman; Rudinger", "journal": "Association for Computational Linguistics (ACL)", "ref_id": "b36", "title": "On measuring social biases in sentence encoders", "year": "2019" }, { "authors": "Peggy Mcintosh", "journal": "", "ref_id": "b37", "title": "White privilege and male privilege: A personal account of coming to see correspondences through work in women's studies", "year": "1988" }, { "authors": "Milagros Miceli; Julian Posada; Tianling Yang", "journal": "Proc. ACM Hum.-Comput. Interact", "ref_id": "b38", "title": "Studying up machine learning data: Why talk about bias when we mean power?", "year": "2022" }, { "authors": "Julian Michael; Ari Holtzman; Alicia Parrish; Aaron Mueller; Alex Wang; Angelica Chen; Divyam Madaan; Nikita Nangia; Richard Yuanzhe Pang; Jason Phang", "journal": "", "ref_id": "b39", "title": "What do nlp researchers believe? results of the nlp community metasurvey", "year": "2022" }, { "authors": "Bonnie J Morris", "journal": "PsycEXTRA Dataset", "ref_id": "b40", "title": "History of lesbian, gay, bisexual and transgender social movements", "year": "2010" }, { "authors": "Moin Nadeem; Anna Bethke; Siva Reddy", "journal": "", "ref_id": "b41", "title": "StereoSet: Measuring stereotypical bias in pretrained language models", "year": "2021" }, { "authors": "Nikita Nangia; Clara Vania; Rasika Bhalerao; Samuel R Bowman", "journal": "", "ref_id": "b42", "title": "CrowS-pairs: A challenge dataset for measuring social biases in masked language models", "year": "2020" }, { "authors": "Umoja Safiya; Nobel", "journal": "New York University Press", "ref_id": "b43", "title": "Algorithms of Oppression: How Search Engines Reinforce Racism", "year": "2018" }, { "authors": "Jessica Nordell", "journal": "", "ref_id": "b44", "title": "The End of Bias", "year": "2021" }, { "authors": "Alexandra Olteanu; Carlos Castillo; Fernando Diaz; Emre Kıcıman", "journal": "Frontiers in Big Data", "ref_id": "b45", "title": "Social data: Biases, methodological pitfalls, and ethical boundaries", "year": "2019" }, { "authors": "O' Cathy; Neil", "journal": "Crown", "ref_id": "b46", "title": "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy", "year": "2017" }, { "authors": "Ben Vinodkumar Prabhakaran; Margaret Hutchinson; Mitchell", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Perturbation sensitivity analysis to detect unintended model biases", "year": "2019" }, { "authors": "Rebecca Qian; Candace Ross; Jude Fernandes; Eric Michael Smith; Douwe Kiela; Adina Williams", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "Perturbation augmentation for fairer NLP", "year": "2022" }, { "authors": "Sigal Samuel", "journal": "", "ref_id": "b49", "title": "AIs Islamophobia problem", 
"year": "2021" }, { "authors": "Maarten Sap; Dallas Card; Saadia Gabriel; Yejin Choi; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "The risk of racial bias in hate speech detection", "year": "2019" }, { "authors": "Timo Schick; Sahana Udupa; Hinrich Schütze", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b51", "title": "Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in nlp", "year": "2021" }, { "authors": "Deven Santosh Shah; H Andrew Schwartz; Dirk Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b52", "title": "Predictive biases in natural language processing models: A conceptual framework and overview", "year": "2020" }, { "authors": "Ryan Steed; Swetasudha Panda; Ari Kobren; Michael Wick", "journal": "Association for Computational Linguistics", "ref_id": "b53", "title": "Upstream Mitigation Is Not All You Need: Testing the Bias Transfer Hypothesis in Pre-Trained Language Models", "year": "2022" }, { "authors": "Claude M Steele", "journal": "WW Norton & Company", "ref_id": "b54", "title": "Whistling Vivaldi: How stereotypes affect us and what we can do", "year": "2011" }, { "authors": "Justyna Stypinska", "journal": "AI & society", "ref_id": "b55", "title": "Ai ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies", "year": "2022" }, { "authors": "Robert Wald; Sussman ", "journal": "Harvard University Press", "ref_id": "b56", "title": "Early Racism in Western Europe", "year": "2014" }, { "authors": "Robert Wald; Sussman ", "journal": "Harvard University Press", "ref_id": "b57", "title": "Eugenics and the Nazis", "year": "2014" }, { "authors": "Robert Wald; Sussman ", "journal": "Harvard University Press", "ref_id": "b58", "title": "The Pioneer Fund, 1970s-1990s", "year": "2014" }, { "authors": "Robert Wald; Sussman ", "journal": "Harvard University Press", "ref_id": "b59", "title": "The Pioneer Fund in the Twenty-First Century", "year": "2014" }, { "authors": "Rachael Tatman", "journal": "", "ref_id": "b60", "title": "Gender and dialect bias in YouTube's automatic captions", "year": "2017" }, { "authors": "Nenad Tomasev; Kevin R Mckee; Jackie Kay; Shakir Mohamed", "journal": "", "ref_id": "b61", "title": "Fairness for unobserved characteristics: Insights from technological impacts on queer communities", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 361.72, 378.14, 162.69, 12.56 ], "formula_id": "formula_0", "formula_text": "Ŷ (X i , A i ) ≈ Ŷ (X j , A j )(1)" }, { "formula_coordinates": [ 3, 351.31, 494.96, 173.1, 12.57 ], "formula_id": "formula_1", "formula_text": "Ŷ (X|A = 0) = Ŷ (X|A = 1) (2)" } ]
2023-05-16
[ { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b25", "b18", "b28", "b29", "b14", "b35", "b26", "b12", "b33", "b44", "b44", "b35", "b26", "b3", "b9", "b35", "b26" ], "table_ref": [], "text": "Face anti-spoofing (FAS), to distinguish a live face of a genuine user and a spoof face with biometric presentation attacks, is a crucial task that has a remarkable evolution [26,19,4,29,30] to ensure the security of face recognition systems. Most progress is sparked by new features The comparison between prevalent FAS method and our Latent Distribution Adjusting (LDA) method. LDA acquires several cluster centers for each class to capture the inherent mixture distribution of the data. The Spoof class consists of three local clusters marked by blue. Among these clusters, the light blue cluster in the middle is comprised of several Replay Spoof samples. The other Spoof clusters capture much latent information, which is hard to be represented by simple semantic annotations. While there exist another two clusters for Live class marked by red. The prevalent FAS method constrains the samples for each class only by a single center marked by squares. As a consequence, the samples enclosed by the hollow circle are predicted to the wrong class. However, LDA can correct these predictions by assigning several local clusters for each class. and robust architectures for profiling the intrinsic property of FAS. Despite the many efforts, these prior works often consider FAS as a classification problem, where each class (i.e. Spoof or Live) is assumed to merely contain a single cluster optimized by the commonly used softmax loss.\nHowever, in the real world FAS system, each class may contain multiple interior cluster centers, whereas a single center is insufficient to capture the inherent mixture distribution of the data. There is a surge of interest to seek for adjusting the centers of the mixture distribution to boost FAS.\nA snapshot of Spoof and Live distribution 1 is shown in Fig. 1, where the Spoof class marked in blue symbols has three clusters. The light blue cluster has significant semantic representations, i.e. \"Replay\". While there exist two clusters for the Live class, which are represented with red symbols. This toy example provides two observations: 1) A single cluster-center embedding learned by the prevailing softmax loss may fail in a complex data distribution as the samples enclosed by the hollow circle are wrongly predicted due to the closer distance to the wrong class. 2) Not all the clusters could be represented with semantic labels and measured by the semantic supervision. For example, it seems non-trivial to find the disciplines of semantic meaning for two respective Live clusters and two respective Spoof clusters. As far as we know, few approaches consider large distribution discrepancies in the field of FAS. The most related work [15], a domain generalization method, separates the embedding feature space into K Spoof clusters and one Live cluster, where K is pre-defined by human prior. It is unexplainable to consider all the Live data from different domains into one cluster, and a straightforward-defined and non-learnable K cannot guarantee the effectiveness of spoof classification.\nInspired by the above observations, a straightforward solution is introducing the scheme of prototype learning (PL) by modeling complex data distribution with multiple prototypes. 
Prototypes represent each class with several local clusters and thus increase the size of the last fully connected layer only mildly. The toy example in Fig. 1 shows that with prototype learning, the measurement of the distance (dotted line) between a sample and the class center (solid box) is substituted by the distance (solid line) between the sample and the prototype center (solid circle). In this way, the wrongly-predicted sample can be corrected to the right class. Motivated by the theory of prototype learning, we propose a unified framework, named Latent Distribution Adjusting (LDA), by modelling each class of FAS with multiple prototypes in a latent and adaptive way.\nThe proposed LDA is designed with a few unique properties, in comparison with the traditional PL frameworks [36].\n(1) Latent. Some PL methods [27,13] explicitly assign different prototypes to different predefined domains or clusters in several tasks such as Few-Shot Learning [34], Class-Incremental Learning [40], etc. Nevertheless, there exist predefined semantic domains labeled by human knowledge in the FAS datasets, such as Spoof type, illumination condition and environment, input sensor, etc. [45], which causes indistinct definitions of prototypes. Therefore, LDA should assign prototypes implicitly.\n(2) Discriminative. Traditional PL algorithms [36,40,27] mainly concentrate on learning more discriminative cues. Still, for the FAS task, we focus on making the final representation both intra-class compact and inter-class separable. In practical scenarios, the performance of FAS is measured based on thresholds (such as ACER [24] and HTER [10]) rather than merely the classification accuracy, which corresponds to a fixed threshold of 0.5. As a consequence, we find that a stricter intra-class constraint is needed for FAS compared with general classification tasks. Accordingly, we design a margin-based loss to constrain intra-class and inter-class prototypes.\n(3) Adaptive. For most PL methods [36,27], the numbers of prototypes are fixed during training. Due to the large distribution discrepancies of FAS data, it is difficult to manually pre-define an appropriate prototype number in a trade-off between efficiency and effectiveness. To this end, an Adaptive Prototype Selection (APS) scheme is proposed to select the appropriate number of prototype centers adaptively based on the data density of each prototype, so that more data are gathered with fewer prototypes. (4) Generic. With the aforementioned design, the proposed LDA has unique advantages in terms of unseen domain adaptation with very little training data and without retraining, which can achieve improvements at less cost.\nWe conduct empirical evaluations on the proposed LDA and thoroughly examine and analyze the learned prototype representations. Extensive experiments demonstrate that LDA can achieve state-of-the-art results on multiple FAS datasets and benchmarks." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b8", "b9", "b25", "b18", "b28", "b29", "b27", "b17", "b36", "b0", "b20", "b16", "b42", "b14", "b44", "b35", "b35", "b26", "b12", "b33", "b43", "b40" ], "table_ref": [], "text": "Face Anti-Spoofing Methods. Traditionally, many Face Anti-Spoofing methods adopt hand-crafted features to capture the spoof cues, such as LBP [3,9,10,26], HOG [19,38], SURF [4], SIFT [29], DoG [30], etc. Some methods focus on temporal cues, such as eye-blinking [28,32] and lips motion [18]. 
Recently, with the development of deep learning, many methods begin to employ Convolutional Neural Network(CNN) to extract discriminative cues. Yang et al. [37] regrades FAS as a binary classification and perform well. Atoum et al. [1] assists the binary classification with depth map, which is learned from Fully Convolutional Network. Liu et al. [21,22] leverages depth map combined with rPPG signal as the auxiliary supervision. Kim et al. [17] utilize depth map and reflection map as the bipartite auxiliary supervision. Yang et al. [39] combine the spatial information with global temporal information to detect attacks. And Yu et al. [43] . An overview of the proposed framework LDA. LDA generates example embedding and deploys multiple learnable prototype functions in the last fully connected layer for adjusting complex data distribution. The dimension N of example embedding and prototype function is set to 512 in this paper. These embedding are fixed by l2 normalization. P r S/L represents the r th prototype function from the Spoof/Live class. K S/L represents the number of prototype functions of the Spoof/Live class. The prototype prediction is obtained by calculating the inner product of the sample embedding and related prototype function. All of the prototype functions contribute to final decision making by a self-distributed mixture method. Prototype Center Loss (LP C ) is applied for providing distribution constraints for prototype functions. The solid lines denote intra-class regularizer. Inter-class regularizer is marked by the dotted lines. Moreover, we use Adaptive Prototype Selection (APS) algorithm in the inference stage for selecting the appropriate prototype centers adaptively.\n𝑃 ! #! 𝑃 ! \" 𝑃 $ #\" … … … 𝑃 ! \" 𝑃 ! #! 𝑃 ! \" 𝑃 $ #\" … N N … N … N 𝑃 $ #\" Removed 1 X N (2XK) X N(\nvolution to capture intrinsic detailed patterns. Few methods consider the distribution discrepancies. SSDG [15] separate the Spoof samples while aggregate Live ones of different domains to seek a compact and generalized feature space for the Spoof class. AENet [45] constrains the distribution of embedding features of the Live and Spoof examples by multiple auxiliary centers to improve the robustness for the binary classification task. Prototype Learning. Prototype learning [36], claims that the lack of robustness for CNN is caused by the Soft-Max layer, which is a discriminative model. It can improve the intra-class compactness of the feature representation, which can be viewed as a generative model based on the Gaussian assumption. As a flexible tool, prototype learning method [36,27,13] are applied to several tasks, such as Few-Shot Learning [34], Zero-Shot Learning [44], Class-Incremental Learning [40], Object Instance Search in Videos [41], etc." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we give a detailed description of our proposed Latent Distribution Adjusting(LDA). Existing works on FAS either assume each class contains a single cluster optimized by softmax-based loss function or manually defined clusters based on the corresponding dataset, which are insufficient to make the final representation space both intra-class compact and inter-class separable. Our method, Latent Distribution Adjusting (LDA), improves the FAS model's robustness with multiple prototypes by automatically adjusting complex data distribution with the properties of latent, discriminative, adaptive, generic." 
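To make the overall design above concrete, the following is a minimal PyTorch sketch (our own illustration, not the authors' released code) of a prototype-based classification head as described in Fig. 2: K learnable prototype functions per class stored in the last fully connected layer, with l2-normalized embeddings and prototypes so that every prediction is an inner product. The 512-dimensional embedding and the value of K are taken from the figure; all names and the initialization are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    """Last fully connected layer of LDA: K learnable prototypes per class.

    A sketch under the assumptions of Fig. 2: sample embeddings and prototype
    functions are l2-normalized, and each prototype yields one inner-product
    (cosine) prediction.
    """

    def __init__(self, emb_dim: int = 512, num_classes: int = 2, num_prototypes: int = 4):
        super().__init__()
        # (num_classes, K, emb_dim): prototypes act as means of Gaussian components.
        self.prototypes = nn.Parameter(torch.randn(num_classes, num_prototypes, emb_dim))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, emb_dim) coming from the backbone (e.g., ResNet-18).
        f = F.normalize(features, dim=-1)         # ||f_i|| = 1
        p = F.normalize(self.prototypes, dim=-1)  # ||P_j^r|| = 1
        # Cosine similarity of every sample with every prototype: (batch, classes, K).
        return torch.einsum("bd,ckd->bck", f, p)
```

The subsections that follow describe how these per-prototype predictions are mixed into class scores, how the prototype centers are regularized, and how redundant prototypes are pruned at inference time.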
}, { "figure_ref": [], "heading": "Overall Framework", "publication_ref": [], "table_ref": [], "text": "As shown in Fig. 2, the prototype functions refer to the last fully connected layer's components. LDA attempts to model data of each class as a Gaussian mixture distribution, and the prototypes act as the means of Gaussian components for each class. LDA acquires flexible numbers of efficient prototype functions for each class implicitly by forcing them to learn latent cues from image data thoroughly. All prototype functions contribute to the final class prediction. To enhance the intra-class compactness and inter-class discrepancy, we propose the Prototype Center Loss (L P C ) to constrain the distribution of prototype centers in intraclass aspects and inter-class aspects, shown as the solid lines and dotted lines separately. After completing the training stage, we designed Adaptive Prototype Selection (APS) algorithm to adaptively and efficiently select the appropriate prototype centers for different distributions and reduce redundant parameters for LDA." }, { "figure_ref": [], "heading": "LDA Loss", "publication_ref": [ "b35", "b10", "b22", "b32", "b30", "b32", "b10" ], "table_ref": [], "text": "The final objective of LDA contains two parts: a FAS classification loss based on training data and a margin-based loss function that constrains prototype centers. For the convenience of distinction, we name them as Prototype Data Loss and Prototype Center Loss respectively, i.e. L P D and L P C . Prototype Data Loss. Following conventional prototype learning [36], we maintain and learn multiple prototype functions in the embedding space for Live/Spoof class and use prototype matching for classification. We assume Spoof and Live class have equal numbers of K prototypes in the initialization stage for simplicity. These prototype functions are represents as P r j ∈ R N , where j ∈ {0, 1} represents the index of Live/Spoof class, r ∈ {1, 2, ..., K} represents the index of prototype functions within its class. f i ∈ R N denotes the embedding feature of the i-th sample, belonging to the y i -th class.\nThe effectiveness of embedding normalization and weight normalization has been verified in the field of face recognition. Therefore, we utilize the normalization approach to promote the generalization ability of the LDA. Following [11,23,33], we fix prototype P r j = 1 by l 2 normalization. Following [31,33], we fix the example embedding f i = 1 by l 2 normalization and re-scale it to 1/τ . τ is set to 10.0 in our experiments. After these normalization steps, the prototype predictions will be made only based on the angle between example embedding and prototype center.\nIn the classification stage, samples are classified by weighted sum prototype predictions. The class j prediction of the example x i is defined as follows:\ncos θ j = K r=0 e 1 τ f i P r j K r=0 e 1 τ f i P r j f i P r j .(1)\nFollowing [11], adding an angular margin penalty m, between f i and P r j can increase the compactness among samples from the same class and the discrepancy among samples from different classes. By applying the class predictions to cross entropy, we define the L P D as follows:\nL P D (x i ) = -log\ne s(cos(θy i +m )) e s(cos (θy i +m)) + e s(cos θ1-y i ) , (2) where s is a scaling factor. Prototype Center Loss. To enhance the intra-class compactness and inter-class discrepancy, we propose a marginbased Prototype Center Loss (L P C ) to provide distribution constraints. 
L P C consists of two components: one aims to decrease inter-class variance by guaranteeing a relative margin between the intra-class prototypes and inter-class prototypes. The other one constrains the intra-class prototype similarities by another margin penalty to reduce the intra-class variance. According to the observation of the prototype distribution in L P D , prototype centers from different classes may be closer than prototype centers from the same classes. The samples gathered by these prototype centers lead to the case that inter-class variation is smaller than the intra-class variation. Therefore, we utilize an inter-class regularizer to maintain the relationship between the interclass variance and intra-class variance to solve this problem. The constrain is provided by adding a strict margin penalty represented as δ 1 between the highest inter-class prototype similarity and the lowest intra-class prototype similarity. The loss is defined as follows:\nL P C inter (P) = [ max j,r 1 ,r 2 (P r 1 j • P r 2 1-j ) -min j ,r 1 ,r 2 (P r 1 j • P r 2 j ) + δ1]+, (3)\nwhere j, j ∈ {0, 1} represents the class. r 1 , r 2 , r 1 , r 2 ∈ {1, 2, ..., K} represents the index of prototype functions for corresponding class. They subject to r 1 = r 2 and r 1 = r 2 . P represents all prototypes {P r j }. The plus symbol in the bottom right corner means negative values are clamped by zero. From our observation, this method constrains interclass variance between Spoof and Live class; it can develop a solution by compacting the same class prototypes. However, it decreases the effectiveness of multiple prototypes and can even degrade them to a single one. Therefore, the intra-class variance may be affected. To solve this problem, we propose an intra-class regularizer to reduce the whole intra-class prototype pairs' similarity. The loss is defined as follows:\nL P Cintra (P) = 1 j=0 K r=1 K t=r+1 [P r j P t j -δ 2 ] + ,(4)\nwhere δ 2 is the relative margin penalty. Integrating all modules mentioned above, the objective of the proposed LDA for FAS is:\nL LDA = L P D + λ 1 L P Cinter + λ 2 L P Cintra ,(5)\nwhere λ 1 and λ 2 are the balanced parameters. Therefore, LDA is end-to-end trainable." }, { "figure_ref": [], "heading": "Adaptive Prototype Selection(APS)", "publication_ref": [ "b11" ], "table_ref": [], "text": "In our LDA, we train LDA with equal and sufficient K for Spoof and Live class. After completing the training stage for LDA, selecting appropriate prototype centers. Inspired by the traditional DBSCAN algorithm [12], the selection depends on the sample density of the relevant cluster centers. In LDA, the sample density for each prototype center is the number of samples in its region, defined by the distance threshold. As the optimization carry on in normalized embedding space, we utilize the cosine similarity to measure the distance. Those prototype centers with low sample Function DENSITY(P, F , t):\nE ← {}, D ← {} ; for p in P do ← 0, D ← {} ; for f in F do if p f > t then ← + 1, D ← D ∪ f ; end E ← E ∪ ; D ← D ∪ D ; end return E, D;\ndensity mean that they cannot gather sufficient samples in the embedding space. It means that they make few contributions to adjust relevant distribution. To remove them efficiently, we design APS algorithm to extract valid prototype with max density from the candidates continuously.\nIn the initialization stage of APS, we assign one prototype center for the Live and Spoof class separately to ensure the effectiveness of binary classification. 
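The pseudocode above can be read as the inner DENSITY routine of Algorithm 1. A Python rendering of it, together with the greedy selection loop it supports (whose stopping rule is described in the next paragraph), might look as follows. This is an interpretive sketch for a single class: the threshold t and the guarantee of keeping at least one prototype per class are left to the caller.

```python
import numpy as np

def density(prototypes, features, t):
    """DENSITY(P, F, t): for every prototype, the number of samples whose
    cosine similarity with it exceeds t, plus the indices of those samples."""
    counts, covered = [], []
    for p in prototypes:                       # rows assumed l2-normalized
        sims = features @ p                    # cosine similarity in the normalized space
        idx = set(np.nonzero(sims > t)[0].tolist())
        counts.append(len(idx))
        covered.append(idx)
    return counts, covered

def adaptive_prototype_selection(prototypes, features, t):
    """Greedy APS sketch: keep the prototype with the highest remaining density,
    drop the samples it covers, and stop when no candidate gathers any sample."""
    prototypes, features = np.asarray(prototypes), np.asarray(features)
    candidates = list(range(len(prototypes)))
    remaining = set(range(len(features)))
    selected = []
    while candidates:
        _, covered = density(prototypes[candidates], features, t)
        # Only count samples that no previously selected prototype has claimed.
        effective = [len(c & remaining) for c in covered]
        best = int(np.argmax(effective))
        if effective[best] == 0:
            break
        selected.append(candidates.pop(best))
        remaining -= covered[best]
    return selected
```

Prototypes that survive the loop are the ones kept at inference time; the remaining candidates are removed, which is how APS reduces redundant parameters.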
It is wasteful to cover one sample with more than one prototypes. Therefore, after popping the selected prototype from the candidates, all the samples in its region should be popped simultaneously. The selection process will stop when the max density of candidates is zero, or all the prototype centers are popped. The detailed process of APS is shown in Algorithm 1, where F j , t j represent the set of example embedding and density threshold for class j separately. P j , E j , D j represent the -th prototype function, related sample density and sample set of class j. To distinguish the variables between different class, the index 0 and 1 are used to represent the Live and Spoof class, respectively." }, { "figure_ref": [], "heading": "Few-shot Domain Adaptation", "publication_ref": [ "b35" ], "table_ref": [], "text": "Traditionally, several FAS methods improve adaptability to the newly-arrived domain by utilizing domain adaption methods with unlabelled data or fine-tuning the model with labelled data, while LDA can effectively adapt to crossdomain data by leveraging very few labelled training data available in most practical scenarios.\nWe utilize one prototype for each class to demonstrate the embedding distribution of target domain data. Following [36], we use the mean of each class' training data embedding from the target domain as the newly arrived prototype function. In this way, we can then directly extend the FAS method to make predictions for both the source domain and the target domain." }, { "figure_ref": [], "heading": "Semantic Auxiliary for LDA", "publication_ref": [], "table_ref": [], "text": "Additionally, we exploit the auxiliary capacity of rich annotated semantic information for LDA. LDA S learns with auxiliary semantic information and original prototype func-tions jointly. The auxiliary semantic information, i.e. Spoof type {S s k } n k=1 and illumination conditions {S i k } n k=1 are learned via the backbone network followed by additional FC layers. The auxiliary supervision loss L Aux is defined as follows:\nL Aux = λ s L S s + λ i L S i ,(6)\nwhere L S s and L S i are softmax cross entropy losses. Loss weights λ s and λ i are used to balance the contribution of each loss. The loss function of our LDA S is:\nL LDAs = L LDA + λ Aux L Aux ,(7)\nwhere λ Aux is the balanced parameter for auxiliary task.\nExtensive experiment results are shown in Section 4.3." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b5", "b3", "b44", "b13" ], "table_ref": [], "text": "Datasets. Three public FAS datasets are utilized with extensive experiment results to evaluate the effectiveness of our proposed methods: Oulu-NPU [6], SiW [24] and CelebA-Spoof [45].\nMetrics. As for Oulu-NPU, we follow original protocols and evaluate metrics, such as APCER, BPCER and ACER, to comparing our methods fairly. Besides, we also use TPR@FPR for evaluating in CelebA-Spoof. Moreover, Half Total Error Rate (HTER) is adopted during crossdataset evaluation. Implementation Details. We take ResNet-18 [14] as the leading backbone network and pre-train it on ImageNet. The network takes face images as the input with a size of 224×224. It is trained with batch size 1024 on 8 GPUs. In Oulu-NPU experiments, the model is trained with Adam optimizer. The SGD optimizer with the momentum of 0.9 is used for CelebA-Spoof. 
Besides, detailed training procedures including learning rate and the other hyperparameters of the loss functions are provided in the supplementary material." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b34", "b42", "b41", "b16", "b34", "b41", "b42", "b16", "b15", "b3", "b34", "b41", "b42", "b3", "b42", "b15", "b16", "b41", "b34" ], "table_ref": [ "tab_2", "tab_4", "tab_7", "tab_8" ], "text": "To demonstrate the effectiveness of our LDA framework, we explore the roles of multiple prototype centers, Prototype Center Loss and Adaptive Prototype Selection (APS) algorithm. Due to the high quantity, diversity and rich annotation properties of CelebA-Spoof, relevant experiments are conducted on the intra-dataset benchmark of CelebA-Spoof with ACER metric. Implicit Prototype Learning. L P D degenerates to general classification loss when assigning one prototype center for each class. As the green line in Fig. 3 shows, LDA w/o L P C has a significant improvement when increasing the number of prototype centers from 2 to 8, which confirms that L P D is helpful to capture the hidden complex distribution for reducing the intra-class variance. Intra-/Inter-Prototype Constrain. We further study about intra-class compactness and inter-class discrepancy and validate the effect of Prototype Center Loss. Due to LDA reaches the best performance when K is set to 4, our ablation experiments following this setting. Table 1 shows that both the inter-class module and intra-class module of the Prototype Center Loss is useful to improve the classification performance. Furthermore, the combination of these modules can further improve the performance significantly. Moreover, as shown in Fig. 3, compared with the baseline LDA w/o L P C (green-line), both LDA with L P C (yellowline) and LDA with L Aux (red-line) can achieve superior results, which demonstrate the proposed L P C with only latent distribution adjusting (weakly supervised) is comparable with that with auxiliary semantic annotations (fully supervised). Furthermore, their combination (blue-line) is even better proves their complementarity.\nNumber of Prototype Centers. As Fig. 3 shows, the performance fluctuates in a certain range when the number of prototype centers, i.e., K is over-sized. As for LDA without L P C , four prototypes for each class is sufficient for capturing the hidden complex distribution. Nevertheless, it is hard to set a specific K to deal with large discrepancies of Spoof and Live class in different datasets or applications. APS algorithm is proposed to solve this problem. The circles in Fig. 3 show the selection procedure of APS algorithm. It indicates that each class' number of prototypes can be different and adaptive. Moreover, the performance of selected prototypes is within the stable range of manual selection method. Additionally, APS can reduce redundant parameters. Accordingly, our LDA framework can adapt to various applications without manually traversing all the expected settings and over-parameterization. Cross-Domain Test. The cross domain dataset test is carried out on Oulu-NPU, and CelebA-Spoof. Four protocols and two protocols are designed respectively to evaluate the generalization capability of LDA. Besides, to show the effectiveness of semantic auxiliary for LDA, we conduct ablation experiments by the aid of the annotation information from these datasets. Oulu-NPU proposes four protocols to assess the generalization for the FAS methods. 
The semantic information provided by each protocol is different. In protocol I, the train and evaluation set are constructed with different sessions and the same spoof type. Therefore, we utilize the Spoof label as auxiliary information. Following this setting, the session information of protocol II, both the session information and spoof type information of protocol III are utilized for auxiliary task. As shown in Table 3, except for LDA S , LDA ranks the first on all four protocols of Oulu-NPU, which indicates the great generalization ability of our method on different environments conditions, spoof types, and input sensors. LDA S outperforms LDA in two of three protocols, which improves 26.7% and 20.0% for ACER separately. In protocol I, compared to LDA, LDA S causes 0.2% decrease for ACER. As for CelebA-Spoof, Table 4 shows that, compared to state-of-the-art method AENet C,S,G , LDA improves 46.6% and 65.4% on two protocols separately for ACER. Besides, the same significant improvement is implemented for TPR@FPR. Above results 1.2 1.7 1.5 FAS-SGTD [35] 2.0 0.0 1.0 CDCN [43] 0.4 1.7 1.0 BCN [42] 0 2.7 2.7 2.7 BASN [17] 2.4 3.1 2.7 GRADIANT [2] 3.1 1.9 2.5 FAS-SGTD [35] 2.5 1.3 1.9 BCN [42] 2.6 0.8 1.7 CDCN [43] 1.5 1.4 1.5\nLDA 1.0 2.0 1.5 LDA S 1.2 1.0 1.1 3 GRADIANT [2]\n2.6± 3.9 5.0± 5.3 3.8± 2.4 BASN [17] 1.8± 1.1 3.5± 3.5 2.7± 1.6 FaceDS [16] 4.0± 1.8 3.8± 1.2 3.6± 1.6 Auxuliary [24] 2.7± 1.3 3.1± 1.7 2.9± 1.5 FAS-SGTD [35] 3.2± 2.0 2.2± 1.4 2.7± 0.6 BCN [42] 2.8± 2.4 2.3± 2.8 2.5± 1.1 CDCN [43] 2.4± [24] 9.3± 5.6 10.4± 6.0 9.5± 6.0 CDCN [43] 4.6± 4.6 9.2± 8.0 6.9± 2.9 FaceDS [16] 1.2± 6.3 6.1± 5.11 5.6± 5.7 BASN [17] 6.4± 8.6 7.5± 6.9 5.2± 3.7 BCN [42] 2.9± 4.0 7.5± 6.9 5.2± 3.7 FAS-SGTD [35] 6.7± 7. To further improve the performance of fine-tune method, we try to upsample the few training data from the target domain and increase relevant loss weight for leveraging the few data sufficiently. However, increasing the effect of few target domain data cannot promote the adaptability in this case. As for LDA, we follow the adaptation method mentioned as 3.4. The performance comparison between binary supervision fine-tune method and LDA are shown as Table 5. As (1) and (2) show, compared with evaluating on target domain directly, both the fine-tune method and LDA are useful for adapting to the target domain. As (2) shows, compared to the fine-tune method, LDA has better performance both in the known domain and target domain.\n(3) shows that, as the sample size increases, both the fine-tune method and LDA achieve better performance. Furthermore, the effectiveness of LDA is more significant when obtaining more target domain training data." }, { "figure_ref": [ "fig_7", "fig_7" ], "heading": "Further Analysis", "publication_ref": [ "b24" ], "table_ref": [], "text": "Visualization of the Prototype Similarities. In order to observe and explore the effectiveness of Prototype Center Loss, we conduct ablation study and visualize the similarities among prototypes. As Fig. 5 intra-class prototypes. It leads to a decrease of adjustability for LDA. This problem can be solved by introducing intraclass module, which can also maintain inter-class prototype distribution simultaneously as shown in Fig. 5 (c).\nVisualization of the Embedding Space. To demonstrate the embedding space learned by LDA, we adopt t-SNE [25] to show the comparison between prevalent FAS method and our Latent Distribution Adjusting (LDA) method. 
" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we observe and analyze the large distribution discrepancies in the field of FAS. We propose a unified framework called Latent Distribution Adjusting (LDA) to improve the robustness of the FAS model by adjusting complex data distribution with multiple prototype centers. To enhance the intra-class compactness and inter-class dis- " } ]
With the development of deep learning, the field of face anti-spoofing (FAS) has witnessed great progress. FAS is usually considered a classification problem, where each class is assumed to contain a single cluster optimized by softmax loss. In practical deployment, one class can contain several local clusters, and a single-center is insufficient to capture the inherent structure of the FAS data. However, few approaches consider large distribution discrepancies in the field of FAS. In this work, we propose a unified framework called Latent Distribution Adjusting (LDA) with properties of latent, discriminative, adaptive, generic to improve the robustness of the FAS model by adjusting complex data distribution with multiple prototypes. 1) Latent. LDA attempts to model the data of each class as a Gaussian mixture distribution, and acquires a flexible number of centers for each class in the last fully connected layer implicitly. 2) Discriminative. To enhance the intraclass compactness and inter-class discrepancy, we propose a margin-based loss for providing distribution constrains for prototype learning. 3) Adaptive. To make LDA more efficient and decrease redundant parameters, we propose Adaptive Prototype Selection (APS) by selecting the appropriate number of centers adaptively according to different distributions. 4) Generic. Furthermore, LDA can adapt to unseen distribution by utilizing very few training data without re-training. Extensive experiments demonstrate that our framework can 1) make the final representation space both intra-class compact and inter-class separable, 2) outperform the state-of-the-art methods on multiple standard FAS benchmarks.
Latent Distribution Adjusting for Face Anti-Spoofing
[ { "figure_caption": "Figure 1 .1Figure", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure2. An overview of the proposed framework LDA. LDA generates example embedding and deploys multiple learnable prototype functions in the last fully connected layer for adjusting complex data distribution. The dimension N of example embedding and prototype function is set to 512 in this paper. These embedding are fixed by l2 normalization. P r S/L represents the r th prototype function from the Spoof/Live class. K S/L represents the number of prototype functions of the Spoof/Live class. The prototype prediction is obtained by calculating the inner product of the sample embedding and related prototype function. All of the prototype functions contribute to final decision making by a self-distributed mixture method. Prototype Center Loss (LP C ) is applied for providing distribution constraints for prototype functions. The solid lines denote intra-class regularizer. Inter-class regularizer is marked by the dotted lines. Moreover, we use Adaptive Prototype Selection (APS) algorithm in the inference stage for selecting the appropriate prototype centers adaptively.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "s s 6 Figure 3 .63Figure 3. The number of prototype centers refers to the sum of Live ones and Spoof ones. The green, yellow, red and blue line refers to LDA w/o LP C , LDA, LDAS w/o LP C and LDAS separately. The dots pointed by the gray circle show the performance before conducting APS algorithm. Those symbolised by the black circle show the performance and the number of selected prototype centers provided by APS algorithm.", "figure_data": "", "figure_id": "fig_3", "figure_label": "63", "figure_type": "figure" }, { "figure_caption": "(a) shows, all of the similarities are very close. Therefore, the significant diversity between the intra-class variance and inter-class variance does not exist. Inter-class module can separate inter-class prototype effectively as shown in Fig.5 (b). However, this improvement is achieved by sacrificing the discrepancy for", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Comparison between prevalent FAS method and our LDA method in real embedding space. (a) and (b) are the sample distributions of prevalent FAS method and LDA separately. (c) visualizes some samples from these distributions. The first column symbolised by dot rectangle shows the outliers of prevalent FAS method in its embedding space (a). The dot lines show their distribution discrepancy within different embedding spaces. For each sample, the other samples within the same row are its neighbours in LDA embedding space.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 4 (a) shows the performance of prevalent FAS method. Each class is represented by a single cluster. Spoof samples is more dispersed than Live samples. There exist some outliers, which cannot be constrained well by these clusters. Therefore, outliers can be classified into the wrong class easily. LDA can solve this problem by introducing multiple local clusters. As Fig. 4 (b) shows that there are six clusters, two for Live class and four for Spoof class. 
Compared to the prevalent FAS method, the Live samples are more compact, and the Spoof class is represented by several local clusters with low intra-class variance. We sample some outliers of the prevalent FAS method shown as the first column of Fig. 4 (c). For each example, the other examples within the same row are its neighbours in LDA. As the dot lines in Fig. 4 (a) and (b) show, the outliers in the prevalent FAS method are contrained well by local clusters in LDA. Besides, the neighbor examples of these examples have significant semantic commons. Above all, LDA does well in these outliers and is able to learn some semantic pattern.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. The comparison of the prototype similarities driven by the modules of Prototype Center Loss. The Live/Spoof annotation represents the class of corresponding prototypes. Because the diagonal elements correspond to the same prototype on the horizontal and vertical axes, all the values are equal to one. crepancy, we propose a margin-based loss for providing distribution constrains for prototype learning. To make LDA more efficient and decrease redundant parameters, we propose Adaptive Prototype Selection (APS) to select the appropriate prototype centers adaptively according to different domains. Furthermore, LDA can adapt to unseen distribution effectively by utilizing very few training data without re-training. Extensive experimental results on multiple standard FAS benchmarks demonstrate the robustness of proposed LDA framework.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "𝑃 ! \"LiveInnerProduct1 X NInput ImageSpoofSpoofInnerProductInput ImageAPSAlgorithm", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative results of LP C ablation studies. L P D L P C inter L P C intra ACER(%) ↓", "figure_data": "1.030.980.990.87", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of the intra-dataset test on CelebA-Spoof. Bolds are the best results. Underlines are the second best results. We evaluate the intra dataset test on CelebA-Spoof. It is designed to evaluate the overall capability of the proposed method. As shown in Table2, compared to AENet C,S,G , LDA improves 46.6% for ACER and the same significant improvement for TPR@FPR, which indicates a brilliant overall capacity of LDA on a large scale dataset. To show the effectiveness of semantic auxiliary for LDA, we conduct ablation experiments by the aid of the whole annotation information provided by CelebA-Spoof. LDA S outperforms LDA, which improves 13.8% for ACER. It shows the effectiveness of rich annotated semantic for LDA.", "figure_data": "MethodsTPR (%)↑APCER (%)↓ BPCER (%)↓ ACER (%)↓FPR = 1% FPR = 0.5% FPR = 0.1%Auxiliary* [24]97.395.283.25.711.413.56BASN [17]98.997.890.94.01.12.6AENetC,S,G [45]98.997.387.32.290.961.63LDA99.297.790.10.581.170.87LDAS99.598.490.30.570.940.754.3. Comparison with the State-of-the-ArtIntra-Dataset Test.", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results of the cross-domain test on Oulu-NPU. Bolds are the best results. 
Underlines are the second best results.", "figure_data": "Prot.MethodsAPCER(%)↓ BPCER(%)↓ ACER(%)↓GRADIANT [2]1.312.56.9BASN [17]1.55.83.6Auxiliary [24]1.61.61.61FaceDs [16]", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results of the cross-domain test on CelebA-Spoof. Bolds are the best results. Underlines are the second best results.", "figure_data": "53.3± 4.15.0± 2.2LDA2.1± 2.23.9± 5.72.7± 3.3Prot.MethodsTPR (%) ↑APCER (%)↓ BPCER (%)↓ ACER (%)↓FPR = 1% FPR = 0.5% FPR = 0.1%AENetC,S,G [45]95.091.473.64.092.093.091LDA96.994.381.71.411.891.65LDAS97.394.181.31.821.30AENetC,S,G [45]###4.94±3.421.24±0.733.09±2.082LDA###0.98±0.351.14±0.381.07±0.36LDAS###1.09±0.460.95±0.301.02±0.38indicate the great generalization capacity of LDA on largerscale dataset. LDA S outperforms LDA in these protocols,which improves 5.45% and 4.67% separately. It shows theeffectiveness of rich annotated semantic for LDA. In ad-dition, our method achieves comparable results in SiW asshown in the supplementary material.", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of cross-dataset testing between CelebA-Spoof and SiW. Bolds are the best results.Cross-Dataset Test. To further evaluate the generalization ability of LDA based on practical scenarios, we conduct cross dataset test with two protocols. One is training on the CelebA-Spoof and testing on the SiW. As Table5(1) shows, LDA outperforms traditional binary supervision method (ResNet-18 with softmax) in terms of ACER and HTER, which demonstrates its strong direct adaptability for unseen domain data. The second protocol is a few labelled training data from the unseen domain is available for adaption. This protocol is used to evaluate the adaptability of LDA by following practical scenarios, which is mentioned as 3.4. We regard CelebA-Spoof intra-dataset as the known domain called A and SiW intra-dataset as the target domain. We sample very few training samples from SiW intra-dataset as the training data from the target domain. To demonstrate the adaptive performance of our method fairly, we experiment with different sample sizes, such as 30 and 300. Related training data is represented by B 1 and B 2 , respectively. These training data is randomly sampled. To ensure the credibility of experimental results, we conduct five experiments for each sample size and take the average performance of them as the final performance. In this setting, traditional binary supervision (ResNet-18) gather previous training data and a few training data from the target domain as the new training set to fine-tune the model.", "figure_data": "MethodsTrainAPCER(%) ↓ BPCER(%) ↓ ACER(%) ↓HTER(%) ↓(1)ResNet-18 LDAA A1.04 0.691.31 0.841.17 0.7621.89 17.80(2)ResNet-18 A & B1 1.13 ± 0.04 LDA B1 0.48 ± 0.03 0.92 ± 0.03 0.70 ± 0.02 16.93 ± 0.45 1.34 ± 0.09 1.22 ± 0.09 20.71 ± 1.55(3)ResNet-18 A & B2 0.95 ± 0.10 LDA B2 0.49 ± 0.03 0.85 ± 0.02 0.68 ± 0.01 16.13 ± 0.16 1.53 ± 0.08 1.24 ± 0.06 20.50 ± 1.90", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" } ]
Qinghong Sun; Zhenfei Yin; Yichao Wu; Yuanhan Zhang; Jing Shao
[ { "authors": "Yousef Atoum; Yaojie Liu; Amin Jourabloo; Xiaoming Liu", "journal": "IEEE", "ref_id": "b0", "title": "Face anti-spoofing using patch and depth-based cnns", "year": "2017" }, { "authors": "Zinelabdine Boulkenafet; Jukka Komulainen; Zahid Akhtar; Azeddine Benlamoudi; Djamel Samai; Eddine Salah; Abdelkrim Bekhouche; Fadi Ouafi; Abdelmalik Dornaika; Le Taleb-Ahmed; Qin", "journal": "IEEE", "ref_id": "b1", "title": "A competition on generalized software-based face presentation attack detection in mobile scenarios", "year": "2017" }, { "authors": "Zinelabidine Boulkenafet; Jukka Komulainen; Abdenour Hadid", "journal": "IEEE", "ref_id": "b2", "title": "Face anti-spoofing based on color texture analysis", "year": "2015" }, { "authors": "Zinelabidine Boulkenafet; Jukka Komulainen; Abdenour Hadid", "journal": "IEEE Signal Processing Letters", "ref_id": "b3", "title": "Face antispoofing using speeded-up robust features and fisher vector encoding", "year": "2016" }, { "authors": "Zinelabidine Boulkenafet; Jukka Komulainen; Abdenour Hadid", "journal": "TIFS", "ref_id": "b4", "title": "Face spoofing detection using colour texture analysis", "year": "2016" }, { "authors": "Zinelabinde Boulkenafet; Jukka Komulainen; Lei Li; Xiaoyi Feng; Abdenour Hadid", "journal": "IEEE", "ref_id": "b5", "title": "Oulu-npu: A mobile face presentation attack database with real-world variations", "year": "2017" }, { "authors": "Ivana Chingovska; André Anjos; Sébastien Marcel", "journal": "IEEE", "ref_id": "b6", "title": "On the effectiveness of local binary patterns in face antispoofing", "year": "2012" }, { "authors": "Chollet Franc", "journal": "", "ref_id": "b7", "title": "Xception: Deep learning with depthwise separable convolutions", "year": "2017" }, { "authors": "Tiago De; Freitas Pereira; André Anjos; José ; Mario De Martino; Sébastien Marcel", "journal": "Springer", "ref_id": "b8", "title": "Lbp-top based countermeasure against face spoofing attacks", "year": "2012" }, { "authors": "Tiago De; Freitas Pereira; André Anjos; José ; Mario De Martino; Sébastien Marcel", "journal": "ICB", "ref_id": "b9", "title": "Can face anti-spoofing countermeasures work in a real world scenario?", "year": "2013" }, { "authors": "Jiankang Deng; Jia Guo; Stefanos Zafeiriou; Arcface ", "journal": "CoRR", "ref_id": "b10", "title": "Additive angular margin loss for deep face recognition", "year": "2018" }, { "authors": "Martin Ester; Hans-Peter Kriegel; Jörg Sander; Xiaowei Xu", "journal": "", "ref_id": "b11", "title": "Density-based spatial clustering of applications with noise", "year": "1996" }, { "authors": "Samantha Guerriero; Barbara Caputo; Thomas Mensink", "journal": "", "ref_id": "b12", "title": "Deep nearest class mean classifiers", "year": "2018" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b13", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Yunpei Jia; Jie Zhang; Shiguang Shan; Xilin Chen", "journal": "", "ref_id": "b14", "title": "Single-side domain generalization for face anti-spoofing", "year": "2020-06" }, { "authors": "Amin Jourabloo; Yaojie Liu; Xiaoming Liu", "journal": "", "ref_id": "b15", "title": "Face despoofing: Anti-spoofing via noise modeling", "year": "2018" }, { "authors": "Taewook Kim; Yonghyun Kim; Inhan Kim; Daijin Kim", "journal": "ICCVW", "ref_id": "b16", "title": "Basn: Enriching feature representation using bipartite auxiliary supervisions for face anti-spoofing", "year": "2019" }, { "authors": 
"Klaus Kollreider; Hartwig Fronthaler; Maycel Isaac Faraj; Josef Bigun", "journal": "TIFS", "ref_id": "b17", "title": "Real-time face detection and motion analysis with application in \"liveness\" assessment", "year": "2007" }, { "authors": "Jukka Komulainen; Abdenour Hadid; Matti Pietikäinen", "journal": "IEEE", "ref_id": "b18", "title": "Context based face anti-spoofing", "year": "2013" }, { "authors": "Jiangwei Li; Yunhong Wang; Tieniu Tan; Anil K Jain", "journal": "International Society for Optics and Photonics", "ref_id": "b19", "title": "Live face detection based on the analysis of fourier spectra", "year": "2004" }, { "authors": "Si-Qi Liu; Xiangyuan Lan; Pong C Yuen", "journal": "", "ref_id": "b20", "title": "Remote photoplethysmography correspondence feature for 3d mask face presentation attack detection", "year": "2018" }, { "authors": "Siqi Liu; Shengping Pong C Yuen; Guoying Zhang; Zhao", "journal": "Springer", "ref_id": "b21", "title": "3d mask face anti-spoofing with remote photoplethysmography", "year": "2016" }, { "authors": "Weiyang Liu; Yandong Wen; Zhiding Yu; Ming Li; Bhiksha Raj; Le Song", "journal": "", "ref_id": "b22", "title": "Sphereface: Deep hypersphere embedding for face recognition", "year": "2017" }, { "authors": "Yaojie Liu; Amin Jourabloo; Xiaoming Liu", "journal": "", "ref_id": "b23", "title": "Learning deep models for face anti-spoofing: Binary or auxiliary supervision", "year": "2018" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b24", "title": "Visualizing data using t-sne", "year": "2008-11" }, { "authors": "Jukka Määttä; Abdenour Hadid; Matti Pietikäinen", "journal": "IEEE", "ref_id": "b25", "title": "Face spoofing detection from single images using micro-texture analysis", "year": "2011" }, { "authors": "Pascal Mettes; Elise Van Der Pol; Cees Snoek", "journal": "", "ref_id": "b26", "title": "Hyperspherical prototype networks", "year": "2019" }, { "authors": "Gang Pan; Lin Sun; Zhaohui Wu; Shihong Lao", "journal": "IEEE", "ref_id": "b27", "title": "Eyeblink-based anti-spoofing in face recognition from a generic webcamera", "year": "2007" }, { "authors": "Keyurkumar Patel; Hu Han; Anil K Jain", "journal": "TIFS", "ref_id": "b28", "title": "Secure face unlock: Spoof detection on smartphones", "year": "2016" }, { "authors": "Bruno Peixoto; Carolina Michelassi; Anderson Rocha", "journal": "IEEE", "ref_id": "b29", "title": "Face liveness detection under bad illumination conditions", "year": "2011" }, { "authors": "Rajeev Ranjan; Carlos D Castillo; Rama Chellappa", "journal": "", "ref_id": "b30", "title": "L2-constrained softmax loss for discriminative face verification", "year": "2017" }, { "authors": "Lin Sun; Gang Pan; Zhaohui Wu; Shihong Lao", "journal": "Springer", "ref_id": "b31", "title": "Blinking-based live face detection using conditional random fields", "year": "2007" }, { "authors": "Feng Wang; Jian Cheng; Weiyang Liu; Haijun Liu", "journal": "IEEE Signal Processing Letters", "ref_id": "b32", "title": "Additive margin softmax for face verification", "year": "2018" }, { "authors": "Kaixin Wang; Jun Hao Liew; Yingtian Zou; Daquan Zhou; Jiashi Feng", "journal": "", "ref_id": "b33", "title": "Panet: Few-shot image semantic segmentation with prototype alignment", "year": "2019-10" }, { "authors": "Zezheng Wang; Zitong Yu; Chenxu Zhao; Xiangyu Zhu; Yunxiao Qin; Qiusheng Zhou; Feng Zhou; Zhen Lei", "journal": "", "ref_id": "b34", "title": "Deep spatial gradient and temporal depth 
learning for face anti-spoofing", "year": "2020" }, { "authors": "Hong-Ming Yang; Xu-Yao Zhang; Fei Yin; Cheng-Lin Liu", "journal": "", "ref_id": "b35", "title": "Robust classification with convolutional prototype learning", "year": "2018-06" }, { "authors": "Jianwei Yang; Zhen Lei; Stan Z Li", "journal": "", "ref_id": "b36", "title": "Learn convolutional neural network for face anti-spoofing", "year": "2014" }, { "authors": "Jianwei Yang; Zhen Lei; Shengcai Liao; Stan Z Li", "journal": "IEEE", "ref_id": "b37", "title": "Face liveness detection with component dependent descriptor", "year": "2013" }, { "authors": "Xiao Yang; Wenhan Luo; Linchao Bao; Yuan Gao; Dihong Gong; Shibao Zheng; Zhifeng Li; Wei Liu", "journal": "", "ref_id": "b38", "title": "Face antispoofing: Model matters, so does data", "year": "2019" }, { "authors": "Lu Yu; Bartlomiej Twardowski; Xialei Liu; Luis Herranz; Kai Wang; Yongmei Cheng; Shangling Jui; Joost Van De Weijer", "journal": "", "ref_id": "b39", "title": "Semantic drift compensation for class-incremental learning", "year": "2020-06" }, { "authors": "Tan Yu; Yuwei Wu; Junsong Yuan", "journal": "", "ref_id": "b40", "title": "Hope: Hierarchical object prototype encoding for efficient object instance search in videos", "year": "2017-07" }, { "authors": "Zitong Yu; Xiaobai Li; Xuesong Niu; Jingang Shi; Guoying Zhao", "journal": "", "ref_id": "b41", "title": "Face anti-spoofing with human material perception", "year": "2020" }, { "authors": "Zitong Yu; Chenxu Zhao; Zezheng Wang; Yunxiao Qin; Zhuo Su; Xiaobai Li; Feng Zhou; Guoying Zhao", "journal": "", "ref_id": "b42", "title": "Searching central difference convolutional networks for face anti-spoofing", "year": "2020" }, { "authors": "Xingxing Zhang; Shupeng Gui; Zhenfeng Zhu; Yao Zhao; Ji Liu", "journal": "IEEE Transactions on Multimedia", "ref_id": "b43", "title": "Hierarchical prototype learning for zero-shot recognition", "year": "2019" }, { "authors": "Yuanhan Zhang; Zhenfei Yin; Yidong Li; Guojun Yin; Junjie Yan; Jing Shao; Ziwei Liu", "journal": "", "ref_id": "b44", "title": "Celeba-spoof: Large-scale face anti-spoofing dataset with rich annotations", "year": "2020" }, { "authors": "Zhiwei Zhang; Junjie Yan; Sifei Liu; Zhen Lei; Dong Yi; Stan Z Li", "journal": "ICB", "ref_id": "b45", "title": "A face antispoofing database with diverse attacks", "year": "2012" } ]
[ { "formula_coordinates": [ 3, 145.24, 98.58, 254.6, 185.78 ], "formula_id": "formula_0", "formula_text": "𝑃 ! #! 𝑃 ! \" 𝑃 $ #\" … … … 𝑃 ! \" 𝑃 ! #! 𝑃 ! \" 𝑃 $ #\" … N N … N … N 𝑃 $ #\" Removed 1 X N (2XK) X N(" }, { "formula_coordinates": [ 4, 98, 470.7, 188.36, 30.66 ], "formula_id": "formula_1", "formula_text": "cos θ j = K r=0 e 1 τ f i P r j K r=0 e 1 τ f i P r j f i P r j .(1)" }, { "formula_coordinates": [ 4, 64.31, 583.22, 72.42, 17.29 ], "formula_id": "formula_2", "formula_text": "L P D (x i ) = -log" }, { "formula_coordinates": [ 4, 312.94, 233.75, 232.17, 17.05 ], "formula_id": "formula_3", "formula_text": "L P C inter (P) = [ max j,r 1 ,r 2 (P r 1 j • P r 2 1-j ) -min j ,r 1 ,r 2 (P r 1 j • P r 2 j ) + δ1]+, (3)" }, { "formula_coordinates": [ 4, 333.66, 436.73, 211.46, 30.32 ], "formula_id": "formula_4", "formula_text": "L P Cintra (P) = 1 j=0 K r=1 K t=r+1 [P r j P t j -δ 2 ] + ,(4)" }, { "formula_coordinates": [ 4, 336.89, 523.99, 208.22, 17.29 ], "formula_id": "formula_5", "formula_text": "L LDA = L P D + λ 1 L P Cinter + λ 2 L P Cintra ,(5)" }, { "formula_coordinates": [ 5, 77.41, 93.21, 129.7, 107.85 ], "formula_id": "formula_6", "formula_text": "E ← {}, D ← {} ; for p in P do ← 0, D ← {} ; for f in F do if p f > t then ← + 1, D ← D ∪ f ; end E ← E ∪ ; D ← D ∪ D ; end return E, D;" }, { "formula_coordinates": [ 5, 375.71, 134.02, 169.4, 17.29 ], "formula_id": "formula_7", "formula_text": "L Aux = λ s L S s + λ i L S i ,(6)" }, { "formula_coordinates": [ 5, 363.34, 193.28, 181.77, 17.29 ], "formula_id": "formula_8", "formula_text": "L LDAs = L LDA + λ Aux L Aux ,(7)" }, { "formula_coordinates": [ 7, 59.66, 270.69, 209.22, 64.28 ], "formula_id": "formula_9", "formula_text": "LDA 1.0 2.0 1.5 LDA S 1.2 1.0 1.1 3 GRADIANT [2]" } ]
2023-08-09
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b47", "b17", "b5", "b33", "b44", "b30", "b40", "b41", "b27", "b39", "b2", "b15", "b40", "b32", "b13", "b7", "b28", "b12", "b50", "b33", "b20", "b24", "b51", "b3", "b33", "b24", "b51", "b36", "b25", "b18" ], "table_ref": [], "text": "Training effective text classification models often relies on sufficient precisely labeled data. However, it is common in some real-world scenarios that collecting plenty of valid text data is difficult, e.g., requiring considerable effort from human annotators. Enabling the model to learn efficiently with limited resources therefore becomes a practical need and hotspot in the industry and research communities. For example, text classification in few-shot learning that trains on small tasks composed of a few examples and tests on new tasks of unseen classes (Yu et al. 2018;Geng et al. 2019;Chen et al. 2022), semi-supervised learning that provides a few labeled texts and plenty of unlabeled texts for training (Miyato, Dai, and Goodfellow 2017;Xie et al. 2020;Lee, Ko, and Han 2021) and the low-resource regime that also provides a few labeled texts in training but no unlabeled texts available (Wei and Zou 2019;Wu et al. 2022). Data augmentation is widely used in these tasks to increase data size and boost training and is often evaluated in the lowresource regime, which we will focus on in this paper. Text data augmentation is challenged by the discrete nature and complex semantics of texts. Despite the challenges, two branches of textual data augmentation techniques have been explored and demonstrated effective: contextual augmentation and representational augmentation. The contextual augmentation methods augment the texts by replacing, inserting, deleting or swapping words (Kolomiyets, Bethard, and Moens 2011;Wang and Yang 2015;Artetxe et al. 2018;Gao et al. 2019;Wei and Zou 2019;Miao et al. 2020), or paraphrasing the original texts (Edunov et al. 2018;Chen et al. 2019;Kumar et al. 2019;Dopierre, Gravier, and Logerais 2021). For example, in the upper part of Figure 1, given the text \"It's a good movie that acts exquisitely,\" one of its contextual augmentations may be \"It's a good film with exquisite acting.\" The contextual augmentations are semantically interpretable because the modifications are expected to keep semantic consistency with the original texts. However, such heuristic modifications may produce simple data augmentations that bring little improvements or even degraded performance (Zhang, Zhao, and LeCun 2015).\nThe representational augmentation methods generate augmented inputs by interpolation or perturbation of the word embedding or text representations (Miyato, Dai, and Goodfellow 2017;Hsu, Tang, and Glass 2018;Wu et al. 2019b;Chen et al. 2020;Jiang et al. 2020;Zhu et al. 2020;Chen et al. 2021). One most intensively used perturbation method is adversarial data augmentation, which creates augmented examples by adding adversarial noise to the original representations (Miyato, Dai, and Goodfellow 2017;Jiang et al. 2020;Zhu et al. 2020). As shown in the lower part of Figure 1, since the adversarial noise is generated by gradients minimizing the model objective, the adversarial data augmentations could be regarded as hard positive examples. However, the augmentations lack semantic interpretability because the noise is semantic-independent and the perturbed representations no longer represent any valid word. 
Moreover, adversarial perturbation requires gradient computation for each example, making it inflexible to extend to new examples.\nHard positive examples have been proven to be effective in improving model robustness (Schroff, Kalenichenko, and Philbin 2015;Khosla et al. 2020). Thus, we hope to develop a data augmentation method that can produce hard positive examples without losing interpretability and extensibility. Based on this objective, we propose to generate hard data augmentations by weighted mixing the embedding of strong positive words with unknown-word embedding. Our motivation is to dilute the strong positive words, making the texts become more neutral and turn into hard positive examples. For example, the sentence \"It's a good movie that acts exquisitely\" expresses the positive sentiment supported by strong positive words good and exquisitely. Diluting their expressiveness makes the sentence become harder because the sentence semantics becomes less positive. Our Word dilution method is a practical realization since it is easy to implement and well interpretable. The remaining challenge is how to acquire the dilution weight assigned to each word.\nTo not struggle to estimate the dilution weights manually or heuristically, motivated by Generative Adversarial Networks (Goodfellow et al. 2014), we automatically learn the dilution weights and train the classifier by the constrained adversarial optimization process. Specifically, we introduce neural networks (i.e., dilution networks) to produce the dilution weights. At the inner-loop step, we fix the classifier and learn the dilution weights by maximizing the loss. At the outer-loop step, we fix the dilution weights and train the classifier by minimizing the loss with augmented inputs. We also use separate dilution networks for different classes to guide the dilution-weight learning process with the label information. As the dilution networks are learned independent of the classifier, they can be extended to compute dilution weights for new examples without further training.\nTo summarize, our work makes the following contributions. (1) We propose AWD data augmentation that generates hard positive examples by diluting strong positive words with unknown-word embedding. (2) We adversarially learn the dilution weights by the constrained min-max optimization with the guidance of the labels. (3) We empirically show that AWD outperforms the state-of-the-art data augmentations and demonstrates its interpretability and extensibility." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b40", "b21", "b29", "b41", "b27", "b39", "b32", "b22", "b45", "b2", "b23", "b1", "b13", "b7", "b28", "b12", "b14", "b26", "b0", "b46", "b48", "b19", "b49", "b32", "b9", "b4", "b33", "b51", "b3", "b20", "b35", "b8", "b16" ], "table_ref": [], "text": "Low-resource text classification aims to learn a classification model from a few labeled examples. The earlier work EDA (Wei and Zou 2019) generates text augmentations by random replacement, insertion, swap, deletion of words and trains the model on a small fraction of the original training data. The recent work explores manipulating the text data by augmenting and weighting data examples (Hu et al. 2019). Another recent work studies text data augmentations using different pre-trained transformer models (Kumar, Choudhary, and Cho 2020). The text smoothing model converts the text data from one-hot representations to controllable smoothed representations (Wu et al. 2022). 
Following these works, we also evaluate our text augmentation method AWD in low-resource text classification.\nText data augmentation techniques can be divided into contextual augmentation approaches and representational augmentation approaches. Besides EDA, contextual augmentation approaches also include works that substitute words using synonyms (Kolomiyets, Bethard, and Moens 2011;Wang and Yang 2015;Miao et al. 2020), randomly delete, insert, replace and swap words (Iyyer et al. 2015;Xie et al. 2017;Artetxe et al. 2018), replace or re-combine fragments of the sentences (Jia and Liang 2016;Andreas 2020), paraphrase the original text (Edunov et al. 2018;Chen et al. 2019;Kumar et al. 2019;Dopierre, Gravier, and Logerais 2021), replace or generate words using language models (Fadaee, Bisazza, and Monz 2017;Kobayashi 2018;Anaby-Tavor et al. 2020;Yang et al. 2020).\nThe representational augmentation approaches generate augmented inputs by interpolation or perturbation of the representations, i.e., word embedding or text representations. One popular interpolation-based representational augmentation method is Mixup (Zhang et al. 2018) which generates augmented examples by linear interpolations of the pair of data points and labels. This method has recently been intensively used in NLP tasks (Guo, Kim, and Rush 2020;Zhang, Yu, and Zhang 2020). For example, some works interpolate the original data with word-level augmented data (Miao et al. 2020) or adversarial data (Cheng et al. 2020), or interpolate the hidden representations of the original texts and augmented texts (Chen, Yang, and Yang 2020). Adversarial perturbation is often used to generate perturbation-based text augmentations, e.g., applying adversarial perturbations to the word embeddings (Miyato, Dai, and Goodfellow 2017;Zhu et al. 2020;Chen et al. 2020Chen et al. , 2021) ) or sentence representations (Hsu, Tang, and Glass 2018;Wu et al. 2019b).\nImproving the interpretability of adversarial perturbation methods is challenging. Recent works in this direction guide word modification by adversarial training instead of directly applying perturbation to the representations (Ren et al. 2019;Cheng, Jiang, and Macherey 2019;Garg and Ramakrishnan 2020). Although these methods can produce interpretable examples, the generated data augmentations sacrifice some adversarial nature by replacing the optimal adversarial representations with approximated word embeddings. They also need adversarial modification on each example, making them inflexible to extend to new examples. As the main challenge in this task is to deal with the overfitting problem caused by a few training examples, the previous works focus on designing various data augmentations as supplementary data to improve the model's robustness. In this paper, we follow this direction and concentrate our topic on text data augmentations in the low-resource regime." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we first make a general illusion of our word dilution method and the challenges in dilution weights acquisition, then introduce our systematic solution that learns the dilution weights with neural networks and trains with the text classifier by adversarial min-max optimization." 
}, { "figure_ref": [ "fig_3" ], "heading": "General Illusion of Word Dilution", "publication_ref": [], "table_ref": [], "text": "To generate hard positive examples as text data augmentations while keeping their semantic interpretability, we need to make word modification dependent on their semantics and simultaneously let the modified text hard to be recognized as positive by the text classifier. Considering that the polarity of a text is significantly determined by some emblematic words. As shown from the example in Figure 2, the emblematic words \"good\" and \"exquisitely\" indicate that the sentence \"It's a good movie that acts exquisitely\" expresses the positive sentiment. We call such emblematic words strong positive words. When we weaken the expressiveness of these strong positive words, the semantics of the text becomes more neutral and harder to be recognized by the classifier. Word Dilution For an input text example x i ∈ D, let the sequence {w i1 , w i2 , • • • , w ini } be the set of all words that consist of the text, where n i denotes the total number of words in x i . For each word w ij in x i , we can obtain its word embedding w ij ∈ R d by an embedding function e. We define the embedding function and obtain w ij as follows\nw ij = e(w ij ; E),(1)\nwhere E ∈ R |V|×d is the parameters of the embedding function e, i.e., the embedding matrix, which is learnable during training. And |V| denotes the vocabulary size. In a neural classification model, the word-embedding sequence {w i1 , w i2 , • • • , w ini } of the input text is sent to the classifier and makes predictions. Word dilution intends to create hard positive examples challenging the classifier by weighted mixing the input word embeddings with unknownword embedding. Specifically, we use the mark unk to denote the unknown word, whose embedding can also be obtained with the function e. We assign a dilution weight α ij ∈ [0, 1] for each word w ij according to its relevance to the text polarity and compute the diluted word wij by\nwij = (1 -α ij )e(w ij ; E) + α ij e(unk; E)(2)\nAfter getting all diluted word embeddings wij , we can treat the sequence { wi1 , wi2 , • • • , wini } (simplified as { wij }) as the data augmentation of the original input word-embedding sequence to further train the text classification model." }, { "figure_ref": [], "heading": "Challenges in Dilution-Weight Acquisition", "publication_ref": [], "table_ref": [], "text": "The acquisition of dilution weights suffers two challenges.\n(1) The dilution weights can be obtained from experience or heuristic methods, e.g., manually selecting the strong positive words for each text or heuristically finding them according to the semantic relevance of the word embeddings. However, these methods are usually laborious and inflexible to extend to new examples. (2) An appropriate α ij is essential to guarantee the effect of word dilution, but it is difficult to decide. We expect the high α ij for a word closely related to the polarity of the text, i.e., the strong positive word; so that the expressiveness of these words is diluted significantly according to Equation (2). However, this does not mean that assigning a highest α ij for each strong positive word is the best because an extremely high α ij may completely obliterate the semantics of the strong positive word and make the input text meaningless or even become a false positive example. Such meaningless data augmentations will no longer be considered as positive examples and may harm the model training. 
To tackle the challenges, our solution is to adversarially learn the dilution weights with neural networks through a constrained min-max optimization process." }, { "figure_ref": [ "fig_4" ], "heading": "Systematic Solution: Adversarial Word Dilution", "publication_ref": [], "table_ref": [], "text": "The optimization of our adversarial word dilution (AWD) is shown in Figure 3, which consists of training the classifier with original inputs and adversarially learning the dilution weights and optimizing the classifier with augmented inputs. \nwhere ℓ(x i , y i ) is the loss corresponds to the i th example in D and Φ denotes the set of parameters {ϕ y : y ∈ Y}." }, { "figure_ref": [], "heading": "Dilution Networks", "publication_ref": [ "b18" ], "table_ref": [], "text": "To produce text data augmentations through word dilution, we propose the label-guided dilution networks to obtain the dilution weights {α i1 , α i2 , • • • , α ini } (simplified as {α ij }). Our solution is two-fold: on the one hand, to prevent obtaining the dilution weights by laborious experience or heuristic methods as well as enable the data augmentations to flexibly extend to new examples, we attempt to learn the dilution weights using neural networks; on the other hand, to allow the dilution weights to be learned in accordance with the semantics of different classes, we use separate neural networks for each class and train the dilution weights guided by the labels. Specifically, for each y ∈ Y, we use a Multilayer Perceptron (MLP) parameterized by θ y to produce the dilution weights for all words in the input text, i.e., the dilution weight of w ij is computed by\nα ij = σ(θ T yi e(w ij ; E) + b yi ), (4\n)\nwhere σ is the logistic function and b yi is bias. For convenience, we denote all parameters {θ y , b y : y ∈ Y} as Θ.\nAdversarial Optimization We leverage the min-max optimization derived form GAN (Goodfellow et al. 2014) to adversarially update the dilution weights and train the text classifier with augmented inputs. To that end, we first modify the classification loss in Equation ( 3) to include the dilution weights {α ij } and augmented inputs { wij } as follows\nL a (D; E,Φ,Θ) = k×|Y| i=1 ℓ a (x i , y i , {α ij }; E, Φ, Θ) = - k×|Y| i=1 log exp(s({ wij }, ϕ yi )) y∈Y exp(s({ wij }, ϕ y ))(5)\nTo learn appropriate dilution weights, we expect not to generate extreme {α ij }. Thus, for each i, we optimize the dilution networks under the following condition\nR i ({α ij }; Θ) = ∥{α ij }∥ 1 -ρn i ⩽ 0 (6)\nwhere ∥{α ij }∥ 1 = ni j=1 |α ij | and ρ ∈ (0, 1) controls the amount of dilution allowed. This constraint guarantees that no more than ρ fraction of words are diluted.\nBased on Equations ( 5) and ( 6), we have the following constrained min-max optimization problem\nmin E,Φ max Θ L a (D; E, Φ, Θ) s.t. k×|Y| i=1 R i ({α ij }; Θ) ⩽ 0 (7)\nNote that the inner-loop constrained maximization problem can be converted to an unconstrained problem via Penalty Function method, then the optimization problem turns to\nmin E,Φ max Θ [L a (D;E,Φ,Θ)-λ k×|Y| i=1 max(R i ({α ij }; Θ), 0)], (8)\nwhere λ ⩾ 0 is the weight of the constraint item.\nIt is easy to control the dilution-weight range by modifying ρ. However, as ∥{α ij }∥ 1 ⩽ ρn i is a strict constraint, it may force the model to produce undesirable lower dilution weights for all words. 
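Putting the pieces together, the per-class dilution networks of Equation (4), the augmented-input loss of Equation (5) and the penalized strict objective of Equation (8) can be sketched as follows, together with one round of the alternating maximization and minimization. The toy mean-pooling classifier, layer sizes and optimizer settings are illustrative assumptions (the paper uses BERT as the classifier) rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilutionNetworks(nn.Module):
    """Per-class scorers for Eq. (4): alpha_ij = sigmoid(theta_{y_i}^T e(w_ij; E) + b_{y_i})."""
    def __init__(self, num_classes, dim):
        super().__init__()
        self.scorers = nn.ModuleList([nn.Linear(dim, 1) for _ in range(num_classes)])

    def forward(self, word_embs, labels):
        # word_embs: (B, L, d), labels: (B,)  ->  alphas: (B, L)
        rows = [torch.sigmoid(self.scorers[int(y)](word_embs[i])).squeeze(-1)
                for i, y in enumerate(labels)]
        return torch.stack(rows)

class ToyClassifier(nn.Module):
    """Stand-in for the BERT classifier: embedding + mean pooling + linear head."""
    def __init__(self, vocab, dim, num_classes, unk_id):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, num_classes)
        self.unk_id = unk_id

    def logits_from_embs(self, embs):
        return self.head(embs.mean(dim=1))

def strict_objective(clf, dil, ids, labels, lam=1.0, rho=0.3):
    """Loss on diluted inputs (Eq. 5) minus the penalty on the L1 budget of Eq. (6), as in Eq. (8)."""
    embs = clf.emb(ids)
    alphas = dil(embs, labels)
    a = alphas.unsqueeze(-1)
    diluted = (1 - a) * embs + a * clf.emb.weight[clf.unk_id]          # Eq. (2)
    loss_aug = F.cross_entropy(clf.logits_from_embs(diluted), labels)  # Eq. (5)
    penalty = (alphas.sum(dim=1) - rho * alphas.size(1)).clamp(min=0).sum()
    return loss_aug - lam * penalty

# One adversarial round on toy data; the supervised update on the original inputs is omitted.
clf = ToyClassifier(vocab=1000, dim=64, num_classes=2, unk_id=1)
dil = DilutionNetworks(num_classes=2, dim=64)
opt_clf = torch.optim.Adam(clf.parameters(), lr=5e-4)
opt_dil = torch.optim.Adam(dil.parameters(), lr=1e-2)
ids, labels = torch.randint(2, 1000, (4, 12)), torch.randint(0, 2, (4,))

opt_dil.zero_grad(); (-strict_objective(clf, dil, ids, labels)).backward(); opt_dil.step()  # max over Theta
opt_clf.zero_grad(); strict_objective(clf, dil, ids, labels).backward(); opt_clf.step()     # min over E, Phi
```

Maximizing this objective drives the dilution weights toward harder augmented inputs, while the penalty term discourages diluting more than a ρ fraction of the words, as required by Equation (6).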
To encourage generating some high dilution weights, we loose the strict constraint as \nmin E,Φ max Θ [L a (D;E,Φ,Θ)-γ k×|Y| i=1 ∥{α ij }∥ 1 1 n i ].(9" }, { "figure_ref": [], "heading": "Training Text Classification Model with AWD", "publication_ref": [], "table_ref": [], "text": "We train the text classification model with adversarial word dilution by iteratively executing three optimization steps shown in Algorithm 1 (to describe the process easier, we omit the constraint items in ( 8) and ( 9)): (1) input the original data and minimize L c (D; E, Φ); (2) fix E, Φ and update Θ by maximizing L a (D; E, Φ, Θ), compute dilution weights and generate augmented data; (3) fix Θ, input the augmented data and update E, Φ by minimizing L a (D; E, Φ, Θ). We perform one SGD updating at each optimization step.\nFor convenience, we name the model optimized with the strict constraint term in (8) as AWD(strict) and loosed constraint item in (9) as AWD or AWD(loose), respectively." }, { "figure_ref": [], "heading": "Experiment Datasets and Experiment Setting", "publication_ref": [ "b40", "b21", "b29", "b41", "b38", "b34", "b31", "b10" ], "table_ref": [], "text": "Following the previous works in low-resource text calssification (Wei and Zou 2019;Hu et al. 2019;Kumar, Choudhary, and Cho 2020;Wu et al. 2022), we evaluate our data augmentations on three benchmarks: SST, TREC, and SNIPS.\nSST-2 Stanford Sentiment Treebank (SST) (Socher et al. 2013) is a sentiment classification dataset. The text data are extracted from movie reviews of the rottentomatoes.com website and is originally collected by (Pang and Lee 2005). The texts involve 2 classes, i.e., positive and negative.\nTREC TREC (Li and Roth 2002) is a fine-grained question classification dataset. The text data is collected from USC, TREC 8, TREC 9 and TREC 10 questions. The 500 questions from TREC 10 are used for test. This dataset contains 6 question types (including person, location, etc.).\nSNIPS SNIPS (Coucke et al. 2018) is an English dataset for natural language understanding, which is widely used in intent classification. The text data is collected from crowdsourced queries. This dataset contains 7 user intents from different domains (such as movie, book, etc.)." }, { "figure_ref": [], "heading": "Experiment Setting", "publication_ref": [ "b41", "b41", "b33" ], "table_ref": [], "text": "We use the datasets provided by (Wu et al. 2022). To simulate low-resource text classification, we randomly select k = 10, 20, 50 examples for each class as the training sets. The training set withk = 10, the validation set and the test set are the same in (Wu et al. 2022) (Miyato, Dai, and Goodfellow 2017).\nImplementation We implement our AWD model using Pytorch deep learning framework1 . The BERT-uncased Base model is used as the text classifier. The dimension of the word embedding d is set to 768. The dilution network for each class is implemented using an MLP followed by a sigmoid activation function. We train our model using Adam optimizer with default configurations. The learning rate is set to 5 × 10 -4 . We train AWD and each baseline model for 30 epochs. We repeat all experiments 15 times and report their mean accuracy. We tune the hyper-parameters by grid search on the validation set. The hyper-parameter for training AWD(strict) is λ = 1 and ρ = 0.3, 0.5, 0.3 for respect k=10, 20, 50. When training AWD(strict), we perform 5 SGD updates in a dilution-network optimization step with a learning rate of 0.01. 
The hyper-parameter for training AWD(loose) is γ = 0.005. All experiments are conducted on an NVIDIA Tesla P100 GPU with 16GB memory." }, { "figure_ref": [], "heading": "Low-Resource Text Classification Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_2", "tab_4", "tab_4", "tab_4" ], "text": "The low-resource text classification results of AWD and baseline models are shown in Table 2. The results show that most contextual data augmentation methods and representational data augmentation methods improve upon the BERT baselines. The representational data augmentations are relatively better than the contextual data augmentation methods. These observations manifest the effectiveness of representational data augmentations. Compared to existing representational data augmentation methods Mixup, Textsmooth and ADV, our AWD(strict) and AWD(loose) achieve better results in all settings on SST-2, TREC, Average and in the k = 50 setting on SNIPS. These results demonstrate that our methods AWD(strict) and AWD(loose) significantly improve generalization in lowresource text classification and build themselves as the new state-of-the-art methods of text data augmentation. BERT 62.2(7.1) 71.9(4.8) 79.7(4.3) 72.1(10.5) 82.9(4.9) 88.4(3.4) 90.6(1.3) 92.9(1.4) 94.7(0.9) 74.9(6.3) 82.6(3.7) 87.6(2.9) EDA 62.7(8.5) 71.5(6.1) 80.3(3.1) 75.0(7.5) 80.8(4.6) 86.6(3.9) 90.2(2.0) 93.1(1.2) 94.2(1.3) 75.9(6.0) 81.8(4.0) 87.1(2.8) BT 64.0(7.8) 70.1(6.8) 80.8(3.8) 73.7(7.8) 82.1(5.9) 88.5(2.3) 91.0(1.5) 93.2(0.9) 94.6(0.8) 76.3(5.7) 81.8(4.6) 88.0(2.3) CBERT 61.4(7.9) 72.6(6.6) 80.2(3.1) 73.2(7.9) 82.7(5.0) 88.7(2.7) 90.1(2.3) 93.2(1.0) 94.3(1.2) 74.9(6.1) 82.9(4.2) 87.7(2.3) BERTexpand 62.4(8.3) 71.1(5.2) 80.3(4.5) 75.3(6.4) 83.3(4.6) 86.7(4.0) 90.5(2.1) 93.1(1.3) 94.8(1.0) 76.0(5.6) 82.5(3.7) 87.3(3.2) BERTprepend 63.8(7.6) 70.9 (5.4) 78.8(6.2) 73.1(7.5) 81.4(4.4) 86.0(3.7) 90.4(1.7) 93.4(1.1) 94.9(1.1) 75.7(5.6) 81.9(3.6) 86.6(3.6 1(7.6) 72.5(5.6) 82.5(2.8) 75.3(6.7) 81.3(5.8) 88.1(3.9) 90.5(1.2) 93.7(1.0) 94.8(1.0) 76.3(5.2) 82.5(4.2) 88.5(2.6) ADV 64.0(7.7) 72.4(8.1) 81.3(3.9) 74.9(9.6) 82.6(6.4) 88.0(4.7) 90.5(2.4) 93.5(0.9) 94.8(0.7) 76.5(6.6) 82.8(5.1) 88.0(3.1) AWD(strict) 65.4(6.8) 72.9 (5.8) 82.7(3.0) 76.7(7.3) 83.7(4.4) 89.1(3.4) 91.2(1.8) 93.8(1.0) 95.0(1.0) 77.7(5.3) 83.5(3.7) 88.9(2 " }, { "figure_ref": [ "fig_13" ], "heading": "More Detailed Analysis on AWD", "publication_ref": [], "table_ref": [], "text": "The Hardness of Different Data Augmentations We train a BERT model using the whole training set excluding the low-resource data as a discriminator to measure the hardness of given data augmentations, i.e., a higher error rate on this discriminator implies that the examples are harder.\nTo appropriately evaluate all methods, we compute the harmonic mean (HM), the integrated performance of classification accuracy (Acc) and error rate on the discriminator (Err). Harder augmentations and better accuracy result in High HM. High Err but low Acc reflect that a model may produce undesired hard negative augmentations. 
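Assuming HM denotes the standard harmonic mean of the two quantities, which is the usual reading of the term, it reduces to a one-line helper; the variable names below are ours, not the paper's.

```python
def harmonic_mean(acc: float, err: float) -> float:
    """Harmonic mean of classification accuracy and discriminator error rate, both in [0, 1]."""
    return 0.0 if acc + err == 0 else 2 * acc * err / (acc + err)

# An augmentation that fools the discriminator often (Err = 0.45) while still yielding an
# accurate classifier (Acc = 0.80) scores higher than one that is equally hard but hurts accuracy.
print(round(harmonic_mean(0.80, 0.45), 3))   # 0.576
print(round(harmonic_mean(0.60, 0.45), 3))   # 0.514
```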
We compare the hardness of the different data augmentation methods on the SST-2 and TREC datasets; the results are shown in Figure 5. Table 3 reports the results of applying the pre-trained AWD models to new examples, given as accuracy with the standard deviation in parentheses on SST-2, TREC, SNIPS and their Average for k = 10, 20, 50:
BERT 62.2(7.1) 71.9(4.8) 79.7(4.3) | 72.1(10.5) 82.9(4.9) 88.4(3.4) | 90.6(1.3) 92.9(1.4) 94.7(0.9) | 74.9(6.3) 82.6(3.7) 87.6(2.9)
BERT+strict 63.2(7.8) 72.7(6.2) 81.0(3.6) | 75.2(9.3) 83.5(4.5) 89.2(3.2) | 90.9(2.3) 93.3(1.1) 94.6(0.9) | 76.4(6.5) 83.2(3.9) 88.3(2.6)
BERT+loose 63.2(7.5) 74.2(5.7) 82.5(2.2) | 74.8(9.7) 83.2(4.5) 88.9(2.2) | 90.8(1.7) 93.4(1.0) 94.9(1.1) | 76.2(6.3) 83.6(3.7) 88.8(1.8)
The Interpretability of AWD To demonstrate the interpretability of AWD, we make case studies on the dilution weights assigned to each word of some selected examples. We select three text examples, one from each of the SST-2, TREC and SNIPS datasets, and visualize their learned dilution weights in Figure 7. In case (a), from the SST-2 dataset, the words lovely, beautifully, photographed and romance all express positive sentiment and are semantically related to the label Positive of the text. These words are assigned high dilution weights of 0.763, 0.825, 0.830 and 0.770, respectively. This case shows that our AWD model effectively learns interpretable dilution weights through adversarial optimization. In case (b), from the TREC dataset, the word city, which is related to the label Location, is assigned a high dilution weight of 0.853. However, the word connecticut, also related to Location, is assigned a low dilution weight of 0.247. This may be because this word is a low-frequency knowl- " }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the National Key R&D Program of China under Grant 2021ZD0110700, in part by the Fundamental Research Funds for the Central Universities, in part by the State Key Laboratory of Software Development Environment." } ]
Data augmentation is widely used in text classification, especially in the low-resource regime where only a few examples for each class are available during training. Despite this success, generating data augmentations as hard positive examples, which could increase their effectiveness, remains under-explored. This paper proposes an Adversarial Word Dilution (AWD) method that generates hard positive examples as text data augmentations to efficiently train low-resource text classification models. Our idea for augmenting the text data is to dilute the embeddings of strong positive words by weighted mixing with the unknown-word embedding, making the augmented inputs hard for the classification model to recognize as positive. We adversarially learn the dilution weights through a constrained min-max optimization process guided by the labels. Empirical studies on three benchmark datasets show that AWD generates more effective data augmentations and outperforms the state-of-the-art text data augmentation methods. Additional analysis demonstrates that the data augmentations generated by AWD are interpretable and can flexibly extend to new examples without further training.
Adversarial Word Dilution as Text Data Augmentation in Low-Resource Regime
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of contextual augmentation and representational augmentation (taking adversarial data augmentation as an example). The upper and lower part of the figure respectively show an example of contextual augmentation and the process of adversarial data augmentation. Contextual augmentation is interpretable but may not generate hard positive examples. Adversarial data augmentation can generate hard positive examples but is short of interpretability.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Text classification in the low-resource regime adopts supervised learning but is restricted by the available number of training examples. This task aims to efficiently learn a classification model given only a few labeled texts. Formally, in low-resource text classification, let Y be the set of all classes of interest. For each class y ∈ Y, we are only given a small number of k labeled texts for training. Let D = {(x 1 , y 1 ), (x 2 , y 2 ), • • • , (x k×|Y| , y k×|Y| )} be the training set which consists of k × |Y| labeled texts. Each (x i , y i ) pair in D denotes a text x i and its label y i . Given the training set D, the objective is to obtain a sufficiently robust model under the limited resource.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "This nature of the text data motivates us to design a new text data augmentation strategy word dilution shown in Figure 2 that dilutes the expressiveness of strong positive words in the original sentence by weighted mixing their embeddings with unknown word embedding, allowing us to generate hard positive examples without losing interpretability.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: General illusion of word dilution. The squares represent word embeddings. The deeper color manifests that the word has more positive polarity. The weights beside the arrows represent weights of input words or the unknown word.", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Adversarial word dilution. The ϕ and θ yi blocks respectively denote the text classifier and dilution networks.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": ".5) AWD(loose) 65.2(7.8) 73.9(5.4) 82.7(5.2) 77.1(7.1) 83.4(4.2) 90.0(3.1) 91.0(1.9) 93.7(1.2) 95.3(0.7) 77.8(5.6) 83.7(3.6) 89.3(3.0)", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Results during the training. 
The horizontal and vertical axes respectively denote the epochs and accuracy.", "figure_data": "", "figure_id": "fig_8", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The hardness analysis of different text data augmentation methods on the SST-2 and TREC datasets.", "figure_data": "", "figure_id": "fig_9", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "k = 20 k = 50", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The hyper-parameter analysis of ρ on SST-2.", "figure_data": "", "figure_id": "fig_11", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "An example in TREC with label Location. .8 71.6 73.3 44.8 80.4 68.1 47.1 (c) An example in SNIPS with label PlayMusic.", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The selected examples from the three datasets, the values indicate the corresponding dilution weights. We pad the three sentences to the same length for a better view.", "figure_data": "", "figure_id": "fig_13", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Algorithm 1: Training the Classification Model with AWD Input: training set D and parameters E, Φ, Θ Output: the optimized dilution networks and classifier 1: initialize E, Φ with BERT, randomly initialize Θ;", "figure_data": "5:end for6: input all {w 8: compute all {α ij }, generate all { wij } by eq. (2);9:fix Θ, update E, Φ by min L a (D; E, Φ, Θ);10: until convergence", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": ". The data statistics are shown in Table1. The statistics of the datasets.", "figure_data": "Dataset #train #low-res #val #test #classSST6,228 10/20/502018212TREC5,406 10/20/50605006SNIPS 13,084 10/20/50707007Baseline Models and ImplementationBaseline Models We compare our AWD with the follow-ing baseline models: the pure BERT (Devlin et al. 2019)without data augmentations; contextual augmentation byword modification or paraphrasing, including EDA (Weiand Zou 2019) and BT(Back Translation) (Shleifer 2019);contextual augmentation using pre-trained laguage mod-els by word modification, including CBERT (Wu et al.2019a), BERTexpand, BERTprepend (Kumar, Choudhary,and Cho 2020), and by generation, including GPT2; rep-resentational data augmentation including Mixup (Zhanget al. 2018), Textsmooth (Wu et al. 2022), ADV(adversarialdata augmentation)", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The evaluation results on SST-2, TREC and SNIPS. The bold and underline indicate the best and second-best results.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The results of applying AWD models to new examples. BERT+strict and BERT+loose denote the BERT model trained with data augmentations for new examples generated by pre-trained AWD(strict) and AWD(loose), respectively.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" } ]
Junfan Chen; Richong Zhang; Zheyan Luo; Chunming Hu; Yongyi Mao
[ { "authors": "A Anaby-Tavor; B Carmeli; E Goldbraich; A Kantor; G Kour; S Shlomov; N Tepper; N Zwerdling", "journal": "", "ref_id": "b0", "title": "Do Not Have Enough Data? Deep Learning to the Rescue!", "year": "2020" }, { "authors": "J Andreas", "journal": "", "ref_id": "b1", "title": "Good-Enough Compositional Data Augmentation", "year": "2020" }, { "authors": "M Artetxe; G Labaka; E Agirre; K Cho", "journal": "", "ref_id": "b2", "title": "Unsupervised Neural Machine Translation", "year": "2018" }, { "authors": "J Chen; D Shen; W Chen; D Yang", "journal": "CoRR", "ref_id": "b3", "title": "Hidden-Cut: Simple Data Augmentation for Natural Language Understanding with Better Generalization", "year": "2021" }, { "authors": "J Chen; Z Yang; D Yang", "journal": "", "ref_id": "b4", "title": "MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification", "year": "2020" }, { "authors": "J Chen; R Zhang; Y Mao; J Xu", "journal": "", "ref_id": "b5", "title": "Contrast-Net: A Contrastive Learning Framework for Few-Shot Text Classification", "year": "2022" }, { "authors": "L Chen; W Ruan; X Liu; J Lu", "journal": "", "ref_id": "b6", "title": "SeqVAT: Virtual Adversarial Training for Semi-Supervised Sequence Labeling", "year": "2020" }, { "authors": "M Chen; Q Tang; S Wiseman; K Gimpel", "journal": "", "ref_id": "b7", "title": "Controllable Paraphrase Generation with a Syntactic Exemplar", "year": "2019" }, { "authors": "Y Cheng; L Jiang; W Macherey", "journal": "", "ref_id": "b8", "title": "Robust Neural Machine Translation with Doubly Adversarial Inputs", "year": "2019" }, { "authors": "Y Cheng; L Jiang; W Macherey; J Eisenstein", "journal": "", "ref_id": "b9", "title": "AdvAug: Robust Adversarial Augmentation for Neural Machine Translation", "year": "2020" }, { "authors": "A Coucke; A Saade; A Ball; T Bluche; A Caulier; D Leroy; C Doumouro; T Gisselbrecht; F Caltagirone; T Lavril; M Primet; J Dureau", "journal": "", "ref_id": "b10", "title": "Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces", "year": "2018" }, { "authors": "J Devlin; M Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b11", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "T Dopierre; C Gravier; W Logerais", "journal": "", "ref_id": "b12", "title": "ProtAugment: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning", "year": "2021" }, { "authors": "S Edunov; M Ott; M Auli; D Grangier", "journal": "", "ref_id": "b13", "title": "Understanding Back-Translation at Scale", "year": "2018" }, { "authors": "M Fadaee; A Bisazza; C Monz", "journal": "", "ref_id": "b14", "title": "Data Augmentation for Low-Resource Neural Machine Translation", "year": "2017" }, { "authors": "F Gao; J Zhu; L Wu; Y Xia; T Qin; X Cheng; W Zhou; T Liu", "journal": "", "ref_id": "b15", "title": "Soft Contextual Data Augmentation for Neural Machine Translation", "year": "2019" }, { "authors": "S Garg; G Ramakrishnan", "journal": "", "ref_id": "b16", "title": "BAE: BERT-based Adversarial Examples for Text Classification", "year": "2020" }, { "authors": "R Geng; B Li; Y Li; X Zhu; P Jian; J Sun", "journal": "", "ref_id": "b17", "title": "Induction Networks for Few-Shot Text Classification", "year": "2019" }, { "authors": "I J Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A C Courville; Y Bengio", "journal": "", "ref_id": "b18", "title": 
"Generative Adversarial Nets", "year": "2014" }, { "authors": "D Guo; Y Kim; A M Rush", "journal": "", "ref_id": "b19", "title": "Sequence-Level Mixed Sample Data Augmentation", "year": "2020" }, { "authors": "W Hsu; H Tang; J R Glass", "journal": "", "ref_id": "b20", "title": "Unsupervised Adaptation with Interpretable Disentangled Representations for Distant Conversational Speech Recognition", "year": "2018" }, { "authors": "Z Hu; B Tan; R Salakhutdinov; T M Mitchell; E P Xing", "journal": "NeurIPS", "ref_id": "b21", "title": "Learning Data Manipulation for Augmentation and Weighting", "year": "2019" }, { "authors": "M Iyyer; V Manjunatha; J L Boyd-Graber; H D Iii", "journal": "", "ref_id": "b22", "title": "Deep Unordered Composition Rivals Syntactic Methods for Text Classification", "year": "2015" }, { "authors": "R Jia; P Liang", "journal": "", "ref_id": "b23", "title": "Data Recombination for Neural Semantic Parsing", "year": "2016" }, { "authors": "H Jiang; P He; W Chen; X Liu; J Gao; T Zhao", "journal": "", "ref_id": "b24", "title": "SMART: Robust and Efficient Fine-Tuning for Pretrained Natural Language Models through Principled Regularized Optimization", "year": "2020" }, { "authors": "P Khosla; P Teterwak; C Wang; A Sarna; Y Tian; P Isola; A Maschinot; C Liu; D Krishnan", "journal": "", "ref_id": "b25", "title": "Supervised Contrastive Learning", "year": "2020" }, { "authors": "S Kobayashi", "journal": "", "ref_id": "b26", "title": "Contextual Augmentation: Data Augmentation by Words with Paradigmatic Relations", "year": "2018" }, { "authors": "O Kolomiyets; S Bethard; M Moens", "journal": "", "ref_id": "b27", "title": "Model-Portability Experiments for Textual Temporal Analysis", "year": "2011" }, { "authors": "A Kumar; S Bhattamishra; M Bhandari; P P Talukdar", "journal": "", "ref_id": "b28", "title": "Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation", "year": "2019" }, { "authors": "V Kumar; A Choudhary; E Cho", "journal": "", "ref_id": "b29", "title": "Data Augmentation using Pre-trained Transformer Models", "year": "2020" }, { "authors": "J H Lee; S Ko; Y Han", "journal": "", "ref_id": "b30", "title": "SALNet: Semisupervised Few-Shot Text Classification with Attentionbased Lexicon Construction", "year": "2021" }, { "authors": "X Li; D Roth", "journal": "", "ref_id": "b31", "title": "Learning Question Classifiers", "year": "2002" }, { "authors": "Z Miao; Y Li; X Wang; W Tan", "journal": "WWW", "ref_id": "b32", "title": "Snippext: Semi-supervised Opinion Mining with Augmented Data", "year": "2020" }, { "authors": "T Miyato; A M Dai; I J Goodfellow", "journal": "", "ref_id": "b33", "title": "Adversarial Training Methods for Semi-Supervised Text Classification", "year": "2017" }, { "authors": "B Pang; L Lee", "journal": "", "ref_id": "b34", "title": "Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales", "year": "2005" }, { "authors": "S Ren; Y Deng; K He; Che ; W ", "journal": "", "ref_id": "b35", "title": "Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency", "year": "2019" }, { "authors": "F Schroff; D Kalenichenko; J Philbin", "journal": "", "ref_id": "b36", "title": "FaceNet: A unified embedding for face recognition and clustering", "year": "2015" }, { "authors": "S Shleifer", "journal": "", "ref_id": "b37", "title": "Low Resource Text Classification with ULMFit and Backtranslation", "year": "2019" }, { "authors": "R Socher; A Perelygin; J 
Wu; J Chuang; C D Manning; A Y Ng; C Potts", "journal": "", "ref_id": "b38", "title": "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank", "year": "2013" }, { "authors": "W Y Wang; D Yang", "journal": "", "ref_id": "b39", "title": "That's So Annoying!!!: A Lexical and Frame-Semantic Embedding Based Data Augmentation Approach to Automatic Categorization of Annoying Behaviors using #petpeeve Tweets", "year": "2015" }, { "authors": "J W Wei; K Zou", "journal": "", "ref_id": "b40", "title": "EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks", "year": "2019" }, { "authors": "X Wu; C Gao; M Lin; L Zang; S Hu", "journal": "", "ref_id": "b41", "title": "Text Smoothing: Enhance Various Data Augmentation Methods on Text Classification Tasks", "year": "2022" }, { "authors": "X Wu; S Lv; L Zang; J Han; S Hu", "journal": "", "ref_id": "b42", "title": "Conditional BERT Contextual Augmentation", "year": "2019" }, { "authors": "Z Wu; S Wang; Y Qian; K Yu", "journal": "", "ref_id": "b43", "title": "Data Augmentation Using Variational Autoencoder for Embedding Based Speaker Verification", "year": "2019" }, { "authors": "Q Xie; Z Dai; E H Hovy; T Luong; Q Le", "journal": "", "ref_id": "b44", "title": "Unsupervised Data Augmentation for Consistency Training", "year": "2020" }, { "authors": "Z Xie; S I Wang; J Li; D Lévy; A Nie; D Jurafsky; A Y Ng", "journal": "", "ref_id": "b45", "title": "Data Noising as Smoothing in Neural Network Language Models", "year": "2017" }, { "authors": "Y Yang; C Malaviya; J Fernandez; S Swayamdipta; R L Bras; J Wang; C Bhagavatula; Y Choi; D Downey", "journal": "", "ref_id": "b46", "title": "G-DAug: Generative Data Augmentation for Commonsense Reasoning", "year": "2020" }, { "authors": "M Yu; X Guo; J Yi; S Chang; S Potdar; Y Cheng; G Tesauro; H Wang; B Zhou", "journal": "", "ref_id": "b47", "title": "Diverse Few-Shot Text Classification with Multiple Metrics", "year": "2018" }, { "authors": "H Zhang; M Cissé; Y N Dauphin; D Lopez-Paz", "journal": "", "ref_id": "b48", "title": "mixup: Beyond Empirical Risk Minimization", "year": "2018" }, { "authors": "R Zhang; Y Yu; C Zhang", "journal": "", "ref_id": "b49", "title": "SeqMix: Augmenting Active Sequence Labeling via Sequence Mixup", "year": "2020" }, { "authors": "X Zhang; J J Zhao; Y Lecun", "journal": "", "ref_id": "b50", "title": "Character-level Convolutional Networks for Text Classification", "year": "2015" }, { "authors": "C Zhu; Y Cheng; Z Gan; S Sun; T Goldstein; J Liu", "journal": "", "ref_id": "b51", "title": "FreeLB: Enhanced Adversarial Training for Natural Language Understanding", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 138.33, 656.52, 154.17, 9.68 ], "formula_id": "formula_0", "formula_text": "w ij = e(w ij ; E),(1)" }, { "formula_coordinates": [ 3, 354.68, 330.27, 203.32, 9.79 ], "formula_id": "formula_1", "formula_text": "wij = (1 -α ij )e(w ij ; E) + α ij e(unk; E)(2)" }, { "formula_coordinates": [ 4, 113.4, 662.59, 175.23, 12.69 ], "formula_id": "formula_3", "formula_text": "α ij = σ(θ T yi e(w ij ; E) + b yi ), (4" }, { "formula_coordinates": [ 4, 288.63, 664.95, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 320.77, 305.83, 237.23, 67.46 ], "formula_id": "formula_5", "formula_text": "L a (D; E,Φ,Θ) = k×|Y| i=1 ℓ a (x i , y i , {α ij }; E, Φ, Θ) = - k×|Y| i=1 log exp(s({ wij }, ϕ yi )) y∈Y exp(s({ wij }, ϕ y ))(5)" }, { "formula_coordinates": [ 4, 363.19, 412.78, 194.81, 11.5 ], "formula_id": "formula_6", "formula_text": "R i ({α ij }; Θ) = ∥{α ij }∥ 1 -ρn i ⩽ 0 (6)" }, { "formula_coordinates": [ 4, 322, 487.09, 236, 31.18 ], "formula_id": "formula_7", "formula_text": "min E,Φ max Θ L a (D; E, Φ, Θ) s.t. k×|Y| i=1 R i ({α ij }; Θ) ⩽ 0 (7)" }, { "formula_coordinates": [ 4, 320.6, 557.2, 237.4, 31.18 ], "formula_id": "formula_8", "formula_text": "min E,Φ max Θ [L a (D;E,Φ,Θ)-λ k×|Y| i=1 max(R i ({α ij }; Θ), 0)], (8)" }, { "formula_coordinates": [ 4, 344.66, 669.12, 209.47, 31.18 ], "formula_id": "formula_9", "formula_text": "min E,Φ max Θ [L a (D;E,Φ,Θ)-γ k×|Y| i=1 ∥{α ij }∥ 1 1 n i ].(9" } ]
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b13", "b20", "b0", "b4", "b29", "b18", "b23", "b12", "b23", "b22", "b10", "b15", "b28", "b19", "b8", "b6", "b30" ], "table_ref": [], "text": "Machine Learning (ML) focuses on developing computer algorithms able to learn from previous experience, in such a way that they could be applied to solve realworld problems or, at least, provide support for human activities. Inside the ML paradigm, different sub-domains can be found, which emerge according to the sort of data used. Specifically, this work deals with the classification of time series. A time series is a set of values collected chronologically. This type of data can be found in a wide range of fields. For instance, the prices of a market asset over a certain period of time, or the monthly sales of a shop.\nIn this study, we focus on Time Series Classification (TSC), a task in which a discrete label is associated with each time series specifying some property of interest about it. The main goal is finding a model that learns the correspondence between labels and time series, so that it is capable of labelling new, unknown patterns accurately. Examples of applications can be found in medical research [3], psychology [14] and industry [21], among others. Due to its versatility, the TSC paradigm has been greatly enhanced over the last decades. The main reason is the establishment of the UEA/UCR archive, a set of benchmark problems, that has made easier the validation of novel techniques.\nTSC approaches are divided into different groups according to the methodology adopted. A first detailed taxonomy of the state of the art was presented in [1], where six main categories were distinguished: whole series, intervals, shapelets, dictionary-based, combinations and model-based techniques. In subsequent years, three additional groups emerged in the literature: the convolutional-based models, introduced with the Random Convolutional Kernel Transform (ROCKET) method [5]; deep learning-based techniques, which mainly raised from the adaptation of residual and convolutional networks to the TSC case [30]; and ensemblebased methods, in which the Hierarchical Vote Collective of Transformationbased Ensembles (HIVE-COTE) [19] particularly stands out due to its superiority in terms of accuracy in comparison to the rest of the state-of-the-art methodologies. Later on, an improved version of this last technique, named as HIVE-COTE 2.0 (HC2), was introduced in [24]. The HC2 approach combines four methods from different categories: Arsenal, an ensemble of the ROCKET algorithm; Shapelet Transform Classifier (STC) [13], a standard classifier applied to a transformation built from the distances between the phase independent subsequences, known as shapelets, and the original time series; the intervalbased Diverse representation Canonical Interval Forest (DrCIF) [24], a random forest-based technique applied to statistical features extracted from dependent subsequences of the original time series; and the Temporal Dictionary Ensemble (TDE) [23], an approach using bag of words representations of time series. TDE is the basis for the methodology proposed in this work.\nMore specifically, this work deals with the classification of ordinal time series, a special type of time series in which the associated discrete target values present a natural order relationship between them. 
This vaguely explored subdomain of TSC is known as Time Series Ordinal Classification (TSOC) and was firstly presented in [11]. One example of this type of series was introduced in [16], in which the task is to associate a spectrograph of 1751 observations (i.e. time series) with a label that can take four different values, E35, E38, E40 and E45, ordered by the ethanol level of the sample. With this setting, during model training, misclassifying an E45 sample as E35 should be far more penalized than misclassifying it as E40. This property is known as ordinality, and can be exploited in a wide variety of domains including industry [29], image classification [20], atmospheric events detection [9], finance [7], and medicine [31], among others.\nFinally, the goal of this work is to develop a new dictionary-based approach for the TSOC paradigm. For this, the TDE, the state-of-the-art approach in this category of TSC, is considered as the basis. For this, a TDE methodology capable of exploiting the ordinal information of the output variable is proposed. Specifically, more appropriate strategies in the ensemble member selection and in the computation of the time series symbolic representation are employed.\nThe remainder of this paper is organized as follows: related works are described in Section 2; Section 3 describes the methodology developed, i.e. the Ordinal Temporal Dictionary Ensemble (O-TDE); Section 4 presents the datasets and experimental settings; Section 5 shows the obtained results; and finally, Section 6 provides the conclusions and future research of our work." }, { "figure_ref": [], "heading": "Related works", "publication_ref": [ "b17", "b16", "b25", "b11", "b26", "b24", "b14", "b22", "b23", "b10", "b21", "b3" ], "table_ref": [], "text": "The first dictionary-based method for time series classification was the Bag Of Patterns (BOP) presented in [18]. The BOP algorithm is divided into four phases: 1) a sliding window is applied to the time series; 2) a dimensionality reduction method called Symbolic Aggregate approXimation (SAX) [17] is used to transform each window to a symbolic representation. This representation is known as word; 3) the frequency of occurrence of each word is counted; and finally 4) histograms of words counts are computed for the time series of the training set. The prediction of new patterns is obtained through a k-Nearest Neighbours (kNN) classifier measuring the similarity between their histograms and those of the training instances.\nMost of the state-of-the-art methods follow the structure of the BOP algorithm. This is the case of Bag of Symbolic Fourier approximation Symbols (BOSS) [26]. BOSS also transforms the input time series into symbolic representations (words). For this purpose, instead of SAX, it uses the Discrete Fourier Transform (DFT) [12] method. DFT avoids issues related with noisy time series, achieving a more representative transformation.\nAnother distinguishing feature of BOSS is that it conforms an ensemble of BOSS approaches trained with different window sizes. Only those BOSS members achieving an over-threshold accuracy are included in the ensemble. BOSS significantly outperformed BOP. Given the performance of this approach, several BOSS-based methods were proposed in the literature. In this sense, we have the Word ExtrAction for time SEries cLassification (WEASEL) [27] method. WEASEL applies an ANOVA test to obtain a subset of the most significant DFT coefficients for each class. 
From this subset it builds the bag of words for each time series. Then a chi-square test is performed to select the most significant words to compute the histograms. This feature selection methodology makes WEASEL more scalable and faster than previous proposals.\nOn the same line, contractable BOSS (cBOSS) [25] performs a random selection on the parameter space making the BOSS ensemble lighter. cBOSS is significantly more scalable than BOSS but performs equally. Spatial Pyramids (SP) BOSS [15] incorporates the SP method, widely used in computer vision problems, to the BOSS technique. SP recursively segments the input time series and computes histograms for these segments. This allows the combination of temporal and phase independent features in the symbolic transformation process, slightly improving the robustness of the algorithm.\nFinally, the latest and most successful dictionary-based technique is the Temporal Dictionary Ensemble (TDE) [23]. TDE implements the same structure than BOSS, but makes use of a Gaussian process of the parameter space to do the ensemble member selection. Its superiority over competing dictionarybased methods led it to replace BOSS in the second version of the HIVE-COTE technique, HIVE-COTE2.0 (HC2) [24].\nFocusing now on TSOC, only one type of approaches have been developed. This is the Ordinal Shapelet Transform Classifier (O-STC) [11]. O-STC extracts phase independent features from the time series keeping those that satisfy a minimum shapelet quality (measured through a specific ordinal metric). The resulting set of shapelets are fed to an ordinal classifier such as a Proportional Odds Model (POM) [22] or an ordinal support vector machine technique [4].\nIn this work, we focus on implementing the ordinal version of the TDE approach, given its superiority over the existing dictionary-based approaches in TSC. This technique is known as Ordinal Temporal Dictionary Ensemble (O-TDE)." }, { "figure_ref": [], "heading": "Ordinal Temporal Dictionary Ensemble (O-TDE)", "publication_ref": [ "b27", "b22", "b11", "b7" ], "table_ref": [], "text": "First of all, a time series can be categorised according to the number of dimensions d as univariate (d = 1) or multivariate (d > 1). A univariate time series x of length l is an ordered set of l real values, x = (x 1 , . . . , x l ). Conversely, a multivariate time series with d dimensions (or channels) and length l is a collection of d ordered sets, each containing l real values denoted as x = {(x 1,1 , . . . , x 1,l ), . . . , (x d,1 , . . . x d,l )}. A time series dataset is then defined as D = {(x 1 , y 1 ), (x 2 , y 2 ), . . . , (x N , y N )}, where N is the number of available time series, x i is a time series (either univariate or multivariate), and y i is the output label associated with the respective time series. Both in this paper and in the wider TSC literature, our analyses rely on datasets comprising time series that are uniformly spaced, meaning that the observations within each time series are collected at equally-spaced time intervals. Additionally, all of the time series in the datasets are of equal length.\nFocusing now on the proposal, as BOSS and TDE, O-TDE also consists of several individual techniques which, to prevent ambiguity, will be referred to as individual O-TDE. In the O-TDE algorithm, a guided parameter selection is performed to build the ensemble members. 
This parameter selection is guided by a Gaussian process [28] intended to predict the Mean Absolute Er-ror (MAE) values for specific O-TDE configurations, basing its prediction on previous parameters-MAE pairs [23]. This helps to reduce the computational complexity of the ensemble construction. This process is similar to that followed in the original TDE algorithm, but considering the MAE metric instead of the accuracy. Note that MAE quantifies the error committed in the ordinal scale. Hence, it helps to boost the performance achieved for ordinal problems.\nRegarding the individual O-TDE, i.e. the method considered in the ensemble, it consists of a sequence of steps, summarised in the following lines. Firstly, a given input time series of size l is processed by sliding windows of length w, in such a way that w l. Then, a Discrete Fourier Transform (DFT) [12] is applied to each window, decomposing it into a set of w orthogonal basis functions using sinusoidal waves. The set of waves obtained through Fourier analysis is commonly referred to as Fourier coefficients. In practice, only the first c coefficients are typically retained, while the remaining coefficients, which contribute to higher frequencies, are discarded (c w). This selection process serves two purposes: 1) since the first Fourier coefficients are related to the smoothest sections of the time series, potentially noisy parts can be eliminated. And 2) the dimensionality of the representation can be substantially reduced from w coefficients to just c. This reduction can provide computational benefits, particularly for large or complex datasets.\nAt this point, from the initial time series, c Fourier coefficients are kept. The j-th Fourier coefficient extracted from the i-th time series is represented by a complex number F i,j = (real i,j , imag i,j ). With this setting, the following matrix A is built:\nA =      real 1,1 imag 1,1 . . . real 1,c imag 1,c real 2,1 imag 2,1 . . . real 2,c imag 2,c . . . . . . . . . . . . . . . real N,1 imag N,1 . . . real N,c imag N,c      , (1\n)\nwhere N is the number of time series of the training dataset. For each column of A, C m = (C 1,m , C 2,m , . . . , C N,m ), with m ∈ {1, 2, . . . , 2c}, a set of thresholds β m = (β m,0 , β m,1 , . . . , β m,T ) is extracted through a process called Information Gain Binning (IGB) that will covered below. The β m,0 and β m,T thresholds are set to -∞ and +∞ respectively. Note that as coefficients are represented by complex numbers (with real and imaginary parts), m takes values up to 2c. With this setting, the C m real-valued elements are discretised according to β m and a finite alphabet Σ = {α 1 , α 2 , . . . , α T }, where T is the size of the dictionary. An element\nC im of A is mapped to a symbol α t of Σ if β m,t-1 ≤ C i,m ≤ β m,t , with t ∈ {1, 2, . . . , T }.\nThe resulting symbolic representation of each column is what is called a word. The IGB process finds the optimal set β for each column by fitting a Decision Tree Regressor (DTR). Each β m,i corresponds to a threshold value used in a given splitting node of the tree. The impurity criterion i used in the DTR is the Mean Squared Error (MSE) with an improvement score proposed in [8]:\ni = w l • w r w l + w r (ȳ l -ȳr ),(2)\nwhere ȳl , ȳr are the left and right child nodes response means, and w l , w r are the corresponding sums of the weights. 
The utilisation of this criterion instead of the accuracy (considered in the original TDE proposal) greatly enhances the performance in ordinal problems. This criteria is usually known in the literature as friedman-MSE.\nIn base of all the above, an individual O-TDE transforms an input time series into a set of words (one word for each sliding window). Then, a histogram of words counts is built from this set. The label for a testing time series is obtained by computing the distances between its histogram and those of the training time series and returning the label of the closest one." }, { "figure_ref": [], "heading": "Experimental settings", "publication_ref": [], "table_ref": [], "text": "The experiments are performed on an extended version of the TSOC archive. To avoid possible randomisation biases, 30 runs have been performed. To measure the performance of the techniques, both nominal and ordinal metrics have been considered to get a better analysis on how the proposed ordinal methodology performs." }, { "figure_ref": [], "heading": "Datasets considered", "publication_ref": [ "b9" ], "table_ref": [], "text": "With the aim of performing a robust experimentation, a set of 18 TSOC problems from a wide variety of domains has been considered. In this section, we present these datasets and the source from which they have been collected. Table 1 provides a summary of the complete set of problems. We can distinguish four different data sources: 1) The UEA/UCR TSC archive 3 , where a subset of 9 ordinal problems has been identified [10]. 2) The Monash/UEA/UCR Time Series Extrinsic Regression (TSER) archive 4 . From this repository, we limited our selection to equal-length problems without missing values, adding two more datasets to our experiments. The originally continuous output variable of these datasets has been discretised into five equally wide bins. 3) Historical price data from 5 of the most important companies in the stock market. We have taken this data from Yahoo Finance5 website, extracting weekly price data from the earliest available date to March 2023. Each time series is built with the returns over 53 weeks (the number of weeks of a year) prior to a given date t, and the output label corresponds to the price return in t (r t ). This value is discretised according to a set of predefined symmetrical thresholds (-∞, -0.05, -0.02, 0.02, 0.05, ∞). In this way, our experimentation is extended with 5 more problems. 4) Buoy data from the National Data Buoy Center (NDBC) 6 . Two problems from this source has been considered, which are USASouthwestEnergyFlux and USASouth-westSWH. The first comprises a set of 468 time series. Each time series is built on 112 energy fluctuation measurements collected during 4 weeks (4 measures per day). The objective is to estimate the level of energy fluctuation during that period of time, being 0 the minimum level, and 3 the highest energy level. The second problem consists on 1872 time series of length 28 representing sea waves height variation along a week (4 measures per day). The purpose is to estimate the wave height level during that period of time, ranging from 0 (the lowest height) to 3 (the highest height). 
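Before turning to the experimental setup, the word-extraction step of an individual O-TDE described in the previous section (sliding windows, truncated DFT, per-column discretisation into symbols, and word-count histograms) can be made concrete with a simplified NumPy sketch. Fixed quantile bins stand in for the IGB-fitted thresholds and the histogram distance is only illustrative; none of the names below correspond to the actual aeon implementation.

```python
import numpy as np

def windows(x, w):
    """All sliding windows of length w over a univariate series x."""
    return np.lib.stride_tricks.sliding_window_view(x, w)

def dft_coeffs(win, c):
    """Real and imaginary parts of the first c Fourier coefficients of each window."""
    F = np.fft.rfft(win, axis=1)[:, :c]
    return np.concatenate([F.real, F.imag], axis=1)          # shape (n_windows, 2c)

def words_from_series(x, w=16, c=2, bins=None):
    """Discretise each DFT column into symbols and join them into words.
    Fixed quantile bins replace the IGB-fitted thresholds, purely for illustration."""
    A = dft_coeffs(windows(x, w), c)
    if bins is None:                                          # one threshold set per column
        bins = [np.quantile(A[:, m], [0.25, 0.5, 0.75]) for m in range(A.shape[1])]
    letters = np.stack([np.digitize(A[:, m], bins[m]) for m in range(A.shape[1])], axis=1)
    return ["".join("abcd"[t] for t in row) for row in letters], bins

def histogram(words):
    """Bag-of-words count histogram; series are compared through these histograms."""
    hist = {}
    for word in words:
        hist[word] = hist.get(word, 0) + 1
    return hist

# Toy usage: fit thresholds on one series, reuse them on another, compare the histograms.
rng = np.random.default_rng(0)
x_train, x_test = rng.normal(size=100), rng.normal(size=100)
train_words, bins = words_from_series(x_train)
test_words, _ = words_from_series(x_test, bins=bins)
h1, h2 = histogram(train_words), histogram(test_words)
dist = sum((h1.get(k, 0) - h2.get(k, 0)) ** 2 for k in set(h1) | set(h2))
print(dist)   # a 1-NN classifier assigns the label of the closest training histogram
```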
" }, { "figure_ref": [], "heading": "Experimental setup", "publication_ref": [], "table_ref": [], "text": "With the goal of demonstrating that ordinal approaches can outperform nominal techniques when dealing with ordinal datasets, the proposed methodology O-TDE is compared against 4 state-of-the-art approaches in dictionary-based techniques: BOSS, cBOSS, WEASEL, and TDE. The performance of these approaches is measured in terms of four metrics (1 nominal and 3 ordinal). The Correct Classification Rate (CCR), also known as accuracy, is the most spread measure when dealing with nominal time series. It measures the percentage of correctly classified instances.\nThe first ordinal measure is the Mean Absolute Error (MAE), that quantifies the error committed in the ordinal scale:\nMAE = 1 N N i=1 | ŷi -y i |,(3)\nwhere N represents the number of patterns, and ŷi and y i are the predicted and real labels, respectively.\nThe second ordinal measure is the Quadratic Weighted Kappa (QWK). QWK establishes different weights depending on the different disagreement levels between real and predicted values. As MAE, it penalises to a greater extent errors made in farther classes in the ordinal scale:\nQWK = 1 - N i,j ω i,j O i,j N i,j ω i,j E i,j ,(4)\nwhere ω is the penalization matrix with quadratic weights, O is the confusion matrix,\nE ij = Oi•O•j N\n, with O i• and O •j being the accumulated sum of all the elements of the i-th row and the j-th column, respectively.\nThe remaining ordinal metric considered is the 1-OFF accuracy (1-OFF) which is the same as the CCR but also considering as correct the predictions one category away from the actual class on the ordinal scale.\nFurthermore, given that the employed methodologies have a stochastic behaviour, the experiments have been performed using 30 different resamples. The first run is with the default data and subsequent runs are carried out with data resampled using the same train/test proportion as the original.\nFinally, the code of the nominal approach is open source and is available in the aeon toolkit 7 , a scikit-learn compatible implementation of the time series approaches. The ordinal version of the TDE will be included in aeon." }, { "figure_ref": [ "fig_2" ], "heading": "Results", "publication_ref": [ "b5", "b1" ], "table_ref": [ "tab_1" ], "text": "Table 2 shows the results achieved in terms of MAE. Results are shown as the as the mean and standard deviation of the 30 runs carried out. As can be seen, O-TDE is the approach achieving the best results, as is the best and second best in 10 and 4 of the 18 ordinal datasets, respectively. The second best approach is the nominal version of TDE, which obtained the best results for 3 datasets (tied with WEASEL) but is the second-best in other 6 datasets, whereas WEASEL only is the second-best in 4.\nFurthermore, to compare the results obtained for multiple classifiers over multiple datasets, Critical Difference Diagrams (CDDs) are used [6]. The posthoc Nemenyi test is replaced by a comparison of all classifiers using pairwise Wilcoxon signed-rank tests. Finally, cliques are formed using the Holm correction [2]. Figure 1 shows the CDDs for the four measures detailed in Section 4.2. The best results are highlighted in bold, whereas the second-best are in italics.\nFrom these results, it can be said that a solid superiority of the O-TDE method is observed against the nominal methodologies. 
O-TDE outperforms all the nominal techniques not only in terms of ordinal performance measures (MAE and QWK) but also in terms of CCR, a nominal measure. Even though improving the results in CCR is not the final goal of the ordinal approaches, this superiority demonstrates the potential of the ordinal techniques over nominal ones. Finally, indicate that this difference becomes statistically significant for the MAE and 1-OFF metrics, indicating an excellent performance of the O-TDE proposed approach." }, { "figure_ref": [], "heading": "Conclusion and future scope", "publication_ref": [], "table_ref": [], "text": "Time Series Ordinal Classification is still an unexplored paradigm in the time series literature, being a subset of the popular nominal Time Series Classification (TSC) task. However, it has a wealth of real-world applications in a wide range of fields such as finances, medicine or energy, among others. In this work, it has been shown that when this sort of problems are approximated through ordinal methods, such as the presented Ordinal Temporal Dictionary Ensemble (O-TDE), a significant boost in performance is obtained. This superiority is mainly achieved by penalising more severely those predictions that fall far away from the real class in the ordinal scale.\nFrom the original set of 7 datasets previously identified, this work provides another 11 datasets, taking the ordinal archive to 18 ordinal datasets, including 13 univariate and 5 multivariate, making the obtained results more robust. The performance of the 5 approaches has been measured in terms of accuracy, the most used one in nominal TSC, and three ordinal metrics, Mean Average Error (MAE), Quadratic Weighted Kappa (QWK) and 1-OFF accuracy (1-OFF). These three measures help to properly quantify the capacity of the approaches to model the ordinal scale. Consequently, the biggest differences in performance between nominal and ordinal methodologies are obtained in terms of these last three metrics, being the difference in terms of MAE and 1-OFF statistically significant.\nFor future works, the TSOC archive is sought to be expanded. In addition, multiple well-known TSC methods such as kernel-based, ensemble-based or interval-based techniques will be explored for the ordinal paradigm." }, { "figure_ref": [], "heading": " * ", "publication_ref": [], "table_ref": [], "text": "This work has been partially subsidised by \"Agencia Española de Investigación (España)\" (grant ref.: PID2020-115454GB-C22 / AEI / 10.13039 / 501100011033). David Guijo-Rubio's research has been subsidised by the University of Córdoba through grants to Public Universities for the requalification of the Spanish university system of the Ministry of Universities, financed by the European Union -NextGenerationEU (grant reference: UCOR01MS)." } ]
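For reference, the three ordinal metrics used in the evaluation, MAE, QWK and 1-OFF accuracy, take only a few lines of NumPy to compute. The sketch below assumes integer-encoded class labels and uses the standard quadratic penalty matrix for QWK; it is a convenience utility, not the exact evaluation code.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error on the ordinal scale, Equation (3)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.abs(y_true - y_pred).mean()

def one_off_accuracy(y_true, y_pred):
    """Fraction of predictions at most one category away from the true label."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return (np.abs(y_true - y_pred) <= 1).mean()

def qwk(y_true, y_pred, n_classes):
    """Quadratic Weighted Kappa, Equation (4), with the usual quadratic penalty matrix."""
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    w = (np.arange(n_classes)[:, None] - np.arange(n_classes)[None, :]) ** 2
    w = w / (n_classes - 1) ** 2
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()

# Toy check: predictions one class away are penalised far less than distant ones.
y_true = [0, 1, 2, 3, 3]
print(mae(y_true, [0, 1, 2, 3, 2]), mae(y_true, [0, 1, 2, 3, 0]))   # 0.2 vs 0.6
print(one_off_accuracy(y_true, [0, 1, 2, 3, 2]))                    # 1.0
print(round(qwk(y_true, [0, 1, 2, 3, 2], 4), 3))                    # about 0.918
```

The toy check at the end illustrates the ordinal behaviour: an error of one category on the ordinal scale costs much less than an error spanning several categories.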
Time Series Classification (TSC) is an extensively researched field in which a broad range of real-world problems can be addressed with excellent results. One family of approaches that performs particularly well is the so-called dictionary-based techniques, of which the Temporal Dictionary Ensemble (TDE) is the current state-of-the-art TSC approach. In many TSC problems there is a natural ordering in the labels associated with the time series. This characteristic is referred to as ordinality, and it can be exploited to improve the methods' performance. The area dealing with ordinal time series is the Time Series Ordinal Classification (TSOC) field, which is as yet largely unexplored. In this work, we present an ordinal adaptation of the TDE algorithm, known as ordinal TDE (O-TDE). For this, a comprehensive comparison using a set of 18 TSOC problems is performed. The experiments conducted show the improvement achieved by the ordinal dictionary-based approach in comparison to four existing nominal dictionary-based techniques.
A Dictionary-based approach to Time Series Ordinal Classification *
[ { "figure_caption": "7https://github.com/aeon-toolkit/aeon", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 1 :1Fig. 1: CDDs in terms of MAE (a), QWK (b), CCR (c) and 1-OFF (d). The significance value α is set to 0.1. The critical difference (CD) value is computed pairwise and is equal to 0.456.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Information about the datasets considered. OAG stands for Outlin-eAgeGroup.", "figure_data": "Dataset name# Train # Test # Classes Length # DimensionsAAPL17204315531AMZN10352595531AppliancesEnergy9542514424AtrialFibrillation151536402Covid3Month140615841DistalPhalanxOAG4001393801DistalPhalanxTW4001396801EthanolConcentration261263417513EthanolLevel504500417511GOOG7321835531META4081035531MSFT15013765531MiddlePhalanxOAG4001543801MiddlePhalanxTW3991546801ProximalPhalanxOAG4002053801ProximalPhalanxTW4002056801USASouthwestEnergyFlux32714141127USASouthwestSWH13105624287", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results achieved in terms of MAE for the 5 dictionary-based approaches considered in this work. Results are exposed as the Mean and Standard Deviation (SD) of the 30 runs: Mean SD .", "figure_data": "DatasetBOSScBOSS WEASELTDEO-TDEAAPL1.3830.042 1.3760.045 1.2920.040 1.3800.051 1 .3640.048AMZN1.3900.059 1.3790.070 1.2930.063 1.3680.068 1 .3650.062AppliancesEnergy0.5720.010 0.5710.000 0.5610.028 0 .5440.044 0.5080.058AtrialFibrillation0.9580.129 0.8020.146 0.9530.125 0.9510.178 0 .8130.130Covid3Month0.7550.063 0.7670.050 0.7770.053 0.7370.043 0 .7470.036DistalPhalanxOAG0.1800.029 0 .2040.025 0.2130.025 0.2070.032 0.2050.025DistalPhalanxTW0 .3860.033 0.3880.037 0.3650.026 0.4060.030 0.4040.041EthanolConcentration0.7250.057 0.7900.050 0.5130.043 0.5520.086 0 .5390.061EthanolLevel0.5610.040 0.5850.036 0 .4660.061 0.4780.097 0.4250.057GOOG1.0820.051 1.0980.055 1.0120.055 0 .9660.059 0.9520.063META1.1930.082 1.1850.085 1 .1270.098 1.1500.082 1.1060.072MSFT1.1010.044 1.1060.038 1 .0170.042 1.0510.040 1.0090.042MiddlePhalanxOAG0.3610.034 0.3350.036 0.3880.039 0 .3150.038 0.3140.039MiddlePhalanxTW0.6570.042 0.6040.046 0.6220.046 0.5790.047 0 .5930.040ProximalPhalanxOAG0.1760.018 0.1450.020 0.1590.018 0 .1450.020 0.1470.020ProximalPhalanxTW0.2490.027 0 .2160.020 0.2180.022 0.2120.020 0.2200.017USASouthwestEnergy0.2210.015 0.2170.012 0.1890.022 0.2230.025 0 .2050.023USASouthwestSWH0.6770.048 0.3910.021 0.3830.013 0.3920.015 0 .3850.014Best (second best)1 (1)2 (2)6 (3)3 (4)6 (8)Rank4.1673.3332 .6672.7782.000", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Rafael Ayllón-Gavilán; David Guijo-Rubio; Pedro Antonio Gutiérrez; César Hervás-Martínez
[ { "authors": "A Bagnall; J Lines; A Bostrom; J Large; E Keogh", "journal": "DATA MINING AND KNOWLEDGE DISCOVERY", "ref_id": "b0", "title": "The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances", "year": "2017" }, { "authors": "A Benavoli; G Corani; F Mangili", "journal": "The Journal of Machine Learning Research", "ref_id": "b1", "title": "Should we really use post-hoc tests based on mean-ranks?", "year": "2016" }, { "authors": "K Buza; J Koller; K Marussy", "journal": "Springer", "ref_id": "b2", "title": "Process: projection-based classification of electroencephalograph signals", "year": "2015" }, { "authors": "W Chu; S S Keerthi", "journal": "", "ref_id": "b3", "title": "New approaches to support vector ordinal regression", "year": "2005" }, { "authors": "A Dempster; F Petitjean; G I Webb", "journal": "Data Mining and Knowledge Discovery", "ref_id": "b4", "title": "Rocket: exceptionally fast and accurate time series classification using random convolutional kernels", "year": "2020" }, { "authors": "J Demšar", "journal": "The Journal of Machine learning research", "ref_id": "b5", "title": "Statistical comparisons of classifiers over multiple data sets", "year": "2006" }, { "authors": "F Fernandez-Navarro; P Campoy-Munoz; M De La Paz-Marin; C Hervas-Martinez; X Yao", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b6", "title": "Addressing the eu sovereign ratings using an ordinal regression approach", "year": "2013" }, { "authors": "J H Friedman", "journal": "The Annals of Statistics", "ref_id": "b7", "title": "Greedy function approximation: A gradient boosting machine", "year": "2001" }, { "authors": "D Guijo-Rubio; C Casanova-Mateo; J Sanz-Justo; P Gutierrez; S Cornejo-Bueno; C Hervás; S Salcedo-Sanz", "journal": "Atmospheric Research", "ref_id": "b8", "title": "Ordinal regression algorithms for the analysis of convective situations over madrid-barajas airport", "year": "2020" }, { "authors": "D Guijo-Rubio; P A Gutiérrez; A Bagnall; C Hervás-Martínez", "journal": "Springer", "ref_id": "b9", "title": "Ordinal versus nominal time series classification", "year": "2020-09-18" }, { "authors": "D Guijo-Rubio; P A Gutiérrez; A Bagnall; C Hervás-Martínez", "journal": "", "ref_id": "b10", "title": "Time series ordinal classification via shapelets", "year": "2020" }, { "authors": "F J Harris", "journal": "Proceedings of the IEEE", "ref_id": "b11", "title": "On the use of windows for harmonic analysis with the discrete fourier transform", "year": "1978" }, { "authors": "J Hills; J Lines; E Baranauskas; J Mapp; A Bagnall", "journal": "Data mining and knowledge discovery", "ref_id": "b12", "title": "Classification of time series by shapelet transformation", "year": "2014" }, { "authors": "V Kurbalija; C Von Bernstorff; H D Burkhard; J Nachtwei; M Ivanović; L Fodor", "journal": "", "ref_id": "b13", "title": "Time-series mining in a psychological domain", "year": "2012" }, { "authors": "J Large; A Bagnall; S Malinowski; R Tavenard", "journal": "Intelligent Data Analysis", "ref_id": "b14", "title": "On time series classification with dictionary-based classifiers", "year": "2019" }, { "authors": "J Large; E K Kemsley; N Wellner; I Goodall; A Bagnall", "journal": "Springer", "ref_id": "b15", "title": "Detecting forged alcohol non-invasively through vibrational spectroscopy and machine learning", "year": "2018" }, { "authors": "J Lin; E Keogh; L Wei; S Lonardi", "journal": "Data Mining and knowledge discovery", "ref_id": "b16", 
"title": "Experiencing sax: a novel symbolic representation of time series", "year": "2007" }, { "authors": "J Lin; R Khade; Y Li", "journal": "Journal of Intelligent Information Systems", "ref_id": "b17", "title": "Rotation-invariant similarity in time series using bagof-patterns representation", "year": "2012" }, { "authors": "J Lines; S Taylor; A Bagnall", "journal": "ACM Transactions on Knowledge Discovery from Data", "ref_id": "b18", "title": "Time series classification with hive-cote: The hierarchical vote collective of transformation-based ensembles", "year": "2018" }, { "authors": "Y Liu; Y Wang; A W K Kong", "journal": "Image and Vision Computing", "ref_id": "b19", "title": "Pixel-wise ordinal classification for salient object grading", "year": "2021" }, { "authors": "P Malhotra; L Vig; G Shroff; P Agarwal", "journal": "", "ref_id": "b20", "title": "Long short term memory networks for anomaly detection in time series", "year": "2015" }, { "authors": "P Mccullagh", "journal": "Journal of the Royal Statistical Society: Series B (Methodological)", "ref_id": "b21", "title": "Regression models for ordinal data", "year": "1980" }, { "authors": "M Middlehurst; J Large; G Cawley; A Bagnall", "journal": "Springer", "ref_id": "b22", "title": "The temporal dictionary ensemble (tde) classifier for time series classification", "year": "2020" }, { "authors": "M Middlehurst; J Large; M Flynn; J Lines; A Bostrom; A Bagnall", "journal": "MACHINE LEARNING", "ref_id": "b23", "title": "Hivecote 2.0: a new meta ensemble for time series classification", "year": "2021" }, { "authors": "M Middlehurst; W Vickers; A Bagnall", "journal": "Springer International Publishing", "ref_id": "b24", "title": "Scalable dictionary classifiers for time series classification", "year": "2019" }, { "authors": "P Schäfer", "journal": "Data Mining and Knowledge Discovery", "ref_id": "b25", "title": "The boss is concerned with time series classification in the presence of noise", "year": "2015" }, { "authors": "P Schäfer; U Leser", "journal": "", "ref_id": "b26", "title": "Fast and accurate time series classification with weasel", "year": "2017" }, { "authors": "E Schulz; M Speekenbrink; A Krause", "journal": "Journal of Mathematical Psychology", "ref_id": "b27", "title": "A tutorial on gaussian process regression: Modelling, exploring, and exploiting functions", "year": "2018" }, { "authors": "V M Vargas; P A Gutiérrez; R Rosati; L Romeo; E Frontoni; C Hervás-Martínez", "journal": "Computers in Industry", "ref_id": "b28", "title": "Deep learning based hierarchical classifier for weapon stock aesthetic quality control assessment", "year": "2023" }, { "authors": "Z Wang; W Yan; T Oates", "journal": "IEEE", "ref_id": "b29", "title": "Time series classification from scratch with deep neural networks: A strong baseline", "year": "2017" }, { "authors": "Z Zhou; B Huang; R Zhang; M Yin; C Liu; Y Liu; Z Yi; X Wu", "journal": "IEEE Transactions on Instrumentation and Measurement", "ref_id": "b30", "title": "Methods to recognize depth of hard inclusions in soft tissue using ordinal classification for robotic palpation", "year": "2022" } ]
[ { "formula_coordinates": [ 5, 214.92, 402.4, 261.43, 54.43 ], "formula_id": "formula_0", "formula_text": "A =      real 1,1 imag 1,1 . . . real 1,c imag 1,c real 2,1 imag 2,1 . . . real 2,c imag 2,c . . . . . . . . . . . . . . . real N,1 imag N,1 . . . real N,c imag N,c      , (1" }, { "formula_coordinates": [ 5, 476.35, 424.98, 4.24, 8.74 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 5, 134.77, 558.35, 345.83, 20.72 ], "formula_id": "formula_2", "formula_text": "C im of A is mapped to a symbol α t of Σ if β m,t-1 ≤ C i,m ≤ β m,t , with t ∈ {1, 2, . . . , T }." }, { "formula_coordinates": [ 5, 261.2, 646.03, 219.39, 23.23 ], "formula_id": "formula_3", "formula_text": "i = w l • w r w l + w r (ȳ l -ȳr ),(2)" }, { "formula_coordinates": [ 8, 257.12, 149.02, 223.48, 30.32 ], "formula_id": "formula_4", "formula_text": "MAE = 1 N N i=1 | ŷi -y i |,(3)" }, { "formula_coordinates": [ 8, 249.95, 266.21, 230.64, 31.02 ], "formula_id": "formula_5", "formula_text": "QWK = 1 - N i,j ω i,j O i,j N i,j ω i,j E i,j ,(4)" }, { "formula_coordinates": [ 8, 170.37, 316.56, 55.84, 14.6 ], "formula_id": "formula_6", "formula_text": "E ij = Oi•O•j N" } ]
10.1109/CVPR.2016.90
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b7", "b9", "b7", "b10" ], "table_ref": [], "text": "In 2013, Szegedy et al. [1] discovered adversarial examples, which can fool DNN by adding an imperceptible perturbation to a clean sample and raised public concerns about DNN's reliability. Since then, many adversarial defense methods, including adversarial training [1][2][3], adversarial example detection [4,5], etc., have been proposed to improve DNN's robustness to adversarial examples, among which adversarial training is considered the most effective.\nBesides improving DNN's adversarial robustness, another advantage that adversarial training has is found. Chalasani et al. [6] showed that l ∞ -adversarial training tends to produce sparse and stable Integrated Gradients-based [7] attribution tensors. They think such sparseness means producing a concise explanation, where only the input features with significant contributions are included. In a word, the suppression of non-robust or non-significant features is considered a benefit that adversarial training brings.\nHowever, a recent study showed that such suppression will result in unrealized threats. Duan et al. [8] found the inequality phenomena on the input attribution map that occur during the l ∞ -adversarial training, and these phenomena make the model less reliable. To be specific, they showed that the l ∞ -adversarially trained model produces attribution maps with higher Gini value [9] compared to the standard-trained model and is more vulnerable than the standard-trained model when a few pixels with high attribution values are perturbed by i.i.d. noise or are occluded. While DNN's adversarial robustness is crucial for safety, sacrificing DNN's robustness to more practical perturbations, like i.i.d. random noise and occlusion, is not wise. Thus, our goals are to improve such practical robustness by releasing the inequality phenomena and to preserve the adversarial robustness gained by l ∞ -adversarial training. In this paper, we proposed a method called Input Gradient Distillation (IGD) to achieve our goals. During l ∞ -adversarial training, IGD uses Cosine Similarity to align input gradients between the fixed standard-trained model and the model being l ∞ -adversarially trained. Also, a theoretical analysis is conducted to prove that a more equal saliency map leads to a smaller deviation in class score brought by input space noise or occlusion. Experimental results show that IGD can dramatically release the inequality phenomena in l ∞ -adversarial training and improve the l ∞ -adversarially trained model's robustness to attacks devised by Duan et al. [8] while preserving the adversarial robustness of the l ∞ -adversarially trained model. Specifically, attacked by inductive noise, IGD decreases l ∞ -adversarially trained model's error rate from around 70% to around 10% on Imagenet-100. After training with IGD, the error rate of the model attacked by inductive occlusion drops from around 40.76% to 24.23% on Imagenet-100, and the IGD-trained model generalizes well to different occlusion colors compared to CutOut [10] used by Duan et al. [8]. We also test our method on noisy images of Imagenet-C [11]. Results show that IGD-trained models have better robustness to Gaussian noise, impulse noise, and shot noise compared to the PGDAT-trained model with up to 21.11% descent in the error rate." 
}, { "figure_ref": [], "heading": "Main contributions of our paper are as follows:", "publication_ref": [], "table_ref": [], "text": "• We propose a method, called \"Input Gradient Distillation\"(IGD for short), to release the inequality phenomena in l ∞ -adversarial training. Experiments show that the model trained with IGD tends to have a more equal attribution map and better robustness to i.i.d. noise and occlusion compared to the PGDAT-trained model while preserving adversarial robustness. • We formally analyze the relationship between the Gini value(a metric to evaluate inequality) of saliency maps and models' robustness to noise and occlusion and claim that the equal decision pattern promotes the model's robustness by suppressing the deviation of the class score. We also explain why such robustness improvement is not notable on low-resolution datasets like CIFAR100." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b7" ], "table_ref": [], "text": "In this section, we introduce works related to this paper. In subsection 2.1, we introduce the concept of l ∞ -adversarial training. In subsection 2.2, we introduce works that utilized input gradient for adversarial defense. In subsection 2.3, we explain how the inequality phenomena in l ∞ -adversarial training are defined and measured. In subsection 2.4, we introduce attack algorithms devised by Duan et al. [8]." }, { "figure_ref": [], "heading": "L ∞ -adversarial training", "publication_ref": [ "b1", "b1" ], "table_ref": [], "text": "The main idea of adversarial training is adding adversarial examples in the training phase. Madry et al. [2] improved DNN's adversarial robustness by solving a min-max optimization problem:\nmin θ max δ L(f θ (x + δ), y), s.t. δ p ≤(1)\nwhere the inner maximum problem can be solved by PGD algorithm [2]. If we define p = ∞, then the training is called l ∞ -adversarial training. In this paper, we only use PGD to solve the inner maximum problem and set p = ∞." }, { "figure_ref": [], "heading": "Using input gradients for adversarial defense", "publication_ref": [ "b11", "b12", "b13", "b14", "b15" ], "table_ref": [], "text": "Using input gradients for adversarial defense is not a completely new idea. Previous works [12,13] have attempted to improve DNN's adversarial robustness by penalizing the L 2 norm of input gradients.\nOther works, such as [14][15][16], aimed to align the input gradient of the model being trained with some external guidance in order to obtain or enhance adversarial robustness. The studies mentioned above demonstrate that incorporating input gradients into the loss function can effectively enhance the model performance. " }, { "figure_ref": [], "heading": "Inequality phenomena in L ∞ -adversarial training", "publication_ref": [ "b7", "b8", "b7" ], "table_ref": [], "text": "As mentioned in section 1, Duan et al. [8] found that compared to standard training, l ∞ -adversarial training will make DNN produce a more unequal input attribution map(For brevity, we use A f (x), where f stands for DNN and x stands for input). They used Gini value [9] to evaluate such inequality phenomena, formally:\nGini(Φ) = 1 n * (n + 1 -2 * n i=1 (n + 1 -i) * φ i n i=1 φ i )(2)\nwhere Φ = {φ i , i = 1...n | φ i ≤ φ i+1 }. The higher the Gini value is, the more unequal the Φ is.\nDuan et al. [8] defined two types of inequality: Global inequality and regional inequality. These two types of inequality are both measured by the Gini value. 
For global inequality, they calculate the Gini value of A f (x)(Gini(A f (x)) for short). For regional inequality, they divide A f (x) into blocks with size r × r, sum up values within the block to get a downsampled attribution map(A f r (x) for short) and calculate the Gini value of A f r (x)(Gini(A f r (x)) for short)." }, { "figure_ref": [], "heading": "Attacks devised for the inequality phenomena", "publication_ref": [ "b7" ], "table_ref": [], "text": "To reveal threats brought by the inequality phenomena, Duan et al. [8] devised two attack algorithms to attack l ∞ -adversarially trained models, called Inductive noise attack and Inductive occlusion attack." }, { "figure_ref": [], "heading": "Inductive Noise Attack", "publication_ref": [], "table_ref": [], "text": "Inductive noise attack(INA for short) uses A f (x) as guidance to perturb samples with Gaussian noise σ ∈ N (0, 1) in an order determined by the importance of pixels (features). Formally, we have: \nx = x + M *" }, { "figure_ref": [], "heading": "Inductive Occlusion Attack", "publication_ref": [ "b7", "b7" ], "table_ref": [], "text": "Inductive occlusion attack(IOA) gradually occludes regions with high attribution values. In each iteration, IOA selects the n biggest pixels as regions' central points and occludes regions with (2r + 1) × (2r + 1) pure color. Duan et al. [8] set N and R to limit n and r, and chose black, gray, and white colors to occlude the image. For simplicity, we refer to IOA with black, gray, and white occlusion colors as IOA-B, IOA-G, and IOA-W respectively.\nBesides two inductive methods mentioned above, Duan et al. [8] also used random noise(RN for short) to attack DNN, which randomly selects several pixels and perturbs them with Gaussian noise σ ∈ N (0, 1)." }, { "figure_ref": [ "fig_0" ], "heading": "Methodology", "publication_ref": [ "b11", "b12", "b1" ], "table_ref": [], "text": "Now we introduce our approach, Input Gradient Distillation, which can release the inequality phenomena in l ∞ -adversarial training. The main idea of IGD is to force the l ∞ -adversarially trained model to have attribution maps similar to the standard-trained model's. The overall framework is shown in Figure 1. During the l ∞ -adversarial training, we feed clean inputs into a fixed standardtrained model and our target model, and align their input gradients.\nAs for the choice of the distance metric, our goal is not to change the absolute value of the gradient, but to change the relative value, as we don't want to alter the model's adversarial robustness, and [12,13] claimed that the norm of the input gradient has a strong correlation with adversarial robustness. Thus, when viewing a 2-D attribution map as a 1-D attribution vector, we align the direction of the attribution vector by using cosine similarity because we have Gini(k * φ) = Gini(φ) and\ncosine_similarity(k * v, v) = 1, where k * φ = {k * φ i , i = 1...n | φ i ≤ φ i+1 }.\nFormally, we have our loss function:\nLoss = CE(y, f θ Adv (x )) -λ * cos(F latten( ∂f y θ Adv (x) ∂x ), F latten( ∂f y θ Std (x) ∂x ))(3)\nwhere x is a adversarial example, λ is a coefficient. The first item of Equation 3 is the same as the loss proposed by Madry et al. [2]. By optimizing the second item of Equation 3, we force the l ∞ -adversarially trained model's input gradient to be close to the standard-trained model's." 
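Equation (3) above maps almost directly onto a few lines of PyTorch. The sketch below is an illustrative re-implementation rather than the authors' code: `model_std` is the frozen standard-trained network, `model_adv` is the network being l∞-adversarially trained, `x_adv` is an adversarial example generated as in Equation (1), and the true-class selection, batching and mean reduction are assumptions about details the text does not spell out.

```python
import torch
import torch.nn.functional as F

def igd_loss(model_adv, model_std, x, x_adv, y, lam=2.0):
    """Sketch of Eq. (3): CE on the adversarial example minus lambda times the
    cosine similarity between both models' input gradients on the clean input."""
    ce = F.cross_entropy(model_adv(x_adv), y)

    # Input gradient of the true-class logit for the model being trained;
    # create_graph=True so the alignment term can backpropagate to its weights.
    x_req = x.clone().requires_grad_(True)
    logit_adv = model_adv(x_req).gather(1, y[:, None]).sum()
    g_adv = torch.autograd.grad(logit_adv, x_req, create_graph=True)[0]

    # Input gradient of the frozen standard-trained model (no second-order graph needed).
    x_ref = x.clone().requires_grad_(True)
    logit_std = model_std(x_ref).gather(1, y[:, None]).sum()
    g_std = torch.autograd.grad(logit_std, x_ref)[0].detach()

    cos = F.cosine_similarity(g_adv.flatten(1), g_std.flatten(1), dim=1).mean()
    return ce - lam * cos
```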
}, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Experiments", "publication_ref": [ "b16", "b17", "b18", "b7" ], "table_ref": [ "tab_1", "tab_1" ], "text": "In this section, we will introduce the details of our experiment and analyze the performance of Input Gradient Distillation (IGD). In subsection 4.1, we will show that IGD can release the inequality phenomena in l ∞ -adversarial training and visualize attribution maps. In subsection 4.2 and subsection 4.3, we will show IGD-trained models' robustness to INA and IOA introduced in subsection 2.4 and comparisons to other methods. In subsection 4.4, we compare models' robustness to RN and noisy images of Imagenet-C. Our experiments are mainly conducted on CIFAR100 [17] and Imagenet-100 [18] using ResNet18 [19]. We mainly use Saliency map to generate attribution maps, where A f (x) = ∂f y (x) ∂x . The setup of our experiment is shown in subsection A.2. a lot compared to the PGDAT-trained model, which accords with our assumption that IGD aligns the direction and has little influence on the norm of the saliency map. Though IGD can release global inequality well, it can not release regional inequality effectively. This means that pixels with a high attribution value tend to cluster in a few regions. We also report the Gini value of A f (x) and A f r (x) generated by other four attribution methods in subsection A.10, where our findings in Table 1 still hold. As for preserving the model's adversarial robustness, the adversarial accuracy only drops 1-2% on Imagenet-100 and CIFAR100. IGD can even slightly promote the model's standard accuracy on both Imagenet-100 and CIFAR100.\nFor a more intuitive explanation of how IGD works, we visualize attribution maps on Imagenet-100 in Figure 2. Pixels on visualized attribution maps with warm colors have high attribution values and actually dominate the prediction [8]. In Figure 2, warm color pixels in attribution maps of PGDAT are much fewer than Standard's and IGDs', which indicates IGD-trained models have a relatively equal decision pattern and explains IGD-trained models' lower Gini(A f (x)). As for the PGDAT-trained model with CutOut, its attribution map does not have visual differences from the PGDAT-trained model, which is consistent with our findings in Table 1. We can also find that there are fewer warm color pixels on the background in IGDs' attribution maps than in standard's. The warm color pixels mainly lie on perceptually-aligned areas like bird wings. This explains IGD's relatively high Gini(A f r (x)): important pixels tend to cluster in a few regions. However, with large λ, the area of warm color pixels begins to grow, and Gini(A f r (x)) decreases. More visualizations are shown in subsection A.13.\nTo conclude, unlike PGDAT's focusing on \"robust pixels\", models trained with IGD tend to focus on \"robust regions\", which contain many important pixels. The equality within the robust region releases global inequality but may not be effective in releasing regional inequality. As a comparison, CutOut will force the model to find robust pixels in different regions. However, it can't release global inequality, because its prediction still relies on a few pixels. datasets and the gap of Gini(A f (x))(see subsection 5.2). On CIFAR100 and Imagenet-100, all IGD-and PGDAT-trained models have a higher Gini(A f (x)) than the standard-trained model, but some of them still have better robustness to INA. 
This indicates that a higher Gini(A f (x)) does not necessarily indicates worse robustness to INA, which we will explain in detail in section 5. Error rates of models attacked by INA with different attribution methods are shown in subsection A.11." }, { "figure_ref": [], "heading": "Robustness to IOA", "publication_ref": [ "b7" ], "table_ref": [ "tab_3" ], "text": "In this part, we compare the IOA-robustness of models trained with different methods. Results are shown in Table 3. On CIFAR100, both PGDAT-and IGD-trained models have better robustness to IOA than the standard-trained model and IGD slightly improves IOA-robustness compared to the PGDAT-trained model. On Imagenet-100, the PGDAT-trained model has worse IOA-robustness than the standard-trained, which is consistent with Duan et al. [8]'s conclusion. Because of IGD's releasing the inequality phenomena, most of the models trained with IGD tend to have competitive robustness to IOA. When λ=2 to 4, IGD-trained models have better robustness to IOA-G and IOA-W compared to models trained by other methods. We also notice that with λ = 1, IGD only decreases error rates by up to 2 % or even degrades IOA-robustness(see subsection A.12) compared to the PGDAT-trained model, which we will discuss in subsection 5.1. Another interesting finding is that IOA-robustness gained by IGD has better generalization than that gained by CutOut. The PGDAT-trained model using CutOut has competitive robustness to IOA-B but has similar or even worse robustness to IOA-G and IOA-W compared to the PGDAT-trained model. Error rates of models attacked by IOA with other attribution methods are shown in subsection A.12." }, { "figure_ref": [], "heading": "Robustness to i.i.d. random noise", "publication_ref": [ "b10" ], "table_ref": [ "tab_5", "tab_6", "tab_7" ], "text": "Apart from inductive attacks introduced in subsection 2.4, we use RN to evaluate the robustness of models trained on CIFAR100 and Imagenet-100. We also use noisy images in the subset of Imagenet-C [11], which has the same classes as Imagnet-100, to evaluate the robustness of models trained on Imagenet-100. The main difference between noisy images of Imagenet-C and RN is that the variance of the random noise in Imagenet-C is smaller than RN. Results on RN are shown in Figure 3(c) and Figure 4(c). Results on noisy images of Imagenet-C are shown in Table 5, Table 6, and Table 7.\nFor RN, on Imagenet-100, we can find that though pixels attacked by i.i.d. noise are randomly selected, the PGDAT-trained model is still more vulnerable than the standard-trained model. Like results on INA, IGD can promote l ∞ -adversarially trained model's robustness to RN. On CIFAR100, the improvement in RN-robustness brought by IGD is not as notable as that on Imagenet-100. As for results on Imagenet-C, We can see that all IGD-trained models have lower error rates compared to the standard-trained model and the PGDAT-trained model. IGD can reduce the error rate of models by up to 21.11% compared to the PGDAT-trained model. We also find that CutOut worsens the PGDAT-trained model's robustness to noise and increases the PGDAT-trained model's error rate by up to 4.9%. Unlike results shown in subsection 4.2, the standard-trained model has worse robustness to noisy images in Imagenet-C than the PGDAT-trained model, which we will discuss in subsection 5.1." 
}, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_5", "fig_4" ], "heading": "Explaination of IGD-and standard-trained models' robustness to noise and occlusion", "publication_ref": [ "b19" ], "table_ref": [ "tab_4", "tab_2", "tab_4", "tab_5", "tab_6", "tab_7", "tab_4", "tab_3" ], "text": "In this paper, we test two types of noise. One is the additive noise like Gaussian noise and Poisson noise, where x = x + δ * m, and the other is the multiplicative-additive noise like INA2 and impulse noise, where x = x * (1 -m) + δ * m. For conciseness, we also view occlusion as a kind of noise, whose formula of x is the same as the multiplicative-additive noise's.\nFor simplicity, we first consider a linear score model f\n( x) = w T x + b( w, x ∈ R n×1 , and b ∈ R) attacked by some i.i.d. additive noise δ ∈ R n×1 (δ i ∼ D(µ δ , σ 2 δ )) that is masked by m ∈ {0, 1} n×1 having m 0 = k. Formally, we have y = w T ( x + δ * m) + b = y + w T ( δ * m) = y + y δ , where y δ ∼ D(µ δ k i=1 w di , σ 2 δ k i=1 w 2 di )(d i < d i+1 and 1 ≤ d i ≤ n), y\nis the clean sample's score, and w can be viewd as model f 's saliency map [20].\nFor Gaussian noise and Poisson noise, we have µ δ = 0 and y δ ∼ D(0, σ 2 δ k i=1 w 2 di ). We can suppress y δ 's impact in two ways: one is enlarging y to submerge y δ , and the other is directly suppressing y δ . We mainly discuss the latter one. Because IGD does not significantly change clean samples' confidence (see Table 4) and ∂f y (x) ∂x 1\n(see Table 2), we assume that y and w * m 1 are fixed. Thus, we suppress E (y -y) 2 = σ 2 δ k i=1 w 2 di by solving:\nminimize k i=1 w 2 di , s.t. k i=1 |w di | -C = 0\nUsing Lagrange Multiplier Method, we have |w di | = C k . This indicates that if the w * m 1 is fixed or kept on certain scale, then f has the best additive noise robustness when w * m is totally equal, where Gini( w * m) = 0. Also, we have \nC 2 k ≤ k i=1 w 2 di ≤ C 2 ,\nk i=1 w di 2 = k i=1 w 2 di + 2 * ∆ * (∆ -|w b | + |w a |) ≤ k i=1 w 2 di -2 * ∆ 2 < k i=1 w 2 di\nConsequently, when w * m 1 is fixed, decreasing Gini( w * m) means decreasing y σ 's standard deviation. For a more complicated non-linear model like CNN, we can approximate it as a linear score model by computing first-order Taylor expansion:f y (x) ≈ w T x + b, where our proof may still hold.\nAs for the multiplicative-additive noise, we can transform it into:x = x+(δ -x) * m. Formally, given that x ∼ D(µ x , σ 2 x ), we have y = y + y δ + y -x and E (y Multiple sampling and averaging are conducted to reduce randomness. In Figure 6, the difference in ( k i=1 w di ) 2 between the PGDAT-trained and the IGDtrained model is not significant, indicating that when the model is attacked by the multiplicativeadditive random noise, the ( k i=1 w di ) 2 is not the main factor varying robustness of different models. In Figure 5, we observe that all IGD-trained models have lower k i=1 w 2 di than the PGDAT-trained model. This is consistent with IGD-trained model's better robustness to noise and our proof above. We also measure di under all number of attacked pixels. However, results in Figure 3 shows that the standard-trained model has better robustness to noise than the PGDAT-trained model. We think this is because the standard-trained model correctly classifies clean samples with high confidence(see Table 4). 
Even though the standard-trained model's class score is affected by noises with large deviations, the clean sample's class score is large enough to submerge the noise, allowing the standard-trained model to correctly classify the noisy sample. This also boosts IGD-trained models' robustness, as IGD can slightly promote confidence compared to the PGDAT. On Imagenet-C, however, the smaller σ δ weakens the influence of k i=1 w 2 di . Thus, when tested on Imagenet-C, the noise-robustness of the standard-trained model becomes worse than the PGDAT-trained model's, which is shown in Table 5, Table 6, and Table 7.\n-y) 2 = (µ δ -µ x ) 2 ( k i=1 w di ) 2 + (σ 2 δ + σ 2 x ) k i=1 w 2 di . Unlike\nAs for the occlusion, like the multiplicative-additive noise, we have y = y + y δ + y -x . However, unlike the other two types of noise, the occlusion has\nσ δ = 0, indicating E (y -y) 2 = (µ δ - µ x ) 2 ( k i=1 w di ) 2 + σ 2 x k i=1 w 2 di .\nAs mentioned in subsection 4.1, pixels that are considered important by the IGD-trained model tend to cluster when λ is not high. This may result in high ( k i=1 w di ) 2 (see IGD(λ=1) in Table 4) and worse or non-improved robustness to occlusion, which is shown in Table 3. Moreover, the smaller coefficient of k i=1 w 2 di weakens the robustness gained by the equal decision pattern, which explains why IOA-robustness gained by IGD is not as notable as the noise-robustness. When λ is increased and regional inequality starts releasing, the relatively low\n( k i=1 w di ) 2 and k i=1 w 2\ndi weaken the effect of occlusion, and the IGD-trained models' robustness to all three types of occlusion are improved.\nTo conclude, the IGD-trained model and the standard-trained model gain robustness to i.i.d. noise and occlusion because of their equal decision patterns' suppressing the deviation of class score, and the higher correct classification confidence further boosts their robustness." }, { "figure_ref": [], "heading": "Conjecture on low-resolution dataset", "publication_ref": [ "b20", "b17" ], "table_ref": [ "tab_4" ], "text": "On low-resolution datasets like CIFAR100, though the standard-trained model has a more equal decision pattern, it does not have better robustness to i.i.d. noise and occlusion. Apart from the small gap of Gini(A f (x))(see subsection A.7), we hypothesize that more pixels means the equality of the decision pattern being more important. In subsection 5.1, we prove that the Gini value has the same monotonicity as the deviation of the class score. After changing |w a | and |w b | by δ, we have\n∆ k i=1 w 2 d i ∆Gini( w * m) = (δ-|w b |+|wa|) * k 1 |wd i |\na-b * k, which indicates when the gap of Gini value is fixed, a larger k means a larger descent on the deviation of the class score and greater importance of the equality of decision pattern. In Figure 3(c), when k is low and λ=2, the IGD-trained model has lower error rates than the standard-trained. However, as k grows, the standard-trained model tend to have better RN-robustness compared to the IGD-trained model. This may indicates that the importance of the equal decision pattern grows as the number of pixels grows.\nAdditionally, we acknowledge that the equality of decision pattern and classification confidence are not the only two factors that have influences on the model's noise-and occlusion-robustness. 
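The Lagrange-multiplier argument above — that, for a fixed l1 mass Σ|w_di| = C, the sum of squares Σ w²_di is minimised by the perfectly equal solution |w_di| = C/k, and that the Gini value moves in the same direction as that sum — can be sanity-checked numerically. The sketch below is an illustration of the inequality only, not a reproduction of the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
k, C = 100, 1.0          # number of attacked pixels, fixed l1 mass of w * m

def gini(phi):
    phi = np.sort(np.abs(phi)); n = phi.size; i = np.arange(1, n + 1)
    return (1 / n) * (n + 1 - 2 * np.sum((n + 1 - i) * phi) / np.sum(phi))

# Random unequal weight vectors with the same l1 mass have a larger sum of squares ...
for _ in range(3):
    w = rng.exponential(size=k)
    w *= C / w.sum()                     # rescale so that sum |w_i| = C
    print(f"Gini={gini(w):.3f}  sum w^2={np.sum(w ** 2):.5f}")

# ... while the equal solution |w_i| = C/k attains the lower bound C^2 / k.
w_equal = np.full(k, C / k)
print(f"Gini={gini(w_equal):.3f}  sum w^2={np.sum(w_equal ** 2):.5f}  (bound C^2/k={C ** 2 / k:.5f})")
```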
In Table 4, though PGDAT-trained model with CutOut has larger k i=1 w 2 di and ( k i=1 w di ) 2 , and has higher error rate on IOA-G and IOA-W, its error rates on IOA-B is much lower than IGD(λ=1, 2), which suggests that there are other factors, like the frequency of perturbation [21], that influence model's noise-and occlusion-robustness. These factors may overwhelm the equality of decision pattern and classification confidence when the dataset's resolution is low. To validate our conjecture, we do additional experiments on TinyImaganet [18], which contains images with resolution 64*64. Results are shown in subsection A.1 and subsection A.5. We can find that improvement of INArobustness gained by IGD on TinyImagenet is more notable than that on CIFAR100. This may indicates that vulnerability caused by the unequal decision pattern is more notable on high-resolution datasets than on low-resolution datasets." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b7", "b19" ], "table_ref": [], "text": "In this paper, we propose a simple yet effective method, called Input Gradient Distillation (IGD), to release the inequality phenomena in l ∞ -adversarial training. While preserving the model's adversarial robustness, IGD can make l ∞ -adversarially trained model's decision pattern equal and improve the l ∞ -adversarially trained model's robustness to inductive attacks [8], i.i.d. random noise, and occlusion. Such improved robustness has better generalization compared to that gained by CutOut. We also formally explain how IGD-and standard-trained model gain robustness to noise and occlusion and claim that the equal decision pattern promotes the model's robustness by suppressing the deviation of the class score. Also, we find the unsatisfactory performance of IGD on low-resolution datasets and attribute it to the small scale of pixels and the presence of other factors. We hope that this work can inspire future research to alleviate the issue of clustering important pixels and provide a better explanation of how the equality of the decision pattern interacts with other factors. will clarify attack parameters if necessary. When using IOA, we set N=10 and R=4 for CIFAR100 and set R=20 for Imagnet-100. We mainly use Saliency Map [20] to generate attribution maps and will note if other attribution methods are used. When measuring regional inequality, we set r = 16 for Imagenet-100 and r = 4 for CIFAR100." }, { "figure_ref": [], "heading": "A.3 Failure case on CIFAR10", "publication_ref": [], "table_ref": [], "text": "IGD is designed to align the attribution vector's direction rather than to alter the norm of the attribution vector significantly. In most cases, like on Imagenet-100, CIFAR100, and TinyImagenet, IGD does well. On CIFAR10, however, we find that IGD alters the norm of the attribution vector compared to other cases and may worsen the model's robustness. " }, { "figure_ref": [], "heading": "A.6 How to choose λ for IGD", "publication_ref": [ "b27", "b17", "b16" ], "table_ref": [ "tab_1", "tab_1", "tab_9", "tab_1", "tab_9" ], "text": "In subsection 4.1, we show that on Imagenet-100, within a specific interval of λ, the descent of Gini(A f (x)) and error rate to INA and RN is huge, and the adversarial robustness does not drop a lot. Thus, if a user wants to promote the model's INA-and RN-robustness while preserving the adversarial robustness, stop increasing λ when Gini(A f (x)) is no longer rapidly decreasing. 
If a user wants to promote the model's IOA-robustness, choose the λ when Gini(A f r (x)) stops decreasing or the adversarial robustness is below the user's tolerance. Specifically, we suggest user try λ = 2 on datasets with the similar resolution as Imagenet-100.\nA.7 The small gap of Gini(A f (x)) on low-resolution datasets Some may question whether the small gap of Gini(A f (x)) between the standard-and l ∞adversarially trained model is common on low resolution datasets and will it be influenced by the number of classes? To answer these, we evaluate Gini(A f (x)) and Gini(A f r (x)) of Resnet50 trained on CIFAR10 provided by Engstrom et al. [28] and Resnet18 trained on TinyImagenet [18] and CIFAR100 [17] by us. Results are shown in Table 10. On CIFAR10, CIFAR100, and TinyImagenet, the difference of Gini(A f (x)) between standard-trained and l ∞ -adversarially trained model is still much smaller than that on Imagenet-100(see Table 1 andTable 11). This may indicates that the small gap of Gini(A f (x)) between the standard-trained model and the l ∞ -adversarially trained model is common. Also, we find that the gap of Gini(A f r (x)) in Table 10 is as large as that on Imagenet-100. We believe that on higher resolution datasets like Imagenet-100, it's the inequality A.10 Gini(A f (x)) and Gini(A f r (x)) using other attribution method Table 11: Global Gini value and regional Gini value across different datasets, training methods, and attribution methods." }, { "figure_ref": [], "heading": "Dataset Method", "publication_ref": [], "table_ref": [], "text": "Attr. method Integrated Gradients Input X Graidient GradShap SmoothGrad\nGini(A f (x)) Gini(A f r (x)) Gini(A f (x)) Gini(A f r (x)) Gini(A f (x)) Gini(A f r (x)) Gini(A f (x)) Gini(A f r (x)) \nCIFAR100" }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "A.1 Rsults on TinyImagenet We post error rates of Resnet18 attacked by INA nad RN on TinyImagenet. We also report Average confidence, Gini(A f (x)), and ∂f y (x)\nof Resnet18 across different methods on TinyImagenet." }, { "figure_ref": [], "heading": "A.2 Experiment setting", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.2.1 Datsets, models and attribution methods", "publication_ref": [ "b16", "b17", "b21", "b18", "b22", "b6", "b23", "b24", "b19", "b25" ], "table_ref": [], "text": "We perform our experiment on CIFAR100 [17] and Imagenet-100 [18](same setting as [22]) for lower computation complexity. We use Resnet18 [19] for both CIFAR100 and Imagenet-100. We use Input X Gradients [23], Integrated Gradients [7], Shapley Value [24], SmoothGrad [25], and Saliency map [20], which are provided by Kokhlikyan et al. [26]." }, { "figure_ref": [], "heading": "A.2.2 Training setup", "publication_ref": [ "b1" ], "table_ref": [], "text": "For fairness, all of the models are trained for 150 Epochs by SGD with momentum 0.9 and weight decay 5e-4. The initial learning rate is 0.1 and is reduced on the plateau with factor 0.1. All adversarial examples used during the training phase are generated by PGD [2] with = 8/255, stepsize = 2/255, and iter = 10. We choose the standard-trained model with the best standard accuracy and the l ∞adversarially-or IGD-trained model with the best PGD-10 accuracy for evaluation." 
}, { "figure_ref": [], "heading": "A.2.3 Evaluation setup", "publication_ref": [ "b26", "b10", "b7" ], "table_ref": [], "text": "To evaluate the adversarial robustness of trained models fairly, we use AutoAttack [27] with = 8/255. We use error rate to evaluate models' robustness to attacks introduced in subsection 2.4 and on noisy images of Imagenet-C [11] where we select classes in Imagenet-100. When we compare the robustness of different models, the error rate is the proportion of misclassified samples among samples correctly classified by all models being compared. We follow the settings of the attack algorithm in [8] and within the region that further exapnds the gap of Gini(A f (x)) and the downsample operation of calculating Gini(A f r (x)) cover up such inequality. Thus, we think that it's the downsample operation, which is usually used in building up low-resolution datasets like CIFAR, that reduces the gap of Gini(A f (x)).\nTable 10: Global Gini value and regional Gini value of standard-trained and l∞-adversarially trained model on CIFAR10 and TinyImagenet. r is set to 4 on CIFAR10 and CIFAR100, and is set to 8 on TinyImagenet. " }, { "figure_ref": [], "heading": "A.8 IGD's training time", "publication_ref": [], "table_ref": [], "text": "Theoretically, compared to PGDAT, IGD only needs extra cost to obtain the standard-trained model's input gradient. The input gradient of the model being l ∞ -adversarially trained can be obtained by the first iteration of the adversarial attack, like PGD. Given that we run k attack's iterations for each batch and the standard-trained model has the same scale as the model being trained, the total time expense increases by above k+1 k for the second-order derivative may has extra cost. In our implementation, we do not make use of the first iteration to get the clean sample's gradient for low coupling. On Imagenet-100, using NVDIA A100 40GB, PGDAT takes 13 minutes for an epoch, and IGD takes 18 minutes for an epoch. Both PGDAT and IGD use PGD-10 to generate adversarial examples." }, { "figure_ref": [], "heading": "A.9 How to decrease Gini value monotonically", "publication_ref": [], "table_ref": [], "text": "For conciseness, let's assume that ∀i ∈ n, w i ≥ 0 and w i ≤ w i+1 , because when calculating the Gini value, we usually take the absolute value of w. First, if we do not change two elements(w a < w b and a < b) positions and we decrease the bigger one by δ while increasing the smaller one by δ, the Gini value is decreased. Saying we have a new popluation w , this can be proved because\nThus, this results in Equation 2's descent.\nNext, we prove that the operation, where we select two elements(w a < w b ), increase w a by ∆, and decrease w b by ∆, which somehow change w a and w b 's position, can be decomposed into a sequence of operations that do not change elements' relative positions.\nSaying after all operations, w a and w b becomes w a and w b (a ≤ a ≤ b ≤ b). In each operation, we try to decrease w b by w a+1 -w a , and add it to w a . This operation will have two results. One is we successfully perform this operation without changing any element's position, which decreases the Gini value, and the next operation between w a and w b is the same as the operation between w a+1 and w b . The other is that decreasing w b will result in w b 's being smaller than w b-1 . Under this condition, we decrease w b by w b -w b-1 , and add it to w a , which also decrease Gini value. 
In this case, the next operation between w a and w b is the same as the operation between w a and w b-1 .\nThus, we can decompose any operation of changing two elements' values into a sequence of operations mentioned above, and ensure that each operation in the sequence will decrease the Gini value, which proves that the operation of selecting two elements(w a < w b ) and reducing the difference between them will decrease Gini value. We can also prove that by decreasing w a by ∆, and by increasing w b by ∆, we can increase the Gini value monotonically, which is symmetric to the proof above. A.12 Error rate of IOA using other attribution methods We post error rates of models attacked by IOA with attribution methods other than saliency map, where our findings in subsection 4.3 still hold. Results are shown in Table 12, Table 13, Table 14, and Table 15.\nA.13 More visualization of attribution maps We post visualizations of other attribution methods in Figure 23, and results don't vary a lot. This indicates that our methods and analysis will not differ significantly due to differences in attribution methods." } ]
Since adversarial examples were discovered and shown to cause catastrophic degradation of DNNs, many adversarial defense methods have been devised, among which adversarial training is considered the most effective. However, a recent work revealed the inequality phenomena in l∞-adversarial training and showed that the l∞-adversarially trained model is vulnerable when a few important pixels are perturbed by i.i.d. noise or occluded. In this paper, we propose a simple yet effective method called Input Gradient Distillation (IGD) to release the inequality phenomena in l∞-adversarial training. Experiments show that, while preserving the model's adversarial robustness, IGD decreases the l∞-adversarially trained model's error rate under inductive noise and inductive occlusion by up to 60% and 16.53% respectively compared to PGDAT, and by up to 21.11% on noisy images in Imagenet-C. Moreover, we formally explain why the equality of the model's saliency map can improve such robustness.
Releasing Inequality Phenomena in L∞-Adversarial Training via Input Gradient Distillation
[ { "figure_caption": "Figure 1 :1Figure 1: The framework of IDG.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Visualizations of feature attribution maps of Imagenet-100.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "4. 22Robustness to INA In this part, we compare the INA-robustness of models trained with different methods. Results are shown in Figure 3(a), Figure 3(b), Figure 4(a), and Figure 4(b). On Imagenet-100, all dashed lines are far below the orange line, which indicates that IGD can effectively improve l ∞ -adversarially trained model's robustness to INA. When λ > 1, models trained with IGD even have better robustness to INA than the standard-trained model. On CIFAR100, both PGDAT-and IGD-trained models have better robustness to INA compared to the standard-trained model, but improvement of INArobusstness gained by IGD is minor, which we hypothesize is due to the resolution difference between", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Error rate↓ of Resnet18 trained with different methods on Imagenet-100.", "figure_data": "", "figure_id": "fig_3", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Average k i=1 w 2 d i across different methods on Imagenet-100. Models are attacked by RN.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Average ( k i=1 w d i ) 2 across different methods on Imagenet-100. Models are attacked by RN.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "which means we can suppress the upper bound of the deviation of class score by decreasing w * m 1 . Some may question that when w * m 1 is fixed, does decreasing Gini( w * m) means decreasing y σ 's standard deviation? For two non-zero elements in w * m, saying w a and w b , where |w a | < |w b |, the only way to decrease Gini( w * m) is to increase |w a | by ∆ and decrease |w b | by ∆, where |w b | -|w a | ≥ 2 * ∆. This can be gradually achieved without changing elements' relative positions in each iteration, which ensures a monotonic descent in the Gini value(see subsection A.9). After increasing |w a | and decreasing |w b | by ∆, we obtain new set of w and have:", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "k i=1 w 22di , ( k i=1 w di ) 2 doesn't have the same monotonicity as the Gini value. After increasing |w a | and decreasing |w b | by ∆, ( k i=1 w di ) 2 will vary if w a and w b have different sign. Consequently, we can only analyze ( k i=1 w di ) 2 through statistics. To further validate our thoughts, on Imagenet-100, we measure k i=1 w 2 di and ( k i=1 w di ) 2 of models attacked by RN with different number k of perturbed pixels, where d i = randomChoice(1...n), d i = d j (i = j), and d i < d i+1 .", "figure_data": "", "figure_id": "fig_7", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "k i=1 w 2 di and ( k i=1 w di ) 22of models attacked by INA(see subsection A.4) and on other datasets(see subsection A.5). 
Also, we find the standard-trained model has larger k i=1 w 2", "figure_data": "", "figure_id": "fig_8", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Error rate↓ of Resnet18 trained with different methods on CIFAR10. Model is attacked by INA1.", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Error rate↓ of Resnet18 trained with different methods on CIFAR10. Model is attacked by INA2.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Error rate↓ of Resnet18 trained with different methods on CIFAR10. Model is attacked by RN.", "figure_data": "", "figure_id": "fig_11", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Average k i=1 w 2 d i across different methods on Imagenet-100 attacked by INA.", "figure_data": "", "figure_id": "fig_12", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :Figure 15 :1415Figure 14: Average ( k i=1 w d i ) 2 across different methods on Imagenet-100 attacked by INA.", "figure_data": "", "figure_id": "fig_13", "figure_label": "1415", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: Average ( k i=1 w d i ) 2 across different methods on CIFAR100 attacked by RN.", "figure_data": "", "figure_id": "fig_14", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: Average k i=1 w 2 d i across different methods on TinyImagenet attacked by RN.", "figure_data": "", "figure_id": "fig_15", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 :18Figure 18: Average ( k i=1 w d i ) 2 across different methods on TinyImagenet attacked by RN.", "figure_data": "", "figure_id": "fig_16", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19 :19Figure 19: Error rate↓ of models trained with different methods on CIFAR100 attacked by INA1.", "figure_data": "", "figure_id": "fig_18", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Figure 20 :20Figure 20: Error rate↓ of models trained with different methods on CIFAR100 attacked by INA2.", "figure_data": "", "figure_id": "fig_20", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "Figure 21 :21Figure 21: Error rate↓ of models trained with different methods on Imagenet-100 attacked by INA1.", "figure_data": "", "figure_id": "fig_21", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "The standard accuracy, adversarial accuracy, global Gini value, and regional Gini value of Resnet18 across different datasets and methods.", "figure_data": "Standard77.56%0.00% 0.530(80%)0.342(65%)PGDAT [2]57.62%24.61% 0.666(100%) 0.527(100%)PGDAT+CutOut [8]53.77%23.62% 0.665(100%) 0.524(99%)CIFAR100IGD(λ=1)58.04%24.83% 0.663(100%) 0.526(100%)IGD(λ=2)57.79%24.33% 0.638(96%)0.507(96%)IGD(λ=3)56.39%23.45% 0.617(93%)0.489(93%)IGD(λ=4)57.02%22.43% 0.603(91%)0.475(90%)Standard87.32%0% 0.544(58%)0.328(58%)PGDAT [2]70.74%33.28% 0.933(100%) 0.565(100%)PGDAT+CutOut [8]70.36%32.54% 0.924(99%)0.558(99%)Imagenet-100IGD(λ=1)71.42%33.02% 0.834(89%)0.557(99%)IGD(λ=2)71.92%32.28% 0.737(79%)0.554(98%)IGD(λ=3)72.10%31.82% 0.705(76%)0.536(95%)IGD(λ=4)72.18%31.20% 0.694(74%)0.532(94%)", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "∂f y (x) ∂x1of Resnet18 
across differentdatasets and methods.DatasetMethod∂f y (x) ∂x1Standard27563724(100%)PGDAT [2]660362(2.40%)PGDAT+CutOut [8] 570716(2.07%)CIFAR100IGD(λ=1)652751(2.37%)IGD(λ=2)616606(2.24%)IGD(λ=3)573567(2.08%)IGD(λ=4)629620(2.28%)Standard22982087(100%)PGDAT [2]446562(1.94%)PGDAT+CutOut [8] 455458(1.98%)Imagenet-100IGD(λ=1)463524(2.02%)IGD(λ=2)466847(2.03%)IGD(λ=3)468178(2.04%)IGD(λ=4)460062(2.00%)", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Error rate↓ of Resnet18 across different methods, datasets, and types of IOA.", "figure_data": "DatasetMethodAttackIOA-BIOA-GIOA-WStandard56.17% 36.11% 52.23%PGDAT [2]40.48% 18.04% 40.91%PGDAT+CutOut Duan et al. [8] 12.56% 17.64% 40.53%CIFAR100IGD(λ=1)37.61% 16.48% 41.10%IGD(λ=2)37.08% 16.26% 38.94%IGD(λ=3)37.51% 16.90% 39.20%IGD(λ=4)38.56% 17.17% 39.06%Standard17.59%8.25% 13.71%PGDAT [2]32.31% 10.85% 40.76%PGDAT+CutOut [8]21.17% 10.12% 39.94%Imagenet-100IGD(λ=1)31.49% 10.22% 38.30%IGD(λ=2)22.68%5.98% 28.53%IGD(λ=3)19.82%5.19% 24.62%IGD(λ=4)18.01%5.46% 24.23%", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Average ( k i=1 w d i ) 2 and k", "figure_data": "i=1 w 2 d i ofResnet18 attacked by IOA, and average confidence ofcorrectly classified sample across different methodson Imagenet-100.Methodk i=1 w 2 di(k i=1 wd i ) 2 ConfidenceStandard166.21169.0996.73%PGDAT [2]1.128.7577.07%PGDAT+CutOut [8]1.069.5377.43%IGD(λ=1)0.769.4478.60%IGD(λ=2)0.37.1279.29%IGD(λ=3)0.246.0479.29%IGD(λ=4)0.215.7378.50%", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Error rate↓ on Imagenet-C's Gaussian noise.", "figure_data": "ModelSeverity12345Standard15.19% 35.50% 66.31% 88.23% 96.45%PGDAT[2]5.06% 14.07% 38.17% 69.43% 91.98%PGDAT+CutOut[8]4.54% 14.73% 42.37% 74.33% 93.26%IGD(λ=1)3.02% 10.06% 28.47% 60.32% 88.79%IGD(λ=2)3.42%9.30% 26.07% 56.67% 86.92%IGD(λ=3)2.86%8.25% 24.03% 52.93% 85.44%IGD(λ=4)3.62%9.50% 23.64% 51.31% 82.74%", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Error rate↓ on Imagenet-C's impulse noise.", "figure_data": "ModelSeverity12345Standard34.45% 57.53% 72.65% 90.60% 96.09%PGDAT [2]13.77% 33.43% 53.19% 81.20% 94.61%PGDAT+CutOut[8] 14.73% 37.84% 56.71% 84.62% 95.00%IGD(λ=1)9.99% 24.95% 40.86% 73.60% 91.35%IGD(λ=2)7.69% 21.83% 37.57% 69.79% 90.43%IGD(λ=3)6.21% 18.34% 32.08% 65.38% 88.82%IGD(λ=4)7.50% 19.76% 32.71% 63.25% 85.93%", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Error rate↓ on Imagenet-C's shot noise.", "figure_data": "ModelSeverity12345Standard18.28% 39.78% 65.78% 88.53% 94.25%PGDAT[2]7.65% 20.68% 45.13% 77.22% 90.11%PGDAT+CutOut[8]6.96% 21.79% 49.05% 81.33% 91.72%IGD(λ=1)4.80% 14.83% 33.89% 70.02% 86.16%IGD(λ=2)4.67% 14.07% 31.20% 66.47% 83.83%IGD(λ=3)3.91% 12.36% 29.36% 65.15% 83.73%IGD(λ=4)4.34% 13.31% 29.16% 62.00% 81.07%", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "∂f y (x)", "figure_data": "∂x1of Resnet18 across methods onCIFAR10.DatasetMethod∂f y (x) ∂x1Standard12175720(100%)PGDAT [2]591025(4.85%)PGDAT+CutOut [8] 555282(4.56%)CIFAR10IGD(λ=1)428836(3.52%)IGD(λ=2)308085(2.53%)IGD(λ=3)258002(2.12%)IGD(λ=4)254231(2.09%)", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "A.11 Error rate of INA using other attribution methods", "figure_data": "Standard0.6110.3040.5810.4120.5870.4240.3880.320PGDAT 
[2]0.6720.3770.7020.5690.6960.5840.6400.554PGDAT+CutOut [8]0.6730.3430.7010.5680.6930.5800.6050.508IGD(λ=1)0.6760.3810.6970.5650.6980.5880.6390.559IGD(λ=2)0.6550.3670.6770.5510.6740.5680.6170.543IGD(λ=3)0.6420.3620.6600.5340.6600.5570.6070.542IGD(λ=4)0.6360.3610.6480.5230.6500.5480.6070.543Standard0.6330.2850.6050.4070.6100.4130.5640.502PGDAT [2]0.9360.3820.9420.6010.9470.6330.9410.594PGDAT+CutOut [8]0.9230.3500.9350.5960.9410.6240.9330.562Imagenet-100IGD(λ=1)0.8290.3990.8540.5920.8650.6270.9150.696IGD(λ=2)0.7390.3950.7720.5890.7910.6250.8150.698IGD(λ=3)0.7150.3750.7440.5730.7640.6090.8070.692IGD(λ=4)0.7010.3650.7350.5710.7540.6060.7960.700See", "figure_id": "tab_9", "figure_label": "11", "figure_type": "table" } ]
Junxi Chen; Junhao Dong; Xiaohua Xie
[ { "authors": "C Szegedy; W Zaremba; I Sutskever; J Bruna; D Erhan; I Goodfellow; R Fergus", "journal": "Computer Science", "ref_id": "b0", "title": "Intriguing properties of neural networks", "year": "2013" }, { "authors": "Aleksander Madry; Aleksandar Makelov; Ludwig Schmidt; Dimitris Tsipras; Adrian Vladu", "journal": "", "ref_id": "b1", "title": "Towards deep learning models resistant to adversarial attacks", "year": "2017" }, { "authors": "Yisen Wang; Difan Zou; Jinfeng Yi; James Bailey; Xingjun Ma; Quanquan Gu", "journal": "", "ref_id": "b2", "title": "Improving adversarial robustness requires revisiting misclassified examples", "year": "2020" }, { "authors": "Nicholas Yao Qin; Sara Frosst; Colin Sabour; Garrison Raffel; Geoffrey Cottrell; Hinton", "journal": "", "ref_id": "b3", "title": "Detecting and diagnosing adversarial images with class-conditional capsule reconstructions", "year": "2019" }, { "authors": "Shasha Li; Shitong Zhu; Sudipta Paul; Amit Roy-Chowdhury; Chengyu Song; Srikanth Krishnamurthy; Ananthram Swami; Kevin S Chan", "journal": "Springer", "ref_id": "b4", "title": "Connecting the dots: Detecting adversarial perturbations using context inconsistency", "year": "2020" }, { "authors": "Prasad Chalasani; Jiefeng Chen; Amrita Roy Chowdhury; Xi Wu; Somesh Jha", "journal": "PMLR", "ref_id": "b5", "title": "Concise explanations of neural networks using adversarial training", "year": "2020" }, { "authors": "Mukund Sundararajan; Ankur Taly; Qiqi Yan", "journal": "", "ref_id": "b6", "title": "Axiomatic attribution for deep networks", "year": "2017" }, { "authors": "Ranjie Duan; Yuefeng Chen; Yao Zhu; Xiaojun Jia; Rong Zhang", "journal": "", "ref_id": "b7", "title": "Inequality phenomenon in l ∞ -adversarial training, and its unrealized threats", "year": "" }, { "authors": "Robert Dorfman", "journal": "", "ref_id": "b8", "title": "A formula for the gini coefficient", "year": "1979" }, { "authors": "Terrance Devries; Graham W Taylor", "journal": "", "ref_id": "b9", "title": "Improved regularization of convolutional neural networks with cutout", "year": "2017" }, { "authors": "Dan Hendrycks; Thomas Dietterich", "journal": "", "ref_id": "b10", "title": "Benchmarking neural network robustness to common corruptions and perturbations", "year": "" }, { "authors": "Andrew Ross; Finale Doshi-Velez", "journal": "", "ref_id": "b11", "title": "Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients", "year": "2018" }, { "authors": "Chris Finlay; Adam M Oberman", "journal": "", "ref_id": "b12", "title": "Scaleable input gradient regularization for adversarial robustness", "year": "2019" }, { "authors": "Alvin Chan; Yi Tay; Yew-Soon Ong", "journal": "", "ref_id": "b13", "title": "What it thinks is important is important: Robustness transfers through input gradients", "year": "2020-06" }, { "authors": "Rulin Shao; Jinfeng Yi; Pin-Yu Chen; Cho-Jui Hsieh", "journal": "", "ref_id": "b14", "title": "How and when adversarial robustness transfers in knowledge distillation?", "year": "2021" }, { "authors": "Maksym Andriushchenko; Nicolas Flammarion", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b15", "title": "Understanding and improving fast adversarial training", "year": "2020" }, { "authors": "Alex Krizhevsky", "journal": "", "ref_id": "b16", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li 
Fei-Fei", "journal": "IEEE", "ref_id": "b17", "title": "Imagenet: A largescale hierarchical image database", "year": "2009" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b18", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Karen Simonyan; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b19", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "year": "2013" }, { "authors": "Dong Yin; Raphael Gontijo Lopes; Jon Shlens; Ekin Dogus Cubuk; Justin Gilmer", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b20", "title": "A fourier perspective on model robustness in computer vision", "year": "2019" }, { "authors": "Rahul Rade; Seyed-Mohsen Moosavi-Dezfooli", "journal": "", "ref_id": "b21", "title": "Reducing excessive margin to achieve a better accuracy vs. robustness trade-off", "year": "2022" }, { "authors": "Avanti Shrikumar; Peyton Greenside; Anna Shcherbina; Anshul Kundaje", "journal": "", "ref_id": "b22", "title": "Not just a black box: Learning important features through propagating activation differences", "year": "2016" }, { "authors": "M Scott; Su-In Lundberg; Lee", "journal": "", "ref_id": "b23", "title": "A unified approach to interpreting model predictions", "year": "2017" }, { "authors": "Daniel Smilkov; Nikhil Thorat; Been Kim; Fernanda Viégas; Martin Wattenberg", "journal": "", "ref_id": "b24", "title": "Smooth-Grad: removing noise by adding noise", "year": "2017-06" }, { "authors": "Narine Kokhlikyan; Vivek Miglani; Miguel Martin; Edward Wang; Bilal Alsallakh; Jonathan Reynolds; Alexander Melnikov; Natalia Kliushkina; Carlos Araya; Siqi Yan; Orion Reblitz-Richardson", "journal": "", "ref_id": "b25", "title": "Captum: A unified and generic model interpretability library for pytorch", "year": "2020" }, { "authors": "Francesco Croce; Matthias Hein", "journal": "", "ref_id": "b26", "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "year": "2020" }, { "authors": "Logan Engstrom; Andrew Ilyas; Shibani Hadi Salman; Dimitris Santurkar; Tsipras", "journal": "", "ref_id": "b27", "title": "Robustness (python library)", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 226.47, 575.19, 277.53, 14.66 ], "formula_id": "formula_0", "formula_text": "min θ max δ L(f θ (x + δ), y), s.t. δ p ≤(1)" }, { "formula_coordinates": [ 3, 198.2, 413.14, 305.8, 27.9 ], "formula_id": "formula_1", "formula_text": "Gini(Φ) = 1 n * (n + 1 -2 * n i=1 (n + 1 -i) * φ i n i=1 φ i )(2)" }, { "formula_coordinates": [ 3, 108, 673.03, 57.91, 8.74 ], "formula_id": "formula_2", "formula_text": "x = x + M *" }, { "formula_coordinates": [ 4, 108, 487.01, 336.67, 9.65 ], "formula_id": "formula_3", "formula_text": "cosine_similarity(k * v, v) = 1, where k * φ = {k * φ i , i = 1...n | φ i ≤ φ i+1 }." }, { "formula_coordinates": [ 4, 141.39, 515.56, 362.61, 26.27 ], "formula_id": "formula_4", "formula_text": "Loss = CE(y, f θ Adv (x )) -λ * cos(F latten( ∂f y θ Adv (x) ∂x ), F latten( ∂f y θ Std (x) ∂x ))(3)" }, { "formula_coordinates": [ 7, 108, 514.18, 396.67, 53.36 ], "formula_id": "formula_5", "formula_text": "( x) = w T x + b( w, x ∈ R n×1 , and b ∈ R) attacked by some i.i.d. additive noise δ ∈ R n×1 (δ i ∼ D(µ δ , σ 2 δ )) that is masked by m ∈ {0, 1} n×1 having m 0 = k. Formally, we have y = w T ( x + δ * m) + b = y + w T ( δ * m) = y + y δ , where y δ ∼ D(µ δ k i=1 w di , σ 2 δ k i=1 w 2 di )(d i < d i+1 and 1 ≤ d i ≤ n), y" }, { "formula_coordinates": [ 7, 215.61, 661.59, 180.78, 30.32 ], "formula_id": "formula_6", "formula_text": "minimize k i=1 w 2 di , s.t. k i=1 |w di | -C = 0" }, { "formula_coordinates": [ 8, 273.05, 218.04, 93.18, 15.1 ], "formula_id": "formula_7", "formula_text": "C 2 k ≤ k i=1 w 2 di ≤ C 2 ," }, { "formula_coordinates": [ 8, 143.53, 322.62, 324.7, 30.32 ], "formula_id": "formula_8", "formula_text": "k i=1 w di 2 = k i=1 w 2 di + 2 * ∆ * (∆ -|w b | + |w a |) ≤ k i=1 w 2 di -2 * ∆ 2 < k i=1 w 2 di" }, { "formula_coordinates": [ 8, 106.83, 420.22, 399.1, 28.45 ], "formula_id": "formula_9", "formula_text": "-y) 2 = (µ δ -µ x ) 2 ( k i=1 w di ) 2 + (σ 2 δ + σ 2 x ) k i=1 w 2 di . Unlike" }, { "formula_coordinates": [ 9, 108, 84.49, 396, 27.06 ], "formula_id": "formula_10", "formula_text": "σ δ = 0, indicating E (y -y) 2 = (µ δ - µ x ) 2 ( k i=1 w di ) 2 + σ 2 x k i=1 w 2 di ." }, { "formula_coordinates": [ 9, 106.83, 170.01, 108.86, 14.11 ], "formula_id": "formula_11", "formula_text": "( k i=1 w di ) 2 and k i=1 w 2" }, { "formula_coordinates": [ 9, 109.2, 334.78, 150.81, 17.88 ], "formula_id": "formula_12", "formula_text": "∆ k i=1 w 2 d i ∆Gini( w * m) = (δ-|w b |+|wa|) * k 1 |wd i |" }, { "formula_coordinates": [ 16, 112.59, 170.25, 23.59, 4.68 ], "formula_id": "formula_13", "formula_text": "CIFAR100" } ]
10.18653/v1/N19-1121
[ { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b7", "b5", "b0", "b1", "b6", "b2", "b3", "b3" ], "table_ref": [], "text": "Multilingual neural machine translation (MNMT) enables translation between unseen language pairs, i.e., zero-shot translation (ZST) (Johnson et al., 2017;Firat et al., 2017). Prior studies have explored techniques such as language tags (Wu et al., 2021), residual connections (Liu et al., 2021), and novel training objectives (Al-Shedivat and Parikh, 2019;Pham et al., 2019;Arivazhagan et al., 2019;Gu et al., 2019;Zhu et al., 2020;Zhang et al., 2020;Wang et al., 2021;Yang et al., 2021) for improving ZST. They primarily used the Transformer architecture (Vaswani et al., 2017), which has two variations depending on the position of layer normalization (LayerNorm) (Ba et al., 2016), namely, PreNorm (applied at the input of layers) (Baevski and Auli, 2019) and PostNorm (applied after residual connections), as shown in Fig. 1. As previous studies showed that PreNorm can result in more stable training and faster convergence compared to PostNorm for MNMT (Xiong et al., 2020), most ZST works (Pham et al., 2019;Wu et al., 2021;Liu et al., 2021) use PreNorm as the default setting following those MNMT studies. However, Xu et al. (2019) revealed that PreNorm carries the risk of overfitting the training data. We thus hypothesize that in a multilingual scenario, PreNorm may overfit supervised directions and have poor ZST generalizability. We systematically explore PreNorm and PostNorm's effect on ZST to verify this.\nUsing the OPUS, IWSLT, and Europarl datasets and a total of 54 ZST directions, we show that PostNorm consistently outperforms PreNorm by up to 12.3 BLEU points. Following previous work, we also evaluate different language tag (Wu et al., 2021) and residual connection (Liu et al., 2021) settings, as they have been shown to impact ZST but we observe that PostNorm continues to be superior arXiv:2305.09312v1 [cs.CL] 16 May 2023 thereby lending credibility to our hypothesis.\nTo better understand the performance differences, we introduce a novel analysis approach called layer-wise language recognition (LLR), which tracks the off-target rates for each encoder and decoder layer by training token-level classifiers to recognize the source or target language. This analysis shows that PreNorm is more sensitive to language tag settings than PostNorm, negatively impacting ZST performance. Additionally, by examining the unraveled view of PreNorm (Fig. 1) inspired by Veit et al. (2016), we reveal structural flaws in PreNorm for ZST. Our analysis demonstrates that the order of LayerNorm and selfattention/feed-forward network in PreNorm is the main factor affecting its ZST performance.\nGiven the prevalent use of PreNorm as the default setting in ZST baselines and frameworks such as Fairseq (Ott et al., 2019) 1 and Ten-sor2Tensor (Vaswani et al., 2018), our study emphasizes the importance of careful consideration in the LayerNorm setting for ZST.\n2 Background: LayerNorm LayerNorm (Ba et al., 2016) normalizes the input x by zero-centering and scaling to have a unit standard deviation, followed by an additional trainable transformation, including a gain and bias adjustment. Specifically, it is formulated as:\nLayerNorm(x) = x -E(x) V(x) • g + b,(1)\nwhere g and b are trainable gain and bias. E and V indicate expectation and variance. Lay-erNorm is commonly used in two positions in the Transformer, as shown in Fig. 1. 
PostNorm, which is the originally proposed setting of the Transformer (Vaswani et al., 2017), involves applying LayerNorm after each sub-module (i.e., selfattention or feed-forward network) and residual connections. PreNorm (Baevski and Auli, 2019), on the other hand, involves applying LayerNorm directly before each sub-module and is known to stabilize Transformer training. While variants of Transformer LayerNorm like RMSNorm (Zhang and Sennrich, 2019) have been proposed, the vanilla PreNorm and PostNorm are still the most widely adopted settings in current multilingual NMT literature. Therefore, we only focus on PreNorm and PostNorm in this work.\nNguyen and Salazar ( 2019) have explored the impacts of normalization and initialization choices on supervised low-resource NMT settings, however, we delve deeper and focus on the significance of the positioning of LayerNorm for zero-shot NMT. We expect this to complete the understanding of LayerNorm's role in multilingualism, particularly in the context of zero-shot translation." }, { "figure_ref": [], "heading": "Experiments and Results", "publication_ref": [], "table_ref": [], "text": "We evaluate the performance of PreNorm and Post-Norm for ZST on various datasets and language pairs. We then analyze the off-target rates and structural discrepancies between PreNorm and Post-Norm to understand performance differences." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b4" ], "table_ref": [ "tab_0" ], "text": "Datasets We perform ZST experiments on three datasets: OPUS (Zhang et al., 2020), IWSLT (Cettolo et al., 2017), andEuroparl (Koehn, 2005). The statistics of the datasets are summarized in Table 1. We include 7, 4, and 5 languages for each dataset.\nThe training data consists of only English-centric sentence pairs, resulting in 30, 6, and 12 ZST directions for each dataset. The total number of parallel sentences for each dataset is 12.00M, 1.38M, and 15.78M, respectively. We apply BPE (Sennrich et al., 2016) with merge operations of 50k, 40k, and 50k to create a joint vocabulary for each dataset. Training We employ Transformer-base model for OPUS and IWSLT, and Transformer-big for Europarl, in accordance with the distinct sizes of training data. We consider the following settings:\n(1) PreNorm or PostNorm: PreNorm involves LayerNorm directly before each sub-module (i.e., self-attention or feed-forward network), while Post-Norm applies LayerNorm after each sub-module " }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We evaluate ZST systems using SacreBLEU (Post, 2018) and off-target rates. We report in Table 2 BLEU scores for both zero-shot and supervised directions. For ZST, we also present pivot-based translation results as a reference. Implementation details of evaluation can be found in Appendix B. Our findings are as follows: PreNorm vs. PostNorm: We find that Post-Norm consistently yields better BLEU scores than PreNorm for ZST across various language tag and residual connection settings, while their performance is comparable for supervised directions. Impact of Language Tag and Residual Connection: We observe that using the \"T-ENC\" language tag and \"w/ Res.\" improves ZST performance for IWSLT, which aligns with the findings of Wu et al. (2021) andLiu et al. (2021). Nevertheless, the best performance is achieved using \"w/ Res.\" for Post-Norm with \"S-ENC-T-DEC\" and \"T-ENC\" tags for OPUS and Europarl, respectively (#2 and #4). 
Given that Wu et al. (2021) andLiu et al. (2021) used PreNorm as the default setting (#2, #4, #6 and #8 are unreported results in their work), our results emphasize the need to consider PostNorm as the default setting for ZST, while the language tag and residual connection settings have less impact. Off-target Rates: Off-target rates help understand the different BLEU score gaps between PreNorm and PostNorm, which ranges from 0.5 to 12.3 BLEU points. For PreNorm and PostNorm with the \"T-ENC\" language tag (#3, #4, #7, and #8), they have similar off-target rates, with a discrepancy ranging from -0.61% to 2.02%, which results in narrow BLEU score gaps, ranging from 0.5 to 1.8 points. However, for PreNorm and PostNorm with the \"S-ENC-T-DEC\" language tag (#1, #2, #5, and #6), the off-target rates show a more considerable discrepancy, ranging from 5.40% to 54.23%, resulting in BLEU score gaps from 1.7 to 12.3 points. Further analysis of the nature of Transformer hidden states in the next section explores the reason for these different off-target rates in translations." }, { "figure_ref": [], "heading": "Tracking Off-targets within Transformer", "publication_ref": [], "table_ref": [ "tab_1", "tab_1" ], "text": "We probe the language independence of hidden states to track off-targets within Transformer and reveal the differences between PreNorm and PostNorm. In previous work, language independence was primarily analyzed using either SVCCA (Raghu et al., 2017) or language classification accuracy (LCA) (Liu et al., 2021). However, Figure 2: The LLR results of #1 and #2 (Table 2) for both ZST and supervised directions for each dataset. We report the average accuracy of three seeds and all the supervised or zero-shot directions. \"Pre-Src\" and \"Pre-Tgt\" indicate the layer-wise source and target language recognition for a PreNorm system (#1), while \"Post-Src\" and \"Post-Tgt\" denote similary for a PostNorm system (#2). \"L1\" to \"L6\" are 6 encoder layers and \"L7\" to \"L12\" are 6 decoder layers. We present the figures of other systems (#3 -#8) in Appendix F.\nwe provide evidence in Appendix C that SVCCA, which measures the cosine similarity between hidden states, are not suitable for ZST systems. Instead, LCA trains a classifier to inspect the hidden states on top of the encoder, but it does not simulate the training of a ZST system, which may introduce bias in the analysis for ZST. 3 In this work, we propose a novel approach for ZST based on LCA: LLR tailors classifiers for each layer to recognize the source or target language. We train a tokenlevel linear classifier for each layer to utilize hidden states in each layer as features to identify the source or target language. We use hidden states obtained by feeding sentence pairs in supervised directions to simulate the training of ZST. We then test each layer's classifer's ability to recognize the source or target language for supervised or zeroshot directions. This approach enables the trained classifier to best represent the language recognition ability of hidden states in a ZST system.\nWe train two types of linear classifiers for each encoder and decoder layer. One is for recognizing the source language, and the other is for the target language. Each linear classifier is a linear transformation from the dimension of the hidden states (512 or 1, 024) to the number of source or target languages (e.g., 7 for OPUS). 
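A minimal PyTorch sketch of one such LLR probe is given below. The probe is exactly a linear map from the hidden size to the number of languages, as described above; the training-step details (optimizer call, batch shapes, placeholder tensors) are assumptions for illustration rather than the authors' released code.

```python
import torch
import torch.nn as nn

class LLRProbe(nn.Module):
    """Token-level linear probe: layer hidden state -> source/target language logits."""
    def __init__(self, hidden_size=512, num_languages=7):
        super().__init__()
        self.linear = nn.Linear(hidden_size, num_languages)

    def forward(self, hidden_states):         # (num_tokens, hidden_size)
        return self.linear(hidden_states)     # (num_tokens, num_languages)

# hypothetical training step on frozen hidden states taken from one layer
probe = LLRProbe(hidden_size=512, num_languages=7)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

hidden = torch.randn(64, 512)            # placeholder token representations
labels = torch.randint(0, 7, (64,))      # gold source- or target-language ids
optimizer.zero_grad()
loss = criterion(probe(hidden), labels)
loss.backward()
optimizer.step()
```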
We use the validation set of all supervised directions to obtain the hidden state of each token in each layer and set their source language tag or target language tag as the gold labels. Note that the decoder hidden state of each token in each layer is obtained auto-regressively without teacher-forcing. We train each classifier for 3 epochs4 with a learning rate of 1e-3 and a batch size of 64 sentences. For inference, we utilize the test sets of all supervised or zero-shot directions for computing the LLR results for corresponding directions, respectively.\nThe LLR results for #1 and #2 in Table 2 are presented in Fig. 2. First, we find that the encoder and decoder hidden states are highly correlated with the target and source languages, respectively, for supervised directions (L1 to L6 of Pre/Post-Tgt and L7 to L12 of Pre/Post-Src of 3 upper sub-figures), which may impact the generalizability for ZST. Second, we see that the encoder hidden states of Post-Norm are less dependent on the source language than PreNorm (L6 of Pre/Post-Src of 3 lower subfigures). Third, we observe that the hidden states in all the decoder layers of PostNorm are more dependent on the target language and less on the source language than PreNorm (L7 to L12 of 3 lower subfigures). The latter two points contribute to the observed gaps in off-target rates between PreNorm and PostNorm. Conclusions for #5 and #6 with the \"S-ENC-T-DEC\" tag are identical (Appendix G).\nFor systems using \"T-ENC,\" we find that the LLR are similar between PreNorm and PostNorm (Appendix G) and attribute the BLEU score gaps to translation quality (i.e., adequacy and fluency)." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Unraveling Structural Flaws of PreNorm", "publication_ref": [], "table_ref": [], "text": "We investigate the structural differences between PreNorm and PostNorm to explain the observed differences in hidden states for models trained with the \"S-ENC-T-DEC\" tag. Inspired by Veit et al. (2016), we present an \"unraveled view\" for PreNorm, which decomposes the residual connections by the summation of several sub-networks, as shown in Fig. 1 (paths with different colors indicate sub-networks). However, this is not applicable to PostNorm, as LayerNorm is located after residual connections. Based on this analysis, the structural characteristic of PreNorm is:\n(1) Shallow Sub-network Nature: PreNorm includes shallow sub-networks, such as the embedding layer output fed through encoder layers without any operation except for the final LayerNorm (red path in Fig. 1), but PostNorm does not.\n(2) LayerNorm Before SA/FFN: In PreNorm, LayerNorm is placed directly before the selfattention (SA) or feed-forward module (FFN) within the residual connection module.\nTo analyze the impact of these structural characteristics on the generalizability of PreNorm in ZST, we swap the order of LayerNorm and SA/FFN within the residual connection module (Swap-PreNorm), while keeping the shallow sub-network nature of PreNorm. Refer to Appendix D for specific illustrations of Swap-PreNorm. The results, presented in Fig 3 , show that PreNorm can be sig-nificantly improved through Swap-PreNorm, with Swap-PreNorm approaching the performance of PostNorm. This demonstrates that ZST is more sensitive to the position of LayerNorm in PreNorm than its shallow sub-network nature." 
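To make the three orderings compared in this section explicit, here is a schematic sketch of a single residual sub-block under PostNorm, PreNorm, and Swap-PreNorm. The `sublayer` argument stands in for self-attention or the feed-forward network, and dropout and other training details are deliberately omitted; this is a simplified illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def post_norm_block(x, sublayer, norm):
    # PostNorm: LayerNorm applied after the residual connection
    return norm(x + sublayer(x))

def pre_norm_block(x, sublayer, norm):
    # PreNorm: LayerNorm before the sub-module, inside the residual branch;
    # the identity path x -> x is untouched (the shallow sub-network)
    return x + sublayer(norm(x))

def swap_pre_norm_block(x, sublayer, norm):
    # Swap-PreNorm: LayerNorm and the sub-module trade places inside the
    # residual branch, while the untouched identity path is preserved
    return x + norm(sublayer(x))

# toy check that all three variants run on the same shapes
d_model = 8
x = torch.randn(4, d_model)              # 4 tokens
sublayer = nn.Linear(d_model, d_model)   # stand-in for SA or FFN
norm = nn.LayerNorm(d_model)
for block in (post_norm_block, pre_norm_block, swap_pre_norm_block):
    assert block(x, sublayer, norm).shape == x.shape
```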
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we comprehensively explored the effects of LayerNorm on ZST performance. Our results demonstrate that PostNorm consistently outperforms PreNorm for ZST, regardless of the language tag and residual connection settings used. Through in-depth analysis of off-target rates and structural flaws in the PreNorm model, we were able to identify the underlying factors that contribute to the performance discrepancy. Our study suggests that care should be taken when selecting the LayerNorm setting for ZST in future research." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "According to us there are 3 limitations of our work which will be addressed in future work.\n• The impact of LayerNorm, language tags, and residual connection settings on ZST was analyzed in this study. However, other factors, such as the number of layers of the Transformer model, may also have an effect and should be further investigated.\n• Our conclusions were based on overall scores across all ZST directions. Further examination of how LayerNorm impacts specific language pairs is necessary.\n• We explored the setting of LayerNorm for ZST systems trained from scratch. Exploration of how the LayerNorm setting of multilingual pre-trained models such as mBART (Liu et al., 2020) impacts the finetuning for ZST will be needed." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "In this study, we utilized only publicly accessible datasets for model training. " }, { "figure_ref": [], "heading": "B Evaluation Details", "publication_ref": [], "table_ref": [], "text": "For OPUS, we use the test sets following (Zhang et al., 2020), while for IWSLT and Europarl, we choose the test sets following (Wu et al., 2021). We select the checkpoint with the lowest validation loss for evaluation. The inference is performed on the trained models using a beam size of 5. For calculating SacreBLEU,9 we utilize the \"zh\" tokenization mode for Chinese, and the \"13a\" tokenization mode for other languages. We use the model of setting #410 ( We report the mean of all the direction pairs. FastText (Joulin et al., 2016). 11 Our experiment has revealed that this tool is slightly more accurate than another tool called \"langdetect,\"12 as it can achieve an accuracy of 98% when decoding reference English sentences in the test set, whereas \"langdetect\" only achieves accuracy of around 92%." }, { "figure_ref": [ "fig_3", "fig_3", "fig_4" ], "heading": "C Discussion about SVCCA score", "publication_ref": [], "table_ref": [], "text": "In previous work (Wu et al., 2021;Liu et al., 2021), the SVCCA score (Raghu et al., 2017), a cosine similarity measure between the hidden states of neural models, was used to compare two ZST models. However, we demonstrate that this method is unsuitable for comparing different ZST systems through an experiment. We removed the final LayerNorm from the PreNorm encoder, denoting it as \"PreNorm w/o Enc-Last.\" We then evaluated the BLEU scores of PreNorm, PostNorm, and \"PreNorm w/o Enc-Last\" on the OPUS dataset, as reported in Table 3. We subsequently calculated the encoder layer-wise SVCCA score for each Layer-Norm setting using the mean-pooled hidden states L1 L2 L3 L4 L5 L6 L7 L8 L9 L10 L11 L12 |<---------Encoder--------->||<---------Decoder-------- of each encoder layer. 
The average SVCCA score between all the \"en-xx\" and \"xx-en\" directions is reported in Fig. 4. When comparing Fig. 4 with Table 3, we observe that PostNorm has a higher SVCCA score on top of the encoder (L6) than PreNorm, which suggests that the encoder of Post-Norm is more language-agnostic and thus has a higher ZST BLEU score in Table 3, aligning with the results found in Wu et al. (2021) andLiu et al. (2021). However, \"PreNorm w/o Enc-Last\" shows an extremely high SVCCA score on top of the encoder, whereas its ZST BLEU performance is significantly lower than PostNorm by 6.3 BLEU points. This reveals the significant inconsistency between the SVCCA score and the performance of ZST models. Therefore, it is crucial to carefully consider how to leverage SVCCA for ZST analysis in the future.\nOn the other hand, our proposed LLR score is consistent with the ZST BLEU score, as shown in Fig. 5. Specifically, we observe the lowest LLR score on top of the encoder of PostNorm for the source language and the highest LLR scores in all the decoder layers, which aligns with its best ZST performance among the three systems. sub-network characteristics of PreNorm, which is the main difference compared with PostNorm. " }, { "figure_ref": [], "heading": "D Swap-PreNorm", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "E LayerNorm without Trainable Parameters", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_23" ], "heading": "F Details of the LLR Results", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We show the LLR results of #3 -#8 (Table 2) for ZST and supervised directions in Fig. 7. " }, { "figure_ref": [], "heading": "G Details of the Main Results", "publication_ref": [], "table_ref": [ "tab_14", "tab_1", "tab_0" ], "text": "We report the specific BLEU score for each translation direction and each random seed in Tables 5, 6, 7, 8, 9, and 10. 13 In addition to BLEU scores, we present model-based evaluation results obtained using BLEURT (Sellam et 2020)14 in Table 11. The results trend is consistent with those obtained from BLEU scores. --------Encoder--------->||<---------Decoder--------- ---------Encoder--------->||<---------Decoder--------- ---------Encoder--------->||<---------Decoder--------- ---------Encoder--------->||<---------Decoder--------- ---------Encoder--------->||<---------Decoder--------- ---------Encoder--------->||<---------Decoder--------- --------Encoder--------->||<---------Decoder--------- --------Encoder--------->||<---------Decoder--------- --------Encoder--------->||<---------Decoder--------- ---------Encoder--------->||<---------Decoder-------- ---------Encoder--------->||<---------Decoder-------- ---------Encoder--------->||<---------Decoder-------- ---------Encoder--------->||<---------Decoder-------- ---------Encoder--------->||<---------Decoder-------- ---------Encoder--------->||<---------Decoder-------- ---------Encoder--------->||<---------Decoder-------- --------Encoder--------->||<---------Decoder-------- --------Encoder--------->||<---------Decoder-------- 2) for both ZST and supervised directions for each dataset. We report the average accuracy of three seeds and all the supervised or zero-shot directions. \"Pre-Src\" and \"Pre-Tgt\" indicate the layer-wise source and target language recognition for a PreNorm system (#3, #5, or #7), while \"Post-Src\" and \"Post-Tgt\" denote similary for a PostNorm system (#4, #6, or #8). 
\"L1\" to \"L6\" are 6 encoder layers and \"L7\" to \"L12\" are 6 decoder layers. 37.6 37.1 37.3 37.3 37.5 37.1 37.5 37.4 37.4 37.2 36.9 37.2 36.4 36.7 37.0 36.7 en-de 29.7 30.1 30.4 30.1 30.4 29.6 30.4 30.1 30.1 30.1 30.1 30.1 30.3 30.5 30.7 30.5 de-en 34.3 34.5 34.2 34.3 34.5 34.1 34.3 34.3 35.0 34.7 34.3 34.7 33.8 34.1 34.4 34.1 en-fr 33.5 33.7 33.6 33.6 33.4 33.8 33.6 33.6 33.7 33.1 33.8 33.5 33.0 33.6 -nl 25.8 26.0 25.9 25.9 27.5 27.7 27.5 27.6 25.7 25.6 25.5 25.6 27.8 27.6 27.5 27.6 nl-de 23.5 23.4 23.9 23.6 24.2 24.6 24.4 24.4 23.6 23.5 23.2 23.4 24.4 24.5 24.5 24.5 fr-nl 25.3 25.8 25.6 25.6 27.4 27.4 27.3 27.4 25.5 25.5 25.3 25.4 27.8 27.6 27.5 37.6 37.4 37.4 37.5 37.5 37.4 37.7 37.5 37.5 37.5 37.4 37.5 37.5 37.5 37.3 37.4 es-en 39.3 38.9 39.0 39.1 39.0 39.0 38.9 39.0 38.8 39.0 39.1 39.0 38.6 39.0 38.9 38.8 en-fr 36.2 36.6 36.5 36.4 36.5 36.4 36.8 36.6 36.3 36.4 36.5 36.4 36.7 36.7 31.5 31.6 31.7 31.6 32.1 31.7 31.7 31.8 31.7 31.9 31.5 31.7 31.7 31.4 31.4 31.5 avg. 34.5 34.5 34.5 34.5 34.5 34.5 34.5 34.5 34.5 34.4 34.5 34.4 34.4 34.4 34.3 34.4 Table 10 \nL1 L2 L3 L4 L5 L6 L7 L8 L9 L10 L11 L12 |<-" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by JSPS KAKENHI Grant Number 22KJ1843." }, { "figure_ref": [], "heading": "L12 |< ", "publication_ref": [], "table_ref": [], "text": "---------Encoder--------->||<----\n-L9 L10 L11 L12 |<---------Encoder--------->||<---------Decoder-------Tgt Pre-Tgt L1 L2 L3 L4 L5 L6 L7 L8 L9 L10 L11 L12 |<- --------Encoder--------->||<---------Decoder--------->| Accuracy (%) Europarl's Supervised Directions Post-Src Pre-Src Post-Tgt Pre-Tgt L1 L2 L3 L4 L5 L6 L7 L8 L9 L10 L11 L12 |<- --------Encoder--------->||<---------Decoder--------->| Accuracy (%) OPUS's Zero-shot Directions Post-Src Pre-Src Post-Tgt Pre-Tgt L1 L2 L3 L4 L5 L6 L7 L8 L9 L10 L11 L12 |< ---------Encoder--------->||<---------Decoder--------->| Accuracy (%) IWSLT's Zero-shot Directions Post-Src Pre-Src Post-Tgt Pre-Tgt L1 L2 L3 L4 L5 L6 L7 L8 L9 L10 L11 L12 |<- --------Encoder--------->||<---------Decoder--------->| Accuracy (%) Europarl's Zero-shot Directions Post-Src Pre-Src Post-Tgt Pre-Tgt" } ]
This paper studies the impact of layer normalization (LayerNorm) on zero-shot translation (ZST). Recent efforts for ZST often utilize the Transformer architecture as the backbone, with LayerNorm at the input of layers (PreNorm) set as the default. However, Xu et al. (2019) revealed that PreNorm carries the risk of overfitting the training data. Based on this, we hypothesize that PreNorm may overfit supervised directions and thus have low generalizability for ZST. Through experiments on the OPUS, IWSLT, and Europarl datasets for 54 ZST directions, we demonstrate that the original Transformer setting of LayerNorm after residual connections (PostNorm) consistently outperforms PreNorm by up to 12.3 BLEU points. We then study the performance disparities by analyzing the differences in off-target rates and structural variations between PreNorm and PostNorm. This study highlights the need for careful consideration of the LayerNorm setting for ZST.
Exploring the Impact of Layer Normalization for Zero-shot Neural Machine Translation
[ { "figure_caption": "Figure 1 :1Figure 1: PostNorm, PreNorm, and an unraveled view of PreNorm in a Transformer encoder layer. \"Norm,\" \"SA,\" and \"FFN\" denote LayerNorm, selfattention, and feed-forward network. ⊕ is residual connection. Paths with different colors in the unraveled view of PreNorm indicate respective sub-networks.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: BLEU scores of systems with \"S-ENC-T-DEC\" for ZST. We report the mean of three seeds.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Encoder layer-wise SVCCA scores of PreNorm, PostNorm, and \"PreNorm w/o Enc-Last\" between \"en-xx\" and \"xx-en\" translation directions. We report the mean of all the direction pairs.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The LLR results of PreNorm, PostNorm, and \"PreNorm w/o Enc-Last.\"We report the mean of all the ZST directions. \"-Src\" and \"-Tgt\" indicate the LLR results for the source and target languages, respectively. \"L1\" to \"L6\" are 6 encoder layers and \"L7\" to \"L12\" are 6 decoder layers.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 Figure 6 :66Fig. 6 illustrates the implementation of Swap-PreNorm, which incorporates LayerNorm following the SA/FFN layers within the residual connection block. Compared with PostNorm, Swap-PreNorm alters the order of LayerNorm and residual connections. As depicted in the unraveled view of Swap-PreNorm in Fig. 6, it preserves the shallow", "figure_data": "", "figure_id": "fig_5", "figure_label": "66", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The LLR results of #3 -#8 (Table2) for both ZST and supervised directions for each dataset. We report the average accuracy of three seeds and all the supervised or zero-shot directions. \"Pre-Src\" and \"Pre-Tgt\" indicate the layer-wise source and target language recognition for a PreNorm system (#3, #5, or #7), while \"Post-Src\" and \"Post-Tgt\" denote similary for a PostNorm system (#4, #6, or #8). \"L1\" to \"L6\" are 6 encoder layers and \"L7\" to \"L12\" are 6 decoder layers.", "figure_data": "", "figure_id": "fig_23", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Statistics of the training data. N zero and S train denote number of the ZST directions and size of the training data, respectively. base and big indicate Transformer-base and Transformer-big.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "BLEU scores and off-target rates (shown in brackets). We report the average score of three seeds; refer to Appendix G for BLEU score of each translation direction and seed. \"Res.\" indicates the residual connection of self-attention in the 4 th encoder layer. 
We mark lower off-target rates and significantly higher BLEU scores(Koehn, 2004) between PreNorm and PostNorm in bold for ZST.", "figure_data": "#Layer NormLanguage TagRes.OPUSZero-shot IWSLTSupervised Europarl OPUS IWSLT Europarl0Pivot21.820.029.5---1 PreNorm S-ENC-T-DEC w/10.1 (42.19%)4.9 (64.84%) 24.9 (07.73%)33.731.534.32 PostNorm S-ENC-T-DEC w/16.8 (08.59%) 12.4 (10.61%) 29.2 (00.34%)33.931.534.53 PreNorm T-ENCw/13.3 (22.99%) 13.7 (03.98%) 29.5 (00.23%)33.731.634.44 PostNorm T-ENCw/14.0 (22.86%) 15.5 (04.59%) 30.8 (00.11%)34.131.534.55 PreNorm S-ENC-T-DEC w/o14.3 (20.67%)8.0 (50.16%) 16.7 (41.87%)33.630.934.36 PostNorm S-ENC-T-DEC w/o 16.0 (15.27%) 17.4 (01.83%) 29.0 (00.41%)33.830.734.47 PreNorm T-ENCw/o13.4 (27.15%) 16.2 (01.54%) 29.9 (02.15%)33.530.934.38 PostNorm T-ENCw/o 13.9 (26.68%) 17.8 (01.50%) 30.8 (00.13%)33.930.634.4", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": ") for pivot-basedtranslation. To calculate the off-target rates, weutilize the language identification tool provided by", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "BLEU scores of LayerNorm-simple. We report the average score of three seeds. \"Res.\" indicates the residual connection of self-attention in the 4 th encoder layer. We mark better scores between PreNorm-simple and PostNorm-simple in bold. For each setting, significantly better or worse BLEU scores (Koehn, 2004) compared with the results in Table2are marked in blue or red.", "figure_data": "", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "BLEU scores of OPUS in ZST directions. Scores in bold are the results reported in Table2. \"1,\" \"10,\" and \"20\" indicates three random seeds. \"Res.\" indicates the residual connection of self-attention in the 4 th encoder layer.", "figure_data": "Layer Direction NormS-ENC-T-DEC w/ Res. 1 10 20 avg.1T-ENC w/ Res. 10 20 avg.S-ENC-T-DEC w/o Res. 1 10 20 avg.1T-ENC w/o Res. 10 20 avg.en-ar23.6 24.1 23.2 23.6 23.7 23.9 24.1 23.9 24.0 23.2 23.1 23.4 22.8 23.8 23.8 23.5ar-enPre.", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "BLEU scores of OPUS in supervised directions. Scores in bold are the results reported in Table2. \"1,\" \"10,\" and \"20\" indicates three random seeds. \"Res.\" indicates the residual connection of self-attention in the 4 th encoder layer.", "figure_data": "33.1 33.2", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "BLEU scores of IWSLT in ZST directions. Scores in bold are the results reported in Table2. \"1,\" \"10,\" and \"20\" indicates three random seeds. \"Res.\" indicates the residual connection of self-attention in the 4 th encoder layer.", "figure_data": "Layer Direction NormS-ENC-T-DEC w/ Res. 1 10 20 avg.1T-ENC w/ Res. 10 20 avg.S-ENC-T-DEC w/o Res. 1 10 20 avg.1T-ENC w/o Res. 
10 20 avg.en-it33.9 33.8 33.6 33.8 33.7 33.4 33.7 33.6 33.6 32.9 33.3 33.3 32.4 33.3 33.4 33.0it-en37.5 37.1 37.1 37.2 37.4 37.2 37.0 37.2 35.8 36.3 36.5 36.2 35.8 36.7 36.5 36.3en-nl29.6 29.5 29.4 29.5 29.6 29.5 29.6 29.6 29.2 29.7 29.5 29.5 29.0 29.2 29.2 29.1Pre.nl-en31.9 32.4 32.0 32.1 32.0 32.1 31.9 32.0 30.9 31.3 31.7 31.3 31.2 31.5 31.5 31.4en-ro24.4 25.1 25.1 24.9 25.2 25.1 25.4 25.2 24.4 24.6 24.4 24.5 24.6 24.7 24.6 24.6ro-en31.3 31.6 31.3 31.4 32.1 31.6 31.4 31.7 30.3 30.7 30.9 30.6 30.3 31.2 31.2 30.9avg.31.4 31.6 31.4 31.5 31.7 31.5 31.5 31.6 30.7 30.9 31.1 30.9 30.6 31.1 31.1 30.9en-it33.9 33.3 33.5 33.6 33.8 34.0 33.5 33.8 33.1 33.2 32.6 33.0 32.4 32.6 33.4 32.8it-en37.1 36.9 37.0 37.0 37.1 37.1 36.9 37.0 35.7 35.4 36.1 35.7 36.4 35.7 35.8 36.0en-nl29.6 30.1 30.1 29.9 30.4 30.4 30.0 30.3 29.2 29.0 29.0 29.1 29.2 29.0 29.5 29.2Post.nl-en31.9 32.0 31.6 31.8 31.3 31.9 31.8 31.7 31.0 31.1 31.7 31.3 30.9 30.7 31.3 31.0en-ro25.4 25.2 24.6 25.1 25.3 25.2 25.5 25.3 24.7 25.0 24.6 24.8 24.4 24.4 25.0 24.6ro-en31.5 31.6 31.6 31.6 30.8 31.4 31.1 31.1 30.4 29.6 30.8 30.3 30.4 30.1 30.4 30.3avg.31.6 31.5 31.4 31.5 31.5 31.7 31.5 31.5 30.7 30.6 30.8 30.7 30.6 30.4 30.9 30.6", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "BLEU scores of IWSLT in supervised directions. Scores bold are the results reported in Table2. \"1,\" \"10,\" and \"20\" indicates three random seeds. \"Res.\" indicates the residual connection of self-attention in the 4 th encoder layer.", "figure_data": "Layer Direction NormS-ENC-T-DEC w/ Res. 1 10 20 avg.1T-ENC w/ Res. 10 20 avg.S-ENC-T-DEC w/o Res. 1 10 20 avg.1T-ENC w/o Res. 10 20 avg.es-de23.2 22.0 16.1 20.4 26.7 26.9 27.3 27.06.2 14.1 11.2 10.5 24.9 28.5 28.3 27.2de-es30.3 30.0 27.6 29.3 32.4 32.0 32.3 32.2 15.5 25.7 18.7 20.0 32.9 33.1 33.4 33.1es-fr35.0 35.6 34.0 34.9 38.8 38.8 39.3 39.0 27.8 29.8 28.2 28.6 39.9 39.8 39.9 39.9fr-es36.0 35.5 32.8 34.8 38.6 38.7 38.7 38.7 18.7 30.7 22.3 23.9 39.7 39.7 40.0 39.8es-nl22.7 23.0 14.2 20.0 26.4 26.3 26.3 26.37.0 12.8 15.0 11.6 23.2 27.7 27.5 26.1nl-es27.2 27.1 24.9 26.4 29.1 29.1 29.1 29.1 13.9 23.0 16.9 17.9 29.6 29.7 29.8 29.7Pre.de-fr28.6 28.1 26.9 27.9 31.4 31.3 31.7 31.5 21.9 23.0 22.5 22.5 31.9 32.3 32.2 32.1fr-de23.5 22.0 15.9 20.5 26.3 26.5 26.8 26.56.3 14.3 11.5 10.7 25.0 28.1 28.2 27.1de-nl23.2 23.4 15.0 20.5 26.3 26.2 26.0 26.27.0 12.8 16.2 12.0 22.5 27.5 27.2 25.7nl-de21.4 20.3 14.3 18.7 23.2 23.8 23.5 23.56.4 13.3 11.9 10.5 21.6 24.6 24.6 23.6fr-nl22.9 23.3 14.1 20.1 26.0 25.9 25.8 25.96.8 12.2 15.3 11.4 21.6 27.4 27.1 25.4nl-fr26.0 25.9 25.0 25.6 28.1 28.3 28.2 28.2 19.9 20.9 19.9 20.2 28.9 28.8 28.7 28.8avg.26.7 26.4 21.7 24.9 29.4 29.5 29.6 29.5 13.1 19.4 17.5 16.7 28.5 30.6 30.6 29.9es-de26.0 26.9 26.8 26.6 28.2 28.4 28.7 28.4 26.1 26.3 26.1 26.2 28.7 28.7 28.7 28.7de-es32.3 32.6 32.1 32.3 33.2 33.7 33.5 33.5 32.7 31.9 32.1 32.2 33.5 33.3 33.5 33.4es-fr37.7 38.8 37.5 38.0 40.2 40.0 40.1 40.1 37.9 37.8 37.7 37.8 40.1 39.9 40.5 40.2fr-es37.8 38.5 38.2 38.2 40.0 39.9 40.1 40.0 38.4 37.7 38.0 38.0 39.7 39.7 40.1 39.8es-nl25.6 26.0 26.2 25.9 27.9 27.7 27.8 27.8 26.0 25.7 25.5 25.7 27.8 28.0 27.9 27.9nl-es29.3 29.3 29.1 29.2 29.8 30.0 29.6 29.8 29.4 29.0 29.2 29.2 29.7 29.8 29.8 29.8Post.de-fr30.6 31.7 30.8 31.0 32.8 32.8 33.1 32.9 31.0 30.7 30.8 30.8 32.9 32.4 33.3 32.9fr-de25.9 26.4 26.6 26.3 27.8 28.6 28.8 28.4 26.3 26.0 25.1 25.8 28.2 28.5 28.3 28.3de", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "BLEU scores of 
Europarl in ZST directions. Scores in bold are the results reported in Table2. \"1,\" \"10,\" and \"20\" indicates three random seeds. \"Res.\" indicates the residual connection of self-attention in the 4 th encoder layer.", "figure_data": "27.6", "figure_id": "tab_12", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "36.2 36.5 fr-en 38.2 38.2 38.0 38.1 38.0 38.2 38.0 38.1 38.0 37.9 38.2 38.0 37.8 38.2 38.0 38.0 en-nl 28.5 28.8 28.7 28.7 28.8 28.7 28.6 28.7 28.5 28.6 28.6 28.6 28.3 28.6 28.3 28.4 nl-en 31.7 31.6 31.5 31.6 31.5 31.7 31.9 31.7 31.6 31.3 31.6 31.5 31.3 31.7 31.6 31.5 avg. 34.3 34.3 34.3 34.3 34.3 34.3 34.4 34.4 34.2 34.2 34.4 34.3 34.2 34.4 34.2 34.3 .4 28.7 28.5 28.6 28.7 29.0 28.8 28.5 28.2 28.4 28.4 28.7 28.5 28.3 28.5 de-en 35.2 35.0 35.5 35.2 34.8 35.1 34.9 34.9 35.2 35.2 35.0 35.1 35.1 35.1 34.7 35.0 en-es 37.6 37.8 37.5 37.6 37.6 37.7 37.6 37.6 37.6 37.5 37.6 37.6 37.3 37.4 37.5 37.4 es-en 39.4 39.0 39.0 39.1 39.0 39.3 38.8 39.0 39.2 38.9 39.1 39.1 39.0 39.1 39.1 39.1 en-fr 36.8 36.8 36.7 36.8 36.7 37.0 36.8 36.6 36.5 37.1 36.7 36.9 36.8 36.7 36.8 fr-en 38.3 38.2 38.4 38.3 38.2 38.2 38.4 38.3 38.2 38.1 38.2 38.2 38.1 38.3 37.9 38.1 en-nl 28.8 28.8 28.6 28.7 28.7 28.7 28.9 28.8 28.6 28.6 28.9 28.7 28.7 28.7 28.5 28.6 nl-en", "figure_data": "en-de28.4 28Post.", "figure_id": "tab_13", "figure_label": "", "figure_type": "table" }, { "figure_caption": ": BLEU scores of Europarl supervised directions. Scores in bold are the results reported in Table2. \"1,\" \"10,\" and \"20\" indicates three random seeds. \"Res.\" indicates the residual connection of self-attention in the 4 th encoder layer. BLEURT scores. We report the mean of three seeds and all the translation directions. \"Res.\" indicates the residual connection of self-attention in the 4 th encoder layer. We mark better scores between PreNorm and PostNorm in bold for ZST.", "figure_data": "#Layer NormLanguage TagRes.Zero-shot OPUS IWSLT Europarl OPUS IWSLT Europarl Supervised0Pivot55.864.673.8---1 PreNorm S-ENC-T-DEC w/35.934.666.563.870.674.92 PostNorm S-ENC-T-DEC w/49.151.273.064.170.675.03 PreNorm T-ENCw/42.553.073.063.770.674.94 PostNorm T-ENCw/43.856.073.864.070.775.05 PreNorm S-ENC-T-DEC w/o44.541.750.363.770.074.86 PostNorm S-ENC-T-DEC w/o47.660.872.964.069.774.97 PreNorm T-ENCw/o42.557.172.563.669.974.88 PostNorm T-ENCw/o43.160.273.864.069.774.9", "figure_id": "tab_14", "figure_label": "11", "figure_type": "table" } ]
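The BLEU scores and off-target rates reported in the tables above follow the evaluation protocol described in Appendix B (SacreBLEU with "zh"/"13a" tokenization and FastText language identification). A minimal sketch of those two measurements is shown below; the model file name, example sentences, and list-based inputs are placeholders, and the lid.176.bin checkpoint has to be downloaded separately.

```python
import fasttext
import sacrebleu

def corpus_bleu(hypotheses, references, target_lang):
    # "zh" tokenization for Chinese outputs, "13a" otherwise, as in Appendix B
    tokenize = "zh" if target_lang == "zh" else "13a"
    return sacrebleu.corpus_bleu(hypotheses, [references], tokenize=tokenize).score

def off_target_rate(hypotheses, target_lang, lid_model_path="lid.176.bin"):
    """Percentage of outputs whose detected language differs from the intended target."""
    model = fasttext.load_model(lid_model_path)
    off_target = 0
    for sent in hypotheses:
        labels, _ = model.predict(sent.replace("\n", " "))
        if labels[0].replace("__label__", "") != target_lang:
            off_target += 1
    return 100.0 * off_target / max(len(hypotheses), 1)

# hypothetical usage for one zero-shot direction (xx -> de)
hyps = ["Das ist ein Test.", "This is a test."]
refs = ["Das ist ein Test.", "Das ist ein Test."]
print(corpus_bleu(hyps, refs, "de"))
print(off_target_rate(hyps, "de"))   # 50.0 for this toy pair
```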
Zhuoyuan Mao; Raj Dabre; Qianying Liu; Haiyue Song; Chenhui Chu; Sadao Kurohashi
[ { "authors": "Maruan Al; -Shedivat ; Ankur Parikh", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Consistency by agreement in zero-shot neural machine translation", "year": "2019" }, { "authors": "Naveen Arivazhagan; Ankur Bapna; Orhan Firat; Roee Aharoni; Melvin Johnson; Wolfgang Macherey", "journal": "", "ref_id": "b1", "title": "The missing ingredient in zero-shot neural machine translation", "year": "2019" }, { "authors": "Jimmy Lei; Jamie Ba; Geoffrey E Ryan Kiros; Hinton", "journal": "", "ref_id": "b2", "title": "Layer normalization", "year": "2016" }, { "authors": "Alexei Baevski; Michael Auli", "journal": "", "ref_id": "b3", "title": "Adaptive input representations for neural language modeling", "year": "2019-05-06" }, { "authors": "Mauro Cettolo; Marcello Federico; Luisa Bentivogli; Jan Niehues; Sebastian Stüker; Katsuhito Sudoh; Koichiro Yoshino; Christian Federmann", "journal": "", "ref_id": "b4", "title": "Overview of the IWSLT 2017 evaluation campaign", "year": "2017" }, { "authors": "Orhan Firat; Kyunghyun Cho; Baskaran Sankaran; T Fatos; Yoshua Yarman-Vural; Bengio", "journal": "Comput. Speech Lang", "ref_id": "b5", "title": "Multi-way, multilingual neural machine translation", "year": "2017" }, { "authors": "Jiatao Gu; Yong Wang; Kyunghyun Cho; O K Victor; Li", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Improved zero-shot neural machine translation via ignoring spurious correlations", "year": "2019" }, { "authors": "Melvin Johnson; Mike Schuster; Quoc V Le; Maxim Krikun; Yonghui Wu; Zhifeng Chen; Nikhil Thorat; Fernanda Viégas; Martin Wattenberg; Greg Corrado; Macduff Hughes; Jeffrey Dean", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b7", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "year": "2017" }, { "authors": "Armand Joulin; Edouard Grave; Piotr Bojanowski; Matthijs Douze; Hervé Jégou; Tomás Mikolov", "journal": "", "ref_id": "b8", "title": "Fasttext.zip: Compressing text classification models", "year": "2016" } ]
[ { "formula_coordinates": [ 2, 98.05, 487.36, 191.08, 25.98 ], "formula_id": "formula_0", "formula_text": "LayerNorm(x) = x -E(x) V(x) • g + b,(1)" }, { "formula_coordinates": [ 11, 89.17, 183.95, 128.14, 14.56 ], "formula_id": "formula_1", "formula_text": "L1 L2 L3 L4 L5 L6 L7 L8 L9 L10 L11 L12 |<-" } ]
10.1561/1500000079
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9" ], "table_ref": [], "text": "Among 1 the seven key requirements to achieve trustworthy AI proposed by the High-Level Expert Group on Artificial Intelligence (AI-HLEG) established by the European Commission (EC), the fifth requirement (\"Diversity, non-discrimination and fairness\") declares: \"In order to achieve Trustworthy AI, we must enable inclusion and diversity throughout the entire AI system's life cycle. [...] This requirement is closely linked with the principle of fairness\"[Chapter 2, Section 1.5, AI-HLEG, 2019]. Hereafter, we try to shed light on how closely these two distinct concepts, diversity and fairness, may be treated by focusing on information access systems [3] and ranking literature [4,5,6]. These concepts should not be used interchangeably because they do represent two different values, but what we argue is that they also cannot be considered totally unrelated or divergent. Having diversity does not imply fairness, but fostering diversity can effectively lead to fair outcomes, an intuition behind several methods proposed to mitigate the disparate impact of information access systems, i.e. recommender systems and search engines [7,8,9,10]." }, { "figure_ref": [], "heading": "Links between Fairness and Diversity", "publication_ref": [ "b10", "b11", "b12", "b4", "b3", "b14", "b12" ], "table_ref": [], "text": "The first link can be found between the concepts of group fairness and egalitarian diversity [11,12]. Indeed, the former, often referred to as demographic or statistical parity, is achieved when different groups, e.g., with regard to certain demographics, receive similar treatments. To maximise egalitarian diversity, hence having a population uniformly distributed among different groups [13], is identical to enforcing group fairness, wherein every group has equal representation i.e. similar treatment. This idea is behind the use of diversity constraints while intervening in the outcome of an automated decision-making system [5]. Moreover, group EWAF'23: European Workshop on Algorithmic Fairness, June 06-08, 2023, Zurich, Switzerland [email protected] (L. Porcaro); [email protected] (C. Castillo); [email protected] (E. Gómez); [email protected] (J. Vinagre) 0000-0003-0218-5187 (L. Porcaro); 0000-0003-4544-0416 (C. Castillo); 0000-0003-4983-3989 (E. Gómez); 0000-0001-6219-3977 (J. Vinagre) fairness relates to the concept of coverage-based diversity, an aggregated diversity metric often used in Recommender Systems literature. Indeed, such metric is maximised when different groups of items are represented in the most heterogeneous way.\nSecond, both fairness and diversity relate to the treatment of, and consequently the impact on, protected/disadvantaged/minority groups (or classes). The definition of protected class is usually dependent upon laws and policies which may vary between countries, aiming at preventing any form of discrimination towards such classes. 
For instance, the EU Charter of Fundamental Rights states that: \"Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited\" [Article 21, EC, 2012].\nAs argued by Castillo [4], ensuring fairness can be seen as \"emphasising not the presence of various groups but ensuring that those in protected groups are effectively included\". Under this lens, it is evident that the construction of a group diverse in egalitarian terms may not result in a fair representation if disadvantaged classes are not effectively included. However, if we consider the exposure diversity with adversarial perspective as defined by Helberger [15], it explicitly aims at \"promoting exposure to critical voices and disadvantaged views that otherwise might be silenced in the public debate\". If defined as above, we notice that both fairness and diversity stress the importance of targeting a representation that is not only equal in terms of distribution but also that may give exposure to historically disadvantaged groups. We can further relate these concepts with the idea of normative diversity [13]. Indeed, if we imagine a scenario where the non-diverse norm coincides with the privileged group -for instance, the STEM community where the old-white-male represents the stereotype of the scientist -increasing the diversity in a normative sense would result in a wider inclusion of marginalised voices, which is what the exposure diversity under an adversarial perspective would target." }, { "figure_ref": [], "heading": "Differences and Limitations", "publication_ref": [ "b15", "b16", "b17", "b18", "b19", "b20" ], "table_ref": [], "text": "So far we have discussed some intersections between diversity and fairness concepts, but in order to better clarify their nature it is useful to focus also on the differences between them. Early quantitative definitions of both values have been proposed several decades ago, but in their rationale we note a substantial difference. Indeed, whilst since the beginning fairness metrics have been proposed to tackle societal issues [16], most of the diversity indexes still widely used have been proposed in disparate fields, e.g., Simpson's Index in Ecology [17], and they have been originally formulated to measure diversity intended as heterogeneity, variety or entropy, e.g., Shannon's Index [18]. Even if this does not undermine their use in measuring diversity, it is also true that their application needs to be contextualised for supporting the validity of the inferred results. Similarly, a lack of a value-oriented approach can be found in the design of the diversification techniques [19,20]. Indeed, looking at the early proposals of the Information Retrieval and Recommender Systems communities, the main goal for diversifying is to tackle the problem of ambiguity of a query or the redundancy of the results, and also to deal with uncertainty. Great advancements have been made in this direction [21], but this utility-oriented definition of diversity has partly created ambiguity over the concept of diversity itself, at least in the communities where such approaches have been applied." 
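Since this section contrasts fairness metrics with diversity indexes such as Simpson's and Shannon's, a small illustration of how those indexes are computed from group memberships may be helpful; the example population below is invented and purely illustrative.

```python
import math
from collections import Counter

def proportions(groups):
    counts = Counter(groups)
    total = sum(counts.values())
    return [c / total for c in counts.values()]

def simpson_index(groups):
    # Simpson (1949): probability that two randomly drawn members share a group
    return sum(p ** 2 for p in proportions(groups))

def shannon_index(groups):
    # Shannon (1948): entropy of the group distribution (in nats)
    return -sum(p * math.log(p) for p in proportions(groups))

population = ["A", "A", "A", "B", "B", "C"]   # invented group memberships
print(simpson_index(population))   # lower values indicate more heterogeneity
print(shannon_index(population))   # higher values indicate more heterogeneity
```

Under an egalitarian reading of diversity, a uniform distribution over groups minimises Simpson's index and maximises Shannon's index, which is also the configuration demanded by demographic parity.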
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b21", "b22" ], "table_ref": [], "text": "Whilst the aforementioned points are just a few among the several aspects that link diversity and fairness, we conclude by stressing their relevance in recent policies proposed in the European context. The Digital Service Act (DSA) [22] mandates that digital services powered by technologies such as recommender systems and search engines should be monitored to guarantee the avoidance of unfair or arbitrary outcomes.\nUnder a different lens, the Artificial Intelligence Act (AI Act) proposal [23] also refers to the need for bias monitoring as part of the mandatory requirements for high-risk AI systems. Moreover, in terms of diversity the AI Act explicitly states that providers of AI systems should be encouraged to create code of conduct covering aspects such as accessibility, stakeholders participation and ensuring diversity of development teams. These two goals considered above, i.e. system-centric (ensuring bias and fairness in algorithmic systems) and a people-centric view (ensuring diversity of persons involved in the AI design process), are strongly related. Only fostering the diversity of development teams, and therefore embedding different perspectives, could lead to a future where Information Access Systems act in a trustworthy and fair way." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work is partially supported by the HUMAINT programme (Human Behaviour and Machine Intelligence), Joint Research Centre, European Commission. The project leading to these results received funding \"la Caixa\" Foundation (ID 100010434), under agreement LCF/PR/PR16/51110009, an from EU-funded projects \"SoBigData++\" (grant agreement 871042) and \"FINDHR\" (grant agreement 101070212)." } ]
Fairness and Diversity in Information Access Systems
[]
Lorenzo Porcaro; Carlos Castillo; Emilia Gómez; João Vinagre
[ { "authors": "L Porcaro", "journal": "", "ref_id": "b0", "title": "Assessing the impact of music recommendation diversity on listeners", "year": "2022" }, { "authors": "Ai-Hleg ", "journal": "", "ref_id": "b1", "title": "High-Level Expert Group on Artificial Intelligence, Ethics guidelines for trustworthy AI", "year": "2019" }, { "authors": "M D Ekstrand; A Das; R Burke; F Diaz", "journal": "Foundations and Trends® in Information Retrieval", "ref_id": "b2", "title": "Fairness in information access systems", "year": "2022" }, { "authors": "C Castillo", "journal": "ACM SIGIR Forum", "ref_id": "b3", "title": "Fairness and transparency in ranking", "year": "2018" }, { "authors": "M Zehlike; K Yang; J Stoyanovich", "journal": "ACM Computing Surveys", "ref_id": "b4", "title": "Fairness in ranking, part I: Score-based ranking", "year": "2022" }, { "authors": "G K Patro; L Porcaro; L Mitchell; Q Zhang; M Zehlike; N Garg", "journal": "", "ref_id": "b5", "title": "Fair ranking: A critical review, challenges, and future directions", "year": "2022" }, { "authors": "L E Celis; A Deshpande; T Kathuria; N K Vishnoi", "journal": "", "ref_id": "b6", "title": "How to be fair and diverse?", "year": "2016" }, { "authors": "P.-R Lhérisson; F Muhlenbach; P Maret", "journal": "", "ref_id": "b7", "title": "Fair recommendations through diversity promotion", "year": "2017" }, { "authors": "W Liu; R Burke", "journal": "", "ref_id": "b8", "title": "Personalizing fairness-aware re-ranking", "year": "2018" }, { "authors": "G Mcdonald; C Macdonald; I Ounis", "journal": "Information Retrieval Journal", "ref_id": "b9", "title": "Search results diversification for effective fair ranking in academic search", "year": "2022" }, { "authors": "M Drosou; H Jagadish; E Pitoura; J Stoyanovich", "journal": "Big Data", "ref_id": "b10", "title": "Diversity in big data: A review", "year": "2017" }, { "authors": "M Mitchell; D Baker; N Moorosi; E Denton; B Hutchinson; A Hanna; T Gebru; J Morgenstern", "journal": "", "ref_id": "b11", "title": "Diversity and inclusion metrics in subset selection", "year": "2020" }, { "authors": "D Steel; S Fazelpour; K Gillette; B Crewe; M Burgess", "journal": "European Journal for Philosophy of Science", "ref_id": "b12", "title": "Multiple diversity concepts and their ethical-epistemic implications", "year": "2018" }, { "authors": " Ec", "journal": "", "ref_id": "b13", "title": "European Commission, The charter of fundamental rights, Official Journal of the European Communities", "year": "2012" }, { "authors": "N Helberger; K Karppinen; L D'acunto", "journal": "Information, Communication and Society", "ref_id": "b14", "title": "Exposure diversity as a design principle for recommender systems", "year": "2018" }, { "authors": "B Hutchinson; M Mitchell", "journal": "", "ref_id": "b15", "title": "50 years of test (un)fairness: Lessons for machine learning", "year": "2019" }, { "authors": "J Simpson", "journal": "Nature", "ref_id": "b16", "title": "Measurements of diversity", "year": "1949" }, { "authors": "C Shannon", "journal": "The Bell System Technical Journal", "ref_id": "b17", "title": "A Mathematical Theory of Communication", "year": "1948" }, { "authors": "J Carbonell; J Goldstein", "journal": "", "ref_id": "b18", "title": "Use of MMR, diversity-based reranking for reordering documents and producing summaries", "year": "1998" }, { "authors": "B Smyth; P Mcclave", "journal": "", "ref_id": "b19", "title": "Similarity vs. 
Diversity", "year": "2001" }, { "authors": "P Castells; N J Hurley; S Vargas", "journal": "Springer", "ref_id": "b20", "title": "Novelty and diversity in recommender systems", "year": "2022" }, { "authors": "", "journal": "EC, European Commission", "ref_id": "b21", "title": "European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and", "year": "2022" }, { "authors": " Ec", "journal": "", "ref_id": "b22", "title": "Proposal for a Regulation laying down harmonised rules on artificial intelligence", "year": "2021" } ]
[]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b26", "b54", "b58", "b29", "b65", "b31", "b50", "b56", "b66", "b9", "b5", "b40", "b7", "b8", "b53", "b0", "b10", "b45" ], "table_ref": [], "text": "As a cornerstone of natural language processing, event detection (ED) supports numerous downstream tasks, e.g., event extraction (Liu, Liang, and Xu 2022;Wang et al. 2022;Zhang et al. 2021), text classification (Liu 2022;Zheng et al. 2020), information retrieval (Madisetty and Desarkar 2022;Voskarides et al. 2021;Zhao et al. 2020a;2020b),dialogue recognition (Wei et al. 2022), etc. However, due to the datahungry trait of deep learning, traditional ED models often struggle in the scenario where annotating sufficient labeled data is unaffordable. In this light, few-shot event detection (FSED) (Zheng et al. 2021) is proposed, which aims at making predictions with few labeled data.\nRecently, FSED work achieves much progress by the virtue of meta learning (Fei-Fei, Fergus, and Perona 2003), which trains and captures the meta knowledge of event type on the tremendous labeled data of old event types, thereby helping FSED models generalize quickly to novel event types with scarce labeled data. These FSED approaches can mainly be classified into metric-based FSED methods (Lai, Dernoncourt, and Nguyen 2021;Cong et al. 2021) and optimization-based FSED ones (Lai et al. 2021). However, such ambitious data demands in old event types cannot be satisfied completely in some application scenarios. In this paper, we therefore focus on the true few-shot training setting (i.e., only providing a few labeled data regardless of old or new event types) (Perez, Kiela, and Cho 2021) for event detection.\nIn addition, existing FSED approaches are typically exposed to the trigger bias stemming from the ED datasets. Taking a look at the FewEvent dataset (Deng et al. 2020), whether the trigger words of any event type or the event types triggered by the same word, their frequencies strictly follow the long-tail distribution. For instance, for the event type \"Life.Marry\", the percentage of its top-3 trigger words reaches 64.97%. While for the trigger word \"work\", the percentage of top-3 event types triggered by this word amounts to 99.53%. Such long-tail distributions will easily lead to the following issues for event detection. The first is the context-bypassing issue. In the few-shot scenario, the FSED models are easily tempted to have over-confidence in highfrequency trigger words or event types, thereby simply taking the trigger words as clues to determine the event types without considering any event context. The second is the generalization disability issue. Due to the scanty evidence brought by the context-bypassing issue, the FSED models cannot generalize to low-frequency trigger words or event types. These aforementioned issues can be further validated by the performance comparison on biased and unbiased test sets. As shown in Fig. 1, the performance on unbiased test sets (i.e., TUS and COS) is drastically lower than that on the biased one (i.e., IUS). Such performance drops indicate that the trigger bias in the ED datasets can make the FSED models obtain inflated performance. Similar phenomena can also be found on the ACE-2005 dataset (Doddington et al. 
2004).\nIn this paper, we attempt to accommodate the lowresource scenarios and overcome the trigger bias in few-shot event detection by proposing a multi-step prompt learn- COnfusion Sampling (COS) construct the test sets without the trigger bias, respectively. These results are produced by Wang et al. (2021).\ning model (MsPrompt), which consists of three main components, including an under-sampling module, a multi-step prompt module, and a prototypical module. In particular, the under-sampling module aims to construct the training data abiding by the true few-shot format. Under such a radical data setting, recent popular prompt learning (Brown et al. 2020;Gao, Fisch, and Chen 2021) which elicits the latent knowledge from the pretrained language models (PLMs) by some prompts can provide sufficient latent information for the FSED models and thus forms the multi-step prompt module. This module extends the existing one-step prompt to a multi-step one, which refines the FSED process into two consecutive subprompts, i.e., the trigger recognizer and the event classifier. More specifically, such two subprompts resort to the PLMs to locate the trigger word and predict the event type, where the generation format forces the FSED models to concentrate on the event context, thereby mitigating the context-bypassing risk. In addition, this module introduces the knowledge-enhanced ontology to enrich the prompt semantics. Finally, the prototypical module, dealing with the generalization disability issue, employs the prototypical networks (Snell, Swersky, and Zemel 2017) to obtain the representation of event types by clustering, which removes the noise of high-frequency labels and enhances the discrimination ability of low-frequency labels.\nExtensive experiments for few-shot event detection are conducted on ACE-2005 and FewEvent. The results show that our proposed MsPrompt achieves obvious improvements in terms of weighted F1-score over the state-of-theart baselines in the strict low-resource scenarios. Moreover, the evaluations on the biased and unbiased test sets verify the debiasing potentiality and strong generalization of MsPrompt.\nOur key contributions in this paper can be summarized in the following three folds. 1. To address the context-bypassing problem, we extend the one-step prompt to a multi-step one by disassembling the FSED process into two consecutive subprompts, which can efficiently aggregate the predicted trigger, the knowledge-enhanced ontology, and the latent knowledge in PLMs to focus on the context and predict accurately. 2. We design a novel prototypical module to mitigate the disabled generalization issue by strengthening the discrimination of low-frequency labels. 3. We verify the effectiveness of MsPrompt against the state-of-the-art baselines on ACE-2005 and FewEvent for the true few-shot event detection and find that MsPrompt can not only achieve better model performance but make progress in debiasing triggers." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "We review the related works from two main aspects, i.e., event detection and debiasing methods." }, { "figure_ref": [], "heading": "Event detection", "publication_ref": [ "b1", "b32", "b14", "b27", "b34", "b39", "b23", "b57", "b7", "b5", "b66" ], "table_ref": [], "text": "According to the accessibility of data resources, the task of event detection can be roughly divided into two categories: data-rich and few-shot ED. 
In data-rich ED, sufficient data provides a guarantee for the traditional neural networks, such as convolutional neural network (CNN) (Chen et al. 2015;Nguyen and Grishman 2015), recurrent neural network (RNN) (Jagannatha and Yu 2016;Nguyen, Cho, and Grishman 2016), graph neural network (GNN) (Liu, Luo, and Huang 2018;Nguyen and Grishman 2018;Peng et al. 2022), and self-attention network (Liu et al. 2018;2017). In particular, to reduce the cost of tagging triggers, Liu et al. (2019) propose a type-aware bias neural network to encode a sentence with the target event types. Xie and Tu (2022) develop an approach based on Graph Convolutional Network (GCN) for information extraction and aggregation on the graph to alleviate the heavy reliance on a fixed syntactic parse tree structure. Few-shot ED aims to alleviate the problems such as generalization bottlenecks caused by insufficient data and maintain the excellent detection performance in low-resource scenarios. Deng et al. (2020) construct a dynamic-memorybased prototypical network to generate a robust event representation through the multi-hop mechanism. Cong et al. (2021) propose the prototypical amortized conditional random field to handle the label dependency issue. Lai et al. (2021) introduce a regulating representation based on graphs and word sense disambiguation to improve the generalization performance. Zheng et al. (2021) model the taxonomyaware distance relations and integrate the Poincaré embeddings into a TapNet.\nHowever, either data-rich or few-shot ED requires abundant held-out classes to achieve high performance, which is divorced from most application scenarios in which labeled data is difficult to obtain. Therefore we follow the true few-shot training format, i.e., a small validation set of the same size as the few-shot training set. To capture event information with extremely sparse data, we further employ a prompt-based method for the true few-shot event detection, which can drive huge PLMs and evoke the inherent knowledge therein to enrich the semantics of event representation." }, { "figure_ref": [], "heading": "Debiasing methods", "publication_ref": [ "b43", "b43", "b41", "b25", "b63", "b12", "b46", "b53", "b48", "b3" ], "table_ref": [], "text": "As predictive biases emerge in various tasks of NLP, two serious consequences, i.e., outcome disparities and error disparities, can not be ignored (Shah, Schwartz, and Hovy 2020). Therefore, we summarize a series of debiasing methods in detail below.\nDebiasing methods in NLP According to the bias source, Shah, Schwartz, and Hovy (2020) divide the biases in NLP into four categories: label bias, selection bias, model overamplification, and semantic bias. To release the label bias attributed to the incorrect annotation, Qian et al. (2021) design a counterfactual-based debiasing framework for text classification. For the selection bias, i.e., the phenomenon in which the training observations differ from the target distribution due to the non-representative training data, Liu et al. (2021) propose an additional saliency selection layer and an optimization method to mitigate the implicit bias of deep text classifiers. Zhao et al. (2021) observe the majority label bias and recency bias existing in the prompt model of GPT-3, both of which are the over-amplifying bias. That is, models rely on imperfect evidence for predictive shortcuts. 
Guo, Yang, and Abbasi (2022) automatically mitigate the semantic biases, embodied in undesired social stereotypes carried by PLMs.\nDebiasing methods in ED As mentioned in Section 1, the severe trigger bias is widely appeared in event detection, which essentially belongs to the selection bias and overamplifying bias (Song et al. 2022). To address the contextbypassing problem caused by the trigger bias, Wang et al. (2021) employ adversarial training and trigger reconstruction techniques to ease the over-reliance on the trigger. Although the context-bypassing is alleviated, in few-shot occasions, the addition of noise to the trigger embedding may induce the misclassification of the model, thus aggravating the disabled generalization. Tong et al. (2020) provide an enrichment knowledge distillation model to reduce the inherent bias of common trigger words by inducting open-domain trigger knowledge. However, a large number of unlimited candidate triggers from unlabeled data imported by opendomain knowledge cause a great interference to the event detection model and harm the generalization performance, especially for FSED. In addition, Chen et al. (2021) perform a causal intervention on the context by a backdoor adjustment to mitigate overfitting induced by the trigger bias.\nAlbeit much progress, these debiasing strategies often ignore the context-bypassing or disabled generalization issue, and thus cannot be applied to real low-resource scenarios. We argue these two issues should be dealt with jointly to improve the prediction and generalization performance. Hence, we introduce a task-oriented and knowledgeenhanced ontology without adding noise and develop a novel debiasing proposal MsPrompt for few-shot event detection." }, { "figure_ref": [], "heading": "Approach", "publication_ref": [], "table_ref": [], "text": "In this section, we first formulate the task of event detection and detail the MsPrompt model in Section 3.1, which consists of three main components, including an undersampling module (see Section 3.2), a multi-step prompt module (see Section 3.3), and a prototypical module (see Section 3.4)." }, { "figure_ref": [], "heading": "Task formulation and model framework", "publication_ref": [], "table_ref": [], "text": "ED. The task of event detection can be formulated as x → (x, t) → y, where x represents the event mention, and y ∈ Y represents the event type predicted from the predefined event label set Y . In the intermediate step, the trigger t is identified from the input x, in the form of a word or a phrase which triggers the corresponding event label y (Nguyen et al. 2016). Few-shot ED. Following the archetypal N -way K-shot training format, a meta task is constructed by a support set S with N novel event labels that contain K instances per label and a query set Q that includes unlabeled instances to be predicted from S. Typically, a meta task can accurately detect the event type of each query instance from S with only a few labeled data. However, few-shot ED requires a large amount of old event label data to extract the meta knowledge, making it unsuitable for the real scenarios. True few-shot ED. Formally, given a group of instances with their corresponding trigger t and event label y ∈ Y , each label contains K instances, making up the true fewshot training set. 
True few-shot ED, typically trained from the true few-shot training set and evaluated by a validation set of the same size, targets to identify the trigger word and detect the predefined event label in the low-resource scenarios. Framework of MsPrompt. Based on the true few-shot ED formulation, we propose a novel approach MsPrompt. The workflow of MsPrompt is shown in Fig. 2. First, through an under-sampling module, a data matrix X ∈ R N ×K is constructed from the initial annotated dataset to satisfy the true few-shot training setting, where N = |Y | and K is the number of samples per event type. Then an instance x i , randomly sampled from X, is fed into the next multistep prompt module to obtain the trigger t i , which is accordingly input into the event classifier together with x i to generate a d-dimension event embedding e 0 ∈ R d . Finally, the event embedding e 0 is mapped into the event vector space E = {e 1 , e 2 , . . . , e N } in the prototypical network, where e i ∈ R d is a d-dimension vector of event type y i . Thus, we can get the probability p i of each event label, and the label with the largest probability is the predicted one." }, { "figure_ref": [ "fig_2" ], "heading": "Under-sampling", "publication_ref": [], "table_ref": [], "text": "As the frequency of labels in the event label set Y is extremely unbalanced and obeys a long-tail distribution, shown in Fig. 3, it brings an unpredictable deviation to event detection. To avoid such deviation and enhance the generalization ability of scarce event types, we utilize an undersampling module into ED, which selects the same number of instances with each event type to form a novel training and validation set without label deviation.\nIn detail, given an ED dataset, we randomly sample\nK 1 1 x 2 1 x 1 K x 1 2 x 2 2 x 2 K x 1 N x 2 N x K N x\nRandom Sample instances for each event type to form a K-shot training set:\nX train =      x 1 1 x 2 1 • • • x K 1 x 1 2 x 2 2 • • • x K 2 . . . . . . . . . . . . x 1 N x 2 N • • • x K N      . (1\n)\nAfter that, we repeat this operation on the rest of instances to generate a K-shot validation set X valid that does not intersect with X train , i.e., X train ∩ X valid = ∅. Finally, the remaining unsampled instances consist the test set X test = X -X valid -X train .\nThe newly constructed training, validation, and test sets can accommodate the true few-shot learning paradigm that meets the low resource scenarios in reality to effectively evaluate the performance of FSED models." }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Multi-step prompt", "publication_ref": [], "table_ref": [], "text": "Since the conventional ED paradigm x → (x, t) → y is a multi-step process, we extend the general one-step prompt into the multi-step prompt, performing the two subtasks coherently in one iteration and synchronously training the two consecutive subprompts. We depict how the multi-step prompt module works in Fig. 4. It mainly contains two steps, i.e., trigger recognition (see Section 3.3) and event classification (see Section 3.3). In addition, a supplemental knowledge-enhanced ontology used in each step is discussed in Section 3.3.\nTrigger recognition Identifying triggers can be regarded as a text annotation task. First, we manually construct \"Trigger word is [MASK].\" as the prefixed prompt template T 1 , where [MASK] is the masked position to match the trigger word t i in an event mention x i . 
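As a concrete illustration of the under-sampling module described above, the following sketch builds disjoint K-shot training and validation sets per event type and leaves the remainder as the test set. The fixed seed of 42 and the rule of ignoring event types with at most 2K mentions follow the paper's stated setup; the instance schema and function name are illustrative assumptions, not the authors' code.

import random
from collections import defaultdict

def undersample_split(instances, k, seed=42):
    # `instances` is assumed to be a list of dicts such as
    # {"mention": str, "trigger": str, "label": str}; this schema is illustrative.
    by_label = defaultdict(list)
    for ins in instances:
        by_label[ins["label"]].append(ins)

    rng = random.Random(seed)
    train, valid, test = [], [], []
    for label, items in by_label.items():
        if len(items) <= 2 * k:         # event types that cannot fill disjoint K-shot sets are ignored
            continue
        rng.shuffle(items)
        train.extend(items[:k])         # K-shot training set X_train
        valid.extend(items[k:2 * k])    # disjoint K-shot validation set X_valid
        test.extend(items[2 * k:])      # remaining instances form X_test
    return train, valid, test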
Then we concatenate the template T 1 with each event mention x (e.g., \"And I agree that we shouldn't send people over there.\" in Fig. 4) to obtain the modified prompt f 1 as:\nf 1 (x) = [CLS]T rigger word is [M ASK].[SEP ]x[SEP ].\n(2) In detail, given an event mention x, a sequence w = (w 1 , w 2 , • • • , w L ) is obtained after word segmentation, where L is the mention length. The trigger t can be represented by the embedding of [MASK] that is filled via a masked language modeling process. Then the trigger probability distribution is obtained by mapping the vocabulary list of PLMs to w, denoted as P t :\nP t = P ( [M ASK] = t| f 1 (x)) = P ( t = w j | w) .(3)\nThe candidate word w j ∈ w with the highest probability will be recognized as the trigger word. Then the annotated sequence of the event mention is produced, i.e.,\nAnd I agree that we shouldn't send people over there.\n[CLS]\n[SEP]" }, { "figure_ref": [], "heading": "Input Text", "publication_ref": [], "table_ref": [], "text": "Trigger word is [MASK] .\nTemplate STEP 1: Trigger Recognition STEP 2: Event Classification . This is event about [CLS] [SEP]" }, { "figure_ref": [ "fig_5" ], "heading": "Input Text", "publication_ref": [], "table_ref": [], "text": "Trigger word is send.\n[MASK] And I agree that we shouldn't send people over there. \nw = (w 1 , w 2 , • • • , t, • • • , w L ).\nThe loss function of trigger recognition is defined as a cross-entropy loss L t as:\nL t = - L j=1 tj log (P ( t = w j | w)),(4)\nwhere the gold trigger label is extend into a one-hot vector t = { tj } L j=1 . Event classification After obtaining the predicted trigger word, we can classify the mentions into a predefined event label, dubbed as event classification. The prefixed prompt template T 2 used here is \"This is event about [MASK].\", where [MASK] can be regarded as the event embedding e 0 ∈ R d to represent the event context. Next, for generating e 0 , the new assembled prompt f 2 is fed into the masked language model (MLM) as:\nf 2 (x ) = [CLS]T his is event about [M ASK].[SEP ]x [SEP ],\n(5) where x is the initial event mention x integrated with the corresponding trigger word (e.g., \"And I agree that we shouldn't send people over there. [SEP] Trigger word is send.\" in Fig. 4). Note that the trigger word is obtained by the trigger recognizer at the validation or test stage, while it is the ground-truth trigger in the training stage.\nThen, given a set of event labels Y = {y 1 , y 2 , . . . , y N } and the generated event embedding e 0 , the event probability distribution can be symbolized as P y :\nP y = P ( [M ASK] = y| f 2 (x )) = P ( y = y j | Y ) = (p j ) N ,(6)\nwhere p j (j = 1, 2, • • • , N ) is the predicting probability of event label y j , and arg max yj p j is the target event label. To train the event classifier, we employ a cross-entropy loss as the optimization objective:\nL y = - N j=1 ŷj log (p j ),(7)\nwhere the actual event label is represented as a one-hot vector ŷ = (ŷ 1 , ŷ2 , • • • , ŷN ). To sum up, the total loss function of MsPrompt can be expressed as:\nL = αL t + βL y ,(8)\nwhere α, β ∈ R are the adjustable parameters of the trigger recognizer and the event classifier, respectively.\nKnowledge-enhanced ontology In the trigger recognizer and the event classifier, we employ the distinct prompt template T 1 and T 2 as clues to detect the target trigger and the event type by the process of masked language modeling. However, the \"trigger word\" and \"event\" are still hard to understand for the PLMs. 
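The two consecutive subprompts can be realized with an off-the-shelf masked language model roughly as sketched below, using the bert-base-uncased encoder mentioned later in the paper. The function names, the restriction of trigger candidates to single-word-piece tokens, and the greedy argmax decoding are simplifying assumptions for illustration, not the authors' implementation.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()

@torch.no_grad()
def recognize_trigger(mention):
    # Step 1: fill "Trigger word is [MASK]." and restrict candidates to words of the mention.
    enc = tok(f"Trigger word is {tok.mask_token}.", mention, return_tensors="pt")
    mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0].item()
    logits = mlm(**enc).logits[0, mask_pos]                    # scores over the whole vocabulary
    cands = [w for w in mention.lower().split() if len(tok.tokenize(w)) == 1]
    scores = logits[tok.convert_tokens_to_ids(cands)]          # keep only in-sentence candidates
    return cands[scores.argmax().item()]                       # (guard for empty `cands` omitted)

@torch.no_grad()
def event_embedding(mention, trigger):
    # Step 2: the [MASK] hidden state of the event prompt serves as the event embedding e_0.
    enc = tok(f"This is event about {tok.mask_token}.",
              f"{mention} Trigger word is {trigger}.", return_tensors="pt")
    mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0].item()
    out = mlm(**enc, output_hidden_states=True)
    return out.hidden_states[-1][0, mask_pos]                  # d-dimensional vector e_0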
Therefore, we introduce the knowledge-enhanced ontology to extend the semantics of these key words to well elicit the latent knowledge from PLMs and connect with the current task.\nIn particular, for the trigger recognizer, we add an ontology text O 1 \"Trigger word: a word that can trigger an event, usually a verb or noun in the sentence.\" after the event mention to further elaborate the meaning of \"trigger word\". Analogously, in the event classifier, another ontology text O 2 \"Event: which type the sentence or trigger belongs to.\" is placed between the mention and the trigger word, which helps the prompt model clarify the objective of event classification." }, { "figure_ref": [], "heading": "Prototypical network", "publication_ref": [], "table_ref": [], "text": "Due to the disabled generalization caused by the severe trigger bias, classifying scarce labels is challenging in the true few-shot setting. Therefore, when generating the probability distribution of the predicted event label in Section 3.3, we abandon the convention of applying a verbalizer to map the embedding e 0 from the vocabularies of PLMs to the label space Y simply. Instead, we introduce a prototypical network that clusters all instances to obtain the centroid of each cluster as the representation of the event labels. With a strong generalization ability, this module can reduce the interference of edge instances in dense clusters, i.e., the noise of high-frequency labels, and can also effectively capture the inter-class relationship as well as the spatial information of all samples in the sparse clusters to boost the discrimination ability of the low-frequency labels.\nIn particular, given an event embedding e 0 obtained from the event classifier and the randomly initialized event vector space E = {e 1 , e 2 , . . . , e N }, we gauge the distance between e 0 and e i ∈ E by Euclidean Metric. The cluster centroid is further calculated as the label representation e i ∈ R d , all of which form a prototypical network E. Then, the prototypical network is updated synchronously with the prompt model by the optimization objective. Thus, the predicted probability p j ∈ R of label y j can be expressed as:\np j = exp (-D (e 0 , e j )) N n=1 exp (-D (e 0 , e n )) ,(9)\nwhere D(•, •) returns the Euclidean distance. Then we obtain the probability distribution P y in Eq.( 6) to identify the predicted event label with the maximal probability, which can be regarded as the centroid closest to e 0 in the prototypical network." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce two widely-used datasets for event detection in Section 4.1. Then we detail the research questions and experimental configurations in Section 4.2. Finally, we list several baselines in Section 4.3." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b59", "b10" ], "table_ref": [ "tab_0", "tab_1", "tab_2" ], "text": "We evaluate the performance of MsPrompt and the baselines on two datasets, i.e., ACE-20051 and FewEvent2 . We follow Zhang et al. (2022) to check whether the trigger words are consistent with the annotated index range and delete the inconsistent samples. After that, the statistics of ACE-2005 and FewEvent are provided in Table 1. 
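Eq. (9) amounts to a softmax over negative Euclidean distances between e_0 and the N learnable event prototypes; a minimal sketch is given below, where the tensor shapes and variable names are assumptions rather than the authors' code.

import torch
import torch.nn.functional as F

def proto_probs(e0, prototypes):
    # e0: (d,) event embedding produced by the event classifier.
    # prototypes: (N, d) event vectors e_1..e_N, updated jointly with the prompt model.
    dists = torch.cdist(e0.unsqueeze(0), prototypes).squeeze(0)   # (N,) Euclidean distances
    return F.softmax(-dists, dim=0)                               # p_j; the argmax gives the predicted label

# e.g. proto_probs(torch.randn(768), torch.nn.Parameter(torch.randn(33, 768)))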
Since the average trigger length of both datasets is close to 1, we set the trigger recognizer to select one word per input.\nAs shown in Table 2, to evaluate the FSED performance of our proposal and the baselines, we adopt the true few-shot training settings (Gao, Fisch, and Chen 2021) with different sample size K ∈ {4, 8, 16, 32} to obtain the K-shot training set X train and validation set X valid . The test set is formed with the remaining instances. It is worth noting that we ignore the event types that the number of mentions is less than or equal to 2K and can not be divided into the train/valid/test sets under our true few-shot setting. On a general full-data set, we divide the train/valid/test sets at a ratio of 8:1:1.\nSince X test is still perturbed by the trigger bias, the most impartial evaluation should be based on an unbiased test set. For brevity, we omit \"weighted\" in the following tables and figures. In addition, the metric used in Table 3, 4, ?? is weighted F1-score." }, { "figure_ref": [], "heading": "Model summary", "publication_ref": [ "b47", "b19", "b59" ], "table_ref": [], "text": "The following models are discussed. The first group of baselines is based on different metric learning methods as follows.\n• Neural-Bert applies a traditional neural network to map the output embedding from the hidden dim to the event label space directly, and thus obtain the label probability distribution of the embedding. • Relation-Bert (Sung et al. 2018) follows the idea of the relation network to perform event detection. The embedding module and the relation module used here are the prototypical network and a single fully connected layer respectively. • Proto-Bert (Snell, Swersky, and Zemel 2017) employs the prototypical network as the event classifier to calculate the Euclidean distance between the event embedding and each event type. The nearest event type is then the prediction label. The second group of baselines is prompt-based methods as follows.\n• KiPT (Li et al. 2022) utilizes T5 as the event encoder to obtain the soft prompt template, and introduces the external knowledge bases to construct the knowledge-injected prompts. • ZEOP (Zhang et al. 2022) is based on prompt learning and ordered contrastive learning. Here, bert-base-uncased is applied as the encoder. It applies a one-step prompt to obtain the trigger token and event embedding." }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [], "text": "First, we discuss the overall FSED performance of our proposal and the baselines in Section 5.1, and then explore their performance under the strict low-resource scenarios in Section 5.2. Next, we conduct the comprehensive experiments to evaluate the performance of MsPrompt under different sampling methods (Section 5.3), input length (Section 5.4), and input sequence (Section 5.5). After that, we perform an ablation study to explore the effect of each part in our proposal in Section 5.6. Finally, we conduct a case study to verify the contribution of our model in mitigating the trigger bias in Section 5.7." }, { "figure_ref": [], "heading": "Overall performance", "publication_ref": [ "b19" ], "table_ref": [ "tab_2", "tab_1" ], "text": "For answering RQ1, we present the event detection performance of our proposal and the baselines on two public datasets: ACE-2005 and FewEvent. Following (Li et al. 
2022), we evaluate the models under a few-shot setting with K ∈ {4, 8, 16, 32} in Table 3.\nFor the baselines, in general, shown in Table 2, as the number of instances increases, the performance of mentioned models consistently goes up in the low-resource scenarios. In detail, we observe that most models perform better on FewEvent than on ACE-2005. It can be explained that FewEvent contains more train samples than ACE-2005 to help the models classify the event type correctly. In addition, the prompt-based learning models (e.g., ZEOP and MsPrompt) generally outperform the three traditional baselines based on metric learning, which confirms the high applicability of prompt learning to few-shot scenarios. Among three metric learning models, Proto-Bert achieves a better performance on FewEvent than Neural-Bert and Relation-Bert generally. However, Relation-Bert outperforms the other two models on ACE-2005, and performs relatively stable under various K-shot settings on both datasets. This is due to the insensitivity of Relation-Bert to the label space of the datasets and the size of the training set.\nNext, for the prompt-based models, MsPrompt achieves a notable improvement against the baselines for most cases. For instance, MsPrompt performs the best for the cases under K = 4, 8, 32 on ACE-2005 and K = 4, 16, 32 on Few-Event. Instead, ZEOP performs well on ACE-2005 with K = 16 and on FewEvent with K = 8, which can be attributed to the high similarity of the inter-class samples under such settings. Besides, ZEOP obtains more supervised signals than our proposal by introducing additional contrastive samples for training. Nevertheless, MsPrompt is implemented in the true few-shot training setting without any supplementary samples. In particular, the improvements of MsPrompt over the best-performing baseline on ACE-2005 under the 4-shot, 8-shot, and 32-shot settings are 0.80%, 1.37%, and 0.28%, respectively, and 3.04%, 1.94%, and 0.11% on FewEvent with 4-shot, 16-shot, 32-shot, respectively." }, { "figure_ref": [], "heading": "Strict low-resource performance", "publication_ref": [], "table_ref": [], "text": "To answer RQ2, we conduct an extensive experiment to evaluate the performance of MsPrompt and the best-performing baseline ZEOP in the strict low-resource scenarios, as shown in Table 4.\nClearly, MsPrompt consistently outperforms ZEOP in the few-shot settings with K = 1, 2, 3, 4 on both datasets. For instance, on ACE-2005, MsPrompt presents 6.94%, 5.93%, 6.08%, 1.78% improvements in terms of weighted F1-score against ZEOP under the 1-shot, 2-shot, 3-shot, 4-shot setting, respectively. For FewEvent, the corresponding improvements are 1.39%, 5.20%, 11.43% and 3.04%. It can be explained by the outstanding performance of prompt learning in the strict low-resource scenarios, i.e., exploring the potential knowledge from PLMs to enhance the model training. In contrast with ZEOP, MsPrompt employs the whole process prompt model for two consecutive sub-tasks in event detection, which maximizes the advantages of prompt learning in few-shot learning, and makes full use of the latent knowledge in PLMs.\nAs shown in Table 4, when K increases, the performance of MsPrompt and ZEOP improves on both datasets. A similar pattern of results still exists in this experiment, i.e., both MsPrompt and ZEOP achieve a higher performance in terms of weighted F1-score on FewEvent than that on ACE-2005. 
Additionally, we find that an obvious performance improvement of MsPrompt can be observed when the number of training samples achieves a moderate point. For instance, when switching from 2-shot to 3-shot on ACE-2005, MsPrompt presents the largest improvement gap, i.e., 10.44% from 32.66% to 43.10%. This implies that increasing the number of instances can indeed accelerate the rate of sample utilization and thus enhance the model performance." }, { "figure_ref": [ "fig_7", "fig_7" ], "heading": "Debiasing Performance", "publication_ref": [], "table_ref": [], "text": "To answer RQ3, we evaluate the debiasing performance of MsPrompt and the best-performing baseline ZEOP under the IUS, TUS, COS, and initial full-test set X test (dubbed as Full-Test) in Fig. 5. As shown in Fig. 5, for both models, we observe that the results on IUS, i.e., the event-uniform test set, are better than that on Full-Test with a long-tail distribution of event types. It may be deduced that the datasets where the labels exhibit a long-tailed distribution limit the few-shot event detection performance. Compared with ZEOP, our MsPrompt model not only performs well on the unbalanced datasets but also on the event-uniform datasets, which reflects an outstanding robustness.\nIn detail, for the unbiased test set, the results of TUS and COS on ACE-2005 and FewEvent have different degrees of decline compared with the results of IUS and Full-Test. This indicates that the trigger bias in datasets makes the event detection model highly rely on such trigger clues, and the actual FSED performance is overestimated. Moreover, MsPrompt continues to outperform ZEOP on TUS and COS, which reveals that MsPrompt still has a robust and outstanding performance improvement in a fair debiasing scenario. This improvement is more intuitive on COS than TUS, presenting about 5% and 10% on ACE-2005 and FewEvent respectively, indicating a better prediction for confusing triggers of MsPrompt than ZEOP. This is due to that MsPrompt focuses on the context rather than on misclassifying the confusing trigger words by trusting the trigger bias.\nIn addition, compared with the results on Full-Test, the results of MsPrompt on TUS are fluctuated by no more than " }, { "figure_ref": [ "fig_9" ], "heading": "Impact of Input length", "publication_ref": [], "table_ref": [], "text": "In order to discuss the impact of input length in RQ4, we compare the performance of MsPrompt and the bestperforming baseline ZEOP under a variety of length intervals on ACE-2005 and FewEvent under the 32-shot setting. Since the average mention length in (10,20]\" are composed of samples with the mention length between 10 and 20. Thus, we obtain five segmented test sets from the corresponding test set of 32-shot on ACE-2005 and FewEvent, respectively. The results are shown in Fig. 6. We find that an overall trend is that the longer the input length, the lower the model performance of MsPrompt and ZEOP present on both datasets. It may be attributed to the fact that long sentences are more complicated and the semantics are more difficult to understand. And with the increase of mention length, more noise can be carried in the input and thus the challenge to the trigger recognizer will rise, which will affect the performance of the few-shot event detection. 
This decline is more obvious on FewEvent, where more indistinguishable event labels are included.\nIn addition, on ACE-2005, compared with ZEOP, MsPrompt first loses the competition in the group of (10,20] and then overtakes it in the groups of (20,30], (30,40], (40,50] and (50,60]. In addition, its advantages become more prominent as the length increases. This trend also exists on FewEvent and MsPrompt outperforms ZEOP from the length interval of (30,40]. Therefore, we can conclude that MsPrompt has an advantage against ZEOP on long inputs. Furthermore, as the length and complexity of sentence increase, MsPrompt has more obvious advantages in fewshot event detection than ZEOP. This may benefit from the excellent semantic modeling ability of the MLM. Even in the long event mentions, the trigger recognizer can identify the key information and trigger words accurately. In addition, the event classifier can also combine the long mention, ontology text, and the identified trigger information together effectively improve the event detection performance. In particular, the supplement of ontology text enables the model to understand the current event detection task easily, such as understanding the definition of trigger word and event. It helps the model to effectively avoid the interference of some invalid information of long mentions, and better seize the key information of the mentions." }, { "figure_ref": [], "heading": "Impact of Input sequence", "publication_ref": [ "b63" ], "table_ref": [ "tab_4" ], "text": "Next, for answering RQ5, we evaluate the performance of MsPrompt under all combinations of input sequences in the trigger recognizer and event classifier. We present the results in Table 5.\nFor the trigger recognizer, when the order of event mention and ontology text is changed to \"O + M\", the accuracy of trigger recognition decreases by 8.47%, which drops more than that of other sequence combinations in the event classifier. Notably, the performance of event detection also decreases obviously. For instance, the result of the weighted F1-score is reduced by 3.17% compared to the default sequential combination in MsPrompt, i.e., \"M + O\".\nFor the event classifier, the different sequence combinations of event mention, ontology text, and trigger word have little effect on the accuracy of trigger recognition with a small fluctuation around 1%. This slight fluctuation comes from the joint optimization of trigger loss and event loss in Eq. 9. However, for event detection, changing the order causes an obvious performance decline in terms of weighted F1-score. For instance, compared with the default sequence of the event classifier in MsPrompt, i.e., \"M + O + T\", the weighted F1-score of other sequence combinations in Table 5 decreased by 12.06%, 4.25%, 52.69%, 41.11%, and 52.65%, respectively. Among them, the combination of \"O + T + M\" returns the greatest decline.\nIt is worth noting that the performance of both the \"O + T + M\" and \"T + O + M\" combinations decrease by more than 50% when the event mention is placed at the end of the input. According to the Recency Bias proposed in (Zhao et al. 2021), the prompt models have the tendency to obtain the information in the text closest to the prompt template. Therefore, it can be deduced that the event mention plays a dominant role for the event detection performance, while the trigger word and ontology text play a relatively complementary role in the comparison. 
This phenomenon suggests that we need to pay particular attention to the overall semantic information in the event mention for enhancing the performance of event detection, rather than using the simple trigger recognition and classification to cover the whole event information included in the sentence." }, { "figure_ref": [ "fig_5" ], "heading": "Ablation study", "publication_ref": [], "table_ref": [ "tab_5", "tab_5" ], "text": "For RQ6, to check the contribution of different modules in MsPrompt to the event detection performance, we perform an ablation study using our proposal under the 32-shot setting ACE-2005. In the ablation study, we separately remove three specific modules to explore their effects on MsPrompt, namely \"-Trigger recognizer\", \"-Event classifier\", and \"-Ontology text\". Among them, \"-Trigger recognizer\" and \"-Event classifier\" imply ignoring the trigger recognizer and the event classifier in the multi-step prompt model in Fig. 4. Correspondingly, we use [CLS] to semantically model the raw event mentions directly and obtain the predicted trigger words as well as the event types. In addition, \"-Ontology text\" means to delete the implanted ontology text in both trigger recognizer and event classifier. The ablation results are shown in Table 6.\nAs shown in Table 6, when the trigger recognizer is removed, the trigger recognition performance of the whole model decreases most severely, with a 1.48% drop in accuracy from 66.80% to 65.32%. This decline confirms the effectiveness of the trigger recognizer and its indispensability to our proposal MsPrompt. For the event classifier, the removal of this module causes a sharp drop in terms of the weighted F1-score, which dropped by 6.77%. It is obvious that the event classifier plays a prominent role in boosting the performance of few-shot event detection. When turning to \"-Ontology text\", we observe that all metrics of event detection show the greatest decline. That is, the weighted precision, recall, and F1-score are decreased by 6.87%, 8.56%, and 10.17%, respectively. This fully demonstrates the outstanding contribution of ontology text to few-shot event detection, driving the prompt model to quickly learn the goal of event detection tasks in low-resource scenarios and truly guiding the training of PLMs with human prior knowledge." }, { "figure_ref": [ "fig_10" ], "heading": "Case study", "publication_ref": [], "table_ref": [], "text": "To answer RQ7 and verify the ability of our model to alleviate the context-bypassing problem caused by the trigger bias, we investigate some cases where the predicted trigger words are consistent with the annotated trigger labels.\nIn ACE-2005, we find that the event mentions with the trigger word \"war\" are almost all marked as the event type \"Conflict:Attack\". However, among the event mentions that MsPrompt recognizes the trigger word is \"war\", many other sparse event labels are predicted as well in addition to the dense event type \"Conflict:Attack\".\nWe pick an instance from ACE-2005 and present the event mention in Fig. 7. Although both the trigger word predicted by MsPrompt and the ground-truth trigger word labeled manually are \"war\", the event detection results for the event type are not the same. We can observe that the event label manually marked is still \"Conflict:Attack\", however, our model identifies the mention with the event type \"Personnel:Elect\". 
Combined with the semantic understanding, it can be found that the original mention mainly describes the latter event type, which indicates MsPrompt can perform well.\nIn general, during the construction of event detection datasets, it is inevitable that there will be annotation inertia in human labeling of event types, i.e., habitually classifying the trigger words as the same event type. This exposes the context-bypassing problem, which is exacerbated by the trigger bias and can lead to wrong predictions. In contrast, our model MsPrompt pays an attention to not only the trigger word but also the original event mention in the event classifier. Therefore, it greatly avoids this labeling inertia and mitigates the context-bypassing problem caused by the trigger bias, which confirms the debiasing effect of our proposal." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b51" ], "table_ref": [], "text": "To address the data-poor dilemma and the trigger bias in event detection, we propose a novel approach MsPrompt based on the true few-shot paradigm. We first apply an under-sampling module to adapt to the low-resource scenarios and the true few-shot setting. Then, for the severe context-bypassing and disabled generalization caused by the trigger bias, a multi-step prompt module combined with a knowledge-enhanced ontology is employed to alleviate the context-bypassing problem. In addition, a prototypical module is utilized to efficiently capture event label features in the low-resource occasions to further mitigate the generalization disability. The experimental results show that our model achieves notable performance advantages, especially in the strict low-resource scenarios, and can effectively deal with the debiasing issue for few-shot event detection.\nRegarding future work, on the one hand, we will consider evaluating our model in a challenging zero-shot scenario and expand to the open domain to investigate its generalization. On the other hand, we plan to study the sensitive multi-trigger event detection models using the MAVEN dataset (Wang et al. 2020)." }, { "figure_ref": [], "heading": "Research questions and configurations", "publication_ref": [ "b30" ], "table_ref": [], "text": "Research questions To evaluate the performance of MsPrompt, we focus on the following research questions: (RQ1) Does our model MsPrompt improve the performance for the true few-shot event detection compared to the state-of-the-art baselines? (RQ2) Can our model perform better than the best baseline in strict low-resource scenarios? (RQ3) How is the impact of sampling methods on the model performance, i.e., IUS, TUS, and COS? (RQ4) How is the performance of MsPrompt affected by the input length? (RQ5) How is the performance of MsPrompt affected by the input sequence? (RQ6) Which part of the model has the greatest contribution to the task? (RQ7) Does our model solve the issues caused by the trigger bias?\nModel configurations We make an under-sampling operation at a fixed random seed 42, and use the bert-baseuncased 3 to obtain the representation of each event mention. The parameters α and β in Eq.( 8) are set to 1. The batch size is set to 32 and 128 at the training and test stages respectively. In addition, AdamW (Loshchilov and Hutter 2019) is utilized as the model optimizer. For ACE-2005 and FewEvent, the epoch number is 500, 100, the learning rate of bert-base-uncased is 1e -6 , 1e -5 , and the learning rate of other part is 1e -3 , 1e -2 , respectively. 
Moreover, when the model loss has no more reduction after 1,000 iterations, we terminate the training process according to the early stop strategy. We implement MsPrompt under several random seeds to average the results, containing accuracy," } ]
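To make the reported configuration concrete, the sketch below wires up AdamW with the two ACE-2005 learning rates, the total loss with alpha = beta = 1, the 1,000-iteration early-stopping rule, and the weighted F1-score used in the result tables. The prototype shape (33 ACE-2005 event types, 768-dimensional BERT-base embeddings) and all variable names are illustrative assumptions.

import torch
from sklearn.metrics import f1_score
from transformers import AutoModelForMaskedLM

plm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
prototypes = torch.nn.Parameter(torch.randn(33, 768))    # N event vectors, d = hidden size

# AdamW with a small learning rate for the PLM and a larger one for the remaining parameters.
optimizer = torch.optim.AdamW([
    {"params": plm.parameters(), "lr": 1e-6},
    {"params": [prototypes],     "lr": 1e-3},
])

alpha, beta = 1.0, 1.0                                    # total loss L = alpha * L_t + beta * L_y

def should_stop(loss_history, patience=1000):
    # Early stopping: halt once the loss has not improved for `patience` iterations.
    best_step = min(range(len(loss_history)), key=loss_history.__getitem__)
    return len(loss_history) - 1 - best_step >= patience

def weighted_f1(y_true, y_pred):
    # Weighted F1-score, the main metric reported in the tables.
    return f1_score(y_true, y_pred, average="weighted")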
Event detection (ED) aims to identify the key trigger words in unstructured text and to predict the corresponding event types. Traditional ED models are too data-hungry to accommodate real applications with scarce labeled data. Moreover, typical ED models suffer from the context-bypassing and disabled-generalization issues caused by the trigger bias stemming from ED datasets. We therefore focus on the true few-shot paradigm to fit low-resource scenarios. In particular, we propose a multi-step prompt learning model (MsPrompt) for debiasing few-shot event detection, which consists of three components: an under-sampling module that constructs a novel training set accommodating the true few-shot setting, a multi-step prompt module equipped with a knowledge-enhanced ontology that fully leverages the event semantics and the latent prior knowledge in PLMs to tackle the context-bypassing problem, and a prototypical module that compensates for the weakness of classifying events with sparse data and boosts generalization performance. Experiments on two public datasets, ACE-2005 and FewEvent, show that MsPrompt outperforms state-of-the-art models, especially in strict low-resource scenarios, with up to an 11.43% improvement in weighted F1-score over the best-performing baseline, and achieves outstanding debiasing performance.
MsPrompt: Multi-step Prompt Learning for Debiasing Few-shot Event Detection
[ { "figure_caption": "Figure 1 :1Figure1: Model performance on FewEvent under different sampling methods, where the Instance Uniform Sampling (IUS) produces the original trigger-biased test set, while the Trigger Uniform Sampling (TUS) and COnfusion Sampling (COS) construct the test sets without the trigger bias, respectively. These results are produced byWang et al. (2021).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2: The workflow of MsPrompt.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Distribution of the number of instances with different event types on ACE-2005 and FewEvent. The numbers (y-axis) are exponentially distributed and the event types (x-axis) are ordered according to their corresponding frequency.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Trigger word is [MASK]. [SEP] And I agree that we shouldn't send people over there. [SEP] [Ontology Text 1] [SEP] [CLS] This is event about [MASK]. [SEP] And I agree that we shouldn't send people over there. [SEP] [Ontology Text 2] [SEP] Trigger word is send. [SEP] Trigger word: a word that can trigger an event, usually a verb or noun in the sentence. Ontology Text 2: Event: which type the sentence or trigger belongs to.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The architecture of the multi-step prompt module.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Model performance on ACE-2005 and Few-Event under the 4-shot setting on the full or sampled test set. For IUS, TUS, and COS, we sample 4 mentions from each event type.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Impact of input length on ACE-2005 and Few-Event under the 32-shot setting.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: An instance of event mention on ACE-2005.The red marker \"war\" represents the predicted trigger word, which is also the ground-truth trigger label. The yellow arrow in the lower left corner points to the actual event label \"Conflict:Attack\", while the green arrow in the lower right corner points to the event type \"Personnel:Elect\" predicted by MsPrompt.", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Statistics of ACE-2005 and FewEvent used in our experiments. Compared with the biased full-test set X test and the dataset after IUS, the datasets under the aforementioned sampling methods TUS and COS, are stripped of the trigger bias that leads to exaggerated performance.", "figure_data": "StatisticsACE-2005 FewEvent# Event types33100# Event mentions3,65368,694# Average number of event mentions 110.70686.94# Average mention length27.3132.78# Average trigger length1.231.01Therefore, following Wang et al. (2021), we employ threesampling methods to construct a novel test set as follows:1. Instance Uniform Sampling (IUS) selects K mentionsfrom each event type randomly.2. Trigger Uniform Sampling (TUS) samples K mentionsuniformly from each trigger of one event type .3. 
COnfusion Sampling (COS) picks K mentions uni-formly from confusing triggers, i.e., similar to triggersof other event types, of one event type.", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Size of the train/valid/test sets under K-shot and full-data setting.", "figure_data": "StatisticsACE-2005 4-shot 8-shot 16-shot 32-shot Full-data 4-shot 8-shot 16-shot 32-shot Full-data FewEvent# Event type30272113331001005634100# Train instances 1202163364162,9214008008961,088 54,954# Valid instances 1202163364163664008008961,0886,870# Test instances 3,405 3,176 2,791 2,23236667,894 67,094 65,692 64,323 6,870weighted precision, weighted recall, and weighted F1-score.", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Overall performance. The results of the best-performing baseline and the best performer in each column are underlined and boldfaced, respectively.", "figure_data": "Model4-shotACE-2005 8-shot 16-shot32-shot4-shotFewEvent 8-shot 16-shot32-shotNeural-Bert18.4321.5442.8646.6513.8727.1155.6973.32Relation-Bert44.0947.9552.1760.3341.5344.3056.0356.84Proto-Bert15.2031.2443.2457.1343.0958.6170.0772.95KiPT45.5050.1054.2056.4054.8761.3465.2368.72ZEOP44.5258.9262.9969.6657.6365.7171.4173.79MsPrompt46.3060.2961.8869.9460.6760.8373.3573.90Table 4: The performance of MsPrompt and ZEOP inthe strict low resource scenarios under the K-shot set-ting, where K ∈ {1, 2, 3, 4}.DatasetModel1-shot 2-shot 3-shot 4-shotACE-2005MsPrompt 24.16 32.66 43.10 46.30 ZEOP 17.22 26.73 37.02 44.52FewEventMsPrompt 29.53 41.49 51.75 60.67 ZEOP 28.14 36.29 40.32 57.63", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "is", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Impact of input sequence on ACE-2005 under the 32-shot setting. \"M\", \"O\", and \"T\" means the event mention, ontology text, and trigger word, respectively. The results of MsPrompt are boldfaced.denotes the largest decline of performance in each column compared with MsPrompt.", "figure_data": "Trigger recognitionEvent detectionSequenceAccuracyPrecision Recall F1-scoreTrigger recognizer# M + O66.8073.0072.09 69.94# O + M58.3370.5269.44 66.77Event classifier# M + O + T66.8073.0072.09 69.94# M + T + O66.8565.1962.19 57.88# O + M + T66.8969.4668.10 65.69# O + T + M65.9154.4522.49 17.25# T + M + O66.0854.8735.35 28.83# T + O + M66.0454.3922.31 17.29", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation study on ACE-2005 under the 32-shot setting. \"-\" means removing the module from our proposal MsPrompt. The results of MsPrompt are boldfaced.denotes the largest decline of performance in each column compared with MsPrompt.", "figure_data": "Trigger recognitionEvent detectionModelAccuracyPrecision Recall F1-scoreMsPrompt66.8073.0072.09 69.94-Trigger recognizer65.3268.8067.20 64.28-Event classifier66.7667.2566.31 63.17-Ontology text66.5866.1363.53 59.77", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
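For reference, the two unbiased test-set samplers described in the Table 1 caption might be approximated as follows. The per-trigger allocation in TUS is a guess at the protocol of Wang et al. (2021) rather than a faithful reproduction, and COS is omitted because it additionally requires a list of confusing triggers.

import random
from collections import defaultdict

def ius(test_set, k, seed=0):
    # Instance Uniform Sampling: K mentions per event type, drawn at random.
    rng, by_label = random.Random(seed), defaultdict(list)
    for ins in test_set:
        by_label[ins["label"]].append(ins)
    return [x for items in by_label.values() for x in rng.sample(items, min(k, len(items)))]

def tus(test_set, k, seed=0):
    # Trigger Uniform Sampling: spread each event type's K mentions uniformly over its triggers.
    rng = random.Random(seed)
    by_label = defaultdict(lambda: defaultdict(list))
    for ins in test_set:
        by_label[ins["label"]][ins["trigger"]].append(ins)
    sampled = []
    for triggers in by_label.values():
        per_trigger = max(1, k // len(triggers))
        for items in triggers.values():
            sampled.extend(rng.sample(items, min(per_trigger, len(items))))
    return sampled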
Siyuan Wang; Jianming Zheng; Xuejun Hu; Fei Cai; Chengyu Song; Xueshan Luo
[ { "authors": "T B Brown; B Mann; N Ryder; M Subbiah; J Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell; S Agarwal; A Herbert-Voss; G Krueger; T Henighan; R Child; A Ramesh; D M Ziegler; J Wu; C Winter; C Hesse; M Chen; E Sigler; M Litwin; S Gray; B Chess; J Clark; C Berner; S Mccandlish; A Radford; I Sutskever; D Amodei", "journal": "NeurIPS", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Y Chen; L Xu; K Liu; D Zeng; J Zhao", "journal": "", "ref_id": "b1", "title": "Event extraction via dynamic multi-pooling convolutional neural networks", "year": "2015" }, { "authors": " ", "journal": "", "ref_id": "b2", "title": "", "year": "" }, { "authors": "J Chen; H Lin; X Han; L Sun", "journal": "", "ref_id": "b3", "title": "Honey or poison? solving the trigger curse in few-shot event detection via causal intervention", "year": "2021" }, { "authors": " ", "journal": "", "ref_id": "b4", "title": "", "year": "" }, { "authors": "X Cong; S Cui; B Yu; T Liu; Y Wang; B Wang", "journal": "", "ref_id": "b5", "title": "Few-shot event detection with prototypical amortized conditional random field", "year": "2021" }, { "authors": " ", "journal": "", "ref_id": "b6", "title": "", "year": "" }, { "authors": "S Deng; N Zhang; J Kang; Y Zhang; W Zhang; H Chen", "journal": "ACM", "ref_id": "b7", "title": "Meta-learning with dynamic-memorybased prototypical network for few-shot event detection", "year": "2020" }, { "authors": "G R Doddington; A Mitchell; M A Przybocki; L A Ramshaw; S M Strassel; R M Weischedel", "journal": "", "ref_id": "b8", "title": "The automatic content extraction (ACE) programtasks, data, and evaluation", "year": "2004" }, { "authors": "L Fei-Fei; R Fergus; P Perona", "journal": "IEEE Computer Society", "ref_id": "b9", "title": "A bayesian approach to unsupervised one-shot learning of object categories", "year": "2003" }, { "authors": "T Gao; A Fisch; D Chen", "journal": "", "ref_id": "b10", "title": "Making pre-trained language models better few-shot learners", "year": "2021" }, { "authors": " ", "journal": "", "ref_id": "b11", "title": "", "year": "" }, { "authors": "Y Guo; Y Yang; A Abbasi", "journal": "", "ref_id": "b12", "title": "Auto-debias: Debiasing masked language models with automated biased prompts", "year": "2022" }, { "authors": " ", "journal": "", "ref_id": "b13", "title": "", "year": "" }, { "authors": "A N Jagannatha; H Yu", "journal": "", "ref_id": "b14", "title": "Bidirectional rnn for medical event detection in electronic health records", "year": "2016" }, { "authors": " ", "journal": "", "ref_id": "b15", "title": "", "year": "" }, { "authors": "V D Lai; M V Nguyen; T H Nguyen; F Dernoncourt", "journal": "ACM", "ref_id": "b16", "title": "Graph learning regularization and transfer learning for few-shot event detection", "year": "2021" }, { "authors": "V D Lai; F Dernoncourt; T H Nguyen", "journal": "", "ref_id": "b17", "title": "Learning prototype representations across few-shot tasks for event detection", "year": "2021" }, { "authors": " ", "journal": "", "ref_id": "b18", "title": "", "year": "" }, { "authors": "H Li; T Mo; H Fan; J Wang; J Wang; F Zhang; W Li", "journal": "", "ref_id": "b19", "title": "Kipt: Knowledge-injected prompt tuning for event detection", "year": "2022" }, { "authors": "S Liu; Y Chen; K Liu; J Zhao", "journal": "", "ref_id": "b20", "title": "Exploiting argument information to improve event detection via supervised attention mechanisms", "year": "2017" }, { "authors": " ", "journal": 
"", "ref_id": "b21", "title": "", "year": "" }, { "authors": "J Liu; Y Chen; K Liu; J Zhao", "journal": "AAAI Press", "ref_id": "b22", "title": "Event detection via gated multilingual attention mechanism", "year": "2018" }, { "authors": "S Liu; Y Li; F Zhang; T Yang; X Zhou", "journal": "", "ref_id": "b23", "title": "Event detection without triggers", "year": "2019" }, { "authors": " ", "journal": "", "ref_id": "b24", "title": "", "year": "" }, { "authors": "H Liu; W Jin; H Karimi; Z Liu; J Tang", "journal": "ACL", "ref_id": "b25", "title": "The authors matter: Understanding and mitigating implicit bias in deep text classification", "year": "2021" }, { "authors": "J Liu; C Liang; J Xu", "journal": "Knowledge-Based Systems", "ref_id": "b26", "title": "Document-level event argument extraction with self-augmentation and a crossdomain joint training mechanism", "year": "2022" }, { "authors": "X Liu; Z Luo; H Huang", "journal": "", "ref_id": "b27", "title": "Jointly multiple events extraction via attention-based graph information aggregation", "year": "2018" }, { "authors": " ", "journal": "", "ref_id": "b28", "title": "", "year": "" }, { "authors": "B Liu", "journal": "IEEE Access", "ref_id": "b29", "title": "GCN-BERT and memory network based multi-label classification for event text of the chinese government hotline", "year": "2022" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b30", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "S Madisetty; M S Desarkar", "journal": "World Wide Web", "ref_id": "b31", "title": "A reranking-based tweet retrieval approach for planned events", "year": "2022" }, { "authors": "T H Nguyen; R Grishman", "journal": "", "ref_id": "b32", "title": "Event detection and domain adaptation with convolutional neural networks", "year": "2015" }, { "authors": " ", "journal": "", "ref_id": "b33", "title": "", "year": "" }, { "authors": "T H Nguyen; R Grishman", "journal": "AAAI Press", "ref_id": "b34", "title": "Graph convolutional networks with argument-aware pooling for event detection", "year": "2018" }, { "authors": "T H Nguyen; L Fu; K Cho; R Grishman", "journal": "", "ref_id": "b35", "title": "A two-stage approach for extending event detection to new types via neural networks", "year": "2016" }, { "authors": " ", "journal": "", "ref_id": "b36", "title": "", "year": "" }, { "authors": "T H Nguyen; K Cho; R Grishman", "journal": "", "ref_id": "b37", "title": "Joint event extraction via recurrent neural networks", "year": "2016" }, { "authors": " ", "journal": "", "ref_id": "b38", "title": "", "year": "" }, { "authors": "H Peng; R Zhang; S Li; Y Cao; S Pan; P Yu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b39", "title": "Reinforced, incremental and cross-lingual event detection from social messages", "year": "2022" }, { "authors": "E Perez; D Kiela; K Cho", "journal": "NeurIPS", "ref_id": "b40", "title": "True few-shot learning with language models", "year": "2021" }, { "authors": "C Qian; F Feng; L Wen; C Ma; P Xie", "journal": "", "ref_id": "b41", "title": "Counterfactual inference for text classification debiasing", "year": "2021" }, { "authors": " ", "journal": "", "ref_id": "b42", "title": "", "year": "" }, { "authors": "D Shah; H A Schwartz; D Hovy", "journal": "", "ref_id": "b43", "title": "Predictive biases in natural language processing models: A conceptual framework and overview", "year": "2020" }, { "authors": " ", "journal": "", "ref_id": "b44", "title": "", 
"year": "" }, { "authors": "J Snell; K Swersky; R S Zemel", "journal": "NeurIPS", "ref_id": "b45", "title": "Prototypical networks for few-shot learning", "year": "2017" }, { "authors": "C Song; F Cai; J Zheng; X Zhao; T Shao", "journal": "Information Processing & Management", "ref_id": "b46", "title": "Augprompt: Knowledgeable augmented-trigger prompt for few-shot event classification", "year": "2022" }, { "authors": "F Sung; Y Yang; L Zhang; T Xiang; P H S Torr; T M Hospedales", "journal": "", "ref_id": "b47", "title": "Learning to compare: Relation network for few-shot learning", "year": "2018" }, { "authors": "M Tong; B Xu; S Wang; Y Cao; L Hou; J Li; J Xie", "journal": "", "ref_id": "b48", "title": "Improving event detection via open-domain trigger knowledge", "year": "2020" }, { "authors": " ", "journal": "", "ref_id": "b49", "title": "", "year": "" }, { "authors": "N Voskarides; E Meij; S Sauer; M De Rijke", "journal": "ACM", "ref_id": "b50", "title": "News article retrieval in context for event-centric narrative creation", "year": "2021" }, { "authors": "X Wang; Z Wang; X Han; W Jiang; R Han; Z Liu; J Li; P Li; Y Lin; J Zhou", "journal": "", "ref_id": "b51", "title": "MAVEN: A massive general domain event detection dataset", "year": "2020" }, { "authors": " ", "journal": "", "ref_id": "b52", "title": "", "year": "" }, { "authors": "P Wang; R Xu; T Liu; D Dai; B Chang; Z Sui", "journal": "ACM", "ref_id": "b53", "title": "Behind the scenes: An exploration of trigger biases problem in few-shot event classification", "year": "1969" }, { "authors": "S Wang; M Yu; S Chang; L Sun; L Huang", "journal": "", "ref_id": "b54", "title": "Query and extract: Refining event extraction as typeoriented binary decoding", "year": "2022" }, { "authors": " ", "journal": "", "ref_id": "b55", "title": "", "year": "" }, { "authors": "Y Wei; S Liu; J Lv; X Xi; H Yan; W Ye; T Mo; F Yang; G Wan", "journal": "International Committee on Computational Linguistics", "ref_id": "b56", "title": "DESED: dialogue-based explanation for sentence-level event detection", "year": "2022" }, { "authors": "Z Xie; Y Tu", "journal": "AAAI Press", "ref_id": "b57", "title": "A graph convolutional network with adaptive graph generation and channel selection for event detection", "year": "2022" }, { "authors": "J Zhang; W Huang; D Ji; Y Ren", "journal": "Information Processing & Management", "ref_id": "b58", "title": "Globally normalized neural model for joint entity and event extraction", "year": "2021" }, { "authors": "S Zhang; T Ji; W Ji; X Wang", "journal": "", "ref_id": "b59", "title": "Zero-shot event detection based on ordered contrastive learning and promptbased prediction", "year": "2022" }, { "authors": " ", "journal": "", "ref_id": "b60", "title": "", "year": "" }, { "authors": "L Zhao; M Li; J Kou; J Zhang; Y Zhang", "journal": "ACM", "ref_id": "b61", "title": "A framework for event-oriented text retrieval based on temporal aspects: A recent review", "year": "2020" }, { "authors": "L Zhao; W Qian; L Zang; F Zhu; Y Lu; R Li; J Han; S Hu", "journal": "ACM", "ref_id": "b62", "title": "An event-oriented neural ranking model for news retrieval", "year": "2020" }, { "authors": "Z Zhao; E Wallace; S Feng; D Klein; S Singh", "journal": "", "ref_id": "b63", "title": "Calibrate before use: Improving few-shot performance of language models", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b64", "title": "", "year": "" }, { "authors": "J Zheng; F Cai; H Chen; M De Rijke", "journal": "Information Processing & 
Management", "ref_id": "b65", "title": "Pretrain, interact, fine-tune: a novel interaction representation for text classification", "year": "2020" }, { "authors": "J Zheng; F Cai; W Chen; W Lei; H Chen", "journal": "ACM", "ref_id": "b66", "title": "Taxonomy-aware learning for few-shot event detection", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 548.83, 695.2, 8.46, 8.74 ], "formula_id": "formula_0", "formula_text": "K 1 1 x 2 1 x 1 K x 1 2 x 2 2 x 2 K x 1 N x 2 N x K N x" }, { "formula_coordinates": [ 4, 98.74, 538.03, 189.89, 54.33 ], "formula_id": "formula_1", "formula_text": "X train =      x 1 1 x 2 1 • • • x K 1 x 1 2 x 2 2 • • • x K 2 . . . . . . . . . . . . x 1 N x 2 N • • • x K N      . (1" }, { "formula_coordinates": [ 4, 288.63, 560.94, 3.87, 8.64 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 4, 319.5, 532.73, 242.4, 9.65 ], "formula_id": "formula_3", "formula_text": "f 1 (x) = [CLS]T rigger word is [M ASK].[SEP ]x[SEP ]." }, { "formula_coordinates": [ 4, 376.11, 640.06, 181.89, 23.6 ], "formula_id": "formula_4", "formula_text": "P t = P ( [M ASK] = t| f 1 (x)) = P ( t = w j | w) .(3)" }, { "formula_coordinates": [ 5, 54, 300.63, 129.02, 9.72 ], "formula_id": "formula_5", "formula_text": "w = (w 1 , w 2 , • • • , t, • • • , w L )." }, { "formula_coordinates": [ 5, 102.38, 329.53, 190.12, 30.32 ], "formula_id": "formula_6", "formula_text": "L t = - L j=1 tj log (P ( t = w j | w)),(4)" }, { "formula_coordinates": [ 5, 54, 491.22, 262.02, 9.65 ], "formula_id": "formula_7", "formula_text": "f 2 (x ) = [CLS]T his is event about [M ASK].[SEP ]x [SEP ]," }, { "formula_coordinates": [ 5, 107.75, 620.88, 184.75, 39.4 ], "formula_id": "formula_8", "formula_text": "P y = P ( [M ASK] = y| f 2 (x )) = P ( y = y j | Y ) = (p j ) N ,(6)" }, { "formula_coordinates": [ 5, 391.1, 318.09, 166.91, 30.32 ], "formula_id": "formula_9", "formula_text": "L y = - N j=1 ŷj log (p j ),(7)" }, { "formula_coordinates": [ 5, 404.03, 396.16, 153.97, 9.65 ], "formula_id": "formula_10", "formula_text": "L = αL t + βL y ,(8)" }, { "formula_coordinates": [ 6, 109.12, 280.79, 183.38, 26.63 ], "formula_id": "formula_11", "formula_text": "p j = exp (-D (e 0 , e j )) N n=1 exp (-D (e 0 , e n )) ,(9)" } ]
2023-07-30
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Many financial positions involve decision-making and prediction-making, like in stock price forecasting or sales and demand planning. Such careers require stakeholders to have a general understanding of the past, current and future market, and how one thing leads to another. Many stakeholders keep abreast of market trends by reading news. However, given the volume of the text online today, and even more if we were to consider historical news, it is impossible for any individual to consume all available information effectively.\nIn the past decade, knowledge graphs (KGs) have emerged as a useful way to store and represent knowledge. By performing end-to-end causal text mining (CTM) and then representing the causal relations through a KG, it is possible to summarize the past and current events succinctly for stakeholders to learn from effectively. We define end-to-end CTM as the identification of Cause and Effect arguments in any given text, if present.\nIn this paper, we focus on the application of summarizing and tracking causal relations in industry news to help individuals who frequently monitor the news for market research and decision making. Therefore, the final KG constructed must be useful by being: (1) recall-focused: it captures a large proportion of the causal relationships present in the news, (2) precision-focused: the causal relationships captured are truly causal, and (3) interpretable: it can be used by humans to learn causal relationships. Our methodology comprises of two broad steps, shown in Figure 1: (1) Extraction of Causal Relations, and (2) Argument Clustering and Representation into Knowledge Graph.\nOur contributions are as follows: • Although many earlier works investigate construction of causal KGs from text, most utilize pattern-based methods. In our work, we employ both pattern-based and neural network-based approaches. Our findings show that the pattern-based approach drastically misses out on extracting valid causal relations compared to the neural network-based approach (1:19 ratio). • Graphs built directly off extracted Cause and Effect arguments are sparse and hence, hard to interpret. To mitigate this, we investigate a simple but effective solution to cluster our arguments based on semantics to create a more connected KG that enables more causal relationships to be drawn. • We evaluate our methodology on a small set of data annotated by the users, demonstrate industry use cases and discuss users' feedback on the final KG. We intend to deploy our system as a regular service to the Sales Division.\nThe subsequent portions of the paper are outlined as follows: We first discuss related work. Subsequently, we introduce our data and describe our methodology. Next, we present our experimental results, demonstrate multiple use cases, and finally, conclude." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b34", "b3", "b23", "b0", "b19", "b16", "b24", "b20", "b29", "b16", "b27", "b15", "b31", "b3", "b16", "b20", "b14", "b26", "b30" ], "table_ref": [], "text": "In the recent years, many SOTA NLP solutions have been created to beat the leaderboards (Chen et al. 2022a;Zuo et al. 2020Zuo et al. , 2021a,b;,b;Cao et al. 2021;Chen et al. 2022b;Nik et al. 2022;Aziz, Hossain, and Chy 2022). Consistent with the general trend in NLP, the best models all use neural network architectures and pre-trained language models. 
Compared to pattern-based methods, neural network-based approaches can be trained to recognize more causal constructions, and therefore, in application, have a much higher recall. Yet, many papers working on constructing causal KGs still revert to rudimentary pattern-based solutions (Ittoo and Bouma 2013;Heindorf et al. 2020;Radinsky, Davidovich, and Markovitch 2012;Izumi and Sakaji 2019;Xu and Dang 2022). Recall is important in our context of monitoring news: our users have to be aware of latest causal events, and a pattern-based extraction tool with limited coverage will not be effective in identifying a high proportion of the true causal relations. In our work, we employ both pattern (Heindorf et al. 2020) and neural network-based (Tan, Zuo, and Ng 2023) methodologies based on previous works. We investigate the differences in quality and quantity of the extracted causal relations using these two extraction approaches.\nKGs can serve as a taxonomy or knowledge source to guide natural language models to make better predictions (He et al. 2021;Zhang et al. 2021;Cao et al. 2021). Most causal KGs are built off the extracted Cause and Effect arguments by casting them directly as nodes (Heindorf et al. 2020;Izumi and Sakaji 2019;Hassanzadeh 2021). If we followed suit, we will obtain a large and poorly connected graph. In studying causality, it is beneficial to have a highly connected graph because it allows us to detect more causal relations, especially transitive ones. Additionally, generalizing over objects, actions and events allow users to make predictions of upcoming Effects even for unseen events (Radinsky, Davidovich, and Markovitch 2012). Therefore, in our work, we condense our graphs by grouping nodes that refer to the same topic together using previous topic modelling solutions (Sia, Dalmia, and Mielke 2020;Zhang et al. 2022)." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "We worked on 6,384 article summaries, comprising of 62,151 sentences published between 2017 and 2022. We focus on the electronics and supply-chain industry news. The articles were extracted through Google News using a webscraping tool, Scrapy1 on September to October 2022. We focused on the Japan, China, Europe and Global regions. The article summaries and titles were obtained using newspaper3K2 , which returns top 10 sentences of an article, scored and ranked using features such as sentence length, sentence position, title status, and frequency of keywords appearing in the sentence." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "To briefly introduce, our approach is to extract causal relations, cluster semantically similar arguments, and store causal relations in a KG to be used for various applications." }, { "figure_ref": [], "heading": "Extraction of Causal Relations", "publication_ref": [ "b16", "b9", "b2", "b19", "b5", "b25", "b16" ], "table_ref": [], "text": "Pattern-based We replicated CauseNet's (Heindorf et al. 2020) methodology of using linguistic patterns to detect causal relations. The patterns identify the shortest path between a Cause noun and an Effect noun using the dependency graph of a sentence (Culotta and Sorensen 2004;Bunescu and Mooney 2005;Ittoo and Bouma 2013). The enhanced dependency graphs were obtained using the Stanford NLP Parser (Chen and Manning 2014;Schuster and Manning 2016). 
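To make the shortest-dependency-path idea concrete, the sketch below shows one way such noun-pair patterns can be generated and matched. It is an illustration only: the paper relies on the Stanford parser with enhanced dependencies and on its mined pattern set, whereas this sketch substitutes spaCy, networkx and a simplified pattern-string format.

```python
# Illustrative sketch of shortest-dependency-path pattern extraction between noun
# pairs. The paper uses the Stanford parser with enhanced dependencies and a mined
# pattern set; spaCy, networkx and the simplified pattern-string format below are
# stand-ins for illustration only.
import itertools
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

def render(doc, idx, cause_i, effect_i):
    if idx == cause_i:
        return "[[cause]]"
    if idx == effect_i:
        return "[[effect]]"
    return doc[idx].lemma_

def noun_pair_patterns(sentence):
    doc = nlp(sentence)
    graph = nx.Graph()
    for tok in doc:                      # undirected graph over dependency edges
        for child in tok.children:
            graph.add_edge(tok.i, child.i, dep=child.dep_)
    nouns = [tok for tok in doc if tok.pos_ in ("NOUN", "PROPN")]
    for cause, effect in itertools.permutations(nouns, 2):
        try:
            path = nx.shortest_path(graph, cause.i, effect.i)
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            continue
        parts = [render(doc, path[0], cause.i, effect.i)]
        for a, b in zip(path, path[1:]):
            dep = graph.edges[a, b]["dep"]
            parts.append(f"-{dep}->" if doc[b].head.i == a else f"<-{dep}-")
            parts.append(render(doc, b, cause.i, effect.i))
        yield cause.text, effect.text, " ".join(parts)

# Placeholder for the mined causal patterns; a matched noun pair is marked causal.
CAUSAL_PATTERNS = {"[[cause]] <-nsubj- cause -dobj-> [[effect]]"}
for cause, effect, pattern in noun_pair_patterns("The storm caused severe flooding."):
    if pattern in CAUSAL_PATTERNS:
        print(cause, "->", effect)
```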
The original authors extracted 53 linguistic patterns after two bootstrapping rounds on their Wikipedia dataset which we used directly. 3To obtain more patterns, we utilized the Wikipedia dataset from (Heindorf et al. 2020) Finally, the original 53 patterns were merged with the additional 477 patterns. Since 43 patterns were repeated across the two lists, the final number of patterns was 487. Most of these patterns contain causal connectives like 'caused', 'causing', 'resulted in' and 'leading to'. Equipped with these linguistic patterns, we extracted causal relations from news as follows:\n1. Extract all nouns in a sentence. We use Stanford NLP parser to obtain these part-of-speech (POS) tags.\n2. For every combination of noun pairs, identify the shortest dependency path tying the two nouns together. Format the path as a pattern string.\n3. Check if the pattern string matches with any of our causal linguistic patterns. If there is a match, the noun pair is identified to be causal." }, { "figure_ref": [ "fig_0" ], "heading": "Post-processing", "publication_ref": [ "b27", "b18", "b11", "b4", "b28", "b17", "b10" ], "table_ref": [], "text": "We merged arguments that have the same pattern and either Cause or Effect argument, since they refer to the same relation. To illustrate, in Table 1, the three Effect has the same Cause and the same causal pattern. Therefore, the final causal relation was processed to be \"shortage\" caused \"impact of a fall in output\". Similarly, in Figure 1, the pattern-based example's six causal relations was simplified into one causal relation: \"number of vehicles\" caused \"demand for automotive smart display systems\".\nPattern-based arguments tend to be short and lack the context needed for clustering. Therefore, we converted the arguments from the pattern-based approach into words from the original span up to (if occurring before) or until (if occurring after) the signal words. For example, in Table 1, the Cause argument was altered to start right before the signal word 'brought'. Since the Effect argument already spans up to the signal word, it remains the same.\nTo conclude this subsection, the pattern-based matching approach allowed us to identify 1,006 sentences and 975 causal relations from 611 unique sentences.\nBERT-based UniCausal (Tan, Zuo, and Ng 2023) 5 is a causal text mining repository that consolidated six datasets (AltLex (Hidey andMcKeown 2016), BECAUSE 2.0 (Dunietz, Levin, andCarbonell 2017), CausalTimeBank (Mirza et al. 2014;Mirza and Tonelli 2014), EventStoryLine (Caselli and Vossen 2017), Penn Discourse Treebank V3.0 (Webber et al. 2019) and SemEval2010Task8 (Hendrickx et al. 2010)) for three tasks (Causal Sentence Classification, Causal Pair Classification, and Cause-Effect Span Detection). Pre-trained models were created and made available online. These models were trained on each task independently, builds on BERT-based pre-trained encoders (Devlin et al. 2019), and used Cross Entropy Loss. All six datasets were used for training and testing. In our work, we utilized three pre-trained models developed by UniCausal:\n1. Causal Sentence Classification (CSC): Model identifies if a sentence contains causal relations or not. After passing the sentence through BERT-encoder layers, the embeddings of the [CLS] token are processed through a dropout layer, followed by a classification layer to generate predicted logits. The pre-trained model reported 70.10% Binary F1 score." 
}, { "figure_ref": [], "heading": "Causal Pair Classification (CPC):", "publication_ref": [], "table_ref": [], "text": "Model identifies if a pair of arguments (ARG0, ARG1) that are marked in the sentence are causally related or not, such that ARG0 causes ARG1. It follows the same architecture as CSC.\nThe pre-trained model reported 84.68% Binary F1 score.\n3. Cause-Effect Span Detection (CESD): Model identifies the consecutive span of words that refer to the Cause and Effect arguments. Framed as a token classification task, after the BERT-encoder layers, the sequence output is fed through a dropout then classification layer to obtain the predicted logits per token. The pre-trained model reported 52.42% Macro F1 score.\nTo extract causal relations from text, we applied both CSC and CESD predictors to all sentences. For causal sentences identified by CSC, we retained the cause and effect arguments identified by CESD.\nPost-processing One limitation of UniCausal's CESD is that it was designed to predict only one causal relation per example. However, in our investigations, many instances had multiple ARG0 and ARG1 predictions. Without additional information, the relationship between the multiple causes and effects was unclear. Therefore, we implemented a postprocessing procedure involving three steps: (1) Merge sequential arguments, (2) Keep longest argument for examples with three arguments, and (3) Keep multiple causal relations based on CPC predictions. Additionally, we utilized CPC to identify and retain additional causal examples. Details about these procedures can be found in the Appendix.\nAltogether, the BERT-based method identified 19,250 sentences with 19,192 causal relations from 15,702 unique sentences." }, { "figure_ref": [], "heading": "Argument Clustering", "publication_ref": [ "b26", "b30", "b12", "b13", "b30", "b30" ], "table_ref": [], "text": "We wish to cluster the arguments that have similar meaning, both in terms of the topic mentioned in the argument (E.g. supply, profits, automotives, etc.) and the impact on it (E.g. positive, negative, etc.). We used the approach by (Sia, Dalmia, and Mielke 2020;Zhang et al. 2022) to generate word embeddings from sequences and cluster the embeddings directly.\nNeutralizing named-entities We are not interested to cluster arguments that refer to the same organization, location, or date. Thus, we used the 7-class Stanford Named Entity Recognition (NER) Tagger (Finkel, Grenager, and Manning 2005) 6 to extract named-entities for locations, persons, organizations, times, money, percents, and dates. Subsequently, we remove the words corresponding to any of these entities in the argument spans. For example, the Cause argument \"to jointly produce premium EVs in China\" was converted to \"to jointly produce premium EVs in\". Note that in the final KG, the original arguments were used.\nWord embeddings To generate word embeddings that clusters semantically similar arguments together and semantically different arguments apart, we used the supervised pre-trained language model by SimCSE (Gao, Yao, and Chen 2021) to encode our NER-neutralized arguments into embeddings. SimCSE was trained to identify whether the relationship betweent two sentences suggests entailment, neutral, or contradition. SimCSE was evaluated against standard semantic textual similarity tasks, and achieved an average 81.6% Spearman's correlation, a 2.2% improvement compared to previous best results. 
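As an illustration, the snippet below encodes two NER-neutralized argument spans with the publicly released supervised SimCSE checkpoint through HuggingFace Transformers; the checkpoint name and the pooler-output pooling follow the public SimCSE usage example and are assumptions rather than the exact configuration used in this work.

```python
# Illustrative sketch: encoding NER-neutralized argument spans with a supervised
# SimCSE checkpoint. The checkpoint name and pooling choice follow the public
# SimCSE release and are assumptions, not necessarily the exact setup used here.
import torch
from transformers import AutoModel, AutoTokenizer

name = "princeton-nlp/sup-simcse-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

arguments = [
    "to jointly produce premium EVs in",   # NER-neutralized Cause span
    "an increased demand for automotive smart display systems",
]

with torch.no_grad():
    batch = tokenizer(arguments, padding=True, truncation=True, return_tensors="pt")
    # The released SimCSE models expose the sentence embedding via the pooler output.
    embeddings = model(**batch).pooler_output      # (num_arguments, hidden_size)

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(embeddings.shape, similarity.item())
```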
Our embeddings had a feature dimension of 786 because the model is built on the BERT model, bert-base-uncased.\nClustering and getting keywords per cluster Similar to (Sia, Dalmia, and Mielke 2020), we used K-Means to cluster the 35,230 embeddings into 3,000 topics. We remove relations where the Cause and Effect fall under the same topic so that we do not have nodes that connected to itself. To obtain the top keywords per topic, we used the TFIDF × IDF method proposed by (Zhang et al. 2022):\nT F IDF d = n w w n w • log( |D| |{d ∈ D|w ∈ d}| )(1)\nIDF k = log( |K| |{w ∈ K}| ) (2) T F IDF × IDF = T F IDF d • IDF k (3)\nwhere n w is the frequency of word w in each document d, D refers to the whole corpus, |K| is the number of clusters, and |{w ∈ K}| is the number of clusters where word w appears in. This approach helps to identify the important words to each cluster compared to the rest of the dataset (T F IDF d ), while penalizing frequent words that appear in multiple clusters (IDF k ). Empirical findings demonstrate this method significantly outperforms regular TF or TFIDF methods in 6 https://nlp.stanford.edu/software/CRF-NER.shtml selecting topic words (Zhang et al. 2022). In the end, we extract 5 keywords per topic, which corresponds to the text displayed in the nodes of a graph. If the cluster contains only one argument, then the argument text is displayed instead." }, { "figure_ref": [], "heading": "Knowledge Graph", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We define our knowledge graph G = (V, E) as a collection of nodes\nV = {(v 1 , v 2 , ..., v n )} and directed edges E = {(v 1 , v 2 ), (v 2 , v 3 ), ...}. A directed edge (v i , v j ) rep-\nresents the presence of causality between the two nodes, where v i is the Cause and v j is the Effect. The edges are also weighted by s, which represents the number of sentences in our dataset that has been identified to convey that v i causes v j .\nTable 2 shows the statistics of our extracted relations and constructed KG. Earlier, out of 62,151 sentences, we identified 15,902 unique sentences containing 20,086 causal relations. Before clustering, a KG built directly on these extracted relations would have 35,230 unique nodes and 20,086 unique edges, with an average support per edge of 1.008. By performing argument clustering, we created a highly connected KG, with 3,000 unique nodes, 17,801 unique edges, and an average support per edge of 1.122. Table 3 displays some graph statistics before and after clustering. Again, we observe that the KG after clustering is denser and more connected. In fact, instead of 15,686 subgraphs, our KG is now represented by 1 connected graph. Visualizations of the KGs in the later sections are performed using Cytoscape 7 , an open-source software for visualizing and interacting with graphs." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Extraction of Causal Relations", "publication_ref": [], "table_ref": [ "tab_6", "tab_2" ], "text": "Quantitative evaluation We asked users to randomly select 15 articles from the Google News dataset and annotate the Causes and Effects per sentence. 49 causal relations from 43 sentences were identified. 6 sentences had more than one causal relation. A correct case is one where the model and human annotations have ≥ 1 word(s) overlapping for both Cause and Effect spans. 
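A minimal sketch of this matching rule is given below; lower-cased whitespace tokenization is an assumption made for illustration.

```python
# Sketch of the matching rule: a prediction is correct if its Cause and Effect
# spans each share at least one word with the gold spans. Lower-cased whitespace
# tokenization is an assumption for illustration.
def overlaps(pred_span: str, gold_span: str) -> bool:
    return len(set(pred_span.lower().split()) & set(gold_span.lower().split())) >= 1

def is_correct(pred_cause, pred_effect, gold_cause, gold_effect) -> bool:
    return overlaps(pred_cause, gold_cause) and overlaps(pred_effect, gold_effect)

print(is_correct("the chip shortage",
                 "reduced hours for staff",
                 "affected by the chip shortage",
                 "the number of hours worked by staff"))   # True
```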
We could then calculate the number of True Positives (TP), False Positives (FP) and False Negatives (FN) by treating the human annotations as the gold standard. True Negative (TN) is 0 in all cases because we do not evaluate on non-causal sentences and relations. Appendix Table 6 provides some examples comparing model predictions to human annotations. Subsequently, we calculate Precision (P), Recall (R), and F1 scores using the following formulas: Although the pattern-based has very good precision (100%), it could only identify 2/49 causal relations, resulting in an extremely low recall score of 4%. In the application of monitoring news and trends, such a low recall is unacceptable as it would miss out on many key happenings. The BERT-based approach extracted much more causal relations than the pattern-based method, scoring a higher recall of 71.43%. This is because UniCausal is trained on a large dataset, and its architecture also allows models to learn from varied linguistic structures, including implicit causal relations. Consistent with our findings, we observe in Table 2 that the BERT-based approach extracted 19x more causal relations than the pattern-based approach for our whole dataset.\nP = T P T P + F P , R = T P T P + F N , F 1 = 2 × P × R P + R(\nThe model proposed 12 causal relations that were not annotated by the humans. Upon checking, 5/11 were correct in that the Cause and Effect spans suggested are causal. However, they are duplicates arising from the post-processing done for BERT-based extraction that accepts any pair of arguments that CPC detects as causal. Therefore, when comparing against the human annotated test set, these duplicates were treated as spurious relations. If we consider these five examples as Correct, then the overall precision would increase to 85.42%." }, { "figure_ref": [], "heading": "Argument Clustering", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Quantitative evaluation From the 36 causal relations where the model and users were in agreement with, we asked users to group the arguments with similar meaning and give a topic label to each group. The users clustered 72 arguments into 50 topics. Some example topics are: 'increased competition', 'taxation', 'cost reduction', 'business hurdles' and 'raw material shortage'. To obtain the model's predictions, we filtered out the nodes of the 36 causal relations from the whole KG (described in the \"Knowledge Graph\" Section). The 72 arguments were clustered into 70 topics. We compare the model's and user's clustering using Normalized Mutual Information (NMI), an entropy-based evaluation metric. Because most clusters only contain one span from both the model's and user's clustering, NMI is high at 93.62%. However, given the small sample size, this score can be misleading. More annotated data is needed to evaluate clustering performance.\nQualitative evaluation Due to limited space, we summarize our qualitative experiments and findings in this subsection. Details are available in the Appendix.\nIn Table 3, we show that clustering helps to increase the average edge weight and node centrality. Our argument clustering solution helps to create a highly connected causal KG, which is more insightful to infer causal relationships from. For example, before clustering, we could only infer that 'pandemic' causes 'disruptions'. 
After clustering, our subgraph detected that 'pandemic' causes supply chain disruptions, chip and semiconductor shortages, sales decreases, and general interferences and disruptions.\nWe also found that argument clusters are much more defined on a 2D plot after we remove the named-entities from arguments. Named-entity removal allows the clustering process to focus on more meaningful words referring to the event, sentiment, or topic instead." }, { "figure_ref": [], "heading": "Applications in the Industry", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Use Cases", "publication_ref": [], "table_ref": [], "text": "Summarization Our causal KG is useful for summarizing reported causal relations in news. In the earlier section about qualitative evaluation, we constructed a 'pandemic' subgraph and demonstrated how we can swiftly learn about the reported effects of pandemic.\nAnswering causal questions and predicting future events Our KG is also useful for answering causal questions. In Figure 1, users that learn that that Event A (\"the growing number of electric vehicles\") causes Event B (\"an increased demand for automotive smart display systems\") might ask:\nWhat might happen next as a result of Event B? By setting the target node to be Event B, we identify that the next two likely events (based on edge support) are that display systems will \"become an integral part of the automotive supply chain\" ('automotive industry shaft xa siness') and that this \"trend is expected to continue during the forecast period\" ('forecast period during anticipated analysts'). Other subsequent Effects are: \"so does demand for wiring harnesses and related electronic sub-assemblies\", \"forcing European car makers to rely on Asian suppliers\", \"the task of designing today's cars much more difficult\", and many more meaningful predictions. We can also ask other causal questions like \"Are there other causes of Event B?\" and \"What caused Event A in the first place?\"\nInferring transitive causal relations In transitive relations, if Event A causes Event B, and Event B causes Event C, then Event A can also be said to cause Event C. For causal relations, the transitive property fails if the relations are too specific 8 . Referring to the example from the earlier paragraph, we observe that transitivity does hold: \"the growing number of electric vehicles\" (Event A) does help make automotive smart display systems \"become an integral part of the automotive supply chain\" (Event C) through \"an increased demand for automotive smart display systems\" (Event B). This observation has strong implications on how insightful our KG can be for inferring causal relationships that were not otherwise stated directly in the news. Further analysis will be needed to identify the cases where transitivity fails, and how we should handle them.\nTrend monitoring We demonstrate the potential of our KG to reflect trends over time. To conduct the experiment, we split our dataset into articles that are published across three time baskets: Before 2020, 2020 to 2021 (inclusive), and after 2021. Since our dataset is very imbalanced across time, we down-sampled the two larger baskets such that all baskets have the same sample size. Similar to analyses before, we created subgraphs by filtering out target nodes and nodes that are one step away from target node(s). A node is a target if the search term(s) can be found in the node description in any order. 
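A sketch of this filtering step with networkx is shown below; the toy graph and node labels are placeholders.

```python
# Sketch of the subgraph filter: keep target nodes (whose keyword label contains
# every search term, in any order) plus their one-step neighbours. The toy graph
# and node labels below are placeholders.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("pandemic covid outbreak lockdown",
            "chip shortages semiconductors shortage supply", weight=3)
kg.add_edge("chip shortages semiconductors shortage supply",
            "sales decrease slump fell poor", weight=2)

def is_target(label: str, search_terms) -> bool:
    return all(term.lower() in label.lower() for term in search_terms)

def one_step_subgraph(graph: nx.DiGraph, search_terms) -> nx.DiGraph:
    targets = {n for n in graph.nodes if is_target(n, search_terms)}
    keep = set(targets)
    for n in targets:
        keep.update(graph.predecessors(n))   # Causes of the target node
        keep.update(graph.successors(n))     # Effects of the target node
    return graph.subgraph(keep).copy()

sub = one_step_subgraph(kg, ["chip", "shortage"])
print(list(sub.edges(data="weight")))
```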
In Figure 2, we studied three search terms across the three time baskets. For each subgraph, the edges highlighted in red falls within the time basket of interest. Our findings show that the frequency of causal relationships about the topic \"chip shortage\" was rare before 2020, extremely heated during the pandemic period of 2020 to 2021, and lower from 2021 onwards. This is validated by experts' understanding that the COVID-19 pandemic kick-started the chip shortage, amongst many other reasons. As a sanity check, we found that no causal relationships were mentioned about \"pandemic\" before 2020. This makes sense because the awareness of COVID-19 only started taking off in the first quarter of 2020. As a control, we also checked that interest in topics about \"robotics\" stayed roughly constant throughout the three time baskets. To con-8 Example of specific causal relations violating transitivity: \"Sugar makes John happy. Sugar causes diabetes. Diabetes makes John sad. Does sugar make John happy or sad?\" clude, our KG can be helpful for monitoring heated causal topics and news trends across time.\nFigure 2: Subgraph(s) filtered based on nodes that are one step away from target node(s) highlighted in pink. In this example, a node is referred to as a target if it contains the search term in any order indicated on top (E.g. 'chip shortage'). Edges from articles falling under the time period corresponding to the right axis are highlighted in red." }, { "figure_ref": [], "heading": "User Feedback", "publication_ref": [], "table_ref": [], "text": "The final KG was presented users to gather feedback. Response was positive, and many users could find a use for this KG in their daily work. We intend to deploy the our system to generate regular snapshots of the news that will serve as market reports for the Sales Division. Users believe that harnessed with knowledge about recent causal events, they can improve their market understanding and perform better at prediction-related tasks, like sales and demand forecasting. In the future, users would also like to see the KG to be improved by adding temporal and sentiment elements." }, { "figure_ref": [], "heading": "Conclusion & Future Work", "publication_ref": [], "table_ref": [], "text": "We focused on the application of extracting causal relations in industry news to construct a causal KG. Our approach was (1) recall-focused, by employing BERT-based on top of pattern-based extraction methods. Our approach was also (2) precision-focused, and for our test set, achieved 75% score. Finally, our final KG was designed to be (3) interpretable, with many use cases and was deemed useful by our users in the industry.\nOur work can be replicated onto many other domains. In the future, we intend to annotate a larger test set for more concrete evaluation. Additionally, we plan to deploy the extraction and graphing system to generate monthly snapshots of the market. We will monitor the users' interactions with the system and identify areas for improvement. Lastly, we hope to include temporal and sentiment elements in our extraction to enrich the final KG." }, { "figure_ref": [], "heading": "Merge Sequential Arguments: Based on sequence in text,", "publication_ref": [], "table_ref": [], "text": "if two arguments follow one another and are of the same type, we merge them into one argument. See Example 1." 
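For illustration, a minimal sketch of this merging heuristic is given below; representing predicted arguments as (type, start, end) token offsets is an assumption.

```python
# Sketch of the "merge sequential arguments" heuristic: adjacent predicted spans
# of the same type (ARG0 or ARG1) are collapsed into a single argument. The
# (type, start, end) token-offset representation is an assumption.
def merge_sequential(spans):
    merged = []
    for span in sorted(spans, key=lambda s: s[1]):
        if merged and merged[-1][0] == span[0] and merged[-1][2] == span[1]:
            prev = merged.pop()                       # extend the previous span
            merged.append((prev[0], prev[1], span[2]))
        else:
            merged.append(span)
    return merged

# Two adjacent ARG1 spans become one Effect argument.
print(merge_sequential([("ARG1", 0, 1), ("ARG1", 1, 9), ("ARG0", 12, 20)]))
# [('ARG1', 0, 9), ('ARG0', 12, 20)]
```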
}, { "figure_ref": [], "heading": "Keep Longest Argument for 3-Argument Examples:", "publication_ref": [], "table_ref": [], "text": "For examples where there are (A) Two ARG0 and one ARG1 or (B) One ARG0 and two ARG1, we kept the longest argument (in terms of character count) for the type that had two arguments. See Example 2." }, { "figure_ref": [], "heading": "Keep Multiple Causal Relations based on CPC:", "publication_ref": [], "table_ref": [], "text": "For examples with multiple arguments, we retained all <ARG0> as potential Causes and all <ARG1> as potential Effects. We also retained two disconnected potential Causes with an Effect in the middle as a potential Cause. Likewise, we retained two disconnected potential Effects with a Cause in the middle as a potential Effect. With a list of potential Causes and Effects, we marked the original sentence with the potential Cause and Effect, and fed the marked sentence through a CPC model to predict if the pairs are causal or not. We retained pairs that were predicted to be causal. See Examples 3 and 4. In the later sections, we refer to all causal relations extracted from this method as BERT-M.\nDuring investigations, we also realised by filtering away examples that CSC predicts as non-causal early on, we are losing out on potential causal examples. To rectify this, we retained examples where CESD identified one Cause and Effect, suggesting the Cause/Effect boundaries were easy to identify and more likely to represent a causal event. After marking the original sentence with these potential arguments, we fed the marked sentence through the CPC predictor to obtain a prediction of whether the pair of arguments are causal or not. We retained pairs that were predicted to be causal. See Examples 5 and 6." }, { "figure_ref": [], "heading": "Quantitative evaluation of causal relation extraction", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "Table 6 provides some examples comparing model predictions to human annotations, and how the example contributes to the final score in terms of TP, FP and FN." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Qualitative evaluation of causal relation extraction", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "In this Section, we outline the qualitative experiments and findings to assess the effectiveness of our argument clustering approach. This Section is summarized briefly under the Section \"Experimental Results -Argument Clustering -Qualitative evaluation\".\nEffect of clustering on KG interpretability In Table 3, we show that clustering helps to increase the average edge weight and node centrality. The resulting effect is that we obtained a single connected graph instead of multiple subgraphs. We visualize this phenomenon in this subsection, and demonstrate why a highly connected graph is more useful for inferring causal relationships. Figure 3 compares subgraphs before and after clustering. We set target nodes to be any node that contains the keyword 'pandemic'. Subgraphs are then defined as target nodes plus nodes that are one step away from target nodes. Before clustering, we obtain multiple disconnected subgraphs that contain either a Cause or Effect related to the keyword 'pandemic'. After clustering, we end up with only three target nodes that reflect pandemic related traits, reflected in one highly connected graph.\nBefore clustering, the three edges that had a support of ≥ 2 all conveyed the idea that 'pandemic' causes 'disruptions'. 
However, after clustering, we could obtain much more meaningful causal relations. Our subgraph detected that 'pandemic' causes supply chain disruptions ('chain supply disruption disruption chains'), chip and semiconductor shortages ('chip shortages semiconductors shortage supply'), sales decreases ('sales decrease slump fell poor'), and disruptions and interferences in general ('disruption interference require happens ease'). Therefore, our argument clustering solution helps to create a highly connected causal KG, which is more insightful to infer causal relationships from.\nEffect of named-entity removal on clustering Figure 4 compares the clustering outcomes with and without the removal of named-entities. Compared to Panel II, Panel I has well-defined clusters. In Panel I, Topics 3 (Purple) and 1867 (Blue) overlap closely because supply chain disruptions and shortages often refer to similar contexts related to manufacturing and production. Topics 49 (Red), 347 (Brown) and 793 (Orange) refers to topics about the automotive and electronic vehicles (EVs) industry, and hence, occur closely. Interestingly, Topic 2 (Green) reflects arguments that are short and make little sense (E.g. \"t sm c\"), or is wholly comprised of named-entities and thus becomes an empty span (E.g. \"Tesla\", \"Europe\", \"Aug. 17\", \"November\"). Thus, the points under Topic 2 are far from all other points in the scatterplot. In Panel II, the clusters are less defined. The topics' keywords include named-entities like \"China\", \"Honda\" and \"European\". In fact, Topic 2540 (Red) seems to be completely about the automobiles manufacturer Honda, regardless of nature or connotation of the main event. For example, both arguments \"Honda HR-V is equipped with many functions\" and \"Honda recalled its Vezel SUV last year\" fall under Topic 2540. In conclusion, it is important to cluster " }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "Post-processing of BERT-based predictions One limitation of UniCausal's CESD is that it was trained to predict only one causal relation per example. The predictions are easy to infer if only one ARG0 and one ARG1 is predicted by the model. However, in practice, there were no restrictions on the number of arguments that can be predicted. In our investigations, many examples had multiple ARG0 and multiple ARG1 predicted. Without further information on which argument is tied to which, we cannot identify the Cause and the Effect.\nTherefore, for examples with multiple ARG0 and/or multiple ARG1, we had to perform heuristics to process the predictions into usable Cause and Effect arguments. We employed three key post-processing steps, with examples shown in Table 5, and explained below:" } ]
Many financial jobs rely on news to learn about causal events in the past and present, to make informed decisions and predictions about the future. With the ever-increasing amount of news available online, there is a need to automate the extraction of causal events from unstructured texts. In this work, we propose a methodology to construct causal knowledge graphs (KGs) from news using two steps: (1) Extraction of Causal Relations, and (2) Argument Clustering and Representation into KG. We aim to build graphs that emphasize on recall, precision and interpretability. For extraction, although many earlier works already construct causal KGs from text, most adopt rudimentary pattern-based methods. We close this gap by using the latest BERT-based extraction models alongside pattern-based ones. As a result, we achieved a high recall, while still maintaining a high precision. For clustering, we utilized a topic modelling approach to cluster our arguments, so as to increase the connectivity of our graph. As a result, instead of 15,686 disconnected subgraphs, we were able to obtain 1 connected graph that enables users to infer more causal relationships from. Our final KG effectively captures and conveys causal relationships, validated through experiments, multiple use cases and user feedback.
Constructing and Interpreting Causal Knowledge Graphs from News
[ { "figure_caption": "Figure 1 :1Figure 1: Overview of our methodology.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Subgraph(s) filtered based on nodes that are one step away from target node(s) highlighted in pink. In this example, a node is referred to as a target if it contains the keyword 'pandemic'. Edges with support of ≥ 2 are highlighted in red.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Scatterplot of arguments from top 8 nodes (in terms of number of connected edges) cast on the first two components using Principal Component Analysis of the word embeddings. Each point reflects an argument, and is colored by the node/topic they belong to.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "4)Our model identified 48 causal relations from 43 sentences. 4 sentences had more than one causal relation. The performance metrics are reported in Table4. By using our proposed method of combining both pattern-based and", "figure_data": "ExtractionBefore ClusteringAfter ClusteringMethod|Sents| |Sents| |Rels| Avg Rel|V ||E|Avg E|V ||E|Avg ESupportSupportSupportPattern-based1,0066119751.0321,4769751.0327748451.340BERT-based19,25015,70219,192 1.00333,940 19,192 1.0032,99017,075 1.120Total20,25515,90220,086 1.00835,230 20,086 1.0083,00017,801 1.122", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Summary statistics of extracted causal relations per step.", "figure_data": "Before Clustering After ClusteringNo. of Unique Nodes, |V |35,2303,000No. of Unique Edges, |E|20,08617,801Total Weight, s20,25419,965No. of Subgraphs15,6861Avg Clustering Coefficient9.81e -061.75e -02Avg Degree Centrality3.24e -053.96e -03Avg Eigenvector Centrality6.64e -051.32e -02Transitivity4.17e -048.81e -03", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Graph statistics before and after clustering.", "figure_data": "Extraction MethodPRF1Pattern-based100.00 4.087.84BERT-based76.0971.4373.68Both75.0073.4774.23", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Performance metrics for extraction on humanannotated test set. Scores are reported in percentages (%).", "figure_data": "Top score per column is bolded.BERT-based approaches in extraction, the F1 score is thehighest at 74.23%.", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Processing of BERT-based predictions. French furlough deal, agreed by four unions at Stellantis</ARG0>, <ARG1>enables the company to reduce the number of hours worked by staff affected by the chip shortage</ARG1>. • The French furlough deal, agreed by four unions at Stellantis, enables the company to reduce <ARG1>the number of hours worked by staff</ARG1> <ARG0>affected by the chip shortage</ARG0>. • <ARG0>The French furlough deal, </ARG0>agreed by four unions at Stellantis, enables <ARG1>the company to reduce the number of hours worked by staff affected by the chip shortage.</ARG1> • The French furlough deal, agreed by four unions at Stellantis, enables <ARG1>the company to reduce the number of hours worked by staff affected by the </ARG1><ARG0>chip shortage</ARG0>. 
• <ARG0>The French furlough deal, </ARG0>agreed by four unions at Stellantis, enables <ARG1>the company to reduce the number of hours worked by staff affected by the </ARG1>chip shortage.", "figure_data": "S/NOriginalBERT-Based ExtractionPost-ProcessingCSCCESDFinalMethod1Still, car compa-1<ARG1>Still</ARG1><ARG1>Still , car companies and dealersMergenies and dealers, car companies andmay have to eventually adopt some of theSequentialmay have todealers may have tochanges Tesla has introduced</ARG1>Argu-eventually adopt<ARG1>eventually adopt<ARG0>to win over buyers who havementssome of thesome of the changes Teslagrown used to buying cars onlinechanges Tesla hashasintroduced</ARG1>.</ARG0>introduced to win<ARG0>to win over buy-over buyers whoers who have grown usedhave grown usedto buying cars onlineto buying cars.</ARG0>online.2\"Due to the cur-1<ARG1>\"</ARG1> Due to\" Due to <ARG0>the current situationKeeprent situation in<ARG0>the current situa-in this region</ARG0> <ARG1>, thereLongestthis region, theretion in this region</ARG0>may be disruptions in the supply chain .Argumentmay be disrup-<ARG1>, there may be dis-\"</ARG1>for3-tions in the sup-ruptions in the supply chainArgumentply chain.\". \"</ARG1>Examples3Hence we expect1HenceweexpectHence we expect <ARG0>global supplyKeepglobalsupply<ARG0>global</ARG0>chains switching to EVs</ARG0> to haveMultiplechains switching<ARG1>supply</ARG1>a positive impact on <ARG1>the IndianCausalto EVs to have a<ARG0>chainsswitch-EV industry</ARG1>.Relationspositive impacting to EVs to</ARG0>based onon the Indian EV<ARG1>have a</ARG1>CPCindustry.<ARG0>positiveim-pacton</ARG0><ARG1>theIndianEVindustry</ARG1><ARG0>.</ARG0>4China stocks rose1<ARG1>China</ARG1>• China stocks rose on Monday afterKeepon Monday afterstocks rose on Monday after<ARG0>the governor of the coun-Multiplethe governor of<ARG0>the</ARG0>try's</ARG0> central bank vowedCausalthe country's cen-governorofthe<ARG1>to increase the implementationRelationstral bank vowed<ARG0>country</ARG0>of prudent monetary policy</ARG1> tobased onto increase the' s central bank vowedsupport the real economy.CPCimplementation<ARG1>to increase the• China stocks rose on Monday afterof prudent mon-implementation of prudentthe governor of the country's centraletary policy tomonetary policy</ARG1>bank vowed <ARG1>to increase thesupport the real<ARG0>to support the realimplementation of prudent monetaryeconomy.economy .</ARG0>policy</ARG1> <ARG0>to support thereal economy. 
</ARG0>• <ARG1>China stocks rose on Mondayafter the governor of the country's centralbank vowed to increase the implementa-tion of prudent monetary policy</ARG1><ARG0>to support the real economy.</ARG0>5That break was0<ARG1>Thatbreak<ARG1>Thatbreakwasex-Addextended as thewasextended</ARG1>tended</ARG1> as <ARG0>the virusCausalvirus spread.as<ARG0>thevirusspread.<ARG0>Relationsspread.<ARG0>based onCPC6Ford is shutting0<ARG1>Fordisshut-<ARG1>Ford is shutting its car factoriesAddits car factories inting its car factoriesin India</ARG1> after <ARG0>Ford In-CausalIndia after FordinIndia</ARG1>af-dia racked up more than $2bn in lossesRelationsIndia racked upter <ARG0>Ford Indiaover the past decade.</ARG0>based onmore than $2bnracked up more than $2bnCPCin losses over thein losses over the pastpast decade.decade.</ARG0>", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Examples of human annotations compared to model predictions, and how they contribute to the True Positive (TP), False Positive (FP) and False Negative (FN) counts.", "figure_data": "", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Fiona Anting Tan; Debdeep Paul; Sahim Yamaura; Miura Koji; See-Kiong Ng
[ { "authors": "A Aziz; M A Hossain; A N Chy", "journal": "", "ref_id": "b0", "title": "CSECU-DSG @ Causal News Corpus 2022: Fusion of RoBERTa Transformers Variants for Causal Event Classification", "year": "2022" }, { "authors": "Abu Dhabi", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "United Arab Emirates", "year": "" }, { "authors": "R Bunescu; R Mooney", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "A Shortest Path Dependency Kernel for Relation Extraction", "year": "2005" }, { "authors": "P Cao; X Zuo; Y Chen; K Liu; J Zhao; Y Chen; W Peng", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Knowledge-Enriched Event Causality Identification via Latent Structure Induction Networks", "year": "2021" }, { "authors": "T Caselli; P Vossen", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "The Event StoryLine Corpus: A New Benchmark for Causal and Temporal Relation Extraction", "year": "2017" }, { "authors": "D Chen; C Manning", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "A Fast and Accurate Dependency Parser using Neural Networks", "year": "2014" }, { "authors": "M Chen; Y Cao; K Deng; M Li; K Wang; J Shao; Y Zhang", "journal": "International Committee on Computational Linguistics", "ref_id": "b6", "title": "ERGO: Event Relational Graph Transformer for Document-level Event Causality Identification", "year": "2022" }, { "authors": "X Chen; G Zhang; A Nik; M Li; J Fu", "journal": "", "ref_id": "b7", "title": "1Cademy @ Causal News Corpus 2022: Enhance Causal Span Detection via Beam-Search-based Position Selector", "year": "2022" }, { "authors": "Abu Dhabi", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "United Arab Emirates", "year": "" }, { "authors": "A Culotta; J Sorensen", "journal": "", "ref_id": "b9", "title": "Dependency Tree Kernels for Relation Extraction", "year": "2004" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "J Dunietz; L Levin; J Carbonell", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "The BE-CauSE Corpus 2.0: Annotating Causality and Overlapping Relations", "year": "2017" }, { "authors": "J R Finkel; T Grenager; C Manning", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling", "year": "2005" }, { "authors": "T Gao; X Yao; D Chen", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "SimCSE: Simple Contrastive Learning of Sentence Embeddings", "year": "2021" }, { "authors": "O Hassanzadeh", "journal": "", "ref_id": "b14", "title": "Building a Knowledge Graph of Events and Consequences Using Wikidata", "year": "2021-10-24" }, { "authors": "L He; S Zheng; T Yang; F Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "KLMo: Knowledge Graph Enhanced Pretrained Language Model with Fine-Grained Relationships", "year": "2021" }, { "authors": "S Heindorf; Y Scholten; H Wachsmuth; A N Ngomo; M Potthast", "journal": "ACM", "ref_id": "b16", "title": "CauseNet: Towards a Causality Graph Extracted from the Web", "year": "2020-10-19" }, { "authors": "I 
Hendrickx; S N Kim; Z Kozareva; P Nakov; D Séaghdha; S Padó; M Pennacchiotti; L Romano; S Szpakowicz", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals", "year": "2010" }, { "authors": "C Hidey; K Mckeown", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Identifying Causal Relations Using Parallel Wikipedia Articles", "year": "2016" }, { "authors": "A Ittoo; G Bouma", "journal": "Data Knowl. Eng", "ref_id": "b19", "title": "Minimally-supervised learning of domain-specific causal relations using an opendomain corpus as knowledge base", "year": "2013" }, { "authors": "K Izumi; H Sakaji", "journal": "", "ref_id": "b20", "title": "Economic Causal-Chain Search using Text Mining Technology", "year": "2019" }, { "authors": "P Mirza; R Sprugnoli; S Tonelli; M Speranza", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Annotating Causality in the TempEval-3 Corpus", "year": "2014" }, { "authors": "P Mirza; S Tonelli", "journal": "Dublin City University and Association for Computational Linguistics", "ref_id": "b22", "title": "An Analysis of Causality between Events and its Relation to Temporal Information", "year": "2014" }, { "authors": "A Nik; G Zhang; X Chen; M Li; J Fu", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "1Cademy @ Causal News Corpus 2022: Leveraging Self-Training in Causality Classification of Socio-Political Event Data", "year": "2022" }, { "authors": "K Radinsky; S Davidovich; S Markovitch", "journal": "ACM", "ref_id": "b24", "title": "Learning causality for news events prediction", "year": "2012-04-16" }, { "authors": "S Schuster; C D Manning", "journal": "European Language Resources Association (ELRA)", "ref_id": "b25", "title": "Enhanced English Universal Dependencies: An Improved Representation for Natural Language Understanding Tasks", "year": "2016" }, { "authors": "S Sia; A Dalmia; S J Mielke", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Tired of Topic Models? Clusters of Pretrained Word Embeddings Make for Fast and Good Topics too!", "year": "2020" }, { "authors": "F A Tan; X Zuo; S.-K Ng", "journal": "Springer International Publishing", "ref_id": "b27", "title": "UniCausal: Unified Benchmark and Repository for Causal Text Mining", "year": "2023" }, { "authors": "B Webber; R Prasad; A Lee; A Joshi", "journal": "Philadelphia", "ref_id": "b28", "title": "The penn discourse treebank 3.0 annotation manual", "year": "2019" }, { "authors": "Z Xu; Y Dang", "journal": "International Journal of Production Research", "ref_id": "b29", "title": "Data-driven causal knowledge graph construction for root cause analysis in quality problem solving", "year": "2022" }, { "authors": "Z Zhang; M Fang; L Chen; M R Namazi Rad", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Is Neural Topic Modelling Better than Clustering? 
An Empirical Study on Clustering with Contextual Embeddings for Topics", "year": "2022" }, { "authors": "Z Zhang; H Wang; H Zhao; H Tong; H Ji", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "EventKE: Event-Enhanced Knowledge Graph Embedding", "year": "2021" }, { "authors": "X Zuo; P Cao; Y Chen; K Liu; J Zhao; W Peng; Y Chen", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Improving Event Causality Identification via Self-Supervised Representation Learning on External Causal Statement", "year": "2021" }, { "authors": "X Zuo; P Cao; Y Chen; K Liu; J Zhao; W Peng; Y Chen", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "LearnDA: Learnable Knowledge-Guided Data Augmentation for Event Causality Identification", "year": "2021" }, { "authors": "X Zuo; Y Chen; K Liu; J Zhao", "journal": "International Committee on Computational Linguistics", "ref_id": "b34", "title": "KnowDis: Knowledge Enhanced Data Augmentation for Event Causality Detection via Distant Supervision", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 80.34, 522.62, 212.16, 24.72 ], "formula_id": "formula_0", "formula_text": "T F IDF d = n w w n w • log( |D| |{d ∈ D|w ∈ d}| )(1)" }, { "formula_coordinates": [ 4, 95.58, 555.2, 196.92, 38.19 ], "formula_id": "formula_1", "formula_text": "IDF k = log( |K| |{w ∈ K}| ) (2) T F IDF × IDF = T F IDF d • IDF k (3)" }, { "formula_coordinates": [ 4, 319.5, 136.45, 238.5, 20.61 ], "formula_id": "formula_2", "formula_text": "V = {(v 1 , v 2 , ..., v n )} and directed edges E = {(v 1 , v 2 ), (v 2 , v 3 ), ...}. A directed edge (v i , v j ) rep-" }, { "formula_coordinates": [ 4, 323.86, 609.77, 228.5, 31.15 ], "formula_id": "formula_3", "formula_text": "P = T P T P + F P , R = T P T P + F N , F 1 = 2 × P × R P + R(" } ]
10.18653/v1/n19-1423
2023-07-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b27", "b26", "b33", "b24", "b31", "b34", "b28", "b10", "b1", "b32", "b18", "b0", "b30", "b12", "b11", "b22", "b1", "b32", "b18", "b12", "b29", "b9", "b9", "b10", "b12", "b0", "b1" ], "table_ref": [], "text": "Maintaining appropriate human-computer conversation is an important task leaping towards advanced artificial intelligence. Most of existing methods have studied understanding conversations between two participants, aiming at returning an appropriate response either in a generation-based (Shang et al., 2015;Serban et al., 2016;Zhang et al., 2020;Roller et al., 2021) or retrieval-based manner (Wu et al., 2017;Zhou et al., 2018;Tao et al., 2019; * Corresponding author. : reply-to\nFigure 1: Illustration of (a) a graphical information flow of an MPC where rectangles denote utterances, and solid lines represent the \"reply\" relationship between two utterances, and (b) the detailed reply relationships between each utterance and U 3 . Gu et al., 2020). Recently, researchers have paid more attention to a more practical and challenging scenario involving more than two participants, which is well known as multi-party conversations (MPCs) (Ouchi and Tsuboi, 2016;Zhang et al., 2018;Le et al., 2019;Hu et al., 2019;Wang et al., 2020;Gu et al., 2021Gu et al., , 2022)). Unlike twoparty conversations, utterances in an MPC can be spoken by anyone and address anyone else in this conversation, constituting a graphical information flow and various relationships between utterances as shown in Figure 1(a). Thus, predicting who the next speaker will be (Meng et al., 2018) and who the addressee of an utterance is (Ouchi and Tsuboi, 2016;Zhang et al., 2018;Le et al., 2019) are unique and important issues in MPCs.\nThe complicated interactions between interlocutors, between utterances and between an interlocutor and an utterance naturally increase the difficulty of fully understanding MPCs. Existing studies on MPC understanding focus on the challenging issue of modeling the complicated conversation structures and information flows. The current stateof-the-art method MPC-BERT (Gu et al., 2021) proposed to pre-train a language model with two types of self-supervised tasks for modeling interlocutor structures and utterance semantics respectively in a unified framework. The complementary structural and semantic information in MPCs is learned by designing a variety of self-supervised optimization objectives. However, the semantics contained in the interlocutor and utterance representations may not be effectively captured as these supervision signals are placed only on top of language models. During encoding inside language models, the full and equivalent connections among utterances in regular Transformer (Vaswani et al., 2017) ignore the sparse but distinctive dependency of an utterance on another, such as \"reply-to\". Despite of the performance improvement with pre-training, MPC-BERT still overlooks the inherent MPC graph structure when fine-tuning on various downstream tasks. 
Intuitively, leveraging graph-induced signals when fine-tuning pre-trained language models (PLMs) may yield better contextualized representations of interlocutors and utterances and enhance conversation understanding, but has been overlooked in previous studies.\nIn light of the above issues, we propose a plugand-play and lightweight method named graphinduced fine-tuning (GIFT), which can adapt various Transformer-based PLMs and improve their ability for universal MPC understanding. Existing Transformer-based PLMs such as BERT (Devlin et al., 2019) are originally designed for processing sequential texts. To distinguish different relationships between utterances, four types of edges (reply-to, replied-by, reply-self and indirectreply) are designed to integrate graph-induced signals in the attention mechanism. These edgetype-dependent parameters are utilized to refine the attention weights and to help construct the graphical conversation structure in Transformer. Intuitively, the conversation structure influences the information flow in MPCs, thus it can be used to strengthen the representations of utterance semantics. By this means, it can help characterize fine-grained interactions during the internal encoding of PLMs, and produce better representations that can be effectively generalized to multiple downstream tasks of MPCs. Lastly, the proposed method is plug-and-play which can be implemented into various Transformer-based PLMs, and is lightweight which requires only 4 additional parameters per encoding layer.\nTo measure the effectiveness of the proposed GIFT method and to test its generalization ability, GIFT is implemented into three PLMs including BERT (Devlin et al., 2019), SA-BERT (Gu et al., 2020) and MPC-BERT (Gu et al., 2021). We evaluate the performance on three downstream tasks including addressee recognition, speaker identification and response selection, which are three core research issues of MPCs. Two benchmarks based on Ubuntu IRC channel are employed for evaluation. One was released by Hu et al. (2019). The other was released by Ouchi and Tsuboi (2016) with three experimental settings according to session lengths. Experimental results show that GIFT helps improve the performance of all three PLMs on all three downstream tasks. Take MPC-BERT as an example, GIFT improved the performance by margins of 0.64%, 1.64%, 3.46% and 4.63% on the test sets of these two benchmarks respectively in terms of utterance precision of addressee recognition, by margins of 6.96%, 23.05%, 23.12% and 22.99% respectively in terms of utterance precision of speaker identification, and by margins of 1.76%, 0.88%, 2.15% and 2.44% respectively in terms of response recall of response selection, achieving new state-of-the-art performance on MPC understanding.\nIn summary, our contributions in this paper are three-fold: (1) A graph-induced fine-tuning (GIFT) method is proposed to construct and to utilize the inherent graph structure for MPC understanding.\n(2) GIFT is implemented into three PLMs and is tested on three downstream tasks to comprehensively evaluate the effectiveness and generalization ability. (3) The proposed method achieves new state-of-the-art performance on three downstream tasks and two benchmarks." 
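To give a feel for the mechanism, the sketch below refines token-to-token attention logits with four learnable edge-type-dependent scalars per layer, indexed by the reply relation between the utterances the query and key tokens belong to. It illustrates the general idea only; the exact formulation used in this work may differ.

```python
# Illustrative sketch of graph-induced attention refinement: four learnable,
# edge-type-dependent scalars per layer adjust attention logits according to the
# reply relation between the utterances of the query and key tokens. This shows
# the general idea only, not necessarily the exact formulation of this work.
import torch
import torch.nn as nn

REPLY_TO, REPLIED_BY, REPLY_SELF, INDIRECT_REPLY = 0, 1, 2, 3

class GraphInducedRefinement(nn.Module):
    def __init__(self):
        super().__init__()
        self.edge_weight = nn.Parameter(torch.ones(4))   # only 4 extra parameters per layer

    def forward(self, scores, edge_type):
        # scores:    (batch, heads, seq_len, seq_len) raw attention logits
        # edge_type: (batch, seq_len, seq_len) edge-type ids between token pairs
        w = self.edge_weight[edge_type]                  # (batch, seq_len, seq_len)
        return scores * w.unsqueeze(1)                   # broadcast over attention heads

refine = GraphInducedRefinement()
scores = torch.randn(1, 2, 4, 4)                         # toy: 1 example, 2 heads, 4 tokens
edge_type = torch.full((1, 4, 4), INDIRECT_REPLY)
edge_type[0, 0, 1] = REPLY_TO                            # token 0's utterance replies to token 1's
print(refine(scores, edge_type).shape)                   # torch.Size([1, 2, 4, 4])
```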
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b1", "b32", "b22", "b18", "b17", "b0", "b30", "b20", "b21", "b12", "b11", "b12", "b0", "b11", "b14", "b29" ], "table_ref": [], "text": "Existing methods on building dialogue systems can be generally categorized into studying twoparty conversations and multi-party conversations (MPCs). In this paper, we study MPCs. In addition to predicting the utterance, the tasks of identifying the speaker and recognizing the addressee of an utterance are also important for MPCs. Ouchi and Tsuboi (2016) first proposed the task of addressee and response selection and created an MPC corpus for studying this task. Zhang et al. (2018) proposed the speaker interaction RNN, which updated the speaker embeddings role-sensitively for addressee and response selection. Meng et al. (2018) proposed a task of speaker classification as a surrogate task for general speaker modeling. Le et al. (2019) proposed a who-to-whom (W2W) model to recognize the addressees of all utterances in an MPC. Kummerfeld et al. (2019) created a dataset based on Ubuntu IRC channel which was manually annotated with reply-structure graphs for MPC disentanglement. Hu et al. (2019) proposed a graph-structured neural network (GSN), the core of which is to encode utterances based on the graph topology rather than the sequence of their appearances to model the information flow as graphical. Wang et al. (2020) proposed to track the dynamic topic for response selection. Liu et al. (2020Liu et al. ( , 2021) ) studied transition-based online MPC disentanglement by modeling semantic coherence within each session and exploring unsupervised co-training through reinforcement learning. Gu et al. (2021) proposed MPC-BERT pre-trained with two types of self-supervised tasks for modeling interlocutor structures and utterance semantics. Gu et al. (2022) proposed HeterMPC to model the complicated interactions between utterances and interlocutors with a heterogeneous graph.\nCompared with MPC-BERT (Gu et al., 2021) that is the most relevant to this work, two main differences should be highlighted. First, MPC-BERT works on designing various self-supervised tasks for pre-training, while GIFT works on further improving fine-tuning performance. Second, MPC-BERT models conversation graph structures by placing self-supervision signals on top of PLMs, while GIFT achieves this by alternatively modifying the internal encoding of PLMs. Furthermore, compared with GSN (Hu et al., 2019) and Het-erMPC (Gu et al., 2022) that both attempt to model graphical information flows, it should be noted that there are also two main differences. First, GSN and HeterMPC represent each individual utterance as a node vector encoded by either BiLSTM (Hochreiter and Schmidhuber, 1997) or Transformer (Vaswani et al., 2017), and then update via graph neural network-based information passing, while this work integrates graph-induced signals into the fully-connected interactions of Transformer over the whole MPC context. Second, GSN and HeterMPC are designed specifically for MPC response generation, while this work focuses on universal MPC understanding. 
Overall, to the best of our knowledge, this paper makes the first attempt to design a fine-tuning method that leverages graph-induced signals during the internal encoding of Transformer-based PLMs for improving MPC understanding.\n3 Graph-Induced Fine-Tuning (GIFT)\nAn MPC instance is composed of a sequence of (speaker, utterance, addressee) triples, denoted as {(s n , u n , a n )} N n=1 , where N is the number of turns in the conversation. Our goal is to fine-tune PLMs for universal MPC understanding. Given an MPC, it is expected to produce embedding vectors for all utterances which contain not only the semantic information of each utterance, but also the speaker and addressee structure of the whole conversation. Thus, it can be effectively adapted to various tasks by fine-tuning model parameters." }, { "figure_ref": [], "heading": "Intuition", "publication_ref": [ "b25", "b12", "b29", "b9", "b0", "b11" ], "table_ref": [], "text": "Graphs are ubiquitous data structures. There is a wide range of application domains where data can be represented as graphs. For learning on graphs, graph neural networks (GNNs) (Scarselli et al., 2009) have emerged as the most powerful tool in deep learning. In short, GNNs take in a graph with node and edge features, and build abstract feature representations of nodes and edges by taking the available explicit connectivity structure (i.e., graph structure) into account. The so-generated features are then passed to downstream classification layers.\nIn this work, an MPC is viewed as a conversation graph. The current state-of-the-art method MPC-BERT (Gu et al., 2021) concatenates all utterances into a sequential text and sends it into Transformerbased PLMs for encoding. Recently, Transformerbased neural networks have been proven effective for representation learning and on a wide range of applications in natural language processing (NLP) such as machine translation (Vaswani et al., 2017) and language modeling (Devlin et al., 2019). Since Transformer considers full attention while building contextualized word representations, the full and equivalent connections among utterances ignore the sparse but distinctive dependency of an utterance on another. More importantly, recent studies on MPCs have indicated that the complicated graph structures can provide crucial interlocutor and utterance semantics (Hu et al., 2019;Gu et al., 2022). Thus, it inspires us to refine Transformerbased PLMs by modeling graph structures during internal encoding to help enhance the conversation understanding process." }, { "figure_ref": [], "heading": "Input Representation", "publication_ref": [ "b10", "b12" ], "table_ref": [], "text": "Following Gu et al. (2020) and Gu et al. (2021), another type of speaker embeddings is added to " }, { "figure_ref": [ "fig_2" ], "heading": "+ + [CLS] [CLS] [CLS]", "publication_ref": [], "table_ref": [], "text": "[Mask] the input representation as shown in Figure 2, to consider the speaker information of each utterance. Considering that the set of interlocutors are inconsistent in different conversations, a positionbased interlocutor embedding table is initialized randomly at first and is updated during fine-tuning. In this way, each interlocutor in a conversation is assigned with an embedding vector according to the order it appears in the conversation. Then, the speaker embeddings for each utterance can be derived by looking up this embedding table and assigned for all tokens in this utterance. 
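A minimal sketch of this position-based interlocutor lookup is given below. It is a PyTorch-style illustration rather than the authors' TensorFlow implementation; the class name, argument names and the cap on the number of interlocutors are assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn

class SpeakerEmbedding(nn.Module):
    """Position-based interlocutor embedding table (illustrative sketch).

    Interlocutors are indexed by the order of their first appearance in the
    conversation; every token of an utterance receives the embedding of that
    utterance's speaker.
    """
    def __init__(self, max_interlocutors: int = 10, hidden_size: int = 768):
        super().__init__()
        # Randomly initialised at first and updated during fine-tuning.
        self.table = nn.Embedding(max_interlocutors, hidden_size)

    def forward(self, speakers, utterance_lengths):
        # speakers: one speaker name per utterance, e.g. ["A", "B", "A"]
        # utterance_lengths: number of tokens in each utterance (incl. [CLS])
        order, ids = {}, []
        for spk, length in zip(speakers, utterance_lengths):
            idx = order.setdefault(spk, len(order))   # index by first appearance
            ids.extend([idx] * length)                # broadcast to all tokens
        ids = torch.tensor(ids)                       # (seq_len,)
        return self.table(ids)                        # (seq_len, hidden_size)

# The result is summed with the token, position and segment embeddings
# before being fed to the Transformer layers.
```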
The speaker embeddings are combined with the standard token, position and segmentation embeddings. The input representation is denoted as $H = \{h_m\}_{m=0}^{M}$, where $h_m \in \mathbb{R}^d$, $d$ is the dimension of the embedding vectors and $M$ is the length of the input sequence." }, { "figure_ref": [ "fig_2" ], "heading": "Graph-Induced Encoding", "publication_ref": [ "b19", "b29" ], "table_ref": [], "text": "To derive contextualized and graph-induced representations, the output of the encoding in our proposed method is based on both the semantic similarity and the structural relationship between a query vector and each of a set of key vectors. Given the input representation H, it is first encoded with the multi-head self-attention mechanism as\n$$\mathrm{head}_i = \mathrm{Atten}(HW_i^q, HW_i^k, HW_i^v), \quad (1)$$\n$$\mathrm{MultiHead}(H) = [\mathrm{head}_1, ..., \mathrm{head}_h]W^o, \quad (2)$$\nwhere $W_i^q \in \mathbb{R}^{d \times \frac{d}{h}}$, $W_i^k \in \mathbb{R}^{d \times \frac{d}{h}}$, $W_i^v \in \mathbb{R}^{d \times \frac{d}{h}}$ and $W^o \in \mathbb{R}^{d \times d}$ are all trainable parameters. $h$ is the number of attention heads and [;] denotes the concatenation operation. When calculating attention weights between tokens, existing Transformer-based PLMs consider the relationship between any two tokens to be equivalent. This approach does not model the inherent graph structure while encoding, which is crucial for constructing a graph-induced topology. To distinguish different relationships between utterances, edge-type-dependent parameters $\phi(e_{q,v})$ are utilized to refine the attention weights as\n$$\mathrm{Atten}(q, k, v) = \mathrm{softmax}\left(\phi(e_{q,v})\frac{q^\top k}{\sqrt{d}}\right)v, \quad (3)$$\nwhere $e_{q,v} \in$ {reply-to, replied-by, reply-self, indirect-reply} as illustrated in Figure 1(b). On the one hand, the reply-to edge guides the modeling of what the current utterance should be like given the prior utterance it replies to. On the other hand, the replied-by edge focuses on how the posterior utterances amend the modeling of the current utterance, inspired by Li et al. (2022). In addition, the reply-self edge determines how much of the original semantics should be kept. Finally, the rest of the utterances are connected through the indirect-reply edge for contextualization. It is notable that the relationship between two utterances is assigned to all tokens of those utterances. With these four types of edges, different relationships between utterances can be distinguished and the contextualized encoding can be conducted following a graph-induced topology. The dependency of an utterance on another can thus be well modeled for better MPC understanding. Afterwards, the operations of residual connection, layer normalization and feed-forward network are applied as in a standard Transformer encoder layer (Vaswani et al., 2017). Finally, the combination of all the above operations is performed L times to derive deep contextualized representations for MPC understanding.\nThree downstream tasks are employed to evaluate MPC understanding as comprehensively as possible, aiming at the issues of addressing whom, who is speaking and what is being said. When fine-tuning on each downstream task, all parameters are updated. Figure 2 shows the input representations and model architectures for the three tasks respectively."
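The edge-type-dependent refinement of Eq. (3) can be sketched as follows. This is a single-head, PyTorch-style illustration rather than the authors' TensorFlow implementation; the class name, the ordering of the four edge types and the initialization of the edge scalars are assumptions. It keeps to the paper's claim of only four extra parameters per encoding layer.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphInducedAttention(nn.Module):
    """Single-head sketch of Eq. (3): softmax(phi(e_{q,v}) * q^T k / sqrt(d)) v."""
    EDGE_TYPES = 4  # 0: reply-to, 1: replied-by, 2: reply-self, 3: indirect-reply

    def __init__(self, d: int):
        super().__init__()
        self.wq, self.wk, self.wv = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        # One learnable scalar per edge type: 4 additional parameters per layer.
        self.phi = nn.Parameter(torch.ones(self.EDGE_TYPES))
        self.d = d

    def forward(self, h, edge_type):
        # h: (seq_len, d) token representations
        # edge_type: (seq_len, seq_len) long tensor with values in {0, 1, 2, 3},
        #            derived from the reply structure of the MPC.
        q, k, v = self.wq(h), self.wk(h), self.wv(h)
        logits = q @ k.t() / math.sqrt(self.d)     # (seq_len, seq_len)
        logits = self.phi[edge_type] * logits      # refine weights by edge type
        attn = F.softmax(logits, dim=-1)
        return attn @ v
```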
}, { "figure_ref": [], "heading": "Addressee Recognition", "publication_ref": [ "b1", "b32" ], "table_ref": [], "text": "In this paper, we follow the experimental setting in Ouchi and Tsuboi (2016) and Zhang et al. (2018) where models are tasked to recognize the addressee of the last utterance in a conversation. Formally, models are asked to predict $\hat{a}_N$ given $\{(s_n, u_n, a_n)\}_{n=1}^{N} \backslash a_N$, where $\hat{a}_N$ is selected from the interlocutor set of this conversation and $\backslash$ denotes exclusion. When fine-tuning, this task is reformulated as finding a preceding utterance from the same addressee.\n$U_n$ is a sequence of utterance tokens. A [CLS] token is inserted at the start of each utterance, denoting the utterance-level representation of each individual utterance. Then, all utterances in a conversation are concatenated and a [SEP] token is inserted at the end of the whole sequence. It is notable that the reply-to edge of the last utterance is masked to avoid leakage. After being encoded by PLMs, the contextualized representations of the [CLS] tokens representing individual utterances are extracted. A task-dependent non-linear transformation layer is placed on top of the PLM in order to adapt its output to different tasks. Next, layer normalization is performed to derive the utterance representations for this specific task $\{u_n\}_{n=1}^{N}$, where $u_n \in \mathbb{R}^d$. Then, for the last utterance $U_N$, its reply-to matching scores with all its preceding utterances are calculated as\n$$m_n^N = \mathrm{softmax}(u_N^\top \cdot A \cdot u_n), \quad n < N, \quad (4)$$\nwhere $m_n^N$ is defined as the probability of the speaker of $U_n$ being the addressee of $U_N$. Then, the utterance with the highest score is selected and the speaker of the selected utterance is considered as the recognized addressee. Finally, the fine-tuning objective of this task is to minimize the cross-entropy loss as\n$$\mathcal{L}_{ar} = -\sum_{n=1}^{N-1} y_n^N \log(m_n^N), \quad (5)$$\nwhere $y_n^N = 1$ if the speaker of $U_n$ is the addressee of $U_N$ and $y_n^N = 0$ otherwise." }, { "figure_ref": [], "heading": "Speaker Identification", "publication_ref": [ "b12" ], "table_ref": [], "text": "We follow the experimental setting in Gu et al. (2021) where models are tasked to identify the speaker of the last utterance in a conversation. Formally, models are asked to predict $\hat{s}_N$ given $\{(s_n, u_n, a_n)\}_{n=1}^{N} \backslash s_N$, where $\hat{s}_N$ is selected from the interlocutor set of this conversation. When fine-tuning, this task is reformulated as identifying the utterances sharing the same speaker.\nFirst, the speaker embedding of the last utterance in the input representation is masked to avoid information leakage. Similar to the task of addressee recognition, the operations of PLM encoding, extraction of the [CLS] token representations, non-linear transformation and layer normalization are performed. For the last utterance $U_N$, its identical-speaker matching scores $m_n^N$ with all preceding utterances are calculated similarly to Eq. (4). Here, $m_n^N$ denotes the probability of $U_N$ and $U_n$ sharing the same speaker. The fine-tuning objective of this task is to minimize the cross-entropy loss similarly to Eq. (5). Here, $y_n^N = 1$ if $U_n$ shares the same speaker with $U_N$ and $y_n^N = 0$ otherwise." }, { "figure_ref": [], "heading": "Response Selection", "publication_ref": [], "table_ref": [], "text": "This task asks models to select $\hat{u}_N$ from a set of response candidates given the conversation context $\{(s_n, u_n, a_n)\}_{n=1}^{N} \backslash u_N$, which is an important retrieval-based approach for chatbots.
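Before the response selection task is detailed, the bilinear matching head of Eqs. (4)-(5), used for addressee recognition and reused for speaker identification with identical-speaker labels, can be sketched as below. The sketch is PyTorch-style, not the authors' released TensorFlow code; the class name, the parameter initialization and the exact form of the task-dependent non-linear transformation are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UtteranceMatchingHead(nn.Module):
    """Sketch of Eqs. (4)-(5): bilinear matching over utterance [CLS] vectors."""
    def __init__(self, d: int):
        super().__init__()
        # Task-dependent non-linear transformation followed by layer normalization.
        self.transform = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.LayerNorm(d))
        self.A = nn.Parameter(torch.empty(d, d))
        nn.init.xavier_uniform_(self.A)

    def forward(self, cls_vectors, target_index=None):
        # cls_vectors: (N, d) [CLS] representations of the N utterances.
        u = self.transform(cls_vectors)
        u_last, u_prev = u[-1], u[:-1]                 # U_N vs. U_1 .. U_{N-1}
        logits = u_prev @ (self.A.t() @ u_last)        # u_N^T A u_n for each n < N
        probs = F.softmax(logits, dim=-1)              # m_n^N over preceding utterances
        loss = None
        if target_index is not None:                   # index of the labelled utterance
            loss = F.cross_entropy(logits.unsqueeze(0),
                                   torch.tensor([target_index]))
        return probs, loss
```

At inference time, the speaker of the highest-scoring preceding utterance is returned as the recognized addressee (or as the identified speaker in the speaker identification task).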
The key is to measure the similarity between the two segments of context and response.\nFormally, the utterances in a context are first concatenated to form one segment, and each response candidate forms the other segment. Then, the two segments are concatenated with a [SEP] token and a [CLS] token is inserted at the beginning of the whole sequence.\nThe contextualized representation $e_{[CLS]}$ of the first [CLS] token produced by the PLM is extracted, which is an aggregated representation containing the semantic matching information of the context-response pair. Then, $e_{[CLS]}$ is fed into a non-linear transformation with sigmoid activation to obtain the matching score between the context and the response as\n$$m_{cr} = \mathrm{sigmoid}(e_{[CLS]}^\top \cdot w + b), \quad (6)$$\nwhere $m_{cr}$ denotes the probability of semantic matching between the context and the response candidate, and $w \in \mathbb{R}^{d \times 1}$ and $b \in \mathbb{R}^{1}$ are parameters updated during fine-tuning. Finally, the fine-tuning objective of this task is to minimize the cross-entropy loss according to the true/false labels of the responses in the training set as\n$$\mathcal{L}_{rs} = -[y_{cr}\log(m_{cr}) + (1 - y_{cr})\log(1 - m_{cr})], \quad (7)$$\nwhere $y_{cr} = 1$ if the response $r$ is a proper one for the context $c$; otherwise $y_{cr} = 0$." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b0", "b1", "b18", "b1", "b32", "b18", "b12" ], "table_ref": [ "tab_0" ], "text": "We evaluated our proposed methods on two Ubuntu IRC benchmarks. One was released by Hu et al. (2019), in which both speaker and addressee labels were provided for each utterance. The other benchmark was released by Ouchi and Tsuboi (2016). Here, we adopted the version shared in Le et al. (2019) for fair comparison. The conversation sessions were separated into three categories according to the session length (Len-5, Len-10 and Len-15) following the splitting strategy of previous studies (Ouchi and Tsuboi, 2016; Zhang et al., 2018; Le et al., 2019; Gu et al., 2021). Table 1 presents the statistics of the two benchmarks evaluated in our experiments." }, { "figure_ref": [], "heading": "Baseline Models", "publication_ref": [ "b18", "b1", "b32", "b9", "b10", "b12" ], "table_ref": [], "text": "We compared the proposed method with (1) non-pre-training-based models including Preceding (Le et al., 2019), SRNN, DRNN (Ouchi and Tsuboi, 2016), SHRNN (Serban et al., 2016) and SIRNN (Zhang et al., 2018), as well as (2) pre-training-based models including BERT (Devlin et al., 2019), SA-BERT (Gu et al., 2020) and MPC-BERT (Gu et al., 2021). Readers can refer to Appendix A for implementation details of the baseline models." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b13", "b0", "b1", "b0", "b1", "b1", "b8" ], "table_ref": [], "text": "The base versions of the various PLMs were adopted for all our experiments. GELU (Hendrycks and Gimpel, 2016) was employed as the activation for all non-linear transformations. The Adam method (Kingma and Ba, 2015) was employed for optimization. The learning rate was initialized as 0.00002 and the warmup proportion was set to 0.1. Some configurations differed according to the characteristics of the datasets. For Hu et al. (2019), the maximum utterance number was set to 7 and the maximum sequence length was set to 230.
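As a compact illustration only, the settings stated so far for the Hu et al. (2019) benchmark can be collected into a configuration dictionary; the key names are ours rather than from the released code, and the remaining dataset-specific values (fine-tuning epochs, batch sizes) follow in the text.

```python
# Illustrative fine-tuning configuration for the Hu et al. (2019) benchmark,
# collecting the hyperparameters stated above; key names are assumptions.
GIFT_FINETUNE_CONFIG = {
    "plm_variant": "base",          # base version of BERT / SA-BERT / MPC-BERT
    "activation": "gelu",
    "optimizer": "adam",
    "learning_rate": 2e-5,
    "warmup_proportion": 0.1,
    "max_utterances": 7,
    "max_sequence_length": 230,
}
```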
For the three experimental settings in Ouchi and Tsuboi (2016), the maximum utterance numbers were set to 5, 10 and 15 respectively, and the maximum sequence lengths were set to 120, 220 and 320 respectively. For Hu et al. (2019), the fine-tuning process was performed for 10 epochs for addressee recognition, 10 epochs for speaker identification, and 5 epochs for response selection. For Ouchi and Tsuboi (2016), the finetuning epochs were set to 5, 5 and 3 for these three tasks respectively. The batch sizes were set to 16 for Hu et al. (2019), and 40, 20, and 12 for the three experimental settings in Ouchi and Tsuboi (2016) respectively. The fine-tuning was performed using a GeForce RTX 2080 Ti GPU. The validation set was used to select the best model for testing. All codes were implemented in the TensorFlow framework (Abadi et al., 2016) and are published to help replicate our results.2 " }, { "figure_ref": [], "heading": "Metrics and Results", "publication_ref": [ "b1", "b32", "b18", "b12", "b0", "b1" ], "table_ref": [ "tab_1", "tab_2" ], "text": "Addressee recognition We followed the metric of previous work (Ouchi and Tsuboi, 2016;Zhang et al., 2018;Le et al., 2019;Gu et al., 2021) by employing precision@1 (P@1) to evaluate the performance of utterance prediction.\nTable 2 presents the results of addressee recognition. It shows that GIFT helps improve the performance of all three PLMs on all test sets. In detail, BERT fine-tuned with GIFT (BERT w/ GIFT) outperformed its counterpart, i.e., finetuning BERT without graph-induced signals, by margins of 2.92%, 2.73%, 5.75% and 5.08% on these test sets respectively in terms of P@1. In addition, GIFT improved the performance of SA- Hu et al. (2019) Ouchi and Tsuboi (2016) Len-5 Len-10 Len-15 Preceding (Le et al., BERT by margins of 1.32%, 2.50%, 4.26% and 5.22%, and of MPC-BERT by margins of 0.64%, 1.64%, 3.46% and 4.63% on these test sets respectively. These results verified the effectiveness and generalization of the proposed fine-tuning method.\nSpeaker identification Similarly, P@1 was employed as the evaluation metric of speaker identification for comparing performance. Table 3 presents the results of speaker identification. It also shows that GIFT helps improve the performance of all three PLMs on all test sets. In detail, GIFT improved the performance of BERT by margins of 13.71%, 27.50%, 29.14% and 28.82%, of SA-BERT by margins of 12.14%, 25.05%, 25.14% and 26.59%, as well as of MPC-BERT by margins of 6.96%, 23.05%, 23.12% and 22.99% in terms of P@1 on these test sets respectively. From these results, we can see that the proposed fine-tuning method are particularly useful for speaker identification." }, { "figure_ref": [], "heading": "Response selection", "publication_ref": [ "b1", "b32", "b12" ], "table_ref": [ "tab_3" ], "text": "The R n @k metrics adopted by previous studies (Ouchi and Tsuboi, 2016;Zhang et al., 2018;Gu et al., 2021) were used here. Each model was tasked with selecting k bestmatched responses from n available candidates for the given conversation context, and we calculated the recall of the true positive replies among the k selected responses, denoted as R n @k. Two settings were followed in which k was set to 1, and n was set to 2 or 10.\nTable 4 presents the results of response selection. 
Specifically, GIFT improved the performance of BERT by margins of 2.48%, 2.12%, 2.71% and 2.34%, of SA-BERT by margins of 3.04%, 4.16%, 5.18% and 5.35%, as well as of MPC-BERT by margins of 1.76%, 0.88%, 2.15% and 2.44% in terms of R 10 @1 on these test sets respectively. From these results, we can get inspired that the graph-induced signals introduced to construct conversation structures were crucial for deep context understanding to select an appropriate response." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "Discussions", "publication_ref": [ "b0", "b1", "b12" ], "table_ref": [ "tab_4" ], "text": "Ablations To further illustrate the effectiveness of each component of the graph-induced topol- ogy, three ablation tests were performed on the validation set of Hu et al. (2019) and the results were shown in Table 5. First, both reply-to and replied-by edges were ablated by merging these two types of edges with in-direct edges. The performance dropped significantly since these two types of edges constituted the majority of the conversation structure topology. Furthermore, reply-to or replied-by edges were ablated by merging these two types of edges together without distinguishing the bidirectional reply relationships between utterances. The performance drop verified the necessity of modeling what it uttered and what it received respectively. Finally, reply-self edges were merged with in-direct edges, showing that it is useful to distinguish self-replying from others.\nImpact of conversation length Figure 3 illustrated how the performance of BERT, SA-BERT and MPC-BERT, as well as those implemented with GIFT changed with respect to different session lengths on three downstream tasks and on the test sets of Ouchi and Tsuboi (2016). First, we can draw the conclusions that the performance of addressee recognition and speaker identification dropped, while the performance of response selection was significantly improved for all models as the session length increased, which was consistent with the findings in Gu et al. (2021). Furthermore, to quantitatively compare the performance difference at different session lengths, the performance margins between Len-5 and Len-10, as well as those between Len-10 and Len-15 were calculated.\nReaders can refer to Table 6 in Appendix B for details of these margins. From the results, it can be seen that as the session length increased, the performance of models with GIFT dropped more slightly on addressee recognition and speaker identification, and enlarged more on response selection, than the models without GIFT in most 14 out of 18 cases (including every 2 margins across lengths 5-10-15 for each model on each task). These results implied the superiority of introducing graph-induced signals on modeling long MPCs with complicated structures. reply edges generally followed the trend of first rising, then falling, and finally rising again. In addition, the values of this edge were always the minimum among all four edges at the beginning, and surprisingly became the maximum in the last layer (to clarify, 0.9834, 0.9825 and 0.9821 for indirect-reply, reply-to and replied-by edges of the 12th layer in Figure 4(c) respectively). It is likely that models have learned human behavior in MPCs, i.e., paying less attention to utterances that are not the most relevant to themselves at first glance. After comprehending the most relevant utterances, turn to indirectly related ones in context for fully understanding the entire conversation." 
}, { "figure_ref": [], "heading": "Visualization of weights", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we present graph-induced finetuning (GIFT), a plug-and-play and lightweight method that distinguishes the relationships between utterances for MPC understanding. The sparse but distinctive dependency of an utterance on another among those in an MPC is modeled by utilizing the edge-type-dependent parameters to refine the attention weights during the internal encoding of PLMs. Experimental results on three downstream tasks show that GIFT significantly helps improve the performance of three PLMs and achieves new state-of-the-art performance on two benchmarks. Obviously, the addressee labels of utterances in the conversation history are important for building the inherent graph structure required for graphinduced fine-tuning. However, an MPC with a few addressee labels missing is a common issue. In the future, it will be part of our work to investigate the scarcity of addressee labels." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Enabling dialogue agents to join multi-party conversations naturally is undoubtedly a crucial step towards building human-like conversational AI, especially as such technology becomes more affordable and portable. More crucially, research on multi-party conversations has the promising potential to improve the interactive experience between humans and machines. Although the proposed method has shown great performance and generalization ability across various models and tasks, however, we never lose the sight of the other side of the coin. The proposed method requires full interactions among utterances in multi-head attention of Transformers. Therefore, computational complexity and inference latency may be worth considering when deploying to online dialogue systems. Aside from the well-known difficulties in deployment, the proposed method was only evaluated on the domain-specific datasets, i.e., Ubuntu IRC, considering the constraints of dataset resources. In the future, we will try to search more open-domain datasets for multi-party conversations, and test if the proposed method can still show great performance on a more challenging open-domain setting." }, { "figure_ref": [], "heading": "A Baseline Models", "publication_ref": [ "b18", "b2", "b32", "b3" ], "table_ref": [], "text": "We compared GIFT with these baseline methods.\nA.1 Non-pre-training-based Models • Preceding Le et al. (2019) was a heuristic method where the addressee was designated as the preceding speaker of the current speaker.\n• SRNN and DRNN Ouchi and Tsuboi (2016) proposed the static or dynamic recurrent neural network-based models (SRNN or DRNN) where the speaker embeddings were fixed or updated with the conversation flow.\n• SHRNN Inspired by Serban et al. ( 2016), Zhang et al. (2018) implemented Static-Hier-RNN (SHRNN), a hierarchical version of SRNN. It first built utterance embeddings from words and then processed utterance embeddings using high-level RNNs.\n• SIRNN Zhang et al. (2018) proposed a speaker interaction RNN-based model (SIRNN). This model distinguished the interlocutor roles (sender, addressee, observer) at a finer granularity and updated the speaker embeddings role-sensitively, since interlocutors might play one of the three roles at each turn and those roles vary across turns." 
}, { "figure_ref": [], "heading": "A.2 Pre-training-based Models", "publication_ref": [ "b9", "b10", "b12" ], "table_ref": [], "text": "The proposed GIFT was implemented into three PLMs.\n• BERT (Devlin et al., 2019) was pre-trained to learn universal language representations on a large amount of general corpora with the self-supervised tasks of MLM and NSP.\n• SA-BERT (Gu et al., 2020) added speaker embeddings and further pre-trained BERT on a domain-specific corpus to incorporate domain knowledge. We re-implemented SA-BERT on the same pre-training corpus used in this paper to ensure fair comparison.\n• MPC-BERT (Gu et al., 2021) was pre-trained with two major types of self-supervised tasks for modeling interlocutor structures and utterance semantics in a unified framework. (2016). For models with GIFT, numbers marked with ‡ denoted larger performance improvement or less performance drop compared with the corresponding models without GIFT." }, { "figure_ref": [], "heading": "B Impact of Conversation Length", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "To quantitatively compare the performance difference at different session lengths, the performance margins between Len-5 and Len-10, as well as those between Len-10 and Len-15 were calculated. Table 6 presents the details of these margins. From the results, it can be seen that as the session length increased, the performance of models with GIFT dropped more slightly on addressee recognition and speaker identification, and enlarged more on response selection, than the models without GIFT in most 14 out of 18 cases (including every 2 margins across lengths 5-10-15 for each model on each task)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the Opening Foundation of State Key Laboratory of Cognitive Intelligence, iFLYTEK COGOS-2022005. We thank anonymous reviewers for their valuable comments." } ]
Addressing the issues of who says what to whom in multi-party conversations (MPCs) has recently attracted a lot of research attention. However, existing methods for MPC understanding typically embed interlocutors and utterances into sequential information flows, or exploit only the surface of the inherent graph structures in MPCs. To this end, we present a plug-and-play and lightweight method named graph-induced fine-tuning (GIFT) which can adapt various Transformer-based pre-trained language models (PLMs) for universal MPC understanding. In detail, the full and equivalent connections among utterances in a regular Transformer ignore the sparse but distinctive dependency of an utterance on another in MPCs. To distinguish different relationships between utterances, four types of edges are designed to integrate graph-induced signals into the attention mechanism, refining PLMs originally designed for processing sequential texts. We evaluate GIFT by implementing it into three PLMs and testing its performance on three downstream tasks including addressee recognition, speaker identification and response selection. Experimental results show that GIFT can significantly improve the performance of the three PLMs on the three downstream tasks and two benchmarks with only 4 additional parameters per encoding layer, achieving new state-of-the-art performance on MPC understanding.
GIFT: Graph-Induced Fine-Tuning for Multi-Party Conversation Understanding
[ { "figure_caption": "Figure 2 :2Figure 2: Input representations and model architectures when fine-tuning on (a) addressee recognition, (b) speaker identification and (c) response selection. Specifically for U 3 , it illustrates how the graph-induced signals of the conversation structure in Figure 1(b) are utilized during Transformer-based encoding.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance of models fine-tuned with or without graph-induced signals at different session lengths on the test sets of Ouchi and Tsuboi (2016) of three downstream tasks.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The weights of four types of edges in different encoding layers of MPC-BERT fine-tuned on the training set of Hu et al. (2019) of three downstream tasks.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Statistics of the two benchmarks evaluated in this paper.", "figure_data": "DatasetsTrain Valid TestHu et al. (2019)311,725 5,000 5,000Len-5 461,120 28,570 32,668Ouchi and Tsuboi (2016)Len-10 495,226 30,974 35,638Len-15 489,812 30,815 35,385", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation results of addressee recognition on the test sets in terms of P@1. Results except ours are cited fromOuchi and Tsuboi (2016) andZhang et al. (2018). Numbers marked with † denoted that the improvements after implementing GIFT were statistically significant (t-test with p-value < 0.05) comparing with the corresponding PLMs. Numbers in bold denoted that the results achieved the best performance.", "figure_data": "2019)-55.7355.6355.62SRNN (Ouchi and Tsuboi, 2016)-60.2660.6660.98SHRNN (Serban et al., 2016)-62.2464.8665.89DRNN (Ouchi and Tsuboi, 2016)-63.2866.7068.41SIRNN (Zhang et al., 2018)-72.5977.1378.53.BERT (Devlin et al., 2019)82.8880.2275.3274.03SA-BERT (Gu et al., 2020)86.9881.9978.2776.84MPC-BERT (Gu et al., 2021)89.5484.2180.6778.98BERT w/ GIFT85.80 †82.95 † 81.07 † 79.11 †SA-BERT w/ GIFT88.30 †84.49 † 82.53 † 82.65 †MPC-BERT w/ GIFT90.1885.85 † 84.13 † 83.61 †Hu et al. (2019) Ouchi and Tsuboi (2016)Len-5 Len-10 Len-15BERT (Devlin et al., 2019)71.8162.2453.1751.58SA-BERT (Gu et al., 2020)75.8864.9657.6254.28MPC-BERT (Gu et al., 2021)83.5467.5661.0058.52BERT w/ GIFT85.52 †89.74 † 82.31 † 80.40 †SA-BERT w/ GIFT88.02 †90.01 † 82.76 † 80.87 †MPC-BERT w/ GIFT90.50 †90.61", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Evaluation results of speaker identification on the test sets in terms of P@1. Results except ours are cited fromGu et al. (2021).", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Evaluation results of response selection on the test sets. Results except ours are cited fromOuchi and Tsuboi (2016),Zhang et al. (2018) andGu et al. 
(2021).", "figure_data": "ARSIRS(P@1) (P@1) (R 10 @1)BERT w/ GIFT86.24 86.50 75.26w/o reply-to and replied-by 84.38 70.67 72.30w/o reply-to or replied-by 85.72 85.67 74.00w/o reply-self85.72 85.92 74.72SA-BERT w/ GIFT88.88 89.32 78.80w/o reply-to and replied-by 86.90 77.07 77.50w/o reply-to or replied-by 88.44 88.87 78.22w/o reply-self88.42 89.05 78.32MPC-BERT w/ GIFT90.78 91.72 81.08w/o reply-to and replied-by 90.38 84.32 79.60w/o reply-to or replied-by 90.52 90.90 80.22w/o reply-self90.46 91.10 80.02Table 5: Evaluation results of the ablation tests onthe validation set of Hu et al. (2019) on the tasks ofaddressee recognition (AR), speaker identification (SI),and response selection (RS).", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance change of models as the session length increased on the test sets of Ouchi and Tsuboi", "figure_data": "Len 5 → Len 10 Len 10 → Len 15AR (P@1)BERT-4.90-1.29BERT w. GIFT-1.88 ‡-1.96SA-BERT-3.72-1.43SA-BERT w. GIFT-1.96 ‡-0.47 ‡MPC-BERT-3.54-1.69MPC-BERT w. GIFT-1.72 ‡-0.52 ‡SI (P@1)BERT-9.07-1.59BERT w. GIFT-7.43 ‡-1.91SA-BERT-7.34-3.34SA-BERT w. GIFT-7.25 ‡-1.89 ‡MPC-BERT-6.56-2.48MPC-BERT w. GIFT-6.49 ‡-2.61RS (R 10 @1)BERT+3.46+1.51BERT w. GIFT+4.05 ‡+1.14SA-BERT+4.03+1.15SA-BERT w. GIFT+5.05 ‡+1.32 ‡MPC-BERT+3.87+1.82MPC-BERT w. GIFT+5.14 ‡+2.11 ‡", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" } ]
Jia-Chen Gu; Zhen-Hua Ling; Quan Liu; Cong Liu; Guoping Hu
[ { "authors": " Hu", "journal": "", "ref_id": "b0", "title": "", "year": "2019" }, { "authors": "Tsuboi Ouchi", "journal": "", "ref_id": "b1", "title": "Len-5 Len-10 Len-15 R 2 @", "year": "2016" }, { "authors": "Tsuboi Drnn (ouchi", "journal": "", "ref_id": "b2", "title": "", "year": "2016" }, { "authors": " Sirnn (zhang", "journal": "", "ref_id": "b3", "title": "", "year": "2018" }, { "authors": " Bert (devlin", "journal": "", "ref_id": "b4", "title": "", "year": "2019" }, { "authors": "Sa-Bert ( Gu", "journal": "", "ref_id": "b5", "title": "", "year": "2020" }, { "authors": "Mpc-Bert ( Gu", "journal": "", "ref_id": "b6", "title": "", "year": "2021" }, { "authors": "", "journal": "MPC-BERT w", "ref_id": "b7", "title": "", "year": "1997" }, { "authors": "Martín Abadi; Paul Barham; Jianmin Chen; Zhifeng Chen; Andy Davis; Jeffrey Dean; Matthieu Devin; Sanjay Ghemawat; Geoffrey Irving; Michael Isard; Manjunath Kudlur; Josh Levenberg; Rajat Monga; Sherry Moore; Derek Gordon Murray; Benoit Steiner; Paul A Tucker; Vijay Vasudevan; Pete Warden; Martin Wicke; Yuan Yu; Xiaoqiang Zheng", "journal": "", "ref_id": "b8", "title": "Tensorflow: A system for large-scale machine learning", "year": "2016-11-02" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b9", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019-06-02" }, { "authors": "Jia-Chen Gu; Tianda Li; Quan Liu; Zhen-Hua Ling; Zhiming Su; Si Wei; Xiaodan Zhu", "journal": "", "ref_id": "b10", "title": "Speaker-aware BERT for multi-turn response selection in retrieval-based chatbots", "year": "2020-10-19" }, { "authors": "Jia-Chen Gu; Chao-Hong Tan; Chongyang Tao; Zhen-Hua Ling; Huang Hu; Xiubo Geng; Daxin Jiang", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "HeterMPC: A heterogeneous graph neural network for response generation in multiparty conversations", "year": "2022-05-22" }, { "authors": "Jia-Chen Gu; Chongyang Tao; Zhen-Hua Ling; Can Xu; Xiubo Geng; Daxin Jiang", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "MPC-BERT: A pre-trained language model for multiparty conversation understanding", "year": "2021-08-01" }, { "authors": "Dan Hendrycks; Kevin Gimpel", "journal": "", "ref_id": "b13", "title": "Bridging nonlinearities and stochastic regularizers with gaussian error linear units", "year": "2016" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural Computation", "ref_id": "b14", "title": "Long short-term memory", "year": "1997" }, { "authors": "Wenpeng Hu; Zhangming Chan; Bing Liu; Dongyan Zhao; Jinwen Ma; Rui Yan", "journal": "", "ref_id": "b15", "title": "GSN: A graph-structured network for multi-party dialogues", "year": "2019-08-10" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "year": "2015-05-07" }, { "authors": "Jonathan K Kummerfeld; R Sai; Joseph Gouravajhala; Peper; R Chulaka Vignesh Athreya; Jatin Gunasekara; Ganhotra; Sankalp Siva; Lazaros C Patel; Walter S Polymenakos; Lasecki", "journal": "", "ref_id": "b17", "title": "A large-scale corpus for conversation disentanglement", "year": "2019-07-28" }, { "authors": "Ran Le; Wenpeng Hu; Mingyue Shang; Zhenjun You; Lidong Bing; Dongyan Zhao; Rui Yan", "journal": "", "ref_id": "b18", "title": "Who is speaking to whom? 
learning to identify utterance addressee in multi-party conversations", "year": "2019-11-03" }, { "authors": "Yiyang Li; Hai Zhao; Zhuosheng Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Back to the future: Bidirectional information decoupling network for multi-turn dialogue modeling", "year": "2022-12-07" }, { "authors": "Hui Liu; Zhan Shi; Jia-Chen Gu; Quan Liu; Si Wei; Xiaodan Zhu", "journal": "", "ref_id": "b20", "title": "End-to-end transition-based online dialogue disentanglement", "year": "2020" }, { "authors": "Hui Liu; Zhan Shi; Xiaodan Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Unsupervised conversation disentanglement through cotraining", "year": "2021-07-11" }, { "authors": "Zhao Meng; Lili Mou; Zhi Jin", "journal": "European Language Resources Association (ELRA", "ref_id": "b22", "title": "Towards neural speaker modeling in multi-party conversation: The task, dataset, and models", "year": "2018-05-07" }, { "authors": "Hiroki Ouchi; Yuta Tsuboi", "journal": "", "ref_id": "b23", "title": "Addressee and response selection for multi-party conversation", "year": "2016-11-01" }, { "authors": "Stephen Roller; Emily Dinan; Naman Goyal; Da Ju; Mary Williamson; Yinhan Liu; Jing Xu; Myle Ott; Eric Michael Smith; Y-Lan Boureau; Jason Weston", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Recipes for building an open-domain chatbot", "year": "2021-04-19" }, { "authors": "Franco Scarselli; Marco Gori; Ah Chung Tsoi; Markus Hagenbuchner; Gabriele Monfardini", "journal": "IEEE Trans. Neural Networks", "ref_id": "b25", "title": "The graph neural network model", "year": "2009" }, { "authors": "Iulian Vlad Serban; Alessandro Sordoni; Yoshua Bengio; Aaron C Courville; Joelle Pineau", "journal": "", "ref_id": "b26", "title": "Building end-to-end dialogue systems using generative hierarchical neural network models", "year": "2016-02-12" }, { "authors": "Lifeng Shang; Zhengdong Lu; Hang Li", "journal": "", "ref_id": "b27", "title": "Neural responding machine for short-text conversation", "year": "2015-07-26" }, { "authors": "Chongyang Tao; Wei Wu; Can Xu; Wenpeng Hu; Dongyan Zhao; Rui Yan", "journal": "", "ref_id": "b28", "title": "One time of interaction may not be enough: Go deep with an interaction-over-interaction network for response selection in dialogues", "year": "2019-07-28" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b29", "title": "Attention is all you need", "year": "2017-09" }, { "authors": "Weishi Wang; C H Steven; Shafiq R Hoi; Joty", "journal": "", "ref_id": "b30", "title": "Response selection for multi-party conversations with dynamic topic tracking", "year": "2020-11-16" }, { "authors": "Yu Wu; Wei Wu; Chen Xing; Ming Zhou; Zhoujun Li", "journal": "", "ref_id": "b31", "title": "Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots", "year": "2017-07-30" }, { "authors": "Rui Zhang; Honglak Lee; Lazaros Polymenakos; Dragomir R Radev", "journal": "", "ref_id": "b32", "title": "Addressee and response selection in multi-party conversations with speaker interaction rnns", "year": "2018-02-02" }, { "authors": "Yizhe Zhang; Siqi Sun; Michel Galley; Yen-Chun Chen; Chris Brockett; Xiang Gao; Jianfeng Gao; Jingjing Liu; Bill Dolan", "journal": "Association for Computational Linguistics", 
"ref_id": "b33", "title": "DIALOGPT : Large-scale generative pre-training for conversational response generation", "year": "2020-07-05" }, { "authors": "Xiangyang Zhou; Lu Li; Daxiang Dong; Yi Liu; Ying Chen; Wayne Xin Zhao; Dianhai Yu; Hua Wu", "journal": "", "ref_id": "b34", "title": "Multi-turn response selection for chatbots with deep attention matching network", "year": "2018-07-15" } ]
[ { "formula_coordinates": [ 4, 81.95, 111.88, 434.25, 104.75 ], "formula_id": "formula_0", "formula_text": "+ (a) Addressee Recognition (b) Speaker Identification (c) Response Selection U1 U2 U3 U4 U5 U6 U7 U8 U1 U2 U3 U4 U5 U6 U7 U8 L layers of PLMs U1 U2 U3 U4 U5 U6 U7 U8 U1 U2 U3 U4 U5 U6 U7 U8 U1 U2 U3 U4 U5 U6 U7 U8 U1 U2 U3 U4 U5 U6 U7 U8 U1 Un" }, { "formula_coordinates": [ 4, 70.47, 637.53, 219.39, 57 ], "formula_id": "formula_1", "formula_text": "head i = Atten(HW q i , HW k i , HW v i ), (1) MultiHead(H) = [head 1 , ..., head h ]W o , (2) where W q i ∈ R d× d h , W k i ∈ R d× d h , W v i ∈ R d× d h" }, { "formula_coordinates": [ 4, 317.18, 371.78, 207.96, 27.87 ], "formula_id": "formula_2", "formula_text": "Atten(q, k, v) = softmax(ϕ(e q,v ) q ⊤ k √ d )v,(3)" }, { "formula_coordinates": [ 5, 87.2, 620.3, 202.66, 14.19 ], "formula_id": "formula_3", "formula_text": "m N n = softmax(u ⊤ N • A • u n ), n < N,(4)" }, { "formula_coordinates": [ 5, 349.49, 97.07, 175.65, 33.58 ], "formula_id": "formula_4", "formula_text": "L ar = - N -1 n=1 y N n log(m N n ),(5)" }, { "formula_coordinates": [ 6, 108.35, 219.59, 181.52, 13.82 ], "formula_id": "formula_5", "formula_text": "m cr = sigmoid(e ⊤ [CLS] • w + b),(6)" }, { "formula_coordinates": [ 6, 72.04, 350.41, 217.83, 23.36 ], "formula_id": "formula_6", "formula_text": "L rs = -[y cr log(m cr ) + (1 -y cr )log(1 -m cr )],(7)" } ]
2023-06-06
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b23", "b4", "b26", "b32", "b1", "b10", "b17", "b24", "b38", "b39", "b44", "b45", "b2", "b3", "b5", "b6", "b21", "b29", "b40", "b42", "b18" ], "table_ref": [], "text": "A N Automatic Fingerprint Recognition System (AFRS) is a user-friendly and cost-effective solution for biometric-based person recognition. It takes less time, computing resources and human effort to verify a person than other biometric recognition systems. Due to its ease of use and automation, AFRS is being used for verification or authentication of a person in security-related applications such as Aadhar verification, airports [24], international borders, etc. Its usage in such security-sensitive applications makes it vulnerable to various threats. A Presentation Attack (PA) is one of them which is imposed by creating an artifact of a genuine user's finger and presenting it to the sensing device of an AFRS. The PAs can be created in two ways i.e. noncooperative method of spoofing and cooperative method of spoofing. In the non-cooperative method, the latent fingerprint left on a surface is captured and then fabricated using spoofing material after digitization. On the other side, the user itself provides an impression of their fingers to create the spoof in the cooperative method. Apart from this, the discovery of novel spoofing materials also imposes a big challenge to the security of AFRS. These materials are used to fabricate more realistic artifacts of fingers. Fingerprint Presentation Attack Detection (FPAD) is a countermeasure to PAs. The FPAD methods can be classified into two broad categories that are hardware-based methods and softwarebased methods. Hardware-based methods require additional devices for the measurement of the natural properties of the finger such as temperature, pulse rate and humidity which makes them costly. On the other hand, softwarebased methods require only the fingerprint sample which makes them user-friendly and cost-effective. Therefore, our focus is on the development of a software-based method that will be able to detect the PAs created with the help of known as well as unknown spoofing materials.\nThe state-of-the-art software-based methods are further classified as perspiration and pore-based methods [5], [27], [33], statistical and handcrafted features-based methods [2], [11], [18], [25], [39], [40], [45], [46] and deep learning-based methods [3], [4], [6], [7], [22], [30], [41], [43]. Perspirationbased methods are proven to be insufficient because this property is affected by external temperature and other environmental factors. Along with this limitation, the feature extraction process of these methods requires multiple impressions of the same finger which makes it less user-friendly. Pore-based methods require the input samples to be of high-resolution (>1000 pixels per inch) which increases the cost of the FPAD system. Similarly, the quality of the sensing device impacts the performance of the statistical and handcrafted feature-based methods. In recent times, deep learning approaches have been adopted by various researchers due to their superior image classification capability. A set of convolutional filters possessed by them extracts minute features from input fingerprint samples. However, Convolutional Neural Networks (CNN) have the unmatched capability of extracting the discriminating features but they do not exhibit the same capability on fingerprint databases. 
The lack of texture and color information in fingerprint images is one of the possible reasons behind this. The depth of these networks makes them suffer from the vanishing gradient due to the lack of discriminating information. Hence some pre-processing is required in fingerprint databases to get good classification results.\nIn this paper, we propose a novel end-to-end architecture that consists of a heatmap generator and a modified ResNet classifier. The Heatmap generator is composed of an encoder-decoder block and a channel attention block. It converts the input sample into a heatmap by emphasizing the important features present in an input fingerprint sample. The encoder-decoder block highlights the features present in the region of interest in an image while the channel attention block finds discriminant features in the sample. The outcome of these aforementioned blocks is a single-channel heatmap which is fed to the modified ResNet classifier for the classification. The ResNet architecture [19] is modified to make it less computationally expensive while being trained and tested on the fingerprint samples. The modification is done by removing the redundant convolutional blocks while maintaining their spatial properties and reducing the number of learnable parameters as well. The proposed EXPlainable RESidual Slim NETwork (EXPRESSNET) model is validated using Liveness Detection Competition (LivDet) 2011, 2013, 2015, 2017 and 2019 databases. It outperforms existing FPAD methods in intra-sensor same-material and unknownmaterial protocols. The main contributions of this paper are discussed as follows.\n1. To the best of our knowledge, we are the first to introduce the concept of explainability of deep CNN in the area of FPAD.\n2. The proposed model highlights the driving features of input fingerprint samples by converting them into a singlechannel heatmap. In this way, discriminating features such as wetness, ridge and valley clarity and scars are highlighted for better classification.\n3. The proposed heatmap generator block can be attached to any CNN classifier to enhance its classification performance.\n4. The spatial properties of ResNet's feature maps are preserved along with a reduction in the number of learnable parameters by proposing modifications in the original ResNet architecture.\n5. A detailed comparison of the proposed model has been done against the spoofs created using cooperative and non-co-operative subjects as well as known and unknown spoofing materials.\nThe remainder of this paper is organized as follows. Section 2 discusses existing methodologies suggested by various researchers. Section 3 describes the design and working of the proposed architecture. In section 4, experimental results, as well as comparative analysis are given. Finally, the paper is concluded in section 6." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "FPAD is an essential tool for the AFRS to deal with PAs. As a countermeasure to PAs, researchers have proposed a variety of software-based solutions, which may be further categorized as pore and perspiration-based methods, statistical and handcrafted feature-based methods and deep learning-based methods. This section discusses the most recent approaches that fall into these categories, as well as their advantages and limitations." 
}, { "figure_ref": [], "heading": "Perspiration and pore based-methods", "publication_ref": [ "b9", "b1", "b12", "b27" ], "table_ref": [], "text": "The presence of small holes or pores in human skin causes perspiration in fingers. This natural property is not present in the spoofs fabricated with different materials. An initial study was proposed by Derakshani et al. [10]. They utilized the diffusion pattern of sweat as a feature to discriminate between live and spoof fingerprints. Later, Abhyankar et al. [2] proposed a wavelet-based method that utilizes the sweat feature of the fingerprint to detect PAs. Since, the pores are hard to reflect in the spoofs at the time of fabrication, the number of pores may differ in a live fingerprint and its spoofs created with different materials. This dissimilarity is utilized as a discriminating feature by Espinoza [13]. The proposed method is validated using a custom-made fingerprint database. Similarly, Marcialis et al. [28] captured two fingerprint impressions at an interval of five-second and then detects the pores in both impressions. The proposed method utilizes the number of pores present in both impressions as a feature for detecting PAs. The proposed method is validated using a custom-made fingerprint database that consists of 8960 live and 7760 spoof images. Though, the perspiration pattern is used for the detection of PAs, its presence depends on the atmosphere temperature. A live finger in a dry environment does not exhibit this property which causes the discard of the live sample by the FPAD system working on this feature. Moreover, the extraction of pores has been shown to be expensive since the fingerprint sensor must be capable of capturing high-definition samples (>=1000 pixels per inch). For the reasons stated above, perspiration and pore-based approaches are less user-friendly and cost-effective." }, { "figure_ref": [], "heading": "Statistical and handcrafted feature based-methods", "publication_ref": [ "b4", "b31", "b44", "b45", "b49", "b45", "b39", "b13", "b10", "b35", "b35", "b36", "b24", "b24", "b38", "b39" ], "table_ref": [], "text": "The skin of a finger and its counterpart, the fabricated spoofs, have different natural properties such as color, wetness and elasticity level which are reflected in the quality of the samples captured with fingerprint sensors. Statistical and handcrafted feature-based methods use quality features of the fingerprints for the detection of their liveness. Choi et al. [5] extracted histogram, directional contrast, ridge thickness and ridge signal features for detecting PAs. They utilized these features for the training of an SVM classifier. The proposed method is validated using the custom-made fingerprint database. Similarly, Park et al. [32] utilized statistical features including standard deviation, variance, skewness, kurtosis, hyper-skewness and hyper-flatness along with three additional features i.e. average brightness, standard deviation and differential image for the training of SVM to detect PAs. They validated their method using the ATVSFFp database which contains 272 real fingerprints and 270 spoof fingerprint samples. Further, Xia et al. [45] extracted second and third-order occurrence of gradients from fingerprint samples. They used these features for the training of the SVM classifier. The proposed method is validated using LivDet 2009 and 2011 databases. In another work [46], Xia et al. 
suggested a novel image descriptor that extracts intensity variance along with gradient properties of the fingerprint samples to form a feature vector. This feature vector is further used for the training of the SVM classifier. The proposed work is validated using LivDet 2011 and 2013 databases. Yuan et al. [50] in continuation to the work of [46], proposed a method that utilizes gradient property for the detection of the PAs. It creates two co-occurrence matrices using the Laplacian operator that compute image gradient values for different quantization operators. Further, The matrices are utilized as a feature vector for the training of the back-propagation neural network. The suggested method is validated using LivDet 2013 database. Since the live finger and its spoof have different levels of elasticity, it is reflected in the varying width of the ridges and valleys as well as the quality of their image samples. Sharma et al. [40] [14] utilized BSIF which is obtained by applying a set of predefined filters whose output is then converted to a binary sequence. This binary sequence is used as a feature vector for the training of the SVM classifier. The proposed method is tested on LivDet 2011 database. The varying elasticity of the live fingers and corresponding spoofs causes a significant difference in their shapes and textures also. Further, Dubey et al. [11] suggested a shape and texture featurebased method. They utilized Speeded Up Robust Feature (SURF) and Pyramid extension of Histogram of Gradient (PHOG) to extract shape information from the fingerprint sample. Along with the aforementioned features, the Gabor wavelet is used by them to extract the texture information. The proposed method is validated using LivDet 2011 and 2013 databases. Ajita et al. [36] proposed a novel method for the detection of PAs created with unknown materials. They suggested the use of an Adaptive-Boost (AdaBoost) multi-class classifier that classifies an input fingerprint as live, spoof and unknown. The Fingerprint samples detected as 'unknown' are further used to train the classifier to detect their presence in the future. The proposed method is tested on LivDet 2011 database. In continuation to their previous work [36], Ajita et al. [37] suggested the use of a Weibullcalibrated SVM classifier for the detection of PAs. This SVM is a combination of 1-class as well as binary SVM. This modification shows a significant improvement as compared with the results on LivDet 2011 database. Kim et al. [25] proposed a novel image descriptor that utilizes the local coherence of the fingerprint sample as a feature for the training of SVM. The proposed method is validated using ATVSFFp and LivDet 2009, 2011, 2013 and 2015 databases. The efficacy of these methods depends on the quality of the input fingerprint sample which further depends on the sensing device. Some of the aforementioned methods [25], [39], [40] have shown remarkable performance against the PAs created using known fabrication materials but do not resemble the same against the spoofs created using novel materials." }, { "figure_ref": [], "heading": "Deep learning based-methods", "publication_ref": [ "b7", "b43", "b8", "b3", "b29", "b42", "b6", "b5", "b6", "b51" ], "table_ref": [], "text": "Deep CNNs can extract minute information from image samples since they have convolutional layers. These models have shown excellent classification capabilities when evaluated on imagenet [8], CIFAR [44] and MNIST [9] databases. 
This benefit led researchers to use CNNs in the detection of PAs as well. This section discusses state-of-the-art deep learning-based FPAD methods. Arora et al. [4] proposed a robust framework to detect presentation attacks in fingerprint biometric systems that involves contrast enhancement using histogram equalization. Fingerprint samples after preprocessing are fed to the VGG classifier. The proposed work is validated on benchmark fingerprint databases which include FVC 2006, ATVSFFp, Finger vein data-set, LivDet 2013 and 2015 databases. Similarly, Nogueira et al. [30] utilized pre-trained CNN architectures using transfer learning. Their method involves existing deep CNN architectures such as VGG, Alexnet and CNN with SVM. The proposed method is tested on LivDet 2009, 2011 and 2013 databases. Uliyan et al. [43] proposed deep features-based methods for the detection of PAs. It utilizes a Deep Boltzmann Machine (DBM) for the extraction of features from fingerprint images. DBM has been utilized by them to find the complex relationship among the features. The proposed work is validated using benchmark fingerprint databases. Chugh et al. [7] suggested a deep learning-based method that uses minutiae-centered fingerprint patches for the training and testing of a MobileNet classifier. A fingerprint is divided into a finite number of patches based on the number of minutiae points present in it. Extracted patches are fed to a CNN model which generates a liveness score for every patch. The liveness score for an input sample is computed using score-level fusion. This proposed method is validated using LivDet 2011, 2013 and 2015 databases and Michigan State University's (MSU) FPAD database. Since, novel fabrication materials are discovered every day, it is hard to generalize an FPAD model to perform FPAD in an open-set or unknown-material protocol. In continuation of their previous work [6], Chugh et al. [7] suggested another method for the detection of spoofs fabricated using unknown materials. They proposed an image synthesis technique to create new fingerprint patches which contribute to better training of the MobileNet classifier. The proposed method is validated using LivDet 2017, ATVSFFp and MSU-FPAD databases. Zhang et al. [52] suggested a CNN architecture that outperforms all the feature-based methods in terms of classification accuracy. They proposed an architecture that consists of a series of improved residual connected blocks. This modified architecture results in the detection of PAs without over-fitting and less computing time. The proposed method is validated on Livdet 2013 and 2015 databases." }, { "figure_ref": [], "heading": "Explainability in Deep Learning", "publication_ref": [ "b33", "b37", "b34", "b11", "b50", "b25", "b41" ], "table_ref": [], "text": "The term explainability refers to any information that helps the user understand the pattern of the decisions made by the deep learning model for the input samples belonging to different classes. In recent times, various surveys [34], [38] Visualization methods, being applied to the image classifiers, are further classified as backpropagation-based methods [35], activation maximization methods [12], deconvolution methods [51] and layer-wise relevance propagationbased methods [26], etc. Deconvolution methods utilize inverse convolution operations to visualize high-layer features present in the input image samples. Amir et al. 
[42] utilized the deconvolution method in an attempt to emphasize the important features present in the input sample. The proposed method is tested on CIFAR, MNIST and tiny-imagenet databases, and its performance is compared with state-of-the-art explainability methods. The method performs well on images whose classes can be distinguished by the shape, color and texture of the objects present in them. Since live and spoof fingerprint samples cannot be discriminated based on these features, the deconvolution method needs to be enhanced for fingerprint databases. The detailed literature review concludes that deep learning-based methods have shown remarkable performance in image classification problems but are not sufficient when applied to live and spoof fingerprint samples. One possible reason is the limited amount of discriminating information present in fingerprint samples. We have developed a novel approach that highlights the key features playing a vital role in the discrimination of live and spoof fingerprint samples without imposing computational overhead on the entire FPAD system, which is discussed in the following sections." }, { "figure_ref": [ "fig_0" ], "heading": "PROPOSED WORK", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel architecture to detect PAs by generating heatmaps. The architecture, shown in Fig. 1, consists of an encoder-decoder and a channel attention block for heatmap generation and a modified ResNet for classification. The first component highlights the regions as well as the discriminating features that play a vital role in the classification process. In this way, the classifier is empowered for better classification of the input samples. The details of the components of the EXPRESSNET architecture are given in the following subsections." }, { "figure_ref": [], "heading": "Preprocessing", "publication_ref": [], "table_ref": [], "text": "Samples captured with different sensing devices have different spatial dimensions. To overcome this problem, fingerprint samples are resized to 512 × 512. This modification increases model training time while having no effect on the number of trainable parameters." }, { "figure_ref": [], "heading": "Heatmap Generator Block", "publication_ref": [], "table_ref": [], "text": "The resized input sample is passed to the heatmap generator block, which consists of encoder-decoder, channel attention and heatmap generation layers. The details of these blocks are given in the following subsections." }, { "figure_ref": [ "fig_1" ], "heading": "Encoder-Decoder Block", "publication_ref": [ "b0", "b41" ], "table_ref": [], "text": "The proposed encoder-decoder block first down-samples and then up-samples the input feature maps to highlight the features present in them. In other words, the encoder extracts the relevant information and the decoder exposes the driving features present in the feature maps while retaining their spatial properties. The encoder part is composed of a convolutional operation followed by a pooling operation: the convolutional filter extracts features from the sample, while the pooling operation down-samples it. The output of the encoder block is formulated as Eq. (1).\nEncoder_out = Maxpool( Σ_{X,x=0}^{M,m} Σ_{Y,y=0}^{N,n} I_{X,Y} × K_{x,y} )  (1)\nHere, I_{X,Y} denotes the input fingerprint sample of dimension M × N and K_{x,y} denotes the convolutional filter of size x × y.
After convolution, the max-pooling operation is used to down-sample the output feature maps, and Encoder_out_f denotes the resulting set of output feature maps. The output of the encoder is passed to the decoder to enhance the features. In [42], the decoder consists of a transposed convolution operator, which is higher in terms of computational cost. To keep this cost low, we have constituted the decoder block using an upsample operation followed by a convolutional operation.\nFig. 2: Live and spoofs fabricated with various materials along with their generated heatmaps by the proposed heatmap generator\nThe decoder block can be formulated as Eq. (2).\nDecoder_out = Σ_{X,x=0}^{M,m} Σ_{Y,y=0}^{N,n} ( Upsample(Encoder_out) × K_{x,y} )  (2)\nHere, Decoder_out is the output of the encoder-decoder block, which is a set of 'f' feature maps of size M × N each. In this model, the value of 'f' is kept as 32. These output feature maps have highlighted pixels that contribute to the classification of the input sample. The feature maps are fed to the channel attention block, which is described in the following subsection." }, { "figure_ref": [], "heading": "Channel Attention Block (CAB)", "publication_ref": [], "table_ref": [], "text": "The CAB produces an attention map that exploits the inter-channel relationship of features. The goal of the encoder-decoder block is to find \"where\" the important feature is present, while the CAB is responsible for finding \"what\" is important in the image. The calculation of channel attention is formulated as per Eq. (3).\nCAB_out = MLP( AveragePool(Decoder_out) )  (3)\nHere, the Multi-Layer Perceptron (MLP) is a collection of two dense layers. The formation of the MLP is denoted by Eq. (4).\nMLP = ReLU( W_1 σ( W_0(·) ) )  (4)\nHere, W_1 and W_0 represent the weights of the fully-connected layers, and ReLU and Sigmoid are the activation functions applied to those layers, respectively. The channel attention map is then multiplied by the feature maps generated by the encoder-decoder block. The feature maps with highlighted information are then merged together to form a single-channel heatmap. A convolutional filter is utilized for this purpose, as mentioned in the following subsection." }, { "figure_ref": [ "fig_1" ], "heading": "Heatmap Generation Layer", "publication_ref": [], "table_ref": [], "text": "The output of the channel attention block is a set of feature maps that have the important features highlighted. These feature maps are further merged to form a single-channel heatmap. For this purpose, a convolutional filter is used that takes the 'f' feature maps as input and produces a single heatmap as output, followed by the Tanh activation function that maps values into the range (-1, +1). The formulation is given as Eq. (5).\nHeatmap = Tanh( Conv_{f→1}( Decoder_out_f × CAB_out ) )  (5)\nHere, Conv_{f→1} denotes the convolutional filter that merges the 'f' attended feature maps into a single channel. As seen in Fig. 2, it is evident that discriminating features such as wetness, noise, scars, clarity of ridges and valley widths are highlighted by the proposed heatmap generator. The output heatmap is fed as input to the classifier. For the classification of fingerprint heatmaps, a residual CNN is chosen as the classifier, and to reduce the computational cost, its architecture has been modified. The details of the original and modified ResNet classifiers are mentioned in the following subsection."
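To make the data flow of Eqs. (1)-(5) concrete, the following is a minimal TensorFlow-Keras sketch of the heatmap generator block: an encoder (convolution followed by max-pooling), a decoder (upsampling followed by convolution), a channel attention block (average pooling feeding a two-layer MLP), and a final convolution with Tanh that merges the f = 32 attended feature maps into a single-channel heatmap. The kernel sizes, the single encoder/decoder stage, the hidden width of the attention MLP and the grayscale input are illustrative assumptions, not details taken from the original implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_heatmap_generator(input_shape=(512, 512, 1), f=32):
    """Sketch of the heatmap generator: encoder-decoder (Eqs. 1-2),
    channel attention (Eqs. 3-4) and heatmap generation layer (Eq. 5)."""
    inputs = layers.Input(shape=input_shape)

    # Encoder: convolution followed by max-pooling (Eq. 1).
    x = layers.Conv2D(f, kernel_size=3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(pool_size=2)(x)

    # Decoder: upsampling followed by convolution instead of a costlier
    # transposed convolution (Eq. 2); restores the original spatial size.
    x = layers.UpSampling2D(size=2)(x)
    decoder_out = layers.Conv2D(f, kernel_size=3, padding="same",
                                activation="relu")(x)

    # Channel attention block: average pooling + two dense layers,
    # sigmoid after W0 and ReLU after W1 as in Eq. (4).
    gap = layers.GlobalAveragePooling2D()(decoder_out)      # (batch, f)
    attn = layers.Dense(f // 4, activation="sigmoid")(gap)  # W0
    attn = layers.Dense(f, activation="relu")(attn)         # W1
    attn = layers.Reshape((1, 1, f))(attn)
    attended = layers.Multiply()([decoder_out, attn])       # re-weight channels

    # Heatmap generation layer: a 1x1 convolution merges the f attended maps
    # into one channel; Tanh maps the values into (-1, +1) (Eq. 5).
    heatmap = layers.Conv2D(1, kernel_size=1, activation="tanh")(attended)

    return Model(inputs, heatmap, name="heatmap_generator")

generator = build_heatmap_generator()
generator.summary()
```

A single encoder/decoder stage is shown only to keep the sketch short; the intent is to show how the five equations compose into one differentiable block whose single-channel output can be fed to the classifier.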
}, { "figure_ref": [], "heading": "Modified Residual CNN (Slim-ResNet) Classifier", "publication_ref": [], "table_ref": [], "text": "The process of highlighting the driving features by introducing the encoder-decoder and channel attention impose computational overhead on the entire system while an FPAD system should take a minimum amount of time to classify the input fingerprint sample. We reduced the depth of the opted CNN architecture without tampering with its spatial properties to address the overhead imposed by the heatmap generator block. The original ResNet architecture consists of four building blocks, each having a set of three convolutional layers. In ResNet-50, the first, second, third and fourth blocks are repeated 3, 4, 6 and 3 times, respectively. In this way, the total number of convolutional layers in it is 48 (3 × 3 + 3 × 4 + 3 × 6 + 3 × 3). This architecture had been proposed to deal with the problem of vanishing gradient that occurs when the CNNs are trained with images that have fewer features. The skip connections between the blocks maintain the gradient and persist the parameters to learn for better classification. The depth of the Resnet can be reduced in two ways. i.e. removing the presence of the entire last block or reducing the repetitions of all the blocks. In the first approach, the number of feature maps is reduced as we remove the last convolutional block along with its repetitions. The major disadvantage of this approach is that we get the bigger-sized feature maps at the end of the architecture resulting in decreased classification performance. We choose to use the second strategy, which minimizes the recurrence of a block resulting in less number of layers as compared with the original ResNet architecture. In this approach, there are 30 convolutional layers since the first convolution block repeats twice, the second block twice, the third block four times and the last block twice. As we obtain feature maps at every level with the same size as the original architecture, the removal of the layers in this manner survives in the spatial attributes of feature maps. The Slim ResNet architecture's spatial dimension consists of 2048 feature maps of the size of 7 × 7 pixels each. Since, the input samples are resized, we get the feature maps of the size of 16×16. The output feature maps undergo the process of pooling which results in an array of 2048 values. The downsample the output of the convolution base, and make it suitable for binary classification, three fully-connected layers with 512, 256 and 1 neurons, respectively, are added. The original and the modified ResNet architecture are depicted in Fig. 3." }, { "figure_ref": [], "heading": "EXPERIMENTAL SETUP", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss the different databases used in our experimental evaluation, the performance metrics for evaluation and the implementation details of our method." }, { "figure_ref": [], "heading": "Database", "publication_ref": [], "table_ref": [ "tab_2", "tab_2" ], "text": "The performance of the proposed model is validated using LivDet 2011, 2013, 2015, 2017 and 2019 databases. Each database is prepared with multiple sensing devices. The training and testing fingerprint samples are arranged in a separate group of datasets. The details of all the utilized databases are mentioned in Table 1. Table 1 describes the information of sensors, number of live and spoof samples and materials utilized for the fabrication of spoofs. 
The sensors Biometrika (Hi-Scan), Italdata, Digital Persona, Sagem, Crossmatch and Greenbit are optical sensors, while Orcanthus is a thermal sensor. The samples captured with Orcanthus contain noise and scars, making them hard to classify for an FPAD model." }, { "figure_ref": [], "heading": "Performance Metrics", "publication_ref": [ "b0" ], "table_ref": [], "text": "The performance of the proposed model is measured using the ISO/IEC IS 30107 criteria [1]. The Attack Presentation Classification Error Rate (APCER) shows the percentage of misclassified spoof fingerprint images, and its counterpart, the Bonafide Presentation Classification Error Rate (BPCER), shows the percentage of misclassified live fingerprint images. The Average Classification Error (ACE) evaluates the system's overall performance and is formulated as Eq. (8).\nACE = (APCER + BPCER) / 2  (8)\nThe ACE is further utilized to derive the accuracy of the proposed model, which is formulated as Eq. (9).\nAccuracy = 100 - ACE  (9)" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "The proposed algorithm is implemented in Python using the TensorFlow-Keras library. All training and testing have been done on an NVIDIA Tesla P100 GPU. Each model has been trained from scratch for 250 epochs, which took around 10-12 hours to converge. The learning rate and batch size are kept as 0.0001 and 8, respectively." }, { "figure_ref": [], "heading": "EXPERIMENTAL RESULTS AND COMPARATIVE ANALYSIS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "The performance of the proposed model is validated in two different benchmark protocols, intra-sensor with known spoof material and intra-sensor with unknown spoof material, based on the arrangement of the training and testing spoof samples captured with multiple devices. A description of these protocols along with the findings of the proposed method is given in the following subsections." }, { "figure_ref": [], "heading": "Intra-Sensor and Known Spoof Material", "publication_ref": [], "table_ref": [], "text": "In this experimental setup, the training and testing fingerprint samples are captured using the same sensing device. " }, { "figure_ref": [], "heading": "Intra-Sensor and Unknown Spoof Material", "publication_ref": [], "table_ref": [ "tab_5", "tab_5", "tab_4" ], "text": "In this experimental setup, the fingerprint samples belonging to the training and testing datasets are captured using the same sensing device; however, the samples belonging to the spoof category in the two datasets are fabricated using different materials. Validation in this protocol measures the robustness of the FPAD system in defending the AFRS in real-world scenarios, since an intruder can present an artifact of a user's fingerprint made with newly discovered fabrication materials that are unseen to the FPAD model. LivDet 2017 and 2019 are prepared in this way, as their training and testing spoof samples are fabricated from different materials. The findings of the proposed method on the aforementioned databases are reported in Table 4. Table 4 shows that the proposed model achieves an average BPCER of 4.70%, APCER of 3.28% and ACE of 3.92% on the LivDet 2017 database. Similarly, the proposed model classifies the live and spoof samples with errors of 4.68% and 2.96%, respectively, on LivDet 2019. The proposed method also confronts the spoof samples present in the LivDet 2015 database with an average APCER of 5.82%, as mentioned in the column "APCER (unknown)" in Table 3.
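The error metrics of Eqs. (8) and (9) reduce to simple counting over the two classes. Below is a small illustrative helper, assuming binary labels where 1 denotes a live (bonafide) sample and 0 denotes a spoof (attack) sample; it is a sketch rather than part of the original implementation.

```python
import numpy as np

def fpad_metrics(y_true, y_pred):
    """Compute APCER, BPCER, ACE (Eq. 8) and accuracy (Eq. 9) in percent.

    y_true, y_pred: arrays of 0/1 labels, where 1 = live (bonafide) and
    0 = spoof (presentation attack).
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    spoof, live = (y_true == 0), (y_true == 1)

    # APCER: fraction of spoof presentations accepted as live.
    apcer = 100.0 * np.mean(y_pred[spoof] == 1) if spoof.any() else 0.0
    # BPCER: fraction of live presentations rejected as spoof.
    bpcer = 100.0 * np.mean(y_pred[live] == 0) if live.any() else 0.0

    ace = (apcer + bpcer) / 2.0      # Eq. (8)
    accuracy = 100.0 - ace           # Eq. (9)
    return {"APCER": apcer, "BPCER": bpcer, "ACE": ace, "Accuracy": accuracy}

# Example: if 3 of 100 spoofs are accepted and 5 of 100 live samples are
# rejected, the helper returns APCER 3.0, BPCER 5.0, ACE 4.0, Accuracy 96.0.
```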
" }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "The properties of the live and spoof fingerprint samples differ due to the lack of moisture in the spoof. Apart from that, the spoof samples include noise, scars and uneven width of ridges and valleys that are introduced during the fabrication process. These abnormalities are emphasized by the proposed heatmap generator which plays an important role in the detection of PAs. The findings of the proposed method are compared with existing methods tested on benchmark databases which are mentioned in the following subsection." }, { "figure_ref": [], "heading": "Comparative Analysis", "publication_ref": [], "table_ref": [], "text": "The findings of the proposed method, are compared with state-of-the-art approaches in several benchmark settings.\nA detailed comparative analysis is given in the following subsections." }, { "figure_ref": [], "heading": "Comparison with existing methods on LivDet 2011 database", "publication_ref": [ "b45", "b10", "b48", "b19", "b5", "b39", "b17", "b29" ], "table_ref": [ "tab_6", "tab_6" ], "text": "The performance of the proposed model is compared with state-of-the-art methods tested on LivDet 2011 database which is mentioned in Table 5. As per Table 5, the proposed method outperforms the methods discussed in [46], [11], [49], [20], [6], [40], [18], [30] over the fingerprint samples collected with biometrika, digital-persona and sagem sensors. The spoof fingerprint samples in this database were obtained using the cooperative spoofing approach, resulting in the development of efficient spoof samples that can readily deceive a CNN-based FPAD model. The suggested heatmap generator emphasizes the presence of moisture in the input fingerprint data. As a result, spoof samples lack this feature and are easily spotted by the classifier. This advantage elevates the suggested technique over handcrafted features-based and deep CNN-based FPAD approaches. The proposed method attains overall classification accuracy of 96.86%." }, { "figure_ref": [], "heading": "Comparison with existing methods on LivDet 2013 database", "publication_ref": [ "b48", "b19", "b52", "b31", "b16", "b20", "b49", "b22", "b42", "b29", "b2", "b5", "b31", "b39", "b28", "b42", "b24" ], "table_ref": [ "tab_7", "tab_8" ], "text": "The findings of the proposed method are compared with the method tested on the LivDet 2013 database. This database is captured using the non-cooperative method of spoofing in which the latent fingerprints left on the glass, wood, or other smooth surface are used to fabricate the spoofs. This process adds a significant amount of noise, scars and other irregularities to the spoofs which are highlighted by the heatmap generator. Table 6 shows a detailed comparison of the proposed method's performance with state-of-the-art methods validated on the LivDet 2013 database. It is evident that the proposed methods perform better as compared with the method discussed in [49], [20], [53], [32], [17], [21], [50], [23], [43], [30], [3], and [6], while being tested on dataset captured with biometrika and italdata sensors. 7 clearly indicates that the classification performance of the proposed method is better than the method discussed in [32], [40], [29], [43] and [25]. The heatmap generator finds discriminating features that result in better classification accuracy of the classifier than state-of-the-art deep CNN-based approaches." 
}, { "figure_ref": [], "heading": "Comparison with existing methods on LivDet 2017 database", "publication_ref": [ "b6", "b5", "b52", "b15", "b5" ], "table_ref": [ "tab_9", "tab_11" ], "text": "The performance of the proposed method is also compared with state-of-the-art methods tested on LivDet 2017 database. The training and testing spoof samples captured in this database are fabricated using different spoofing materials which makes it more challenging for an FPAD model to classify. However, the fabrication materials available for the spoofing, do not resemble the moisture present in the live fingerprint samples. The proposed method is able to find the discriminating features with the help of the heatmap generator. Table 8 shows that the proposed method performs better than the method discussed in [7], [6], [53] and [16] while being tested on the fingerprint samples captured with orcanthus and digital persona. The proposed method also outperforms the aforementioned methods with an average classification accuracy of 96.07%. This comparison reveals that the heatmap generator can produce a heatmap with discriminating information regardless of the material used for fabrication. 9 reports a comparison of the proposed model's findings with state-of-the-art methods tested on the LivDet 2019 database. It shows that the proposed method outperforms the method discussed in [6] as well as the participating FPAD algorithms i.e., JungCNN, JWL LivDet, ZJUT DET while being tested on the samples collected with orcanthus and digital persona sensors. The proposed method also outperforms the aforementioned methods in terms of average classification accuracy. The comparative analysis of the performance of the proposed method on various LivDet databases indicates that it consistently performs better regardless of the sensors in the intra-sensor paradigm of FPAD whether the spoof samples are fabricated using known or unknown materials. The possession of the heatmap generator enables the classifier to learn better as compared with traditional CNN-based approaches." }, { "figure_ref": [], "heading": "Evaluation of EXPRESSNET in High-Security Systems", "publication_ref": [], "table_ref": [], "text": "An FPAD model is to be tested for its performance in highsecurity systems too as its main objective is not only to achieve the minimum APCER, BPCER and ACE. In this paper, we have reported the findings of the proposed model " }, { "figure_ref": [], "heading": "Processing Time", "publication_ref": [], "table_ref": [], "text": "The processing time of an FPAD model is considered the amount of time it takes to find whether the input fingerprint sample is live or spoof. This time is supposed to be minimum as the sample has to undergo the process of verification after the detection of its liveness. The proposed model, EXP RESSN ET , takes the classification time of 300 milliseconds and 20 milliseconds on Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz 6 th generation processor, and Nvidia TESLA P00 respectively, to classify a single fingerprint image. The less amount of classification time makes it suitable for the AFRS in real-time applications." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "AFRS deployed in various security and commercial applications can be deceived by PAs. 
This paper presents an FPAD mechanism that has shown the capability of detecting spoofs when they are created using cooperative, or noncooperative methods of spoofing as well as using known and unknown fabrication materials. Existing handcrafted and deep learning-based methods are insufficient in detecting PAs while being tested in the aforementioned scenarios. One of the possible reasons behind this is the lack of feature extraction capability of CNN-based methods due to the limited amount of discriminating information present in the input fingerprint samples. In this paper, a novel endto-end model is presented which first converts the input " } ]
Presentation attack is a challenging issue that persists in the security of automatic fingerprint recognition systems. This paper proposes a novel explainable residual slim network that detects the presentation attack by representing the visual features in the input fingerprint sample. The encoder-decoder of this network, along with the channel attention block, converts the input sample into its heatmap representation, while the modified residual convolutional neural network classifier discriminates between live and spoof fingerprints. The entire architecture of the heatmap generator block and modified ResNet classifier works together in an end-to-end manner. The performance of the proposed model is validated on the benchmark liveness detection competition databases, i.e. LivDet 2011, 2013, 2015, 2017, and 2019, and classification accuracies of 96.86%, 99.84%, 96.45%, 96.07%, and 96.27% are achieved on them, respectively. The performance of the proposed model is compared with state-of-the-art techniques, and the proposed method outperforms state-of-the-art methods in benchmark protocols of presentation attack detection in terms of classification accuracy.
EXPRESSNET: An Explainable Residual Slim Network for Fingerprint Presentation Attack Detection
[ { "figure_caption": "Fig. 1 :1Fig. 1: Block diagram of EXPRESSNET architecture", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 22depicts live and spoof fingerprint samples belonging to LivDet 2011, biometrika dataset and respective heatmaps generated by the heatmap generator module.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 : 100 ( 6 )31006Fig. 3: Block diagram of original and modified ResNet architecture", "figure_data": "", "figure_id": "fig_2", "figure_label": "31006", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Detection Error Trade-off (DET) curves for LivDet 2011, 2013, 2015, 2017 and 2019 databases", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "7x7 Conv, 64/27x7 Conv, 64/2Maxpool/21x1 Conv, 64Conv, 643x3 Conv, 64x 33x3 Conv, 64x 21x1 Conv, 2561x1 Conv, 256+1x1 Conv, 128/2 3x3 Conv, 128 1x1 Conv, 512x 41x1 Conv, 128/2 3x3 Conv, 128 1x1 Conv,x 31x1 Conv, 256/2 3x3 Conv, 256x 6x 41x1 Conv, 10241x1 Conv, 512/2 3x3 Conv, 512x 3x 21x1 Conv, 2048+Maxpool/2Fully Connected(1000 Units)Original ResNetArchitectureModified ResNetArchitecture", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Details of the benchmark LivDet databases", "figure_data": "DatabaseSensorLiveSpoofSpoofing MaterialsLivDet 2011 [47] Biometrika1000/1000 1000/1000 Ecoflex, Gelatine, Latex, Siligum, WoodglueItaldata1000/1000 1000/1000Digital Persona 1000/1000 1000/1000 Gelatine, Latex, Playdoh, Silicone, WoodGlue Sagem 1000/1000 1000/1000LivDet 2013 [15] Biometrika1000/1000 1000/1000 Ecoflex, Gelatine, Latex, Modsil, WoodglueDigital Persona 1000/1000 1000/1000LivDet 2015 [29] Crossmatch1000/1000 1473/1448 Body Double, Ecoflex, Playdoh, OOMOO, GelatineDigital Persona 1000/1000 1000/1500Greenbit1000/1000 1000/1500Ecoflex, Latex, Gelatine, Woodglue, Liquid Ecoflex, RTVHi-Scan1000/1000 1000/1500LivDet 2017 [48] Greenbit1000/1700 1200/2040Orcanthus1000/1700 1180/2018Body Double, Ecoflex, Woodglue, Gelatine, Latex, Liquid EcoflexDigital Persona999/16921199/2028LivDet 2019 [31] Greenbit1000/1020 1200/1224 Body Double, Ecoflex, Woodglue, Mix1, Mix2, Liquid EcoflexOrcanthus1000/9901200/1088 Body Double, Ecoflex, Woodglue, Mix1, Mix2, Liquid EcoflexDigital Persona 1000/1099 1000/1224 Ecoflex, Gelatine, Woodglue, Latex, Mix1, Mix2, Liquid Ecoflexevaluate the system's overall performance. Equation (8)represents the formulation of ACE.", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The spoof samples belonging to both training and testing datasets are fabricated with the same spoofing materials. LivDet 2011 and 2013 are prepared according to this setup while LivDet 2015 partially belongs to this category as two-thirds of the testing samples are captured using known spoof materials. The results on LivDet 2011 and 2013 databases are reported in Table2. Table2indicates that the proposed model attains an average BPCER of 3.50%, APCER of 2.79% and ACE of 3.14% while being tested on the LivDet 2011 database. In the same protocol, the model achieves a BPCER of 0.15%, APCER of 0.17% and ACE of 0.16% while being tested on the LivDet 2013 database. 
The results on LivDet 2015 are reported in Table3which indicates that the proposed model achieves an average BPCER of 3.23% and APCER of 2.91% as mentioned by the column \"APCER (Known)\". The performance on LivDet 2011, 2013 database on intra-sensor known-material protocol", "figure_data": "DatabaseSensorBPCER APCER ACE (%)LivDet 2011 Biometrika7.11.64.35Digital Persona1.91.01.45Italdata3.97.05.45Sagem1.121.581.35Average3.502.793.14LivDet 2013 Biometrika0.150.150.15Italdata0.150.200.17Average0.150.170.16", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The performance on LivDet 2015 database on intra-sensor known and unknown materials protocols", "figure_data": "DatabaseSensorBPCERAPCER (Known)APCER (Unknown)ACE (%)Crossmatch1.03.4111.063.7Digital Persona5.63.53.24.49LivDet 2015Biometrika4.03.25.23.93Greenbit2.111.303.62.08Average3.232.915.823.55", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The performance on LivDet 2017 and 2019 databases on intra-sensor unknown materials protocol", "figure_data": "DatabaseSensorBPCER APCER ACE (%)Digital Persona5.143.44.2LivDet 2017Orcanthus Greenbit3.36 5.592.86 3.583.09 4.49Average4.703.283.93Digital Persona7.677.3LivDet 2019Greenbit Orcanthus5.3 1.121.23 0.653.08 0.87Average4.672.953.75", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison with state-of-the-art methods on LivDet 2011 in intra-sensor protocol", "figure_data": "MethodAccuracy (Biometrika)Accuracy (Digital Persona)Accuracy (Italdata)Accuracy (Sagem)Avg.Xia et al. [46]93.5596.288.2596.6693.37Dubey et al. [11]92.1193.7591.994.6493.1Yuan et al. [49]97.0588.9497.892.0193.82Gragnaniello et al.93.192.0087.3596.3592.2[18]Nogueira et al. [30]91.898.194.9195.3695.04Yuan et al. [49]90.0898.6587.6597.193.55Jian et al. [20]95.7598.494.196.8396.27Sharma et al. [40]92.794.488.693.392.25Chugh et al. [6]98.7698.3997.5598.6198.33EXPRESSNET95.6598.5594.5598.6596.86", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison with state-of-the-art methods on LivDet 2013 in intra-sensor protocol", "figure_data": "MethodAccuracy (Biometrika)Accuracy (Italdata)Avg.Yuan et al. [49]96.4597.6597.05Jian et al. [20]99.2599.4099.32Zhang et al. [53]99.5396.9998.26Park et al. [32]99.1598.7598.95Gottschlich et al.96.1098.3097.0[17]Johnson et al. [21]98.098.498.20Yuan et al. [50]95.6598.697.12Jung et al. [23]94.1297.9296.02Uliyan et al. [43]96.094.5095.25Nogueira et al. [30]99.2097.798.45Chugh et al. [6]99.8099.7099.75Anusha et al. [3]99.7699.6899.72EPRESSNET99.8599.8399.845.2.3 Comparison with existing methods on LivDet2015 databaseThe LivDet 2015 database is composed of the spoof samplescaptured with known and unknown spoofing materials. Adetailed comparison mentioned in Table", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Comparison with state-of-the-art methods on LivDet 2015 in intra-sensor protocol", "figure_data": "MethodAccuracy (Crossmatch)Accuracy (Greenbit)Accuracy (Digital Persona)Accuracy (Biometrika)Avg.Park et al. [32]99.6397.3091.595.996.08Sharma et al. [40]98.0795.794.1695.2295.78Zhang et al. [53]97.0197.8195.4297.0296.82Jung et al. [22]98.6096.2090.5095.8095.27LivDet 2015 Winner98.1095.4093.7294.3695.39[29]Uliyan et al. [43]95.00---95.00Kim et al. 
[25]----86.39EXPRESSNET96.3097.9295.5196.1296.45", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Comparison with state-of-the-art methods on LivDet 2017 database in intra-sensor protocol", "figure_data": "MethodAccuracy (Orcanthus)Accuracy (Digital Persona)Accuracy (Greenbit)Avg.Chugh et al. [7]95.0195.2097.4295.88Chugh et al. [6]94.5195.1296.6895.43Zhang et al. [53]93.9392.8995.2094.00Gonzalez et al.94.3895.0894.5494.66[16]EXPRESSNET96.9595.8095.5196.075.2.5 Comparison", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison with state-of-the-art methods on LivDet 2019 database in intra-sensor protocol graphical representation of error rates achieved by a binary classification system by adjusting the value of the classification threshold. We have reported the DET curves for all the datasets of LivDet 2011, 2013, 2015, 2017 and 2019 databases which are depicted in Fig.4. In the Fig.4, it can be observed that the proposed model attains the BPCER of less than 1% to retain the APCER of 1% on biometrika and digital persona sensors of the LivDet 2011 database while it is less than 5% and 22% on sagem and italdata sensors of the same database. On LivDet 2013, the proposed model achieves a BPCER of less than 1% to maintain the APCER of 1% on biometrika and italdata sensors. Similarly, the proposed model is able to achieve a BPCER of less than 5% to gain the APCER of 1% while being tested on crossmatch, digital-persona and greenbit sensors of the LivDet 2015 database. LivDet 2017 and 2019 in which the testing spoof samples are captured using unknown spoof materials, the model is able to retain the BPCER in the range of 5% -17% on Livdet 2017 database. In the same way, the model retains the BPCER of less than 5% on orcanthus and greenbit sensors of the LivDet 2019 database.", "figure_data": "MethodAccuracy (Orcanthus)Accuracy (Digital Persona)Accuracy (Greenbit)Avg.Jung CNN [31]99.1381.2399.0693.14Chugh et al. [6]97.5083.6499.7393.62JWL LivDet [31]97.4588.8699.2095.17ZJUT Det A [31]97.5088.7799.2095.16EXPRESSNET99.1692.7096.9296.27using the Detection Error Trade-off (DET) curve. A DETcurve is a", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" } ]
Anuj Rai; Somnath Dey
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Information technology -biometric presentation attack detection -part 3: Testing and reporting", "year": "2017" }, { "authors": "A Abhyankar; S Schuckers", "journal": "Pattern Recognition", "ref_id": "b1", "title": "Integrating a wavelet based perspiration liveness check with fingerprint recognition", "year": "2009" }, { "authors": "B Anusha; S Banerjee; S Chaudhuri", "journal": "IEEE Computer Society", "ref_id": "b2", "title": "Defraudnet:end2end fingerprint spoof detection using patch level attention", "year": "2020-03" }, { "authors": "S Arora", "journal": "Arabian Journal for Science and Engineering", "ref_id": "b3", "title": "Fingerprint spoofing detection to improve customer security in mobile financial applications using deep learning", "year": "2019" }, { "authors": "H Choi; R Kang; K Choi; A Teoh; J Kim", "journal": "Optical Engineering -OPT ENG", "ref_id": "b4", "title": "Fake-fingerprint detection using multiple static features", "year": "2009" }, { "authors": "T Chugh; K Cao; A K Jain", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b5", "title": "Fingerprint spoof buster: Use of minutiae-centered patches", "year": "2018" }, { "authors": "T Chugh; A K Jain", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b6", "title": "Fingerprint spoof detector generalization", "year": "2021" }, { "authors": "J Deng; W Dong; R Socher; L J Li; K Li; L Fei-Fei", "journal": "IEEE Conference on Computer Vision and Pattern Recognition", "ref_id": "b7", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "L Deng", "journal": "IEEE Signal Processing Magazine", "ref_id": "b8", "title": "The mnist database of handwritten digit images for machine learning research [best of the web", "year": "2012" }, { "authors": "R Derakhshani; S A Schuckers; L A Hornak; L O'gorman", "journal": "Pattern Recognition", "ref_id": "b9", "title": "Determination of vitality from a non-invasive biomedical measurement for use in fingerprint scanners", "year": "2003" }, { "authors": "R K Dubey; J Goh; V L L Thing", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b10", "title": "Fingerprint liveness detection from single image using low-level features and shape analysis", "year": "2016" }, { "authors": "D Erhan; Y Bengio; A Courville; P Vincent", "journal": "Univeristé de Montréal", "ref_id": "b11", "title": "Visualizing higherlayer features of a deep network", "year": "2009-01" }, { "authors": "M Espinoza; C Champod", "journal": "", "ref_id": "b12", "title": "Using the number of pores on fingerprint images to detect spoofing attacks", "year": "2011" }, { "authors": "L Ghiani; A Hadid; G L Marcialis; F Roli", "journal": "", "ref_id": "b13", "title": "Fingerprint liveness detection using binarized statistical image features", "year": "2013" }, { "authors": "L Ghiani; D Yambay; V Mura; S Tocco; G L Marcialis; F Roli; S Schuckcrs", "journal": "", "ref_id": "b14", "title": "Livdet 2013 fingerprint liveness detection competition 2013", "year": "2013" }, { "authors": "L J González-Soler; M Gomez-Barrero; L Chang; A Pérez-Suárez; C Busch", "journal": "IEEE Access", "ref_id": "b15", "title": "Fingerprint presentation attack detection based on local features encoding for unknown attacks", "year": "2021" }, { "authors": "C Gottschlich; E Marasco; A Y Yang; B Cukic", "journal": "IEEE International Joint Conference on Biometrics", "ref_id": 
"b16", "title": "Fingerprint liveness detection based on histograms of invariant gradients", "year": "2014" }, { "authors": "D Gragnaniello; G Poggi; C Sansone; L Verdoliva", "journal": "", "ref_id": "b17", "title": "Fingerprint liveness detection based on weber local image descriptor", "year": "2013" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b18", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "W Jian; Y Zhou; H Liu", "journal": "IEEE Access", "ref_id": "b19", "title": "Densely connected convolutional network optimized by genetic algorithm for fingerprint liveness detection", "year": "2021" }, { "authors": "P Johnson; S Schuckers", "journal": "", "ref_id": "b20", "title": "Fingerprint pore characteristics for liveness detection", "year": "2014" }, { "authors": "H Y Jung; Y Heo", "journal": "Electronics Letters", "ref_id": "b21", "title": "Fingerprint liveness map construction using convolutional neural network", "year": "2018" }, { "authors": "H Y Jung; Y S Heo; S Lee", "journal": "IEEE Access", "ref_id": "b22", "title": "Fingerprint liveness detection by a template-probe convolutional neural network", "year": "2019" }, { "authors": "N Khan; M Efthymiou", "journal": "International Journal of Information Management Data Insights", "ref_id": "b23", "title": "The use of biometric technology at airports: The case of customs and border protection (cbp)", "year": "2021" }, { "authors": "W Kim", "journal": "IEEE Signal Processing Letters", "ref_id": "b24", "title": "Fingerprint liveness detection using local coherence patterns", "year": "2017" }, { "authors": "S Lapuschkin; A Binder; G Montavon; K R Samek; W ", "journal": "", "ref_id": "b25", "title": "Analyzing classifiers: Fisher vectors and deep neural networks", "year": "2016" }, { "authors": "E Marasco; C Sansone", "journal": "Pattern Recognition Letters", "ref_id": "b26", "title": "Combining perspiration-and morphology-based static features for fingerprint liveness detection", "year": "2012" }, { "authors": "G L Marcialis; F Roli; A Tidu", "journal": "", "ref_id": "b27", "title": "Analysis of fingerprint pores for vitality detection", "year": "2010" }, { "authors": "V Mura; L Ghiani; G L Marcialis; F Roli; D A Yambay; S A Schuckers", "journal": "", "ref_id": "b28", "title": "Livdet 2015 fingerprint liveness detection competition", "year": "2015" }, { "authors": "R F Nogueira; R De Alencar Lotufo; R Campos Machado", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b29", "title": "Fingerprint liveness detection using convolutional neural networks", "year": "2016" }, { "authors": "G Orr Ù; R Casula; P Tuveri; C Bazzoni; G Dessalvi; M Micheletto; L Ghiani; G Marcialis", "journal": "", "ref_id": "b30", "title": "Livdet in action -fingerprint liveness detection competition", "year": "2019-06" }, { "authors": "E Park; X Cui; W Kim; H Kim", "journal": "", "ref_id": "b31", "title": "End-to-end fingerprints liveness detection using convolutional networks with gram module", "year": "2018" }, { "authors": "Y Park; U Jang; E C Lee", "journal": "Soft Computing", "ref_id": "b32", "title": "Statistical anti-spoofing method for fingerprint recognition", "year": "2018-07" }, { "authors": "G Ras; M Van Gerven; P Haselager", "journal": "Springer International Publishing", "ref_id": "b33", "title": "Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges", "year": "2018" }, { "authors": "G Ras; N Xie; M Van Gerven; D Doran", "journal": 
"J. Artif. Int. Res", "ref_id": "b34", "title": "Explainable deep learning: A field guide for the uninitiated", "year": "2022" }, { "authors": "A Rattani; A Ross", "journal": "IEEE International Joint Conference on Biometrics", "ref_id": "b35", "title": "Automatic adaptation of fingerprint liveness detector to new spoof materials", "year": "2014" }, { "authors": "A Rattani; W J Scheirer; A Ross", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b36", "title": "Open set fingerprint spoof detection across novel fabrication materials", "year": "2015" }, { "authors": "W Samek; T Wiegand; K R ", "journal": "ITU Journal: ICT Discoveries -Special Issue 1 -The Impact of Artificial Intelligence (AI) on Communication Networks and Services", "ref_id": "b37", "title": "Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models", "year": "2017" }, { "authors": "D Sharma; A Selwal", "journal": "The Visual Computer", "ref_id": "b38", "title": "Hyfipad: a hybrid approach for fingerprint presentation attack detection using local and adaptive image features", "year": "2021" }, { "authors": "R Sharma; S Dey", "journal": "The Visual Computer", "ref_id": "b39", "title": "Fingerprint liveness detection using local quality features", "year": "2019" }, { "authors": "L Spinoulas; H Mirzaalian; M E Hussein; W Abdalmageed", "journal": "IEEE Transactions on Biometrics, Behavior, and Identity Science", "ref_id": "b40", "title": "Multi-modal fingerprint presentation attack detection: Evaluation on a new dataset", "year": "2021" }, { "authors": "A Tavanaei", "journal": "", "ref_id": "b41", "title": "Embedded encoder-decoder in convolutional networks towards explainable AI", "year": "2020" }, { "authors": "D M Uliyan; S Sadeghi; H A Jalab", "journal": "Engineering Science and Technology, an International Journal", "ref_id": "b42", "title": "Anti-spoofing method for fingerprint recognition using patch based deep learning machine", "year": "2020" }, { "authors": "C Wu; Y Li; Z Zhao; B Liu", "journal": "Journal of Ambient Intelligence and Humanized Computing", "ref_id": "b43", "title": "Research on image classification method of features of combinatorial convolution", "year": "2020" }, { "authors": "Z Xia; R Lv; Y Zhu; P Ji; H Sun; Y Q Shi", "journal": "Signal, Image and Video Processing", "ref_id": "b44", "title": "Fingerprint liveness detection using gradient-based texture features", "year": "2017" }, { "authors": "Z Xia; C Yuan; R Lv; X Sun; N N Xiong; Y Q Shi", "journal": "IEEE Transactions on Systems, Man, and Cybernetics: Systems", "ref_id": "b45", "title": "A novel weber local binary descriptor for fingerprint liveness detection", "year": "2020" }, { "authors": "D Yambay; L Ghiani; P Denti; G L Marcialis; F Roli; S Schuckers", "journal": "", "ref_id": "b46", "title": "Livdet 2011 -fingerprint liveness detection competition", "year": "2011" }, { "authors": "D Yambay; S Schuckers; S Denning; C Sandmann; A Bachurinski; J Hogan", "journal": "", "ref_id": "b47", "title": "Livdet 2017 -fingerprint systems liveness detection competition", "year": "2018" }, { "authors": "C Yuan; X Sun; Q M Wu", "journal": "Soft Comput", "ref_id": "b48", "title": "Difference co-occurrence matrix using bp neural network for fingerprint liveness detection", "year": "2019-07" }, { "authors": "C Yuan; Z Xia; L Jiang; Y Cao; Jonathan Wu; Q M Sun; X ", "journal": "IEEE Access", "ref_id": "b49", "title": "Fingerprint liveness detection using an improved cnn with image scale 
equalization", "year": "2019" }, { "authors": "M D Zeiler; R Fergus", "journal": "Springer International Publishing", "ref_id": "b50", "title": "Visualizing and understanding convolutional networks", "year": "2014" }, { "authors": "Y Zhang; S Pan; X Zhan; Z Li; M Gao; C Gao", "journal": "IEEE Access", "ref_id": "b51", "title": "Fldnet: Light dense cnn for fingerprint liveness detection", "year": "2020" }, { "authors": "Y Zhang; D Shi; X Zhan; D Cao; K Zhu; Z Li", "journal": "IEEE Access", "ref_id": "b52", "title": "Slim-rescnn: A deep residual convolutional neural network for fingerprint liveness detection", "year": "2019" }, { "authors": "Anuj Rai", "journal": "", "ref_id": "b53", "title": "Tech. degree in Computer Technology and Applications from National Institute of Technical Teachers Training and Research Bhopal, India", "year": "2015" } ]
[ { "formula_coordinates": [ 4, 324.74, 642.2, 239.26, 30.08 ], "formula_id": "formula_0", "formula_text": "Encoder out = M axpool M,m X,x=0 N,n Y,y=0 I X,Y × K x,y(1)" }, { "formula_coordinates": [ 5, 292.62, 341.75, 7.38, 9.14 ], "formula_id": "formula_1", "formula_text": ")2" }, { "formula_coordinates": [ 5, 52.01, 320.58, 226.95, 52.32 ], "formula_id": "formula_2", "formula_text": "Decoder out = M,m X,x=0 N,n Y,y=0 ((U psample(Encoder out ) × K x,y )" }, { "formula_coordinates": [ 5, 76.22, 580.98, 223.78, 9.65 ], "formula_id": "formula_3", "formula_text": "CAB out = M LP (AverageP ool(Decoder out ))(3)" }, { "formula_coordinates": [ 5, 112.7, 635.23, 187.3, 9.65 ], "formula_id": "formula_4", "formula_text": "M LP = ReLU (W 1 σ(W 0 ()))(4)" }, { "formula_coordinates": [ 7, 109.51, 249.97, 186.8, 22.31 ], "formula_id": "formula_5", "formula_text": "ACE = AP CER + BP CER 2 (8" }, { "formula_coordinates": [ 7, 296.31, 257.05, 3.69, 9.14 ], "formula_id": "formula_6", "formula_text": ")" }, { "formula_coordinates": [ 7, 121.49, 307.5, 178.51, 9.48 ], "formula_id": "formula_7", "formula_text": "Accuracy = 100 -ACE(9)" } ]
10.18653/v1/W18-5513
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b15", "b31", "b28", "b14", "b23", "b8", "b5", "b2", "b38", "b1", "b46", "b7", "b32", "b2", "b38", "b34", "b27", "b39", "b5", "b4", "b9" ], "table_ref": [], "text": "Computational fact checking approaches typically explore neural models to verify the truthfulness of a claim by reasoning over multiple pieces of evidence (Jiang et al., 2020;Ostrowski et al., 2021). However, few methods have been devoted to acquiring explanations for these systems, which weakens user trust in the prediction and prohibits the discovery of artifacts in datasets (Kotonya and Toni, 2020a;Lyu et al., 2022;Janizek et al., 2021). * corresponding author\nIn this work, we explore post hoc interpretability, aiming to explain the veracity prediction of a multi-hop fact verification model and reveal how the model arrives at the decision by retaining subsets of the input (i.e., rationale).\nTo understand the behavior of a model with a certain prediction, a classical way to perform explaining is erasure search (Li et al., 2016;Feng et al., 2018;De Cao et al., 2020;Atanasova et al., 2022;Si et al., 2022), an approach wherein rationale is obtained by searching for a maximum subset of the input (e.g., tokens, sentences) that can be completely removed from the input without affecting the veracity prediction1 . This removal perturbation to the input guarantees the decorrelation of discarded features with veracity prediction of the model, in contrast to the intrinsic approaches (e.g., attention-based methods) that cannot ensure the ignoring of low-scoring input features (Atanasova et al., 2020;Kotonya and Toni, 2020b;Zhang et al., 2021;Fajcik et al., 2022).\nExisting explanation approaches for multi-hop fact verification based on erasure searching can be categorized into sentence-level rationale extraction based (Paranjape et al., 2020;Atanasova et al., 2022;Si et al., 2022) and token-level rationale extraction based (Ribeiro et al., 2016;Lundberg and Lee, 2017;Sundararajan et al., 2017;De Cao et al., 2020;Chen and Ji, 2020;Ge et al., 2022). Despite extensive exploration, we found that the rationales extracted at the sentence level are too coarse and might contain irrelevant and redundant tokens to the claim (e.g., \"994km tract of tidal wetlands\" in E1 and \"Gulf of Carpentaria\" in E5 in Figure 1.), which impairs the ability to shield the effect of noise evidence. Obviously, this issue can be overcome by extracting rationale at the token level. However, current token-level rationale extraction methods lack the capability to discern the true ev-Claim: This organism and Panax are both plant genera. The Gulf named after the organism is part of Port McArthur Tidal Wetlands System." }, { "figure_ref": [], "heading": "Label: SUPPORTS", "publication_ref": [], "table_ref": [], "text": "Evidence: E 1 (✔): [Port McArthur Tidal Wetlands System] The Port McArthur Tidal Wetlands System comprises a 994 km tract of tidal wetlands on the south-west coast of the Gulf of Carpentaria in the Northern Territory of Australia. E 2 (❌): [Port McArthur Tidal Wetlands System] The land extends along the coast opposite the Sir Edward Pellew Group of Islands, incorporating the estuaries of the McArthur and Wearyan Rivers. E 3 (✔): [Panax] The Panax (ginseng) genus belongs to the Araliaceae (ivy) family. 
E 4 (✔): [Carpentaria] Carpentaria acuminata (carpentaria palm), the sole species in the genus Carpentaria, is a palm native to tropical coastal regions in the north of Northern Territory, Australia. E 5 (❌): [Sir Edward Pellew Group of Islands] The Sir Edward Pellew Group of Islands is situated in the south-west corner of the Gulf of Carpentaria, off the northern coast of Australia." }, { "figure_ref": [], "heading": "VMASK(token)", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "✔❌", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "IB(sentence)", "publication_ref": [ "b32", "b4", "b13", "b47", "b35" ], "table_ref": [], "text": "Ground-truth Label Figure 1: An example in the HoVer dataset marked with sentence-level rationales extracted by Information Bottleneck (IB) (Paranjape et al., 2020) and token-level rationales extracted by VMASK (Chen and Ji, 2020).\nidence and noise evidence. This poses a major challenge to explain the multi-hop fact verification model. Extensive redundant and confusing tokens will inevitably be extracted as rationales from the noise evidence, thus inducing inconsistency between the extracted tokens with the true evidence (e.g., the token rationales should not be extracted from {E2, E5} in Figure 1). This results in underaggressive pruning that is unable to reflect the intrinsic information the model relies on to arrive at the decision. Therefore, this paper seeks to explore a feasible way to extract the \"right tokens\" from the \"right sentences\" (i.e., we aim to only retain the task-relevant tokens contained in {E1, E3, E4}.). For this purpose, we propose a novel paradigm to yield indicative token rationales by extracting multi-granular rationales with regularization.\nAlthough promising, follow-up questions then arise: (i) How to extract the multi-granular rationales simultaneously for the veracity prediction? (ii) How to ensure the faithfulness (Jain et al., 2020) and consistency of the multi-granular rationales? In this paper, we give affirmative answers to the questions and offer a novel Consistent mUlti-granular Rationale Extraction (CURE) approach for explainable multi-hop fact verification. The core idea of our CURE is that both the tokenlevel explainer and the sentence-level explainer are learned simultaneously, with the desire to ex-tract the consistent multi-granular rationales and make them faithful toward the verification. It ensures the mutual effect between the information of retained tokens and sentences and produces the indicative token rationales. In specific, given a pretrained multi-hop fact verification model, we first train two parameterized explainers to generate mask vectors for each token and sentence to indicate which token or sentence is necessary or can be discarded, based on the intermediate hidden representation of the Transformer-XH (Zhao et al., 2020). Then, the two learnable mask vectors are intersected and induced back into the input to remove the irrelevant tokens and sentences. Meanwhile, a capsule network (Sabour et al., 2017) is used to aggregate the retained features by intervening on coupling coefficients with the sentence mask. 
In addition, three diagnostic properties are introduced as guidance to regularize rationale extraction, (i) Fidelity to constrain the faithfulness of rationales; (ii) Consistency to increase the consistency between the multi-granular rationales; (iii) Salience to guide the rationale extraction with predefined salience score.\nIn a nutshell, our main contributions can be summarized as follows: (I) We for the first time explore the multi-granular rationale extraction for the explainable multi-hop fact verification. (II) Three diagnostic properties are designed and applied to regularize rationale extraction to achieve faithfulness and consistency. (III) Experiments on three multi-hop fact verification datasets are conducted to validate the superiority of our approach." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b25" ], "table_ref": [], "text": "Task Following Liu et al. (2020), given a claim c with associated evidence {e 1 , e 2 , . . . , e n }, we construct a fully connected input graph G = (X, A), where n is the number of evidence, x i ∈ X denotes the evidence node by concatenating the evidence text e i with the claim c. We aim to jointly extract multi-granular rationales with the desire of faithfulness and consistency for explainable multi-hop fact verification, i.e., sentencelevel rationales R s = (X s , A s ∈ R n×n ) and token-level rationales r = {r i ⊂ e i }| n i=0 , where r i = {t i,j |t i,j ∈ e i }. We only extract token rationales from e i and denote |x i | as the number of tokens in ith evidence node.\nDefinition 1. (Faithfulness) R s and r are multigranular faithful to their corresponding prediction \nY if and only if Y rely entirely on G R = ({x i ∩r i | x i ∈ X s }, A s ).\nDefinition 2. (Consistency) R s and r are multigranular consistent to their corresponding prediction Y if and only if satisfying\nx i ∈Xs |x i ∩ r i | ≤ , x i ∈X\\Xs |x i ∩ r i | → 0,(1)\nwhere is the maximum expected sparsity of token-level rationales. X \\ X s denotes the complementary subset of X s ." }, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "We now describe the proposed methods in detail, which includes the architectures: (i) a veracity prediction model (shown in Figure 2 " }, { "figure_ref": [ "fig_0" ], "heading": "Veracity Prediction", "publication_ref": [ "b37" ], "table_ref": [], "text": "For the multi-hop fact verification model f (•), as shown in Figure 2(b), we employ the classical veracity model Transformer-XH combined with capsule network as illustrated in Si et al. (2021).\nSemantic encoder Given a graph G = (X, A), a Transformer layer is first applied to the node X to obtain the token representation h = h 0 , h 1 , ..., h n for each evidence, where h i = h i,0 , h i,1 , ..., h i,|x i | , h i,j denotes the jth token representation in ith evidence. Then, a GAT layer is applied to the [CLS] token representation to propagate the message exchange among all evidence along the edges, i.e., hi,0\n| n i=1 = GAT (h i,0 | n i=1 ).\nThe updated representation thus is obtained, h i = hi,0 , h i,1 , ..., h i,|x i | , where hi,0 denotes the sentence representation for ith evidence. By stacking L-layers of Transformer with GAT, we get the representation H = h 0 , h 1 , ..., h L , where h 0 = X.\nAggregator We use the capsule network to aggregate the information among all the evidence by taking sentence representation hL i,0 | n i=0 as the evidence capsule and label as the class capsule. 
It permits us to further eliminate the effect of nonrationale for veracity prediction. The capsule loss is used to optimize the veracity prediction model." }, { "figure_ref": [ "fig_0" ], "heading": "Multi-granular Rationale Extraction", "publication_ref": [ "b5", "b26" ], "table_ref": [], "text": "Our CURE relies on erasure search, retaining minimal but sufficient multi-granular rationales while maintaining the original veracity (De Cao et al., 2020). As shown in Figure 2(a), we propose two parameterized explainers to both generate the binary mask vectors at the token level and sentence level, indicating the absence or presence of each token or sentence.\nTaking the hidden representation H from multiple layers in Transformer-XH as input, for the token-level explainer, we employ a shallow interpreter network g t (•) (i.e., one-hidden-layer MLP network) to yield binary token mask vectors z = {z i }| n i=0 conditioned on the token representation, where\nz i = {z i,j }| |x i |\nj=1 denotes mask values for each token in ith evidence. We do not consider the sentence representation with j = 0. We then apply Hard Concrete reparameterization (HCR) trick (Louizos et al., 2018) to enforce the values approximate to discrete 0 or 1, while keeping continuous and differential for learning mask vectors.\nz i = z 0 i • • • z L i , pt i = pt 0 i • • • pt L i , (z l i , pt l i ) = HCR(g t (h l i,j | |x i | j=1 )),(2)\nwhere denotes Hadamard product, pt i,j |\n|x i |\nj=1 ∈ pt i denotes the importance score of jth token in ith evidence.\nFor the sentence-level explainer, we train a different interpreter network g s (•) to predict a binary sentence mask vector m ∈ R n based on sentence representation to indicate the absence of sentence,\nm = m 0 • • • m L , ps = ps 0 • • • ps L , (m l , ps l ) = HCR(g s ( hl i,0 | n i=0 )).(3)\nThe multi-granular rationales are selected by multiplying the two mask vectors with the input2 , where token rationales r = {r i }| n i=0 with r i = x i z i and sentence rationales R s = (X s = X m, A s = A m m). Therefore, the perturbed graph can be derived by intersecting the two subsets of granularity rationales, i.e., G R = ({x i ∩ r i | x i ∈ X s }, A s ). Meanwhile, to ensure that only extracted rationale would be used for veracity prediction, we further intervene in the dynamic routing between the evidence capsule and the class capsule in the capsule network for succinct aggregation by multiplying the sentence mask vector with the coupling coefficients." }, { "figure_ref": [], "heading": "Properties", "publication_ref": [ "b16", "b32", "b30", "b18", "b16", "b5" ], "table_ref": [], "text": "Fidelity Fidelity guarantees that the model veracity is maintained after perturbing the input, which measures the sufficiency for faithfulness of multi-granular rationales (Jiang et al., 2021). To ensure the faithfulness of rationales, We re-feed the original graph G and perturbed graph G R into the veracity model f (•) to generate the prediction logits respectively. Then we define the Euclidean distance between these two logits as fidelity loss,\nL F = f (G) -f (G R ) 2 .\n(4)\nConsistency According to Definition 2, we derive the decisive token rationale via improving the consistency between the two single-granular explainers, which ensures that almost all token rationales come from sentence rationales rather than from noise sentences. 
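As a brief aside before the consistency regularizer introduced below, the Hard Concrete reparameterization shared by both explainers (Eqs. 2 and 3) can be sketched as follows; the temperature and stretch limits (beta, gamma, zeta) are the common defaults from Louizos et al. (2018), and the one-hidden-layer gate network standing in for g_t is illustrative rather than the exact configuration.

```python
import torch
import torch.nn as nn

def hard_concrete(logits, beta=0.5, gamma=-0.1, zeta=1.1, training=True):
    """Stretched, rectified binary-concrete gates (Louizos et al., 2018)."""
    if training:
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        s = torch.sigmoid((u.log() - (1 - u).log() + logits) / beta)
    else:
        s = torch.sigmoid(logits)
    z = (s * (zeta - gamma) + gamma).clamp(0.0, 1.0)   # near-binary mask in [0, 1]
    # Probability that a gate is open, usable as an importance score and as
    # an expected-sparsity penalty.
    p_open = torch.sigmoid(logits - beta * torch.log(torch.tensor(-gamma / zeta)))
    return z, p_open

# A one-hidden-layer gate network standing in for the token explainer g_t.
g_t = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 1))
h_tokens = torch.randn(40, 768)               # 40 token states of one evidence
z, pt = hard_concrete(g_t(h_tokens).squeeze(-1))
masked = h_tokens * z.unsqueeze(-1)            # gated token representations
```

The returned p_open plays the role of the importance scores pt and ps above and can also serve as the expected-sparsity penalty used later.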
We thus introduce the symmetric Jensen-Shannon Divergence to regularize the consistency between the importance score of two mask vectors,\nL C = 1 2 KL(P (z)|| P (z) + P (m) 2 ) + 1 2 KL(P (m)|| P (z) + P (m) 2 ),(5)\nwhere\nP (z) = softmax i ( |x i | j=1 pt i,j\n), P (m) = softmax i (ps i ), and KL(•||•) denotes the Kullback-Leibler divergence. Clearly, the consistency property tends to have the mutual effect that informative rationale on one side would help the other side.\nSalience Unsupervised paradigm may be impracticable to extract high-quality multi-granular rationales. We thus utilize the predefined salience score as a signal to guide the rationale extraction as prior works. For sentence rationale extraction, following Paranjape et al. (2020), we adopt the rationale label as guidance by formulating it as a multi-label classification problem using cross entropy (CE) loss,\nL SS = CE(m, E),(6)\nwhere E = {E i ∈ {0, 1}}| n i=0 denotes whether the sentence is annotated rationale by humans.\nDue to the expensive cost of gathering human rationale labels with fine-grained, for token rationale extraction, we construct the pseudo label S = {s i }| n i=0 for each token in each piece of evidence via the technique of layered integrated gradient (Mudrakarta et al., 2018) provided by the Captum (Kokhlikyan et al., 2020), where\ns i = {s i,j ∈ [-1, 1]}| |x i | j=0 .\nThen the KL divergence is employed to regularize the token rationale extraction,\nL ST = n i=0 KL(P (z i )||ŝ i ),(7)\nwhere P (z i ) = softmax j (pt i,j ) denotes the importance score of tokens over the ith evidence, ŝi = softmax j (s i,j ). In addition, to regularize the compactness of token rationale (Jiang et al., 2021),\nwe minimize the number of non-zeros predicted by the token-level explainer via minimizing the L 0 norm with expectation (De Cao et al., 2020).\nL 0 = n i |x i | j pt i,j ,(8)" }, { "figure_ref": [], "heading": "Optimization", "publication_ref": [], "table_ref": [], "text": "The optimization objective is minimizing the following loss function L,\nL = λ 1 L F + λ 2 L C + λ 3 L SS + λ 4 L ST + λ 5 L 0 ,(9)\nwhere λ 1-5 are hyperparameters standing for loss weights.\nDuring training, we freeze the parameters of the pretrained veracity prediction model f (•) and only optimize the explainer parameters (i.e., g t (•) and g s (•)) by minimizing L. In the inference stage, the value of z i,j and m i are determined by 1(pt ij > α) and 1(ps i > α), respectively, where α is the threshold of rationales, 1(•) is the indicator function." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b15", "b0", "b31", "b17", "b31", "b6", "b32", "b10", "b3", "b22", "b33", "b13", "b4", "b34", "b24", "b16", "b11", "b12", "b21" ], "table_ref": [ "tab_5", "tab_6" ], "text": "Datasets We perform experiments on three multi-hop fact verification datasets, including HoVer (Jiang et al., 2020), LIAR-PLUS (Alhindi et al., 2018), and PolitiHop (Ostrowski et al., 2021). For HoVer, following (Khattab et al., 2021), the dataset is constructed with retrieved evidence, where each claim is associated with 5 pieces of evidence. For LIAR-PLUS and Politi-Hop, we use the datasets provided in Ostrowski et al. (2021) and restrict each claim associated with 10 and 5 pieces of evidence, respectively. 
All the datasets require multi-hop reasoning and consist of annotated sentence-level rationale and noise evidence.\nBaselines Since no other works aimed at multigranular rationale extraction, we compare CURE with twelve single-granular rationale extraction methods as baselines, including eight intrinsicbased methods (i.e., Pipeline in ERASER (DeYoung et al., 2020), Information Bottleneck (IB) (Paranjape et al., 2020), Two-Sentence Selecting (TSS) (Glockner et al., 2020), Learning from rationales (LR) (Carton et al., 2022) for sentence rationale extraction. Lei et al. (2016), DeClarE (Popat et al., 2018), FRESH (Jain et al., 2020), VMASK (Chen and Ji, 2020) for token rationale extraction.) and four post hoc methods (i.e., LIME (Ribeiro et al., 2016) 2020), we adopt the macro F1 and accuracy for verification prediction evaluation, and macro F1, Precision and Recall to measure the sentence-level agreement with human-annotated rationales. We also report fidelity defined in Equation 4 as a metric of faithfulness for post hoc methods. We propose a metric named Token Rationale Overlap rate to measure the overlap between token rationale with sentence Rationale (TRO-R) or Non-rationale (TRO-N). It reflects the consistency between the two granular rationales3 ,\nTRO-R := 1 |X s | xi∈Xs |x i ∩ r i | |x i | , TRO-N := 1 |X ns | xi ∈Xns |x i ∩ r i | |x i | , Consistency := 1 - TRO-N TRO-R(10)\nwhere X ns = X \\ X s denotes the complement subset of X s .\nImplementation Details Our Veracity Prediction model adopts the pretrained RoBERTa (Liu et al., 2019) 1: Evaluation results of multi-granular rationale across three datasets. CURE* denotes the results using predicted token rationale and predicted sentence rationale, CURE denotes the results using predicted token rationale and annotated sentence rationale. ↑ means the larger value is better. -C, -SS and -ST denote the constraint removal of Consistency, Salience-Sentence and Salience-Token, respectively. Our main results are marked in bold.\nport the evaluation using the sentence rationale annotated by humans instead of the predicted sentence rationale to compute the TRO-R and TRO-N. We can observe that: (I) CURE is quite faithful with the lowest fidelity value across all three datasets, surpassing all other baselines. This result is in accordance with Jiang et al. (2021) that the Euclidean distance between the logits constrains the explainer to provide more faithful explanations. (II) CURE is capable of extracting consistent multi-granular rationales with the highest consistency score, which indicates the importance of the differential between true evidence and noise evidence for the token rationale extraction. This is significantly reflected in the CURE*. In contrast, all baselines are unable to induce consistent rationales with huge gaps towards our CURE, even though some baselines achieve better performance on single TRO-R or TRO-N (e.g., SHAP on TRO-R and LIME on TRO-N). (III) On claim verification, our CURE outperforms the post hoc methods, while slightly lower compared with intrinsic methods. We conjecture that the information leakage caused by soft selection may improve the performance of these models. (IV) Beyond relative performance against baselines, we conduct control experiments in the ablation study to explore the effectiveness of diagnostic property. With the removal of different properties individually, we observe the reduced performance in the extracted rationales, both in fidelity and consistency. 
The most significant property is Salience-Sentence, this can be due to that explainer is susceptible to overfitting and yields task-irrelevant token explanations from noise sentences when lacking prior knowledge about the data. The second key prop- erty is Consistency, there are varying decreases in both fidelity and consistency throughout the three datasets, particularly for LIAR-PLUS, which requires more complex rationales for reasoning over multiple evidence compared with the other two datasets. We reasonably presume the synergy of the two granular explainers by constraining the extraction of right token from right sentence (Gupta et al., 2022). Moreover, we note a minor decrease for claim verification when removing the Salience-Token, showing that the retained task-relevant tokens directed by the salience score can help to boost the performance of veracity prediction.\nPlausibility As shown in Table 2 and 3, we further conduct the experiments to explore how well the extracted rationales agrees with human annotation (Jacovi and Goldberg, 2020) compared to classical single-granular rationale methods.\nFor sentence rationale, surprisingly, we find that our CURE still outperforms the most baselines on claim verification and rationale extraction. We reasonably posit that the high quality right token is useful for extracting right sentence rationale in turn. To further validate the quality of token rationale extraction, we ask 3 annotators with NLP backgrounds to re-annotate 150 fine-grained samples from the development set of the HoVer dataset to obtain the rationale label at the token level. Our annotators achieve 0.6807 on Krippendorff's α (Krippendorff, 2011) and retain 20% tokens annotated as rationales. We measure the agreement between the predicted token rationale and human annotated rationale with the Spearman's correlation, macro F1, Precision, and Recall. As shown in Table 3, our CURE is far more promising that outperforms the baselines with a huge gap on all evaluation metrics. It clearly indicates the necessity of consistency between multi-granular rationales for explaining multi-hop fact verification." }, { "figure_ref": [], "heading": "Manual Evaluation", "publication_ref": [ "b48", "b43", "b5", "b4" ], "table_ref": [], "text": "Inspired by Zhou et al. (2020) and Yan et al. (2022), we provide a manual evaluation of the token rationales (contained in the sentence rationale rather than the whole sentences) extracted by CURE, compared to DIFFMASK (De Cao et al., 2020) and VMASK (Chen and Ji, 2020). We randomly select 50 samples and ask three annotators with NLP backgrounds to score these rationales in a likert scale of 1 to 5 according to three different criteria: rationales do not contain redundant and irrelevant words.\nThe human evaluation results are shown in Figure 3. We can observe that CURE achieves the best results on correctness and faithfulness. Although DIFFMASK performs particularly well on non-redundancy, the correctness and faithfulness of the generated rationales are far worse than those of the other two models, indicating the low quality of its rationales. In fact, DIFFMASK excels at masking almost all tokens due to the only constraint of L 0 loss. Considering the mutual constraints between non-redundancy and the other two criteria, we calculate the average scores of three criteria for each method. CURE still outperforms on average score, which demonstrates the high quality of the token rationales generated by our method." 
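For reference, the consistency metric of Eq. (10) used throughout these comparisons can be computed in a few lines; boolean token masks per evidence sentence and a boolean sentence mask are assumed as inputs, and all names and the toy example are illustrative.

```python
from typing import List
import numpy as np

def tro_consistency(token_masks: List[np.ndarray], sentence_mask: np.ndarray):
    """token_masks[i] marks retained tokens of evidence i; sentence_mask marks
    retained sentences. Returns (TRO-R, TRO-N, Consistency) as in Eq. (10)."""
    rat = [m.mean() for m, s in zip(token_masks, sentence_mask) if s]
    non = [m.mean() for m, s in zip(token_masks, sentence_mask) if not s]
    tro_r = float(np.mean(rat)) if rat else 0.0
    tro_n = float(np.mean(non)) if non else 0.0
    consistency = 1.0 - tro_n / tro_r if tro_r > 0 else 0.0
    return tro_r, tro_n, consistency

# Example with 3 evidence sentences (the second one is predicted non-rationale).
masks = [np.array([1, 0, 1, 0]), np.array([0, 0, 0, 1]), np.array([1, 1, 0, 0])]
print(tro_consistency(masks, np.array([True, False, True])))
```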
}, { "figure_ref": [], "heading": "Rationale Examples", "publication_ref": [], "table_ref": [], "text": "Figure 4 presents an intuitive example with rationales generated by our CURE from the HoVer dataset. We can observe that our CURE correctly predicts the sentence rationales while entirely removing the noise sentence E2. Meanwhile, The corresponding retained token rationales contain information that is not only important for veracity prediction, but also appears the non-redundancy of the token rationales by ignoring the redundant tokens. Moreover, the retained tokens show strong consistency towards the extracted sentence rationales. It is worth noting that our CURE is prone to retaining the title of the document as the key cue for linking multiple pieces of evidence." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b33", "b36", "b44", "b42", "b41", "b40", "b29", "b1", "b32", "b10", "b46", "b7", "b45", "b5", "b38", "b2", "b9" ], "table_ref": [], "text": "A growing interest in interpretability has led to a flurry of approaches in trying to reveal the reasoning behavior behind the multi-hop fact verification task. A well-studied way is to use the attention weights as attribution score to indicate the importance of a token or sentence, such as selfattention (Popat et al., 2018) or co-attention (Shu et al., 2019;Yang et al., 2019;Wu et al., 2020Wu et al., , 2021)). While this method is incapable to guarantee the inattention of low-score features, drawing criticism recently (Wiegreffe and Pinter, 2019;Meister et al., 2021). Another line of research focuses on perturbation-based methods. These methods explore a built-in explainer to generate the rationale by masking the unimportant language features (Atanasova et al., 2020;Paranjape et al., 2020;Glockner et al., 2020;Kotonya and Toni, 2020b;Zhang et al., 2021;Fajcik et al., 2022). This way generally employs the extractthen-predict paradigm, while Yu et al. (2021) reveals an issue of model interlocking in such a cooperative rationalization paradigm.\nRecently, a few studies explore the post hoc paradigm for explanation extraction by detaching the explainer and the task model. With the parameters of the task model frozen, they focus on the external explainer to retain the key cue in input as the rationales to indicate features the task model relies on (De Cao et al., 2020;Si et al., 2022;Atanasova et al., 2022;Ge et al., 2022). Our work falls under the scope of the post hoc paradigm, different from the prior works that only consider the single-granular rationale, we for the first time propose a novel paradigm to yield indicative token rationales by regularizing the multi-granular rationale extraction." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel multi-granular rationale extraction framework for explainable multi-hop fact verification. We jointly model token-level and sentence-level rationale extraction by incorporating three diagnostic properties as additional constraints to generate faithful and consistent multi-granular rationales. The results on three multi-hop fact verification datasets illustrate the effectiveness of our method. In the future, we will explore how to generate counterfactual explanations." 
}, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "A limitation of our work is that we employ the supervised paradigm because of the difficulty to satisfy our expectations about the rationales. We need the labels of sentence-level rationales as guidance to obtain better classification performance and high-quality rationales, which may be difficult to extend our method into the scenarios with few annotations (i.e., semi-supervised or unsupervised). In addition, the L 0 loss regularization overemphasizes the sparsity, which can damage the performance on claim verification and make the model sensitive to hyperparameters." } ]
The success of deep learning models on multi-hop fact verification has prompted researchers to understand the reasoning behavior behind their veracity predictions. One possible way is erasure search: obtaining the rationale by entirely removing a subset of the input without compromising the veracity prediction. Although extensively explored, existing approaches fall within the scope of single-granular (token- or sentence-level) explanation, which inevitably leads to explanation redundancy and inconsistency. To address these issues, this paper explores the viability of multi-granular rationale extraction with consistency and faithfulness for explainable multi-hop fact verification. In particular, given a pretrained veracity prediction model, both the token-level explainer and the sentence-level explainer are trained simultaneously to obtain multi-granular rationales via differentiable masking. Meanwhile, three diagnostic properties (fidelity, consistency, salience) are introduced and applied to the training process to ensure that the extracted rationales satisfy faithfulness and consistency. Experimental results on three multi-hop fact verification datasets show that the proposed approach outperforms some state-of-the-art baselines.
Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop Fact Verification
[ { "figure_caption": "Figure 2 :2Figure 2: The overall architecture of our CURE. (a): the multi-granular rationale extraction, (b): the veracity prediction model", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "(b)), (ii) multi-granular rationale extraction (shown in Figure 2(a)), and the terms we optimize: (iii) the diagnostic properties, (iv) the optimization.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Human evaluation results in a likert scale of 1 to 5, where 1 means strongly disagree and 5 means strongly agree. Average denotes the average score of three criteria. The inner-rater agreement measured by Krippendorff's α is 0.88.", "figure_data": "", "figure_id": "fig_2", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "token-levelsentence-levelExplainerExplainerTransformerTransformerTransformer", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "0.8 0.8 0.8 0.7 0.7 0.7 0.1 0.1 0.1 0.9 0.9 0.9 0.1 0.1 0.2 0.8 0.7 0.1 0.9 0.2", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "ST LC LSS Ltoken masksentence masktoken representationsentence representationoriginalinputDynamicRoutingperturbedinputEvidence CapsuleClass Capsule", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": ", SHAP (Lundberg and Lee, 2017), Layer Integrated Gradient (L-INTGRAD)(Mudrakarta et al., 2018), DIFFMASK(De Cao et al., 2020)).", "figure_data": "Metrics Inspired by DeYoung et al. (", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Evaluation of claim verification and sentence rationale extraction across three datasets. The best results are marked in bold.", "figure_data": "DatasetModelClaim VerificationSentence RationaleAcc.F1F1PrecisionRecallPipeline0.58110.53930.66770.74500.6564IB0.62520.60480.37770.39270.3967LIAR-PLUSTSS0.62390.61720.43240.63490.3469LR0.76520.75190.62420.67760.6381CURE0.82100.80780.67890.80720.6329Pipeline0.62550.62440.94270.90280.9900IB0.56780.56740.62360.70180.5783HoVerTSS0.53680.51110.68830.90260.5755LR0.51100.40500.94190.90290.9988CURE0.76980.76890.93760.90450.9877Pipeline0.65960.41730.63900.59860.8234IB0.68790.54890.41800.51060.3902PolitiHopTSS0.65250.43340.42720.51770.4044LR0.70210.47120.56990.56740.6657CURE0.69500.34590.69470.65840.8403ModelSpearmanF1Precision RecallLIME0.16950.54220.65640.5459SHAP0.03050.36360.51380.5170L-INTGRAD0.07760.51080.53140.5479VMASK0.11770.52470.54730.5732CURE0.42930.67470.67390.7650", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Evaluation of token rationale extraction on the HoVer dataset based on our re-annotation. The best results are marked in bold.", "figure_data": "", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" } ]
Jiasheng Si; Yingjie Zhu; Deyu Zhou
[ { "authors": "Savvas Tariq Alhindi; Smaranda Petridis; Muresan", "journal": "", "ref_id": "b0", "title": "Where is your evidence: Improving fact-checking by justification modeling", "year": "2018" }, { "authors": "Pepa Atanasova; Jakob Grue Simonsen; Christina Lioma; Isabelle Augenstein", "journal": "", "ref_id": "b1", "title": "Generating fact checking explanations", "year": "2020" }, { "authors": "Pepa Atanasova; Jakob Grue Simonsen; Christina Lioma; Isabelle Augenstein", "journal": "", "ref_id": "b2", "title": "Diagnostics-guided explanation generation", "year": "2022" }, { "authors": "Surya Samuel Carton; Chenhao Kanoria; Tan", "journal": "", "ref_id": "b3", "title": "What to learn, and how: Toward effective learning from rationales", "year": "2022" }, { "authors": "Hanjie Chen; Yangfeng Ji", "journal": "", "ref_id": "b4", "title": "Learning variational word masks to improve the interpretability of neural text classifiers", "year": "2020" }, { "authors": "Nicola De Cao; Michael Sejr Schlichtkrull; Wilker Aziz; Ivan Titov", "journal": "", "ref_id": "b5", "title": "How do decisions emerge across layers in neural models? interpretation with differentiable masking", "year": "2020" }, { "authors": "Jay Deyoung; Sarthak Jain; Nazneen Fatema Rajani; Eric Lehman; Caiming Xiong; Richard Socher; Byron C Wallace", "journal": "", "ref_id": "b6", "title": "ERASER: A benchmark to evaluate rationalized NLP models", "year": "2020" }, { "authors": "Martin Fajcik; Petr Motlicek; Pavel Smrz", "journal": "", "ref_id": "b7", "title": "Claim-dissector: An interpretable fact-checking system with joint re-ranking and veracity prediction", "year": "2022" }, { "authors": "Eric Shi Feng; Alvin Wallace; I I Grissom; Mohit Iyyer; Pedro Rodriguez; Jordan Boyd-Graber", "journal": "", "ref_id": "b8", "title": "Pathologies of neural models make interpretations difficult", "year": "2018" }, { "authors": "Ling Ge; Chunming Hu; Guanghui Ma; Junshuang Wu; Junfan Chen; Jihong Liu; Hong Zhang; Wenyi Qin; Richong Zhang", "journal": "", "ref_id": "b9", "title": "E-VarM: Enhanced variational word masks to improve the interpretability of text classification models", "year": "2022" }, { "authors": "Max Glockner; Ivan Habernal; Iryna Gurevych", "journal": "", "ref_id": "b10", "title": "Why do you think that? 
exploring faithful sentence-level rationales without supervision", "year": "2020" }, { "authors": "Vivek Gupta; Shuo Zhang; Alakananda Vempala; Yujie He; Temma Choji; Vivek Srikumar", "journal": "", "ref_id": "b11", "title": "Right for the right reason: Evidence extraction for trustworthy tabular reasoning", "year": "2022" }, { "authors": "Alon Jacovi; Yoav Goldberg", "journal": "", "ref_id": "b12", "title": "Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness", "year": "2020" }, { "authors": "Sarthak Jain; Sarah Wiegreffe; Yuval Pinter; Byron C Wallace", "journal": "", "ref_id": "b13", "title": "Learning to faithfully rationalize by construction", "year": "2020" }, { "authors": "Joseph D Janizek; Pascal Sturmfels; Su-In Lee", "journal": "Journal of Machine Learning Research", "ref_id": "b14", "title": "Explaining explanations: Axiomatic feature interactions for deep networks", "year": "2021" }, { "authors": "Yichen Jiang; Shikha Bordia; Zheng Zhong; Charles Dognin; Maneesh Singh; Mohit Bansal", "journal": "", "ref_id": "b15", "title": "HoVer: A dataset for manyhop fact extraction and claim verification", "year": "2020" }, { "authors": "Zhongtao Jiang; Yuanzhe Zhang; Zhao Yang; Jun Zhao; Kang Liu", "journal": "", "ref_id": "b16", "title": "Alignment rationale for natural language inference", "year": "2021" }, { "authors": "Omar Khattab; Christopher Potts; Matei Zaharia", "journal": "", "ref_id": "b17", "title": "Baleen: Robust multi-hop reasoning at scale via condensed retrieval", "year": "2021" }, { "authors": "Narine Kokhlikyan; Vivek Miglani; Miguel Martin; Edward Wang; Bilal Alsallakh; Jonathan Reynolds; Alexander Melnikov; Natalia Kliushkina; Carlos Araya; Siqi Yan; Orion Reblitz-Richardson", "journal": "", "ref_id": "b18", "title": "Captum: A unified and generic model interpretability library for pytorch", "year": "2020" }, { "authors": "Neema Kotonya; Francesca Toni", "journal": "", "ref_id": "b19", "title": "a. 
Explainable automated fact-checking: A survey", "year": "2020" }, { "authors": "Neema Kotonya; Francesca Toni", "journal": "", "ref_id": "b20", "title": "Explainable automated fact-checking for public health claims", "year": "2020" }, { "authors": "Klaus Krippendorff", "journal": "", "ref_id": "b21", "title": "Computing krippendorff's alpha-reliability", "year": "2011" }, { "authors": "Tao Lei; Regina Barzilay; Tommi Jaakkola", "journal": "", "ref_id": "b22", "title": "Rationalizing neural predictions", "year": "2016" }, { "authors": "Jiwei Li; Will Monroe; Dan Jurafsky", "journal": "", "ref_id": "b23", "title": "Understanding neural networks through representation erasure", "year": "2016" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b24", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Zhenghao Liu; Chenyan Xiong; Maosong Sun; Zhiyuan Liu", "journal": "", "ref_id": "b25", "title": "Fine-grained fact verification with kernel graph attention network", "year": "2020" }, { "authors": "Christos Louizos; Max Welling; Diederik P Kingma", "journal": "", "ref_id": "b26", "title": "Learning sparse neural networks through L 0 regularization", "year": "2018" }, { "authors": "M Scott; Su-In Lundberg; Lee", "journal": "", "ref_id": "b27", "title": "A unified approach to interpreting model predictions", "year": "2017" }, { "authors": "Qing Lyu; Marianna Apidianaki; Chris Callison-Burch", "journal": "", "ref_id": "b28", "title": "Towards faithful model explanation in nlp: A survey", "year": "2022" }, { "authors": "Clara Meister; Stefan Lazov; Isabelle Augenstein; Ryan Cotterell", "journal": "", "ref_id": "b29", "title": "Is sparse attention more interpretable", "year": "2021" }, { "authors": "Pramod Kaushik Mudrakarta; Ankur Taly; Mukund Sundararajan; Kedar Dhamdhere", "journal": "", "ref_id": "b30", "title": "Did the model understand the question", "year": "2018" }, { "authors": "Wojciech Ostrowski; Arnav Arora; Pepa Atanasova; Isabelle Augenstein", "journal": "", "ref_id": "b31", "title": "Multi-hop fact checking of political claims", "year": "2021" }, { "authors": "Bhargavi Paranjape; Mandar Joshi; John Thickstun; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "", "ref_id": "b32", "title": "An information bottleneck approach for controlling conciseness in rationale extraction", "year": "2020" }, { "authors": "Kashyap Popat; Subhabrata Mukherjee; Andrew Yates; Gerhard Weikum", "journal": "", "ref_id": "b33", "title": "DeClarE: Debunking fake news and false claims using evidence-aware deep learning", "year": "2018" }, { "authors": "Marco Tulio Ribeiro; Sameer Singh; Carlos Guestrin", "journal": "", "ref_id": "b34", "title": "why should i trust you?\": Explaining the predictions of any classifier", "year": "2016" }, { "authors": "Sara Sabour; Nicholas Frosst; Geoffrey E Hinton", "journal": "", "ref_id": "b35", "title": "Dynamic routing between capsules", "year": "2017" }, { "authors": "Kai Shu; Limeng Cui; Suhang Wang; Dongwon Lee; Huan Liu", "journal": "", "ref_id": "b36", "title": "Defend: Explainable fake news detection", "year": "2019" }, { "authors": "Jiasheng Si; Deyu Zhou; Tongzhe Li; Xingyu Shi; Yulan He", "journal": "", "ref_id": "b37", "title": "Topic-aware evidence reasoning and stance-aware aggregation for fact verification", "year": "2021" }, { "authors": "Jiasheng Si; Yingjie Zhu; Deyu Zhou", "journal": "", 
"ref_id": "b38", "title": "Exploring faithful rationale for multi-hop fact verification via salience-aware graph learning", "year": "2022" }, { "authors": "Mukund Sundararajan; Ankur Taly; Qiqi Yan", "journal": "", "ref_id": "b39", "title": "Axiomatic attribution for deep networks", "year": "2017" }, { "authors": "Sarah Wiegreffe; Yuval Pinter", "journal": "", "ref_id": "b40", "title": "Attention is not not explanation", "year": "2019" }, { "authors": "Lianwei Wu; Yuan Rao; Yuqian Lan; Ling Sun; Zhaoyin Qi", "journal": "", "ref_id": "b41", "title": "Unified dual-view cognitive model for interpretable claim verification", "year": "2021" }, { "authors": "Lianwei Wu; Yuan Rao; Yongqiang Zhao; Hao Liang; Ambreen Nazir", "journal": "", "ref_id": "b42", "title": "DTCA: Decision tree-based co-attention networks for explainable claim verification", "year": "2020" }, { "authors": "Hanqi Yan; Lin Gui; Yulan He", "journal": "", "ref_id": "b43", "title": "Hierarchical interpretation of neural text classification", "year": "2022" }, { "authors": "Fan Yang; Shiva K Pentyala; Sina Mohseni; Mengnan Du; Hao Yuan; Rhema Linder; Eric D Ragan; Shuiwang Ji; Xia ( Ben; ) Hu", "journal": "", "ref_id": "b44", "title": "Xfake: Explainable fake news detector with visualizations", "year": "2019" }, { "authors": "Mo Yu; Yang Zhang; Shiyu Chang; Tommi Jaakkola", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b45", "title": "Understanding interlocking dynamics of cooperative rationalization", "year": "2021" }, { "authors": "Zijian Zhang; Koustav Rudra; Avishek Anand", "journal": "", "ref_id": "b46", "title": "Explain and predict, and then predict again", "year": "2021" }, { "authors": "Chen Zhao; Chenyan Xiong; Corby Rosset; Xia Song; Paul Bennett; Saurabh Tiwary", "journal": "", "ref_id": "b47", "title": "Transformer-xh: Multi-evidence reasoning with extra hop attention", "year": "2020" }, { "authors": "Wangchunshu Zhou; Jinyi Hu; Hanlin Zhang; Xiaodan Liang; Maosong Sun; Chenyan Xiong; Jian Tang", "journal": "", "ref_id": "b48", "title": "Towards interpretable natural language understanding with explanations as latent variables", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 72, 310.36, 218.27, 24.18 ], "formula_id": "formula_0", "formula_text": "Y if and only if Y rely entirely on G R = ({x i ∩r i | x i ∈ X s }, A s )." }, { "formula_coordinates": [ 3, 77.83, 392.14, 212.44, 25.37 ], "formula_id": "formula_1", "formula_text": "x i ∈Xs |x i ∩ r i | ≤ , x i ∈X\\Xs |x i ∩ r i | → 0,(1)" }, { "formula_coordinates": [ 3, 307.28, 308.41, 218.27, 27.55 ], "formula_id": "formula_2", "formula_text": "| n i=1 = GAT (h i,0 | n i=1 )." }, { "formula_coordinates": [ 3, 338.15, 724.67, 72.05, 14.38 ], "formula_id": "formula_3", "formula_text": "z i = {z i,j }| |x i |" }, { "formula_coordinates": [ 4, 78.54, 135.59, 211.72, 47.37 ], "formula_id": "formula_4", "formula_text": "z i = z 0 i • • • z L i , pt i = pt 0 i • • • pt L i , (z l i , pt l i ) = HCR(g t (h l i,j | |x i | j=1 )),(2)" }, { "formula_coordinates": [ 4, 262.91, 192.83, 13.44, 7.82 ], "formula_id": "formula_5", "formula_text": "|x i |" }, { "formula_coordinates": [ 4, 72.91, 306.49, 217.36, 47.63 ], "formula_id": "formula_6", "formula_text": "m = m 0 • • • m L , ps = ps 0 • • • ps L , (m l , ps l ) = HCR(g s ( hl i,0 | n i=0 )).(3)" }, { "formula_coordinates": [ 4, 123.54, 707.25, 115.19, 10.68 ], "formula_id": "formula_7", "formula_text": "L F = f (G) -f (G R ) 2 ." }, { "formula_coordinates": [ 4, 336.09, 194.82, 189.46, 47.34 ], "formula_id": "formula_8", "formula_text": "L C = 1 2 KL(P (z)|| P (z) + P (m) 2 ) + 1 2 KL(P (m)|| P (z) + P (m) 2 ),(5)" }, { "formula_coordinates": [ 4, 338.06, 251.18, 132.68, 16 ], "formula_id": "formula_9", "formula_text": "P (z) = softmax i ( |x i | j=1 pt i,j" }, { "formula_coordinates": [ 4, 374.03, 475.69, 151.51, 10.72 ], "formula_id": "formula_10", "formula_text": "L SS = CE(m, E),(6)" }, { "formula_coordinates": [ 4, 307.28, 607.95, 218.27, 27.37 ], "formula_id": "formula_11", "formula_text": "s i = {s i,j ∈ [-1, 1]}| |x i | j=0 ." }, { "formula_coordinates": [ 4, 356.39, 669.57, 169.15, 33.71 ], "formula_id": "formula_12", "formula_text": "L ST = n i=0 KL(P (z i )||ŝ i ),(7)" }, { "formula_coordinates": [ 5, 139.12, 116.74, 151.15, 34.74 ], "formula_id": "formula_13", "formula_text": "L 0 = n i |x i | j pt i,j ,(8)" }, { "formula_coordinates": [ 5, 73.99, 219.29, 216.28, 23.37 ], "formula_id": "formula_14", "formula_text": "L = λ 1 L F + λ 2 L C + λ 3 L SS + λ 4 L ST + λ 5 L 0 ,(9)" }, { "formula_coordinates": [ 5, 341.35, 325.88, 184.19, 86.05 ], "formula_id": "formula_15", "formula_text": "TRO-R := 1 |X s | xi∈Xs |x i ∩ r i | |x i | , TRO-N := 1 |X ns | xi ∈Xns |x i ∩ r i | |x i | , Consistency := 1 - TRO-N TRO-R(10)" } ]
10.1007/s00521-021-06279-x
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b53", "b75", "b25", "b51", "b60", "b58", "b71", "b71", "b31", "b23", "b52", "b19", "b80", "b49", "b50", "b39", "b61", "b5", "b50" ], "table_ref": [], "text": "In the manufacture of mechanical products in complex industrial processes, defects such as internal holes [54], pits [76], abrasions [26], underfill [52] , and scratches [61] arise, due to failures in design, production equipment and production environment conditions. Products may also easily corrode [59] and be prone to fatigue because of daily use. These defects increase the costs incurred by enterprises, including warranty and reputation costs, shorten the service life of manufactured products, and result in an extensive waste of resources, and may cause substantial harm to people and their safety [72]. Hence, detecting defects is a core competency that enterprises should possess in order to improve the quality of the manufactured products without affecting production [72,32,24]. Automatic defect-detection technology has obvious advantages over manual detection [53]. It not only adapts to an unsuitable environment but also works in the long run with high precision and efficiency, and does not suffer from fatigue or variability as with a human inspection. Research on defect-detection technology can reduce the production cost, improve production efficiency and product quality, as well as lay a solid foundation for the intelligent transformation of the manufacturing industry [20].\nSupervised machine learning inspection methods, including classification and object detection, usually depend on access to a set of annotated images for training. This presents a problem in manufacturing, where defect rates are low, making it difficult or impossible to collect a representative set of images of the defects that are likely to be encountered [81,50]. Additionally because production line data generally consists of repeated images of nearly identical parts, there is a high risk of over-fitting to available data, resulting in a model that may perform well in training and validation, but is not robust to changes that can occur in production [51]. The overall result is models that can fail inexplicably under different conditions.\nAnomaly detection has been used to overcome some drawbacks of supervised learning, by training models to learn \"OK\" parts and flag any that deviate. This approach is most suitable for highly uniform images, but can become more challenging for parts with significant natural variation. In this case, many images are still required, small defects can be lost within natural variability, [40] and there is no guarantee that the model is learning a set of features that adequately represent the defects.\nDeep learning is often lauded for its generalization ability: models learn concepts rather than specific rules, and so can work on unseen examples. This is true for example in classifiers trained on the over 14 million images in Image-Net, that can predict class labels for subjects in new situations [62]. Generalization is achieved by exposing the model to a diverse variety of examples, incentivizing it to learn broad concepts of what differentiates, say, a bird from a plane, rather than looking for rules.\nFor many domain specific data sets, including manufacturing inspection, the model does not see a diverse set of examples. 
For defect inspection, the background image is often the same (the part itself) and defects may frequently be of the same type or in the same location. Under these circumstances, models are more apt to learn shortcuts that may not generalize to new defects or even causally relate to the presence of a defect in the image [6]. Such shortcuts can make the model less robust to variations in the data encountered in production [51].\nThis work addresses the over-fitting problem in defect inspection by training a ML model on a data set that contains diverse external data, featuring defect types that are of interest in a variety of contexts. We experimentally vary the background object for a class of defects and examine how classification and object detection performance compares with a training set containing near identical objects as would be encountered in a typical manufacturing inspection scenario." }, { "figure_ref": [], "heading": "Contribution and Organization", "publication_ref": [], "table_ref": [], "text": "Our contributions are summarized as follows:\n• We have created data sets and training models that incentivize learning at the concept level, by varying the background characteristics of the image. • We have shown that an object model trained on diverse data that includes defect instances in different contexts and materials can generalize to defects in new situations. • We have validated our approach using several experiments that show that a generalization effect.\n• We have shown that object detection models provide better performance compared to classifiers in terms of area under the receiver operator characteristic curve (AUC) when generalizing to new backgrounds. • We have used a clustering method to study the factors that affect generalization in order to train models that work under a wider range of conditions. The paper is structured as follows. Relevant related works are presented in Section 2. Defect detection based on machine learning methods, including unsupervised learning, traditional supervised learning and deep learning, are reviewed in Section 3. Performance metrics used for evaluating defect detection models are provided in Section 4. A summary of data augmentation is explained in Section 5. Section 6 explains the data collection and labeling and the model selected for experiments. Simulation results are provided in Sections 7 and 8. Section 9 identifies the key factors that can affect the generalization of models. Finally, conclusions and future works are given in Section 10." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b26", "b46", "b17", "b32", "b72", "b80", "b49", "b80", "b4" ], "table_ref": [], "text": "Efforts to improve generalization can largely be grouped into three categories: out-of-distribution (OOD) detection, synthetic data generation, and additional data collection. OOD detection flags model inputs that are outside the training data distribution, for example by looking at disagreement over an ensemble of models [27] or at the distance of an input from the training data in a modified feature space [47]. Anomaly detection can be considered equivalent to OOD flagging for this discussion. While a promising and valuable component of a visual inspection system, OOD detection methods can add complexity, require sufficient data to model the distribution well enough to minimize false positives, and require that the model is responsive to new anomaly features in order to register them as outside the distribution. 
For example, a model that learns to ignore a region or features in an image because they do not contribute to minimizing loss during training may not capture information from features from that region should a defect occur there, and so not \"see\" the image as OOD.\nSynthetic data generation includes programmatic image transformations that change the data distribution [18], alterations such as moving or pasting elements of other classes into images [33], and synthesis such as by style transfer [73] or GANs [81,50]. The limitation of these methods is that they are constrained by the variability of the available data and any manually added variation, and therefore may not be able to fully capture real differences that arise in production data. Additionally, generalization from synthetic to real data presents additional challenges and is not guaranteed [81].\nAdding more data can improve performance and generalization. Ref. [5] combined six separate data sets showing images of damaged concrete, and showed that models trained through transfer learning on the combined set had better overall performance. They did not evaluate generalization outside the six data sets. A challenge with this approach for many defect types is the limited number of data sets, the lack of diversity within each data set." }, { "figure_ref": [], "heading": "Problem description", "publication_ref": [], "table_ref": [], "text": "Computer vision is regularly proposed to inspect manufactured products, and in particular neural network models are often recommended for their ability to generalize to defects beyond the the examples encountered in training. In theory, Convolutional Neural Network (CNN) models have demonstrated generalization power. For example, classifiers and object detection models trained on publicly available data sets (such as Image-Net and COCO [https://cocodataset.org/] respectively) can perform their task on new instances that differ from the training data. Classifiers trained on Image-Net have been used to identify new images that are not part of that data set.\nIn practice, the generalization power of CV models trained on large research data sets is rarely extended to domain specific applications like manufacturing. While data sets like Image-Net have millions of different images and considerable variation, manufacturing data sets typically have (regardless of number) images of more-or-less the same component over and over again. Also, instances of defects or variations are rare for mature manufacturing processes, and can under-represent the total range of defects that could emerge. If we use such data that is highly homogeneous, class imbalanced, and has few examples of defects to train a machine learning model, the result may be highly over-fit to the data that is available. Such an over-fit model will not provide any of the generalization advantage of a model trained on diverse data, and in fact may be just as brittle, or more so, than a rules based classical computer vision model. The reason it could be worse is that trained models can find shortcuts that don't actually relate to any causal features of the image, whereas a model based on human-derived rules and some classical image manipulation is at least tuned to look at defects.\nA corollary to the above situation is the difficulty in validating an inspection model. 
With only an imbalanced, homogeneous data set available, it is challenging to convincingly demonstrate the ability of trained models to correctly identify new defects.\nConceptually, one can imagine that the success of training on big data sets like Image-Net could be transposed to inspection problems: a model that has been trained on thousands of images of different defects, for example, would be expected to be incentivized to learn some general features of a defect (rather than just memorize a shortcut) and be able to identify a defect in some new material, size, and orientation, when presented with one. Such a detector has big advantages in manufacturing, because it does not require training data to begin working, and has a lower risk of misidentifying a \"new\" defect, so long as it's still (in this example) a defect.\nOur approach is to show that an object model trained on diverse data that includes defect instances in different contexts and materials can generalize to defects in new situations, and that by using transfer learning, it is possible to train a performant model using only a few hundred instances. To make such models practical and reliable, we want to investigate how well they generalize, and how generalization can be improved, so that a user of the models can confirm they meet inspection standards and will catch defects that arise. This approach can be applied with any machine learning methods used to detect surface defects in manufactured goods.\nIn the next subsection, we will describe several machine learning methods used to detect manufacturing defects." }, { "figure_ref": [], "heading": "Relevant machine learning methods", "publication_ref": [], "table_ref": [], "text": "Researchers have developed multiple deep learning models to identify defects in industrial manufacturing. This section focuses on machine learning algorithms that are specifically used for defect detection, which are categorized into two subsections. The first subsection discusses defect detection methods based on classification, while the second focuses on methods based on object detection models." }, { "figure_ref": [], "heading": "Defect detection method based on classification", "publication_ref": [ "b63", "b22", "b21", "b30", "b67", "b67", "b82", "b77", "b6", "b68", "b29", "b78", "b8", "b45", "b13", "b27", "b34", "b61", "b64", "b16", "b18", "b69", "b81", "b9", "b43", "b52", "b33", "b48", "b24", "b20", "b16" ], "table_ref": [], "text": "Defect classification can predict the presence of a defect in an image and label the image accordingly. Classification can be binary -OK or not-good (NG) or multi-class if more labels are needed. The support vector machine (SVM) [64] and K nearest neighbor (KNN) [23] are two well-known classifiers that have been commonly applied to defect inspection.\nSupport vector machines (SVM) are a popular machine learning tool that are suitable for small and medium-sized data samples, as well as for nonlinear, high-dimensional classification problems. They have been extensively used in the industrial vision detection field. For instance, a real-time machine vision system that uses SVMs to learn complex defect patterns was proposed by the authors in [22]. In [31], a binary defect pattern classification method that combines a supervised SVM classifier with unsupervised self-organizing map clustering was proposed, in which SVMs are employed to classify and identify manufacturing defects. 
The method achieved over 90% classification accuracy, which outperformed the back-propagation neural network. However, this study only focused on binary map classification. For multi-class defect detection and classification, [68] proposed a method based on a multi-class SVM and a neural network classifier for weld radiographs. [68] established an improved SVM classification model based on a genetic algorithm for real-time analysis of spectrum data to accurately estimate different types of porosity defects in an aluminum alloy welding process. Moreover, SVM classifiers have played a significant role in inspecting surface defects in copper strips [83], monitoring and diagnosing defects in laser welding processes [78], defect detection in wheel bearings [7], and more.\nThe KNN algorithm has demonstrated greater simplicity and stability compared to neural networks [69,30]. In [79], the authors utilized a sequence of pre-processing techniques, including wavelet, threshold, and pathological operations, to prepare images for defect detection. They then employed the grey-level co-occurrence matrix (GLCM) method to extract features before using the KNN algorithm to classify defect images. The overall accuracy rate of this classification approach was around 96%.\nThe authors of [9] proposed a method that utilizes multiple image texture feature extraction techniques. They combined local binary pattern (LBP) with the grey level run length matrix (GLRLM) to extract image features and employed KNN and SVM for classification. The experimental results indicated that combining LBP and GLRLM can improve feature extraction performance, and SVM outperforms nearest neighbor methods for texture feature classification. Alternatively, an unsupervised algorithm can be applied for defect classification. A multi-objective fault signal diagnosis problem can be solved efficiently using a genetic algorithm-based method that relies on K-means clustering [46]. Additionally, [14] introduces an unsupervised defect detection algorithm for patterned fabrics. This algorithm divides a filtered image into a series of blocks and inputs the squared difference between each block median and the mean of all block medians into K-means clustering to classify the blocks as OK or NG. The overall detection success rate was found to reach 95%. Industrial production has greatly benefited from recent advances in neural networks, generally considered to be encompassed under the term Artificial Intelligence (AI). In particular, deep learning, which uses an increased number of network layers, has become the standard in supervised computer vision. Deep learning is able to automatically learn and extract features with strong predictive value for computer vision tasks, and can reduce the amount of feature engineering and fine-tuning required.\nThe convolutional neural network (CNN) is the most widely used architecture for classifying images. LeNet's emergence in 1998 marked the beginning of CNNs [28]. In 2012, AlexNet's success in the Image-Net competition popularized deep learning in computer vision, and numerous CNN models have since emerged, including Network-in-Network [35], VGGNet [62], GoogLeNet [65], ResNet [17], and DenseNet [19]. 
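To make the deep-learning classification setting concrete, the sketch below fine-tunes an ImageNet-pretrained ResNet-50 as a binary OK/NG surface-defect classifier in the spirit of the transfer-learning approaches reviewed in this subsection; the frozen backbone, optimizer, and hyperparameters are illustrative assumptions rather than any specific published configuration, and a recent torchvision release is assumed.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():                      # freeze the generic feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)     # new OK / NG classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
```

Only the new head is trained here, which is one reason such pipelines can work with relatively small labeled data sets.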
A CNN consists of three primary types of neural layers that perform distinct functions: convolutional layers that identify local feature combinations from the previous layer, pooling layers that consolidate semantically similar features, and fully connected layers that ultimately transform feature maps into a feature [70,82].\nThe CNN was initially designed for image analysis, making it suitable for automated defect classification in visual inspection [10]. In recent years, deep learning has been applied to industrial defect classification in various fields, including industrial production and electronic components. For supervised steel defect classification, a max-pooling CNN approach was proposed in [44]. The CNN outperformed SVM classifiers and functioned correctly with different types of defects. Surface quality affects product appearance and performance. In [53], a generic CNN-based approach for automatic visual inspection of dirt, scratches, burrs, and wears on part surfaces was presented. Pre-trained CNN models achieved improved accuracy on small data sets for a surface quality visual inspection system. A robust detection method based on a visual attention mechanism and feature-mapping deep learning was proposed in [34] to detect casting defects by X-ray inspection. A CNN extracted defect features from potentially defective regions to obtain a deep learning feature vector, and the similarity of suspicious defective regions was calculated using the feature vector. The method effectively solved the problem of false and missing inspections. A CNN-based inspection system was proposed in [49] to achieve defect classification in casting products, but the CNN deep learning model performed well only with a large volume of high-quality data. In [25], authors proposed an indicator to differentiate between defects and the background area for the classification of defect types in thin-film-transistor liquid-crystal display panels. For industrial production processes, automatic defect classification was performed based on a CNN.\nTransfer learning is a technique in machine learning that involves leveraging a pre-existing model in a different task. This method can address the issue of limited labeled data. To illustrate, a CNN-based transfer learning approach for automatic defect classification was suggested in [21] where it was demonstrated that the technique is practical even with small training data sets, achieving over 80% accuracy with just a few dozen labeled data points. For our study, we employ Image-Net as a pre-training data set, as previous research [17] has shown that ResNet-50, trained on Image-Net, serves as a reliable generic feature extractor and a suitable starting point for training." }, { "figure_ref": [], "heading": "Defect detection method based on object detection models", "publication_ref": [ "b83", "b12", "b15", "b11", "b57", "b7", "b36", "b14", "b10", "b76", "b47", "b54", "b40", "b55", "b37", "b56", "b2", "b73", "b28", "b62", "b70", "b1", "b42", "b79" ], "table_ref": [], "text": "Detecting objects in images is a crucial aspect of computer vision, which involves locating objects in an image using bounding boxes and determining their type. Object detection using deep learning methods can be broadly grouped into two categories. 
The first category generates regions and then classifies them to obtain various object categories, while the second category treats object detection as a regression or classification problem and uses a unified framework to directly obtain the final categories and locations [84]. Examples of region proposal-based methods include R-CNN [13], spatial pyramid pooling (SPP-net) [16], Fast R-CNN [12], Faster R-CNN [58], region-based fully convolutional networks (R-FCNs) [8], feature pyramid networks (FPNs) [37], and Mask R-CNN [15]. Examples of regression-and classification-based methods include MultiBox [11], AttentionNet [77], G-CNN [48], You Only Look Once (YOLO) [55], the single-shot MultiBox detector (SSD) [41], YOLOv2 [56], RetinaNet [38], YOLOv3 [57], and YOLOv4 [3]. Generally, region proposal-based methods have higher accuracy but are slower, while regression-and classification-based methods are faster but have lower accuracy.\nA two-stage fabric defect detector based on a cascaded mixed feature pyramid network (FPN) was proposed by the authors in [74]. They introduced a feature extraction backbone model that matches parameters with fitting degrees to address issues related to small defect feature space and background noise. Stacked feature pyramid networks were established to integrate cross-scale defect patterns for feature fusion and enhancement in a neck module. Moreover, they proposed cascaded guided region proposal networks (RPNs) to refine the anchor centers and shapes used for anchor generation. The experimental results demonstrated that this method can enhance the recognition performance across different scales.\nFaster R-CNN is a cutting-edge technique for real-time object detection that uses an RPN to generate ROIs instead of selective search. For instance, the authors of [29] proposed a Faster R-CNN method to perform intelligent fault detection for high voltage lines. The method selects a random region as the proposal region and then determines the corresponding category and location of a specific component after training. The experiments showed that the detection method, based on the ResNet-101 network model, was effective in identifying insulator damage and bird nests on a high voltage line. In [63], authors introduced an enhanced Faster R-CNN method for surface defect recognition in wheel hubs. They replaced the last maximum pooling layer with an ROI pooling layer that enabled the use of a single feature map for all the proposals generated by the RPN in a single pass. This technology allowed object detection networks to use an input feature map with a flexible size and output a fixed-size feature map. The experimental results demonstrated that the improved Faster R-CNN method achieved higher detection accuracy, at the expense of detection speed.\nThe object detection and recognition algorithm, \"You Only Look Once\" (YOLO), uses a deep neural network and fixed-grid regression to perform its functions quickly and is desigend for use in real-time applications [71]. Its unique feature is that it takes the entire image as input and directly determines the object's location and category at multiple positions in the image through regression. In [2], the YOLO/CNN model was employed by authors to detect defects on printed circuit boards (PCBs) and achieved a defect detection accuracy of 98.79%. However, the types of defects that can be detected by this method are limited and require optimization. 
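Off-the-shelf implementations make it straightforward to exercise such single-stage detectors; the sketch below runs inference with a COCO-pretrained RetinaNet from torchvision purely as a placeholder (for defect detection the classification head would be retrained on annotated defect boxes), and the 0.5 confidence threshold is an arbitrary choice for illustration.

```python
import torch
from torchvision.models.detection import (retinanet_resnet50_fpn,
                                           RetinaNet_ResNet50_FPN_Weights)

model = retinanet_resnet50_fpn(
    weights=RetinaNet_ResNet50_FPN_Weights.DEFAULT).eval()

image = torch.rand(3, 512, 512)               # stand-in for a part image
with torch.no_grad():
    pred = model([image])[0]                  # dict with boxes, labels, scores

keep = pred["scores"] > 0.5                   # arbitrary confidence threshold
print(pred["boxes"][keep], pred["labels"][keep])
```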
The authors in [43] proposed an active learning method for steel surface defect inspection using YOLOv2. Results from extensive experiments on a challenging public benchmark demonstrate that the proposed approach is highly effective.\nThe SSD algorithm is a hybrid of YOLO and Faster R-CNN that employs multi-scale regional features for regression. This approach maintains the high speed of YOLO while ensuring a certain level of accuracy. For example, in [80], a DF-SSD object detection method based on DenseNet and feature fusion was proposed to replace VGG-16 in SSD. A fusion mechanism for multiscale feature layers was also designed to effectively integrate low-level visual features and high-level semantic features. The experimental results indicated that the proposed DF-SSD method could achieve advanced performance in the detection of small objects and objects with specific relationships. However, for this work, we will be using RetinaNet, which is an SSD variant described in detail in Section 6.1." }, { "figure_ref": [], "heading": "Performance metrics", "publication_ref": [], "table_ref": [], "text": "Selecting the right metrics is key to evaluating defect inspection models, as different metrics may prioritize different outcomes, such as the prevalence of different failure modes. In the following section, we will outline some commonly used performance evaluation metrics in the defect detection field." }, { "figure_ref": [], "heading": "Confusion matrix", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "In an inspection system designed to identify defective parts, we assume two possible labels for a part: it can either be deemed \"OK\" or \"NG\" (not good). An image is labelled as NG if it contains one or more defects. When the inspection system makes a prediction about the status of a part, it can either predict that the part is OK or predict that it is NG. To evaluate the accuracy of the inspection system, we can tabulate the image-level prediction results on a test set as in Table 1 that shows the possible outcomes of the system's predictions. The table has four quadrants, with the predicted status of the image on one axis and the actual (\"True\") status of the image on the other.\nThe top left quadrant represents true positives (TP), which occur when the inspection system correctly identifies an OK image as OK. The bottom right quadrant represents true negatives (TN), which occur when the system correctly identifies an NG image as NG. The top right quadrant represents false negatives (FN), which occur when the system incorrectly identifies an OK image as NG. This can result in unnecessary costs or delays if the part is removed from the production line when it is actually acceptable. The bottom left quadrant represents false positives (FP), which occur when the system incorrectly identifies an NG image as OK. An automated inspection system will trade off throughput (minimizing false positives that require manual inspection) with the chance of shipping a defective part (a false negative). The latter is usually considered a more serious error because it means that a defective part may go unnoticed and end up in the final product or shipped to a customer. However the false positive rate will generally determine whether automating inspection can be economically viable, because it this rate will determine how much manual work is avoided. 
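To make these definitions concrete, the short sketch below tallies the four image-level counts and the derived rates for a small hypothetical batch of predictions, following the Table 1 convention in which an OK image is the positive class; the labels and numbers are illustrative only.

```python
# Illustrative only: image-level confusion-matrix counts and rates under the
# Table 1 convention, where "OK" is treated as the positive class.

def confusion_counts(y_true, y_pred):
    """y_true, y_pred: lists of 'OK'/'NG' labels for each inspected image."""
    tp = sum(t == "OK" and p == "OK" for t, p in zip(y_true, y_pred))
    tn = sum(t == "NG" and p == "NG" for t, p in zip(y_true, y_pred))
    fn = sum(t == "OK" and p == "NG" for t, p in zip(y_true, y_pred))  # good part flagged for review
    fp = sum(t == "NG" and p == "OK" for t, p in zip(y_true, y_pred))  # defective part passed
    return tp, fp, fn, tn

# Hypothetical batch: 6 good parts and 2 defective parts.
y_true = ["OK"] * 6 + ["NG"] * 2
y_pred = ["OK", "OK", "OK", "OK", "OK", "NG", "OK", "NG"]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)
tpr = tp / (tp + fn)  # fraction of good parts passed automatically
fpr = fp / (fp + tn)  # fraction of defective parts that would slip through
print(tp, fp, fn, tn, round(tpr, 2), round(fpr, 2))  # 5 1 1 1 0.83 0.5
```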
" }, { "figure_ref": [], "heading": "Predicted", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "ROC, AUC, IOU, and AP", "publication_ref": [ "b44", "b3" ], "table_ref": [ "tab_0" ], "text": "For a model that outputs a score, assumed to be between 0 and 1 and representing predicted probability that there is a defect in the image, the labels in Table 1 depend on the threshold selected to map a score to OK or NG. To evaluate detection performance, the ROC (Receiver Operating Characteristic) curve [45] and AUC (Area Under Curve) [4] can also be used. The ROC curve plots the relationship between true positive (TP) rate and false positive (FP) rate, as demonstrated in Fig. 1, which displays two ROC curves. Generally, when the ROC curve is closer to a step function, as in Experiment 2, the model is deemed to perform better, with low FP and high TP rates. The AUC, which is the area under the ROC curve, is often used to compare two ROC curves from different models. When employing models like SSD for object detection, IOU (Intersection over Union) is frequently used to determine if an object is accurately localized. IOU measures the overlap rate between the bounding box provided by the model and the ground truth bounding box. If the IOU exceeds a predefined threshold, the object detection is considered successful. Object detection models for natural scenes often use scores of .25, .5, or .75 as thresholds to indicate success. For manufacturing inspection, it is more important that a defect be found than accurately bounded. Furthermore, some defect types, such as cracks, may not lend themselves well to being measured by IOU due to their aspect ratio and possibly subjective extent.\nIOU = Detection Result ∩ Ground Truth Detection Result ∪ Ground Truth(1)\nThe average precision (AP) metric used in object detection algorithms is based on the precision-recall curve, which shows the trade-off between the precision and recall of the detections for different threshold values. AP measures the area under this curve and provides a summary of the accuracy of the detections for all possible threshold values.\nTo compute the AP, we first calculate the precision and recall for each threshold value of the confidence score (i.e., the score assigned to each detected object by the algorithm). We then interpolate the precision values at each recall level to obtain a smooth curve and compute the area under this curve.\nThe equation for AP can be written as:\nAP = 1 |D| |D| d=1 1 0 p d (r)dr (2\n)\nwhere D is the set of test images, d indexes the images in D, p d (r) is the precision at recall level r for image d, and |D| is the number of images in the test set.\nThe precision at recall level r for image d can be computed as:\np d (r) = max r≥r p d (r)(3)\nwhere p d (r) is the precision at recall level r for image d.\nIn practice, the AP is often computed for a range of threshold values and averaged over all the test images to obtain a single value that summarizes the overall performance of the algorithm.\nWhile AP is a useful metric for evaluating the overall performance of object detection algorithms, it may not always be the most appropriate metric for inspection tasks where minimizing false negatives is critical. This is because AP penalizes false negatives (i.e., missed detections) more severely than false positives (i.e., incorrect detections), which may not be desirable in some inspection scenarios. 
Additionally, AP can be sensitive to IOU as discussed which is less relevant for flagging defects.\nIn this work, we map object detection scores to (AUC,ROC) scores by assigning image level probabilities equal to the lowest probability predicted for any bounding box by the object detection model. This allows us to compare object performance with that of a classifier, and provides a more suitable score for evaluating inspection performance." }, { "figure_ref": [], "heading": "Data sets for metric evaluation", "publication_ref": [], "table_ref": [], "text": "A common practice for reporting metrics is to divide a base data set in to two or three subsets termed train, validation, and optionally test or holdout. The model is trained on the train set and evaluated on the validation set that was not used in training. Optionally, the holdout set is used to confirm performance of a model that has been through multiple fine-tuning sessions on the train and validation data, making sure the model has not been inadvertently over-fit to the validation set. The model would be considered over-fit if the holdout performance is lower than on the other sets.\nUnless otherwise specified, it is usually assumed that the subsets are randomly split from some initial pool of data. Such evaluation implies that the distribution of the data is the same in all two or three subsets, and that there will not be \"new\" or \"unseen\" data in the validation or holdout spits. This differs from real cases where data may drift subtly or a long tailed distribution may result in never-before-seen defects in production. As such, performance on equal splits, even with a test split included, may not be indicative of actual production performance." }, { "figure_ref": [], "heading": "Data Augmentation", "publication_ref": [], "table_ref": [], "text": "Training a deep neural network model typically requires a significant amount of data, which can be expensive and time-consuming to acquire and label. Data augmentation is a useful technique that addresses this challenge by substantially increasing the diversity of available training data without the need for additional data collection. Common data augmentation methods include image rotation, flipping, mirroring, noise addition, and illumination alteration. These techniques are often combined to generate even more varied data." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "The aim of our experiments was to compare object detection and classification models' performance at identifying manufacturing defects in new contexts and on newly collected test sets, as a measure of the generality of the model. We additionally studied the role of training data, comparing a \"uniform\" data set consisting of near identical parts and backgrounds, which is typical of manufacturing, with a data set containing similar defects across a wide variety of parts." }, { "figure_ref": [], "heading": "Model selection 6.1.1. ResNet Backbone", "publication_ref": [ "b16", "b74" ], "table_ref": [], "text": "A ResNet-50 backbone was used in this study, configured as a binary classifier for classification experiments and as part of the RetinaNet object detetion model as described below. ResNet-50 is a commonly used architecture in machine learning computer vision tasks, and consists of repeated convolutional layers short circuited with identity operations, originally formulated to improve ease of training for very deep networks [17]. 
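As a rough illustration of configuring such a backbone as a binary OK/NG classifier via transfer learning, a minimal torchvision sketch is shown below; the weight-loading call and the single training step are assumptions for illustration, not necessarily the exact configuration used in this study.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative sketch: ImageNet-pretrained ResNet-50 with its final fully
# connected layer replaced by a two-class (OK/NG) head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step on a dummy batch of 448x448 RGB images.
images = torch.randn(16, 3, 448, 448)
labels = torch.randint(0, 2, (16,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

When labelled images are very scarce, the pretrained backbone can also be frozen so that only the new head is trained.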
The use of this specific classifier was based on its proven success in image classification tasks, and the binary classifier head allowed for the specific classification needs of this study.\nFor the object detection model, we employed the RetinaNet SSD, based on the Detectron2 library [75]. This library is Facebook AI Research's software offering state-of-the-art segmentation and detection algorithms. All models used a RetinaNet model pre-trained on COCO as the starting point for training, with all model parameters allowed to train. The following subsection presents a concise overview of the RetinaNet detector." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3", "fig_3" ], "heading": "RetinaNet detector", "publication_ref": [ "b41", "b35", "b38", "b38", "b38" ], "table_ref": [], "text": "RetinaNet integrates the strengths of several object detection approaches, most notably the \"anchor\" concept from the RPN and the use of feature pyramids as in the Single Shot MultiBox Detector (SSD) [42] and Feature Pyramid Networks (FPN) [36]. The RetinaNet model consists of three components: a convolutional neural network for feature extraction and two sub-networks for classification and box regression [39]. Figure 2 illustrates the model structure, with Figure 2a depicting the ResNet-50 backbone network, Figure 2b showing how the FPN serves as a decoder to generate a multi-scale convolutional feature pyramid, and Figure 2c showing the two sub-networks employed for classification and bounding box regression. Operating on these feature maps, the classification and box regression sub-networks are built from straightforward convolutional operations. The classification sub-network is responsible for object classification, while the box regression sub-network returns the bounding box position. The advantage of the FPN is that it leverages the hierarchical structure of the deep convolutional network to represent multi-scale objects, enabling the detector to make more accurate position predictions. In this paper, ResNet-50 is utilized to extract image features [39]. Compared with two-stage methods, the lower accuracy of one-stage detectors is primarily due to the extreme imbalance between foreground and background samples when training the dense detector, which produces an abundance of negative samples during the training process. To address this class imbalance, the focal loss is employed. It modifies the standard cross-entropy by reducing the loss assigned to well-classified examples [39]. With focal loss supervision, RetinaNet achieves remarkable improvements on generic object detection benchmarks. Equation (4) gives the focal loss used to enhance detection accuracy. The α-balanced variant of the focal loss is defined as:\nFL(p_t) = -α_t (1 - p_t)^γ log(p_t) (4)\nThe hyperparameters are α_t ∈ [0,1], a weight that addresses class imbalance, and γ, which adjusts the rate at which easy examples are down-weighted. To simplify notation, p_t is defined as:\np_t = p if y = 1, and p_t = 1 - p otherwise. (5)\nwhere p ∈ [0,1] is the probability estimated by the model and y = 1 specifies the ground-truth class." }, { "figure_ref": [], "heading": "Experimental setup", "publication_ref": [ "b0", "b74" ], "table_ref": [], "text": "The classifier experiments in this study used the torchvision [1] implementation of a ResNet-50 classifier, pre-trained on Image-Net. 
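Before turning to the training configuration, it may help to make the focal loss of Equation (4) concrete. The sketch below is a common binary formulation (note the leading negative sign, which keeps the loss non-negative); the default α and γ values are illustrative rather than the settings used here.

```python
import torch

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-8):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the foreground class, shape (N,)
    y: ground-truth labels in {0, 1}, shape (N,)
    """
    y = y.float()
    p_t = p * y + (1.0 - p) * (1.0 - y)              # Equation (5)
    alpha_t = alpha * y + (1.0 - alpha) * (1.0 - y)  # alpha-balancing
    return (-alpha_t * (1.0 - p_t) ** gamma * torch.log(p_t + eps)).mean()

# Hypothetical predictions: one easy positive, one harder negative.
p = torch.tensor([0.95, 0.60])
y = torch.tensor([1, 0])
print(focal_loss(p, y))
```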
During training, the classifier model was configured with the following parameter settings: Epoch: 200, learning rate: 0.0001, batch size per image: 16, and image size: 448.\nTo conduct object detection experiments, we used RetinaNet model from the Detectron2 library [75]. The model used a ResNet-50 backbone pre-trained on COCO. The object detection model was fine tuned with the following parameter settings: Epoch: 9000, learning rate: 0.00025, and batch size per image: 128. All experiments were conducted on a single K80 or V100 GPU." }, { "figure_ref": [ "fig_10", "fig_11", "fig_12" ], "heading": "Data collection and Labeling", "publication_ref": [ "b66" ], "table_ref": [ "tab_1" ], "text": "A first data set consisting of photographs of 200 substantially identical metal \"Mending plates\" was assembled. Half of these parts were damaged by removing a roughly crescent shape portion of material using a 3 mm diameter round file, to simulate missing material caused by a manufacturing defect. A second data set consisting of photographs of duplicate copies of 132 flat metal parts was constructed, for 264 total. One of each part was damaged by removing a roughly crescent shape portion of material using a 3 mm diameter round file as with the \"Mending plates\". The parts were photographed in four orientations each (undergoing planar rotations of nominally 0, 90 degrees, 180 degrees and 270 degrees). Fig. 6 shows examples of the photographs. 220 of the parts were purchased, had defects introduced, and were photographed, as a batch, and comprise a train and validation set. The remaining 44 were purchased, had defects introduced, and photographed separately, and comprise a holdout set, as shown in Fig. 7. Fig. 8 show a grid of examples of the defects introduced into the metal parts for the \"Generic\" datset. These defects are all of a similar type, but vary in expression based on the location, size, and other random variations. The images in both the \"Mending plates\" and \"Generic\" data sets have a resolution of 2400×2400 pixels and were manually annotated with bounding boxes using labelstudio [67].\nTable 2 provides a summary of the data set composition, including the names, original size, defect types, and whether weather augmentation was utilized. " }, { "figure_ref": [], "heading": "Data set", "publication_ref": [], "table_ref": [], "text": "Original data set size defects type Data Augmentation \"Mending plates\" 140 underfill and OKs Yes \"Generic\" 110 underfill and OKs Yes" }, { "figure_ref": [], "heading": "Comparative results between a classifier and an object detection model", "publication_ref": [], "table_ref": [], "text": "A series of experiments with the \"Mending plates\" and \"Generic\" data sets was conducted to compare the performance of a classifier with that of an object detection model. The primary objective was to evaluate how effectively both models could learn to identify specific defects out of context and in held out examples, with the aim of improving their ability to detect such defects in new scenarios." }, { "figure_ref": [], "heading": "Classifier", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "Table 3 and Table 4 provide the results for a set of experiments of the classifier for the two created data sets, the \"Mending plates\" and the \"Generic\" one. For the \"Mending plates\" data set, the classifier performed almost perfectly over the validation set but failed over the holdout set of \"Mending plates\". 
The preparation and appearance of the holdout set is the same as for the validation and training set, except it was acquired on another day and not used in training. This simulates the case where a trained model is used in production. The poor performance suggests that the trained model is over-fit to spurious features in the training data, and despite performing well on the validation set (which is randomly split from the same pool as the training data) it is sufficiently fragile that it fails on seemingly identical data acquired later. " }, { "figure_ref": [], "heading": "Experiment Train data", "publication_ref": [], "table_ref": [], "text": "Test data AUC 1 \"Mending plates\" validation set of \"Mending plates\" 0.995 2 \"Mending plates\" holdout set of \"Mending plates\" 0.354\nNow, using the \"Generic\" data set, we can easily observe how the classifier performed well over the validation and the holdout set of the \"Generic\" data set. Compared with the \"Mending plates\", the overall performance is lower (0.925 vs 0.995 on the validation set) but remains consistent to the holdout set. The key difference here is that the defects appeared in a diverse set of parts, forcing the model to focus on learning a set of features that was invariant to the background -i.e. the defect itself. Thus the model continues to perform even on \"new\" data acquired at a different time. " }, { "figure_ref": [], "heading": "Experiment Train data Test data AUC", "publication_ref": [], "table_ref": [], "text": "5 \"Generic\" validation set of \"Generic\" 0.925 6 \"Generic\" holdout set of \"Generic\" 0.923" }, { "figure_ref": [], "heading": "Object detection model", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Table 5 shows us that object detection model trained with the \"Mending plates\" data set is able to perform near perfectly over both the validation and the holdout set of \"Mending plates\". In both cases, the AUC is 0.99. Compared with the classifier, the object detection model is able to generalize to the holdout set, suggesting that the additional requirement to learn the bounding box location pushes the model to identify robust features that extend to new data, compared with the classifier. " }, { "figure_ref": [], "heading": "Experiment Train data", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Test data AUC 1 \"Mending plates\" validation set of \"Mending plates\" 0.99 2 \"Mending plates\" holdout set of \"Mending plates\" 0.99\nSimilar to the experiment conducted before, Table 6 provides the results of object detection model trained on \"Generic\" data set and tested on the validation and holdout data set. The performance now approaches that for obtained on the more uniform \"Mending plates\" data, and generalizes to the holdout set." }, { "figure_ref": [], "heading": "Generalization of the classifier and object detection models", "publication_ref": [], "table_ref": [], "text": "We use the term generalization to refer to a model's ability to learn general classification or detection rules that are robust to other changes in the data. Here we explore the generalization capabilities of a " }, { "figure_ref": [], "heading": "Experiment Train data Test data AUC", "publication_ref": [], "table_ref": [], "text": "5 \"Generic\" validation set of \"Generic\" 0.994 6 \"Generic\" holdout set of \"Generic\" 0.995\nclassifier and an object detection model on the \"Mending plates\" and \"Generic\" data sets. 
The aim is to investigate how well the models can generalize from one data set to another." }, { "figure_ref": [], "heading": "Classifier", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "Table 7 provides the results of the predictive power that the classifier has to generalize from \"Mending plates\" to the \"Generic\" data set and vice-versa. As observed from the results, the classifier failed to perform well (i.e., AUC equals to 0.5) going from from \"Mending plates\" to the \"Generic\" data set so no generalization was obtained in this case. However, it was able to generalize from the the \"Generic\" data set to the \"Mending plates\" data set with an AUC of almost 0.91 which is in keeping with the validation and holdout set results previously obtained for this classifier. We attribute this to the model's having learned a robust set of features for identifying defects. We conclude from this performance that it is actually better for in-production performance to train a classifier on a diverse set of data without examples of the part being inspected than to train on a uniform set of images of the same part. In practice, combining data sets to include diverse examples as well as real images of the part may be most appropriate. " }, { "figure_ref": [], "heading": "Experiment Train data Test data AUC", "publication_ref": [], "table_ref": [], "text": "4 \"Mending plates\" holdout set of \"Generic\" 0.509 7 \"Generic\" holdout set of \"Mending plates\" 0.911" }, { "figure_ref": [], "heading": "Object detection model", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Table 8 provides the results of the object detection model generalizing from \"Mending plates\" to the \"Generic\" data set and vice-versa. An object detection model trained on only the \"Mending plates\" is here able to achieve an AUC of 0.848 on the \"Generic\" data set. While this is lower than for a detector trained on the \"Generic\" train split, it shows a high degree of generalization to the defect type, despite only having seen examples of the defect in the \"Mending plates\". This suggests the bounding box prediction requirement and associated training labels are causing the model to learn a robust set of features that generalize to defects in different circumstances, even from relatively uniform examples. When trained on the \"Generic\" data set and applied to the \"Mending plates\", the model exhibits a perfect performance with an AUC of 1.0. This suggests that for a production model inspecting uniform parts, the combination of an object detection model and diverse training data can give the best performance. " }, { "figure_ref": [], "heading": "Experiment Train data", "publication_ref": [], "table_ref": [], "text": "Test data AUC 4 \"Mending plates\" holdout set of \"Generic\" 0.848 7 \"Generic\" holdout set of \"Mending plates\" 1.00" }, { "figure_ref": [], "heading": "Factors that affect generalization", "publication_ref": [ "b65", "b59" ], "table_ref": [], "text": "The main objective here is to identify the key factors that can affect the generalization of models, in order to create more representative data sets that focus on those significant factors. To achieve this, several steps are involved, beginning with image pre-processing, clustering, and finally determining the optimal number of clusters using a goodness measure. Clusters are identified using the clustimage package [66] with agglomerative approach with the Euclidean distance metric and ward linkage. 
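As a rough sketch of this clustering step, combined with the silhouette-based selection of the cluster count described next, the snippet below uses scikit-learn and simple resized-grayscale features as an illustrative stand-in for the clustimage pipeline; the feature extraction in particular is an assumption, not the exact processing used by that package.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def load_features(paths, size=(64, 64)):
    # Reduce each image to a small grayscale vector as a simple feature.
    feats = [np.asarray(Image.open(p).convert("L").resize(size), dtype=float).ravel()
             for p in paths]
    return np.stack(feats)

def cluster_images(paths, k_range=range(3, 26)):
    # Ward-linkage agglomerative clustering; the cluster count is chosen by
    # maximizing the silhouette score over the search range.
    X = load_features(paths)
    best_k, best_score, best_labels = None, -1.0, None
    for k in k_range:
        labels = AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(X)
        score = silhouette_score(X, labels, metric="euclidean")
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_score, best_labels
```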
The Silhouette score [60] is used to assess both the clustering and the optimal number of clusters, with a search range between 3 and 25. The silhouette score ranges from -1 to 1, with higher values indicating better clustering. We chose the number of clusters that maximizes the silhouette score, which ensures that the clusters are well-separated and internally cohesive. Finally, the performance of the model on the testing data set is evaluated using different training data sets generated through clustering. This approach produces multiple training data sets for the subsequent experiments, each with unique characteristics and potentially different results." }, { "figure_ref": [], "heading": "Clustering on the \"Generic\" data set", "publication_ref": [ "b59" ], "table_ref": [ "tab_8" ], "text": "We analyzed 110 images from the \"Generic\" data set by clustering them and then assessed the level of generalization by testing the resulting models on the \"Mending plates\" data set. Using the \"silhouette\" [60] evaluation method, we determined that the best number of clusters was three; the clusters are illustrated below in Figs. 9, 10, and 11, respectively. We can see that the clusters roughly represent different shapes that are present in the data set. Cluster 1 is the most diverse and includes smaller items with more white space around them. Cluster 2 includes bigger items, in particular some with large holes such as the metal \"8\" and the outlet covers. Cluster 3 contains many larger round objects.\nWe produced three separate training data sets by excluding, in turn, the images that belong to each cluster, and trained a model on each of the reduced training data sets. The results obtained before and after applying this approach are presented in Table 9. Experiment 0, which did not employ any clustering, included all images in the training data set. In Experiment 1, images belonging to cluster 1 were excluded from the training data set. In Experiment 2, images belonging to cluster 2 were excluded, and in Experiment 3, images belonging to cluster 3 were excluded. These exclusions represented 80%, 10%, and 9% of the entire training data set, respectively.\nOmitting cluster 1 in Experiment 1 has the largest impact, dropping the performance from an AUC of 0.995 to 0.923. However, even with 80% of the data removed, this performance drop only amounts to about 7%.\nThe results of Experiment 2 indicate that cluster 2 has no impact on generalization, as the AUC remained unchanged compared with the case of not employing any clustering.\nFinally, the results of Experiment 3 indicate that cluster 3 contributes to the overall performance, as we observed a drop in the AUC compared with the case of not employing any clustering, from 0.995 to 0.985.\nOverall, these results suggest that we can eliminate some images from the training data without decreasing performance, and that it may be possible to improve performance by selectively including additional data in clusters to which the performance is sensitive. A higher number of clusters could be used to fine-tune recommendations for adding or omitting data." }, { "figure_ref": [], "heading": "Conclusion and future work", "publication_ref": [], "table_ref": [], "text": "We have examined how data diversity contributes to the robustness of classification and object detection models in a typical manufacturing inspection context. 
When trained on repetitive data, binary OK/NG classifiers are brittle and may not even generalize to seemingly identical held-out data, as demonstrated by our experiments on \"Mending plates\" images. A classifier can be made more robust by training on diverse data where a defect is presented on different backgrounds. In our experiments, we found that training on similar defects in diverse images of flat metal parts results in a 0.92 classifier AUC on a validation set that is maintained when predicting on a held-out data set. We conclude that the diverse data forces the model to learn an invariant set of characteristics of the training data that generalizes to new images. We also show that a classifier trained on diverse data performs equally well on a uniform data set. Put together, this provides a recipe for training a model with good in-distribution performance that is known to be robust to defects in new contexts.\nBy moving from a classifier to an object detection model, we can further improve performance. The additional requirement to localize the defect acts as a constraint that forces the model to learn a robust set of features. Even a model trained on a highly uniform data set was shown to generalize well to a diverse data set (AUC of 0.85). Combining object detection with a diverse training data set yields the best performance.\nWe used a clustering method to examine which data was most important to model performance; this can minimize the effort required in data collection and labeling. Certain clusters were found to have a lower importance for performance, which can inform data collection.\nThis study focused on a single defect type, representing missing material. The background part was varied to examine how well the model generalizes. In ongoing work we are studying how well defects generalize between different defect types. The overall goal is a recipe for data collection and model validation that ensures robust performance that can be sustained on newly encountered defects." }, { "figure_ref": [], "heading": "Acknowledgment", "publication_ref": [], "table_ref": [], "text": "Ahmad Mohamad Mezher is a recipient of a McCain Postdoctoral Fellowship in Innovation with the Electrical and Computer Engineering department at the University of New Brunswick (UNB)." } ]
Visual quality inspection in high performance manufacturing can benefit from automation, due to cost savings and improved rigor. Deep learning techniques are the current state of the art for generic computer vision tasks like classification and object detection. Manufacturing data can pose a challenge for deep learning because the data is highly repetitive and there are few images of defects or deviations to learn from. Deep learning models trained with such data can be fragile and sensitive to context, and can under-detect new defects not found in the training data. In this work, we explore training defect detection models to learn specific defects out of context, so that they are more likely to be detected in new situations. We demonstrate how models trained on diverse images containing a common defect type can pick defects out in new circumstances. Such generic models could be more robust to new defects not found in the data collected for training, and can reduce data collection impediments to implementing visual inspection on production lines. Additionally, we demonstrate that object detection models trained to predict a label and bounding box outperform classifiers that predict a label only, on held-out test data typical of manufacturing inspection tasks. Finally, we study the factors that affect generalization in order to train models that work under a wider range of conditions.
A Novel Strategy for Improving Robustness in Computer Vision Manufacturing Defect Detection
[ { "figure_caption": "TP) False Negative (FN) NG False Positive (FP) True Negative (TN)", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Two ROC curves for two different models.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "ResNet", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The structure of RetinaNet.(a) Backbone network; (b) decoder; (c) subnet.", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 below shows 25 example images, showing the full structure of the part being inspected.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Mending plates data set.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 44Fig. 4 shows close-up examples of the defects introduced. The defects are of a similar nature but vary in size, position, and orientation. The data consists of 160 training and validation images, consisting of parts that were purchased, had defects introduced, and were photographed, as one batch. Another 40, (\"the holdout set\") were purchased, had defects introduced, and were photographed on a separate occasion and not involved in model training.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Zoomed areas showing defects introduced into a subset of the parts.", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 55Fig. 5 below shows 8 example images of the Mending holdout set. 
They appear identical to the train and validation sets.", "figure_data": "", "figure_id": "fig_8", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Holdout Mending plates data set.", "figure_data": "", "figure_id": "fig_9", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Generic data set.", "figure_data": "", "figure_id": "fig_10", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Holdout \"Generic\" data set.", "figure_data": "", "figure_id": "fig_11", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Zoomed areas showing defects introduced into a subset of the parts (\"Generic\" data set).", "figure_data": "", "figure_id": "fig_12", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Cluster 1: images with varying subjects and compositions, featuring smaller items and more negative space.", "figure_data": "", "figure_id": "fig_13", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Cluster 2: images with larger, perforated items such as metal numbers and outlet covers.", "figure_data": "", "figure_id": "fig_14", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Cluster 3: images featuring large circular objects, with some variations in size.", "figure_data": "", "figure_id": "fig_15", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Confusion matrix for a binary classification problem", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Data sets composition.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Set of experiments with the classifier.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Set of experiments with the classifier.", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Set of experiments with the object detection model.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Set of experiments with the object detection model.", "figure_data": "", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Set of experiments with the classifier.", "figure_data": "", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Set of experiments with the object detection model.", "figure_data": "", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Set of experiments over the obtained clusters.", "figure_data": "Number", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "", "figure_data": "000.9951890.9232110.9953100.985", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" } ]
Ahmad Mohamad Mezher; Andrew E Marble
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Label Studio: Data labeling software", "year": "2017" }, { "authors": "V A Adibhatla; H.-C Chih; C.-C Hsu; J Cheng; M F Abbod; J.-S Shieh", "journal": "Electronics", "ref_id": "b1", "title": "Defect detection in printed circuit boards using you-only-look-once convolutional neural networks", "year": "2020" }, { "authors": "A Bochkovskiy; C.-Y Wang; H.-Y M Liao", "journal": "", "ref_id": "b2", "title": "Yolov4: Optimal speed and accuracy of object detection", "year": "2020" }, { "authors": "A P Bradley", "journal": "Pattern Recognition", "ref_id": "b3", "title": "The use of the area under the roc curve in the evaluation of machine learning algorithms", "year": "1997" }, { "authors": "Z A Bukhsh; N Jansen; A Saeed", "journal": "Neural Computing and Applications", "ref_id": "b4", "title": "Damage detection using in-domain and cross-domain transfer learning", "year": "2021-12" }, { "authors": "B Carter; S Jain; J Mueller; D Gifford", "journal": "", "ref_id": "b5", "title": "Overinterpretation reveals image classification model pathologies", "year": "2020" }, { "authors": "B Chen; Z Yan; W Chen", "journal": "Entropy", "ref_id": "b6", "title": "Defect detection for wheel-bearings with time-spectral kurtosis and entropy", "year": "2014-01" }, { "authors": "J Dai; Y Li; K He; J Sun", "journal": "", "ref_id": "b7", "title": "R-fcn: Object detection via region-based fully convolutional networks", "year": "2016" }, { "authors": "S Das; U R Jena", "journal": "", "ref_id": "b8", "title": "Texture classification using combination of lbp and glrlm features along with knn and multiclass svm classification", "year": "2016" }, { "authors": "W Du; H Shen; J Fu; G Zhang; X Shi; Q He", "journal": "Journal of Intelligent Manufacturing", "ref_id": "b9", "title": "Automated detection of defects with low semantic information in x-ray images based on deep learning", "year": "2021-01" }, { "authors": "D Erhan; C Szegedy; A Toshev; D Anguelov", "journal": "", "ref_id": "b10", "title": "Scalable object detection using deep neural networks", "year": "2014" }, { "authors": "R Girshick", "journal": "", "ref_id": "b11", "title": "Fast r-cnn", "year": "2015" }, { "authors": "R Girshick; J Donahue; T Darrell; J Malik", "journal": "", "ref_id": "b12", "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "year": "2014" }, { "authors": "A A Hamdi; M S Sayed; M M Fouad; M M Hadhoud", "journal": "", "ref_id": "b13", "title": "Unsupervised patterned fabric defect detection using texture filtering and k-means clustering", "year": "2018" }, { "authors": "K He; G Gkioxari; P Dollár; R Girshick", "journal": "", "ref_id": "b14", "title": "Mask r-cnn", "year": "2017" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b15", "title": "Spatial pyramid pooling in deep convolutional networks for visual recognition", "year": "2015" }, { "authors": "", "journal": "", "ref_id": "b16", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "D Hendrycks; S Basart; N Mu; S Kadavath; F Wang; E Dorundo; R Desai; T Zhu; S Parajuli; M Guo; D Song; J Steinhardt; J Gilmer", "journal": "", "ref_id": "b17", "title": "The many faces of robustness: A critical analysis of out-of-distribution generalization", "year": "2021" }, { "authors": "G Huang; Z Liu; L Van Der Maaten; K Q Weinberger", "journal": "", "ref_id": "b18", "title": "Densely 
connected convolutional networks", "year": "2017" }, { "authors": "S.-H Huang; Y.-C Pan", "journal": "Computers in Industry", "ref_id": "b19", "title": "Automated visual inspection in the semiconductor industry: A survey", "year": "2015" }, { "authors": "K Imoto; T Nakai; T Ike; K Haruki; Y Sato", "journal": "IEEE Transactions on Semiconductor Manufacturing", "ref_id": "b20", "title": "A cnn-based transfer learning method for defect classification in semiconductor manufacturing", "year": "2019" }, { "authors": "H Jia; Y Murphey; J Shi; T.-S Chang", "journal": "", "ref_id": "b21", "title": "An intelligent real-time vision system for surface defect detection", "year": "2004" }, { "authors": "J M Keller; M R Gray; J A Givens", "journal": "IEEE Transactions on Systems, Man, and Cybernetics", "ref_id": "b22", "title": "A fuzzy k-nearest neighbor algorithm", "year": "1985" }, { "authors": "D.-H Kim; T J Y Kim; X Wang; M Kim; Y.-J Quan; J W Oh; S.-H Min; H Kim; B Bhandari; I Yang; S.-H Ahn", "journal": "International Journal of Precision Engineering and Manufacturing-Green Technology", "ref_id": "b23", "title": "Smart machining process using machine learning: A review and perspective on machining industry", "year": "2018-08" }, { "authors": "M Kim; M Lee; M An; H Lee", "journal": "Journal of Intelligent Manufacturing", "ref_id": "b24", "title": "Effective automatic defect classification process based on cnn with stacking ensemble model for tft-lcd panel", "year": "2020-06" }, { "authors": "H Kong; J Yang; Z Chen", "journal": "IEEE Transactions on Industrial Informatics", "ref_id": "b25", "title": "Accurate and efficient inspection of speckle and scratch defects on surfaces of planar products", "year": "2017" }, { "authors": "B Lakshminarayanan; A Pritzel; C Blundell", "journal": "", "ref_id": "b26", "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "year": "2016" }, { "authors": "Y Lecun; L Bottou; Y Bengio; P Haffner", "journal": "", "ref_id": "b27", "title": "Gradient-based learning applied to document recognition", "year": "1998" }, { "authors": "X Lei; Z Sui", "journal": "Measurement", "ref_id": "b28", "title": "Intelligent fault detection of high voltage line based on the faster r-cnn", "year": "2019" }, { "authors": "Y Lei; M J Zuo", "journal": "Mechanical Systems and Signal Processing", "ref_id": "b29", "title": "Gear crack level identification based on weighted k nearest neighbor classification algorithm", "year": "2009" }, { "authors": "T.-S Li; C.-L Huang", "journal": "Expert Systems with Applications", "ref_id": "b30", "title": "Defect spatial pattern recognition using a hybrid som-svm approach in semiconductor manufacturing", "year": "2009" }, { "authors": "Z Liao; A Abdelhafeez; H Li; Y Yang; O G Diaz; D Axinte", "journal": "International Journal of Machine Tools and Manufacture", "ref_id": "b31", "title": "State-of-the-art of surface integrity in machining of metal matrix composites", "year": "2019" }, { "authors": "D Lin; Y Cao; W Zhu; Y Li", "journal": "", "ref_id": "b32", "title": "Few-shot defect segmentation leveraging abundant normal training samples through normal background regularization and crop-and-paste operation", "year": "2020" }, { "authors": "J Lin; Y Yao; L Ma; Y Wang", "journal": "The International Journal of Advanced Manufacturing Technology", "ref_id": "b33", "title": "Detection of a casting defect tracked by deep convolution neural network", "year": "2018-07" }, { "authors": "M Lin; Q Chen; S Yan", "journal": "", "ref_id": 
"b34", "title": "Network in network", "year": "2013" }, { "authors": "T.-Y Lin; P Dollár; R Girshick; K He; B Hariharan; S Belongie", "journal": "", "ref_id": "b35", "title": "Feature pyramid networks for object detection", "year": "2016" }, { "authors": "", "journal": "", "ref_id": "b36", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "T.-Y Lin; P Goyal; R Girshick; K He; P Dollár", "journal": "", "ref_id": "b37", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b38", "title": "Focal loss for dense object detection", "year": "2020" }, { "authors": "G Liu; N Yang; L Guo; S Guo; Z Chen", "journal": "Sensors", "ref_id": "b39", "title": "A one-stage approach for surface anomaly detection with background suppression strategies", "year": "2020" }, { "authors": "W Liu; D Anguelov; D Erhan; C Szegedy; S Reed; C.-Y Fu; A C Berg", "journal": "Springer International Publishing", "ref_id": "b40", "title": "Ssd: Single shot multibox detector", "year": "2016" }, { "authors": "", "journal": "Springer International Publishing", "ref_id": "b41", "title": "SSD: Single shot MultiBox detector", "year": "2016" }, { "authors": "X Lv; F Duan; J.-J Jiang; X Fu; L Gan", "journal": "Sensors", "ref_id": "b42", "title": "Deep active learning for surface defect detection", "year": "2020" }, { "authors": "J Masci; U Meier; D Ciresan; J Schmidhuber; G Fricout", "journal": "", "ref_id": "b43", "title": "Steel defect classification with max-pooling convolutional neural networks", "year": "2012" }, { "authors": "D K Mcclish", "journal": "Medical Decision Making", "ref_id": "b44", "title": "Analyzing a portion of the roc curve", "year": "1989" }, { "authors": "S Mjahed; S El Hadaj; K Bouzaachane; S Raghay", "journal": "Association for Computing Machinery", "ref_id": "b45", "title": "Engine fault signals diagnosis using genetic algorithm and k-means based clustering", "year": "2018" }, { "authors": "J Mukhoti; A Kirsch; J Van Amersfoort; P H S Torr; Y Gal", "journal": "", "ref_id": "b46", "title": "Deep deterministic uncertainty: A simple baseline", "year": "2021" }, { "authors": "M Najibi; M Rastegari; L S Davis", "journal": "", "ref_id": "b47", "title": "G-cnn: An iterative grid based object detector", "year": "2016" }, { "authors": "T P Nguyen; S Choi; S.-J Park; S H Park; J Yoon", "journal": "International Journal of Precision Engineering and Manufacturing-Green Technology", "ref_id": "b48", "title": "Inspecting method for defective casting products with convolutional neural network (cnn)", "year": "2021-03" }, { "authors": "S Niu; B Li; X Wang; H Lin", "journal": "IEEE Transactions on Automation Science and Engineering", "ref_id": "b49", "title": "Defect image sample generation with gan for improving defect recognition", "year": "2020" }, { "authors": "L Oakden-Rayner; J Dunnmon; G Carneiro; C Re", "journal": "Association for Computing Machinery", "ref_id": "b50", "title": "Hidden stratification causes clinically meaningful failures in machine learning for medical imaging", "year": "2020" }, { "authors": "A Oleff; B Küster; M Stonis; L Overmeyer", "journal": "Progress in Additive Manufacturing", "ref_id": "b51", "title": "Process monitoring for material extrusion additive manufacturing: a state-of-the-art review", "year": "2021-12" }, { "authors": "J.-K Park; B.-K Kwon; J.-H Park; D.-J Kang", "journal": "International Journal of Precision Engineering and 
Manufacturing-Green Technology", "ref_id": "b52", "title": "Machine learning-based imaging system for surface defect inspection", "year": "2016-07" }, { "authors": "Y Peng; G Liu; Y Quan; Q Zeng", "journal": "Optics & Laser Technology", "ref_id": "b53", "title": "The depth measurement of internal defect based on laser speckle shearing interference", "year": "2017" }, { "authors": "J Redmon; S Divvala; R Girshick; A Farhadi", "journal": "", "ref_id": "b54", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "J Redmon; A Farhadi", "journal": "", "ref_id": "b55", "title": "Yolo9000: Better, faster, stronger", "year": "2017" }, { "authors": "", "journal": "", "ref_id": "b56", "title": "Yolov3: An incremental improvement", "year": "2018" }, { "authors": "S Ren; K He; R Girshick; J Sun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b57", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2017" }, { "authors": "I G Rodionova; A I Zaitsev; O N Baklanova; A Y Kazankov; V V Naumenko; G V Semernin", "journal": "Metallurgist", "ref_id": "b58", "title": "Effect of carbon steel structural inhomogeneity on corrosion resistance in chlorine-containing media", "year": "2016-01" }, { "authors": "P J Rousseeuw", "journal": "Journal of Computational and Applied Mathematics", "ref_id": "b59", "title": "Silhouettes: A graphical aid to the interpretation and validation of cluster analysis", "year": "1987" }, { "authors": "J Chen; C Li", "journal": "Journal of Iron and Steel Research, International", "ref_id": "b60", "title": "Prediction and control of thermal scratch defect on surface of strip in tandem cold rolling", "year": "2015" }, { "authors": "K Simonyan; A Zisserman", "journal": "", "ref_id": "b61", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "X Sun; J Gu; R Huang; R Zou; B Giron Palomares", "journal": "Electronics", "ref_id": "b62", "title": "Surface defects recognition of wheel hub based on improved faster r-cnn", "year": "2019" }, { "authors": "J A K Suykens; J Vandewalle", "journal": "Neural Processing Letters", "ref_id": "b63", "title": "Least squares support vector machine classifiers", "year": "1999-06" }, { "authors": "C Szegedy; W Liu; Y Jia; P Sermanet; S Reed; D Anguelov; D Erhan; V Vanhoucke; A Rabinovich", "journal": "", "ref_id": "b64", "title": "Going deeper with convolutions", "year": "2015" }, { "authors": "E Taskesen", "journal": "", "ref_id": "b65", "title": "clustimage", "year": "2020" }, { "authors": "M Tkachenko; M Malyuk; A Holmanyuk; N Liubimov", "journal": "", "ref_id": "b66", "title": "Label Studio: Data labeling software", "year": "" }, { "authors": "I Valavanis; D Kosmopoulos", "journal": "Expert Systems with Applications", "ref_id": "b67", "title": "Multiclass defect detection and classification in weld radiographic images using geometric and texture features", "year": "2010" }, { "authors": "J Wang; P Neskovic; L N Cooper", "journal": "Pattern Recogn", "ref_id": "b68", "title": "Neighborhood size selection in the k-nearest-neighbor rule using statistical confidence", "year": "2006-03" }, { "authors": "J Wang; Y Ma; L Zhang; R X Gao; D Wu", "journal": "Journal of Manufacturing Systems", "ref_id": "b69", "title": "Deep learning for smart manufacturing: Methods and applications", "year": "2018" }, { "authors": "K.-J Wang; D A Rizqi; H.-P Nguyen", "journal": "Journal of 
Intelligent Manufacturing", "ref_id": "b70", "title": "Skill transfer support model based on deep learning", "year": "2021-04" }, { "authors": "T Wang; Y Chen; M Qiao; H Snoussi", "journal": "The International Journal of Advanced Manufacturing Technology", "ref_id": "b71", "title": "A fast and robust convolutional neural network-based defect detection model in product quality control", "year": "2018-02" }, { "authors": "T Wei; D Cao; X Jiang; C Zheng; L Liu", "journal": "", "ref_id": "b72", "title": "Defective samples simulation through neural style transfer for automatic surface defect segment", "year": "2019" }, { "authors": "Y Wu; X Zhang; F Fang", "journal": "Sensors", "ref_id": "b73", "title": "Automatic fabric defect detection using cascaded mixed feature pyramid with guided localization", "year": "2020" }, { "authors": "Y Wu; A Kirillov; F Massa; W.-Y Lo; R Girshick", "journal": "", "ref_id": "b74", "title": "Detectron2", "year": "2019" }, { "authors": "X Xiao; L Yu; Z Dong; R Mbelek; K Xu; C Lei; W Zhong; F Lu; M Xing", "journal": "J. Mater. Chem. B", "ref_id": "b75", "title": "Adipose stem cell-laden injectable thermosensitive hydrogel reconstructing depressed defects in rats: filler and scaffold", "year": "2015" }, { "authors": "D Yoo; S Park; J.-Y Lee; A S Paek; I S Kweon", "journal": "", "ref_id": "b76", "title": "Attentionnet: Aggregating weak directions for accurate object detection", "year": "2015" }, { "authors": "D You; X Gao; S Katayama", "journal": "IEEE Transactions on Industrial Electronics", "ref_id": "b77", "title": "Wpd-pca-based laser welding process monitoring and defects diagnosis by using fnn and svm", "year": "2015" }, { "authors": "K Yıldız; A Buldu; M Demetgul", "journal": "Journal of Industrial Textiles", "ref_id": "b78", "title": "A thermal-based defect classification method in textile fabrics with k-nearest neighbor algorithm", "year": "2016" }, { "authors": "S Zhai; D Shang; S Wang; S Dong", "journal": "IEEE Access", "ref_id": "b79", "title": "Df-ssd: An improved ssd object detection algorithm based on densenet and feature fusion", "year": "2020" }, { "authors": "G Zhang; K Cui; T.-Y Hung; S Lu", "journal": "", "ref_id": "b80", "title": "Defect-gan: High-fidelity defect synthesis for automated defect inspection", "year": "2021" }, { "authors": "Q Zhang; M Zhang; T Chen; Z Sun; Y Ma; B Yu", "journal": "", "ref_id": "b81", "title": "Recent advances in convolutional neural network acceleration", "year": "2018" }, { "authors": "X.-W Zhang; F Gong; L.-Z Xu", "journal": "Int. J. Comput. Appl. Technol", "ref_id": "b82", "title": "Inspection of surface defects in copper strip using multivariate statistical approach and svm", "year": "2012-03" }, { "authors": "Z.-Q Zhao; P Zheng; S.-T Xu; X Wu", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b83", "title": "Object detection with deep learning: A review", "year": "2019" } ]
[ { "formula_coordinates": [ 8, 207.43, 205.75, 323.33, 22.37 ], "formula_id": "formula_0", "formula_text": "IOU = Detection Result ∩ Ground Truth Detection Result ∪ Ground Truth(1)" }, { "formula_coordinates": [ 8, 242.78, 329.17, 283.74, 31.41 ], "formula_id": "formula_1", "formula_text": "AP = 1 |D| |D| d=1 1 0 p d (r)dr (2" }, { "formula_coordinates": [ 8, 526.52, 340.38, 4.24, 8.8 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 8, 258.64, 412.87, 272.13, 14.66 ], "formula_id": "formula_3", "formula_text": "p d (r) = max r≥r p d (r)(3)" }, { "formula_coordinates": [ 10, 235.97, 392.56, 294.79, 11.72 ], "formula_id": "formula_4", "formula_text": "F L(p t ) = α t (1 -p t ) γ log(p t )(4)" }, { "formula_coordinates": [ 10, 246.66, 457.25, 279.86, 23.14 ], "formula_id": "formula_5", "formula_text": "p t = p if y=1. 1-p otherwise. (5" }, { "formula_coordinates": [ 10, 526.52, 464.04, 4.24, 8.8 ], "formula_id": "formula_6", "formula_text": ")" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b12" ], "table_ref": [], "text": "One of the main challenges facing plant breeding is that of plant phenotyping [1,2]. That is the determination of plant performance and characteristics whilst plants are growing. Continued advances in genetic technologies have reduced genotyping costs for plant scientists and breeders, enabling increasingly large datasets to be generated [3]. It is important for advances in plant phenotyping techniques to be made at a similar rate to enable an understanding of plant behaviour and provide data to help understand the physiological impact of genetics. Plant imaging is one of the techniques that can be used to do this and, combined with advances in computer vision techniques, can provide data on plant performance that can show how different genotypes response to stress conditions [4][5][6]. This paper investigates the problem of measuring potato plants and relating imaging data to leaf area and leaf mass measurements.\nThere have been ongoing leaf segmentation (LSC) and counting (LCC) challenges over the past several years [7]. Various instance segmentation models have been applied to these images and Mask R-CNN [8] has been shown to perform well in such tasks [9]. Detectron2 [10], offers a framework for applying Mask R-CNN using various backbone region proposal networks and is used in this paper to compare the results of Leaf Only SAM in leaf segmentation tasks to a trained instance segmentation model. There have also been a number of studies trying to expand the generalisability of models produced for the LCC and LSC to other plant crops by using image generation methods [11] or using domain adaptation methods [12]. We have investigated whether Segment Anything can be used to produce a segmentation method in a new crop without the need for training and fine-tuning as an alternative solution to the generalisation problem.\nThe recently released Segment Anything Model (SAM) presents a \"foundation model\" that can be used to carry out an annotation free segmentation [13] and it has performed well in a variety of applications. There are several ways it can be used; to generate impressive segmentation results with limited user prompts; to generate highly accurate object masks either from a series of positive or negative points; or to go from bounding boxes to object segmentation [14]. It can however be used as an automatic segmentation technique on its own without any additional user input. A number of studies have been published which utilise SAM for various medical image analysis problems [15][16][17]. One weakness of many of these methods is that SAM cannot determine the class of the objects being segmented and will segment everything in an image detecting background objects as well as objects of interest. Some early studies have ignored this problem and have instead compared performance of the model by comparing the closest detected object to each ground truth object. This is not possible in real world settings. This limitation can be overcome by applying post processing techniques to the output to evaluate the objects detected and only keep the ones that are of interest. 
For example one study has used SAM to detect craters on Mars by applying a Hough transform to the detected objects to only keep circular objects [18].\nSegment anything used data from many sources online including the leaf counting and segmentation challenge dataset which was used to evaluate the performance of the model [13]. So, this is not an unseen problem for the segment anything model. The fact we have used a novel dataset not previously publicly available for this work ensures that the segment anything model has not had previous sight of the images we have used and highlights the adaptability and generalisation of the proposed approach.\nIn this paper we present new data with manual annotations on images of potato leaves. This presents a similar challenge to the leaf segmentation challenge, but we have included additional data on leaf area, leaf fresh mass, and leaf dry mass that provides an additional focus to the dataset and ensures that we are evaluating performance that closely links to relevant problem to be solved. We also encounter the limitations of image data collection which itself is not a perfect measure of the biologically relevant traits of leaf area and mass. We present a pipeline that uses Segment Anything with the addition of novel post processing steps as a zero-shot segmentation method. We compare performance of this approach with a Mask R-CNN trained on a subset of our data to see how our method compares against fine-tuned state of the art image segmentation models." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Plants and Imaging", "publication_ref": [ "b18" ], "table_ref": [], "text": "A total of 106 potato plants were grown in two batches. The plants were propagated in tissue culture and then planted into 10x10cm pots and grown in a glasshouse. The first set of plants were 57 potato plants of variety Desiree. These plants were grown in 4 trays of 15 plants with the last tray missing 3 plants which did not survive transplanting into soil. Once a week, each plant was photographed with a DLSR with a top-down shot taken roughly 80cm above the plants which were placed on a common paper background. Each week 12 plants were harvested with three plants being taken from each tray. The harvest plants had their total leaf area measured in cm2, the number of leaves was counted, and the fresh mass of the leaves was weighed. The leaves were then placed in an oven at 50°C for 7 days and then the dry mass was weighed thereafter. Fresh mass can be highly variable with the time since last watering occurred so dry mass is generally favoured as a measure for plant growth. After 5 weeks all of the plants were harvested, and the experiment was complete. A second set of plants consisting of 49 potato plants of variety Voyager was planted three weeks later than Desiree and the same process was applied but with 10 plants being harvested each week instead of 12.\nA total of 318 images of potato plants of a varying age between 1 week and 6 weeks growth were gathered. 128 images were manually annotated using the labelme software [19], with a series of points being marked on the leaf boundary to segment each leaf into individual polygons. The annotated images were from the second and third week of imaging and consisted of 45 images of one week old Voyager plants, 34 images of three week old Desiree and 49 images of two week old Desiree. 
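As an aside on the annotation format (illustrative, not from the paper): labelme stores each annotated leaf as a polygon under a "shapes" list, which can be rasterised into per-leaf binary masks; the file name below is a placeholder.

```python
# Convert one labelme annotation file into per-leaf binary masks.
# Assumes the standard labelme JSON layout ("shapes", "imageHeight", "imageWidth").
import json
import numpy as np
import cv2

with open("potato_plant_0001.json") as f:
    annotation = json.load(f)

height, width = annotation["imageHeight"], annotation["imageWidth"]

leaf_masks = []
for shape in annotation["shapes"]:
    polygon = np.array(shape["points"], dtype=np.int32)   # (N, 2) x,y vertices
    mask = np.zeros((height, width), dtype=np.uint8)
    cv2.fillPoly(mask, [polygon], 1)
    leaf_masks.append(mask.astype(bool))

print(f"Loaded {len(leaf_masks)} leaf polygons as binary masks")
```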
For 32 of these images the plants were then harvested, meaning corresponding ground truth leaf number, leaf area and leaf mass data for these plants is available. To create our segmentation model this dataset was split into random, training (80/128), validation (32/128), and test (16/128) data sets. This resulted in 990 labelled instances of leaves in the training set and 282 and 199 in the validation and test sets respectively. Since no training was carried out for the Segment Anything Model, both the training and validation data sets were used in model development but only the test set is used for evaluation so a comparison can be made with the Mask R-CNN model. Figure 1a shows an example image of the canopy of a potato plant and Figure 1b shows the labelled leaf polygons. " }, { "figure_ref": [], "heading": "Leaf Only SAM", "publication_ref": [], "table_ref": [], "text": "We first prompted segment anything with a 32x32 grid of points evenly spaced on the image to generate fully automatic segmentation masks. Segment anything has the option to perform multi-level segmentation where the model is also applied to smaller crops of the image to improve performance in detection of smaller objects. We utilised this to also segment at an additional crop layer. This gives an output of a series of segmentation masks for the images. This includes masks corresponding to objects of interest (the plant leaves) but also many other background objects. We refer to this step as Base SAM when carrying out comparisons. An additional four post processing steps were added to the output to identify only leaf objects.\nThe first post process step was a colour checking step. This utilises the fact we know we are looking for green plant material so finds green masks in the image. The original RGB images were converted to the HSV colour space. The mean hue and saturation were then used to determine if the objects found were leaves or not by applying thresholds to these values. A mean hue of between 35 and 75 and saturation over 35 were used. We refer to this step as Green pixels when carrying out comparisons.\nOne of the problems SAM suffers from is ambiguity about the level of object wanted. In our case a segmentation of the entire plant is also a valid object and was often found. A check step was then introduced to remove this if present. If more than two masks were found for an image after the colour check was applied, then a total mask of all pixels was generated. If any objects had an Intersection over Union (IoU) of more than 90% with this mask, then they were assumed to contain the entire plant and so were removed from our list of leaf objects. We refer to this step as Not all when carrying out comparisons.\nA small number of other miscellaneous objects were still detected at this point. These were clearly not leaves due to their shape and as a result a shape filter was used to remove these objects. For every mask, the area of the mask was compared to the area of the minimum enclosing circle. If the ratio of mask area was less than 0.1 of the area of minimum enclosing circle the object was clearly not leaf shaped and so removed from our list of leaf objects. Since we wish to detect partially occluded leaves and there is reasonable diversity in leaf shape this step could not be too prescriptive on shape wanted. We refer to this step as Correct shape when carrying out comparisons. There were still some objects that were present that were a collection of multiple different leaves. 
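As a rough illustration of the colour ("Green pixels"), whole-plant ("Not all") and shape ("Correct shape") checks described above (the multi-leaf duplicate check described next is omitted), the sketch below assumes an RGB image and a list of boolean masks, e.g. the "segmentation" field of SAM's output; thresholds follow the values quoted in the text, OpenCV's 0-179 hue and 0-255 saturation scales are assumed, and the "total mask" is interpreted as the union of the kept masks.

```python
import numpy as np
import cv2

def mean_hue_sat(hsv: np.ndarray, mask: np.ndarray):
    return hsv[..., 0][mask].mean(), hsv[..., 1][mask].mean()

def leaf_shaped(mask: np.ndarray) -> bool:
    # Ratio of mask area to its minimum enclosing circle must exceed 0.1.
    ys, xs = np.nonzero(mask)
    points = np.column_stack([xs, ys]).astype(np.float32)
    _, radius = cv2.minEnclosingCircle(points)
    return mask.sum() >= 0.1 * np.pi * radius ** 2

def filter_leaf_masks(image_rgb: np.ndarray, masks: list) -> list:
    hsv = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2HSV)

    # 1) "Green pixels": keep masks whose mean hue/saturation look like plant material.
    green = []
    for m in masks:
        hue, sat = mean_hue_sat(hsv, m)
        if 35 <= hue <= 75 and sat > 35:
            green.append(m)

    # 2) "Not all": drop masks that almost coincide with the union of the kept masks.
    if len(green) > 2:
        union_mask = np.logical_or.reduce(green)
        kept = []
        for m in green:
            iou = np.logical_and(m, union_mask).sum() / np.logical_or(m, union_mask).sum()
            if iou <= 0.9:
                kept.append(m)
        green = kept

    # 3) "Correct shape": drop masks far from leaf-like compactness.
    return [m for m in green if leaf_shaped(m)]
```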
We often detected both individual leaves and a mask containing several leaves covering the same area. To remove multi leaf masks we detected multi leaf objects by a simple sum of all the mask objects in the image -labelling each pixel by how many masks it was part of. Any mask with a mean score of over 1.5 was assumed to be a duplicate object. These duplicate masks were then checked to see if they were 90% contained in other masks indicating they were in fact a leaf. Masks that were not contained in other masks were then checked to see if at least 90% of their area was contained in other masks and removed if this was the case." }, { "figure_ref": [], "heading": "Mask R-CNN", "publication_ref": [ "b7" ], "table_ref": [], "text": "Mask R-CNN [8] remains a popular architecture for performing instance segmentation. A Mask R-CNN approach was developed using the Detectron2 framework to compare with the segmentation provided by the proposed Leaf Only SAM technique. Both 50 and 101 layer ResNet feature proposal networks (ResNet-50-FPN and ResNet-101-FPN) were employed as backbones to investigate the effect of CNN depth on the segmentation of our novel potato dataset and trained over 2000 iterations. Training and inference was performed using a single NVIDIA Quadro RTX 8000 with a batch size of 16 images and where images were resampled to have a maximum side length of 1333 pixels. Additional image augmentation techniques, such as rotation, mirroring, and other transformations, which improve training dataset variability were not employed in this work." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [ "tab_0", "tab_1" ], "text": "In order to evaluate the performance of the two methods applied to our leaf segmentation dataset, a number of key metrics were identified. Average Precision (AP) and Average Recall (AR) are used in assessing models applied to the Common Objects in COntext (COCO) dataset and are used here. Specifically two definitions of each are used, the first where Precision and Recall averaged over a number of IoU thresholds T ∈ [0.5 : 0.05 : 0.95] denoted as AP and AR, as well as that where T = 0.75, denoted as AP0.75 and AR0.75.\nIn addition to Precision and Recall, the DSC was used. As this poses a binary classification problem of leaf vs. non-leaf, the DSC is equivalent to the F1 score and is calculated for each proposed instance as DSC = (2 * TP)/(2 * TP+FP+FN),\nwhere TP, FP, and FN are the true positive, false positive, and false negative detections respectively.\nFor the calculation of DSC for the SAM based methods each ground truth mask was compared to the closest detected object since no classification labels are produced. It therefore measures only the accuracy of the segmentation masks themselves not the ability to determine if they are leaves or not. The performance is evaluated after each of the described steps in turn so the effect of each of these can be seen. Looking at Table 1 we can see that our Leaf Only SAM model is not able to perform as well as a finetuned mask R-CNN model. We achieved an AP75 of 60.3 and AR75 of 64.7 which while not poor scores are less than the AP75 of 74.7 and AR75 of 78.7 achieved by Mask R-CNN. This is not surprising since our model had not been trained on similar potato images like the Mask R-CNN model. We can also see that the post processing steps introduced in Leaf Only SAM are important in improving the precision of the model. 
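As a rough illustration of the instance-level matching behind these metrics (not the exact COCO implementation, which also averages over the full threshold range), the sketch below computes precision and recall at a single IoU threshold of 0.75 by greedy one-to-one matching, together with the per-pair DSC.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2*TP / (2*TP + FP + FN) for a pair of binary masks."""
    tp = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return float(2 * tp / total) if total else 0.0

def precision_recall_at(preds, gts, threshold: float = 0.75):
    """Greedy one-to-one matching of predicted to ground-truth masks."""
    matched_gt = set()
    true_positives = 0
    for p in preds:
        best_iou, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j in matched_gt:
                continue
            iou = mask_iou(p, g)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is not None and best_iou >= threshold:
            matched_gt.add(best_j)
            true_positives += 1
    precision = true_positives / len(preds) if preds else 0.0
    recall = true_positives / len(gts) if gts else 0.0
    return precision, recall
```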
Base SAM achieves an AP of only 12.6. The recall of Base SAM is slightly higher than the recall of our model, but a 1.5 reduction in recall is a good trade-off for a 47.7 increase in precision. The DSC of SAM and of our Leaf Only SAM, which measures the accuracy of the best-fit mask for each ground truth object, shows worse performance compared to Mask R-CNN, indicating that fine-tuned models can still outperform general foundation models like SAM. It may be possible to improve the results of SAM by fine-tuning the model more heavily on leaf data. The results for the different steps in our Segment Anything model, as displayed in Table 2, show the importance of adding the additional post-processing steps. Each line refers to an additional step being added, as described in the methods section. Segment Anything alone achieves an average precision of only 12.6. Each of the post-processing steps we have developed increases precision. The first step, removing objects of the wrong colour, has the biggest effect, but further increases are seen with each step. There is a slight reduction in the recall of the method with the additional steps, which is a result of them removing found objects. The first step, removing masks of the wrong colour, causes the biggest reduction; most of the other steps had no effect on recall until the final step, which slightly lowers recall, indicating instances where it removes correctly segmented leaves." }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "In order to determine what is causing the remaining low precision, an analysis was carried out of the false positive masks generated by our Leaf Only SAM model. This was done by manually inspecting the outlines of each of the false positive masks plotted on the original image. These were then put into one of five categories: leaf (i.e., judged to be a true positive), multiple leaves together, only partial segmentation of a leaf, a part of the plant that is not a leaf, and finally objects which are not part of the plant. Any leaves that were occluded, and so are only partially visible, were classed as a leaf if all of the visible leaf was segmented. Figure 2 shows examples of the segmentations obtained using a) Base SAM, b) Leaf Only SAM, and c) Mask R-CNN. The yellow outlines in Figure 2b indicate a false positive detection. We can see that these are a combination of multiple-leaf objects, objects missed from manual segmentation and some small objects which are inaccurately segmented. Figure 3 shows the results of this false positive analysis; we can see that 37% of the false positives were judged to actually be leaves. The evaluation was not done side by side with the manual annotations, so some of these objects will have failed the false positive step due to not reaching the 75% IoU threshold and can therefore be thought of as correct but poor segmentations. Other false positive leaves will be those leaves which are very small or heavily occluded and so were missed by manual annotation; the mean size of masks in this category was 18,500 pixels compared to 34,900 for all found masks. This shows the value of using SAM to improve manual data labelling. 23% were of things not part of the plant. The remaining 40% were of plant material but not leaves. There were more masks containing multiple leaves than masks containing only parts of leaves, but both categories were found. 
A model that was fine-tuned on potato plants may be better able to judge where the boundaries of a full leaf are, thereby avoiding these problems.\nIn order to help understand how our segmentation technique can be related back to real-world plant measurements, a comparison looking at both leaf area and leaf dry mass was made. The correlation between the leaf area and leaf dry mass measures and the pixel counts from both the manually annotated and the automatically segmented images was calculated. These results, which can be seen in Table 3, show that there is good agreement between manually annotated pixel counts and both leaf area and dry mass (r > 0.9). The relationship with our automatic segmentation method was weaker (r = 0.740 and r = 0.625, respectively). The relationship of physically measured leaf area to the image-derived measures is shown in Figure 4. The stronger relationship of leaf area with the manually segmented data, compared to the automatically derived segmentation, can be seen. This shows that improving the accuracy of the segmentation method could improve the relationship to manual measures.\nFigure 4: Plot showing the relationship between physically measured leaf area and that derived from images, for both manual image annotation and automatically derived data. " }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "Our pipeline builds upon Segment Anything and achieves reasonable accuracy on the leaf segmentation task in potato without any fine-tuning or training on potato images. This shows that Segment Anything is a powerful tool with the potential to be used in the field of plant phenotyping. Removing the need for annotated data for model training would speed up adoption in minor crops or new growing settings.\nThere was a reduction of just over 10% for both precision and recall when compared to a fine-tuned model, with a slightly larger reduction in Dice score. The comparison with leaf area and leaf mass shows that improvements in leaf segmentation techniques could lead to an improved relationship with manual data.\nFurther work could be done on improving the post-processing steps. The inclusion of a small CNN-based classifier for the masks generated by SAM, similar to the classification branch of Mask R-CNN, could be another way to improve performance." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by strategic research programme (2022-2027) funding from the Rural and Environmental Science and Analytical Services Division of the Scottish Government." }, { "figure_ref": [], "heading": "Data and Code Availability", "publication_ref": [ "b19" ], "table_ref": [], "text": "The data associated with this paper, which consists of images, image annotations and manual measurements, can be found on Zenodo here [20].\nThe code for Leaf Only SAM can be seen on Github here." } ]
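As a rough illustration of the Table 3 correlation analysis (the numbers below are placeholders, not the study's measurements), the sketch assumes per-plant arrays of measured leaf area, dry mass, and pixel counts from the manual and automatic segmentations.

```python
import numpy as np

# Placeholder per-plant measurements; the real study uses the 32 harvested plants.
leaf_area_cm2   = np.array([120.0, 210.0, 95.0, 300.0, 180.0])
leaf_dry_mass_g = np.array([0.8, 1.5, 0.6, 2.2, 1.3])
pixels_manual   = np.array([41000, 70000, 33000, 99000, 61000])
pixels_leaf_sam = np.array([38000, 64000, 36000, 90000, 52000])

variables = {
    "leaf area": leaf_area_cm2,
    "dry mass": leaf_dry_mass_g,
    "pixels (manual)": pixels_manual,
    "pixels (Leaf Only SAM)": pixels_leaf_sam,
}

# Pearson correlation matrix, mirroring the layout of Table 3 (lower triangle).
names = list(variables)
corr = np.corrcoef(np.vstack([variables[n] for n in names]))
for i, row_name in enumerate(names):
    row = "  ".join(f"{corr[i, j]:5.2f}" for j in range(i + 1))
    print(f"{row_name:<24}{row}")
```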
Segment Anything Model (SAM) is a new "foundation model" that can be used as a zero-shot object segmentation method with the use of either guide prompts such as bounding boxes, polygons, or points. Alternatively, additional post processing steps can be used to identify objects of interest after segmenting everything in an image. Here we present a method using segment anything together with a series of post processing steps to segment potato leaves, called Leaf Only SAM. The advantage of this proposed method is that it does not require any training data to produce its results so has many applications across the field of plant phenotyping where there is limited high quality annotated data available. We compare the performance of Leaf Only SAM to a Mask R-CNN model which has been fine-tuned on our small novel potato leaf dataset. On the evaluation dataset, Leaf Only SAM finds an average recall of 63.2 and an average precision of 60.3, compared to recall of 78.7 and precision of 74.7 for Mask R-CNN. Leaf Only SAM does not perform better than the fine-tuned Mask R-CNN model on our data, but the SAM based model does not require any extra training or annotation of our new dataset. This shows there is potential to use SAM as a zero-shot classifier with the addition of post processing steps.
Leaf Only SAM: A Segment Anything Pipeline for Zero-Shot Automated Leaf Segmentation
[ { "figure_caption": "Figure 1 :1Figure 1: Example a) Image and b) Ground Truth Label pair.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :Figure 3 :23Figure 2: Leaf segmentation on the image from Figure 1 using a) Base SAM. b) Leaf Only SAM. c) Mask R-CNN", "figure_data": "", "figure_id": "fig_1", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparison of segmentation performance of Segment Anything with Mask R-CNN models trained on the leaf potato dataset", "figure_data": "ModelBackboneAR75AP75ARAPDSCMask R-CNNResNet-50-FPN83.381.374.672.10.849Validation Mask R-CNNResNet-101-FPN81.979.373.770.40.84Base SAM-78.910.470.59.50.792Leaf Only SAM -78.871.070.463.40.786", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of ablation study showing the relative performance of our different post processing steps", "figure_data": "AR75AP75ARAPDSCBase SAM64.712.659.611.70.729Green Pixels63.754.858.851.70.718Not all63.759.758.856.00.700Correct Shape63.759.958.856.20.700Remove multi-leaf Objects63.260.358.456.40.700", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Correlation between physical measures of leaf area and dry mass with image derived measurements.", "figure_data": "No. pixelsNo. pixels LeafLeaf areaLeaf dry massmanualOnly SAMLeaf area1Leaf dry mass0.8911No. pixels manual0.9300.9511No. pixels Leaf Only SAM0.7400.6250.7601", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Dominic Williams; Fraser Macfarlane; Avril Britten
[ { "authors": "Z Li", "journal": "Computers and Electronics in Agriculture", "ref_id": "b0", "title": "A review of computer vision technologies for plant phenotyping", "year": "2020" }, { "authors": "J.-J Zhou", "journal": "Remote Sensing", "ref_id": "b1", "title": "Evaluating the Performance of Hyperspectral Leaf Reflectance to Detect Water Stress and Estimation of Photosynthetic Capacities", "year": "2021" }, { "authors": "A D J Van Dijk", "journal": "Iscience", "ref_id": "b2", "title": "Machine learning in plant science and plant breeding", "year": "2021" }, { "authors": "M L Pérez-Bueno; M Pineda; M Barón", "journal": "Frontiers in Plant Science", "ref_id": "b3", "title": "Phenotyping plant responses to biotic stress by chlorophyll fluorescence imaging", "year": "2019" }, { "authors": "D Williams", "journal": "Plant Direct", "ref_id": "b4", "title": "Raspberry plant stress detection using hyperspectral imaging", "year": "2023" }, { "authors": "R Mahum", "journal": "Human and Ecological Risk Assessment: An International Journal", "ref_id": "b5", "title": "A novel framework for potato leaf disease detection using an efficient deep learning model", "year": "2023" }, { "authors": "M Minervini", "journal": "Pattern recognition letters", "ref_id": "b6", "title": "Finely-grained annotated datasets for image-based plant phenotyping", "year": "2016" }, { "authors": "K He", "journal": "", "ref_id": "b7", "title": "Mask r-cnn", "year": "2017" }, { "authors": "K Yang; W Zhong; F Li", "journal": "Agronomy", "ref_id": "b8", "title": "Leaf segmentation and classification with a complicated background using deep learning", "year": "2020" }, { "authors": "Yuxin Wu; A K ; Francisco Massa; Wan-Yen Lo; Ross Girshick", "journal": "Detectron", "ref_id": "b9", "title": "", "year": "2019" }, { "authors": "D Kuznichov", "journal": "", "ref_id": "b10", "title": "Data augmentation for leaf segmentation and counting tasks in rosette plants", "year": "2019" }, { "authors": "M Valerio Giuffrida", "journal": "", "ref_id": "b11", "title": "Leaf counting without annotations using adversarial unsupervised domain adaptation", "year": "2019" }, { "authors": "A Kirillov", "journal": "", "ref_id": "b12", "title": "Segment anything", "year": "2023" }, { "authors": "G.-P Ji", "journal": "", "ref_id": "b13", "title": "SAM Struggles in Concealed Scenes--Empirical Study on\" Segment Anything", "year": "2023" }, { "authors": "R Deng", "journal": "", "ref_id": "b14", "title": "Segment anything model (sam) for digital pathology: Assess zero-shot segmentation on whole slide imaging", "year": "2023" }, { "authors": "S He", "journal": "", "ref_id": "b15", "title": "Accuracy of Segment-Anything Model (SAM) in medical image segmentation tasks", "year": "2023" }, { "authors": "T Zhou", "journal": "", "ref_id": "b16", "title": "Can sam segment polyps?", "year": "2023" }, { "authors": "I Giannakis", "journal": "", "ref_id": "b17", "title": "Deep learning universal crater detection using Segment Anything Model (SAM)", "year": "2023" }, { "authors": "K Wada", "journal": "", "ref_id": "b18", "title": "Labelme: Image Polygonal Annotation with Python", "year": "" }, { "authors": "D Williams; F Macfarlane; A Britten", "journal": "", "ref_id": "b19", "title": "Potato Leaf data set", "year": "2023" } ]
[]
10.1017/xxxxx
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b11", "b7", "b1", "b5", "b0", "b11", "b8", "b5", "b3" ], "table_ref": [], "text": "Theoretical inquiry in neural networks is paramount in understanding the success and limits of these models. By studying the details of the construction and comparing how architectures are related, we can generate explanations that verify the behaviour of the networks. In this paper we extend existing results by Sudjianto et al. (2020) to the multilinear setting. Their work shows that ReLU networks, i.e. deep neural networks with ReLU activation functions, can be represented exactly through piecewise-linear functions whose local linear models can be found exactly on each region. We then draw a connection with SHAP Values, showing how this decomposition can provide an explicit representation and a fast procedure to compute them.\nRelated Work. This paper builds closely on (Sudjianto et al. 2020). Work by Montufar et al. (2014) presents bounds to the complexity of the neural network in terms of the number of linear regions as a function of neurons. Balestriero et al. (2018) is concerned with representing families of neural networks, including Convolutional Neural Networks LeCun et al. (1998), as compositions of affine splines.\nRoadmap. In Section 2 we present notation and theoretical background as well as extending the theory to general families of neural networks. In 3 we formalise work from Aytekin (2022). 4 proves explainability implications of our work in making the computation SHAP values Lundberg and Lee (2017) faster. Proofs to theorems are contained in the Appendices.\nThe aim of this section is to describe a neural network as piece-wise linear functions, a process that Sudjianto et al. (2020) refer to as unwrapping. After preliminaries, we take the Graph Convolutional Neural Network (GCN), of which Recurrent Neural Networks (RNNs) Rumelhart et al. (1986) and Convolutional Neural Networks (CNNs) LeCun et al. (1998) are special cases, and derive their local linear model decomposition. We further generalise the results to neural networks with tensor contractions and multiplicative interactions, as present in the Long Short Term Memory network Hochreiter and Schmidhuber (1997)." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b11" ], "table_ref": [], "text": "By feed-forward neural networks we will mean deep neural networks in which the architecture is determined entirely by a composition of linear transformations and element-wise activation functions on the resulting vectors; this we will call a layer. Our focus will be on said architectures having as activation function rectified linear units.\nA feed-forward neural network N : R n → R m is a composition of L parametrised functions, which we refer to as the number of layers, with N = [n 1 , n 2 , n 3 , ..., n L ] neurons per layer, such that:\nχ (l) = σ(W (l-1) χ (l-1) + b (l-1) ) = σ(z (l) ).\nHere, for a given layer l ∈ {1, ..., L}, z (l) are referred to as preactivations, χ (l) s as activations and b (l) s as layer biases. In particular, ReLU (Rectified Linear Units) is the activation function σ : R → R applied element-wise, given by:\nχ (l) i = σ(z (l) i ) = max{0, z (l) i }.\nFor a given neuron χ (l) i , i ∈ {1, ..., n l }, the binary activation state is a function s : R → {0, 1}. Generally, define an activation state as a function of s : R → S = {0, 1, 2, .., S} for a collection of states, where |S| > 2. 
The function generates a natural partition of R by s -1 . Depending on the state of each neuron, we can define an activation pattern which encodes each state as implied by a given input. Every layer will have an activation vector, the collection of which we call the activation pattern.\nGiven an instance x ∈ R n and a feed-forward neural network N with L layers, each with a number of neurons described by the vector N, the activation pattern is an ordered collection of vectors\nP(x) = {P (1) (x), P (2) (x), ..., P (L) (x)} such that P (l) i (x) = s(χ (l) i ) ∈ S,\nwhere i ∈ {1, ..., n l } are indices for a activations at a layer l ∈ {1, ..., L}.\nThe collection of all points yielding the same activation pattern, which can be thought of as fibers, we will call the activation region for the network.\nWe refer to the activation region R P(x) ⊂ R n of the activation pattern as the collection of points v ∈ R n such that ∀v ∈ R P(x) , P(v) = P(x).\nImportantly, these regions are convex and partition the input space of the neural network Sudjianto et al. (2020). This is key for the characterisation of the neural network as a piece-wise linear function: the convex domain allows us to have a description of the activation regions as intersections of half-spaces." }, { "figure_ref": [], "heading": "Theorem 1", "publication_ref": [ "b11" ], "table_ref": [], "text": "Local Linear Model of a ReLU Network, Sudjianto et al. (2020) Given a feedforward neural network N : R n → R m , with ReLU activation σ, L layers and neurons in N = [n 1 , ..., n L ], the local linear model η P (z) for the activation region R P (z) of an activation pattern P(z), with z ∈ X ⊂ R n , is given by\nη P (x) = w P(z)T x + b P(z) , ∀x ∈ R P(z)\nwhere the weight parameter is given by:\nw P(z) = L h=1 W (L+1-h) D L+1-h W (0) ,\nand the bias parameter is given by b\nP(z) = L l=1 L+1-l h=1 W (L+1-h) D L+1-h b (l-1) + b (L) ,\nwhere\nD (l) = diag(P(z)),\nis the diagonal matrix of the activation pattern for a given layer l ∈ {1, ..., L}." }, { "figure_ref": [], "heading": "Unwrapping Graph and Tensor Neural Networks", "publication_ref": [ "b4" ], "table_ref": [], "text": "In the case of neural networks with convolutions, which we intend loosely as parametrised matrix or tensor operations with weights, learnable or otherwise, such as Convolutional Neural Networks LeCun et al. (1998), Graph Convolutional Networks Kipf andWelling (2016), the local linear model decomposition needs to take into account the weight sharing scheme that is implied by the convolution. GCNs encompass RNNs and CNNs, meaning that we set convolutional weights of GCNs to zero in particular ways to achieve networks that fall in the latter classes of architectures. Therefore, decomposing GCNs is in itself a significant result, which we obtain below.\nDefinition 1 (Graph Convolutional Neural Network ) Given a graph G = (V, E) with vertex and edge sets V, E respectively, a Graph Convolutional Network (GCN) is a composition of L parametrised layers, with N = [n 1 , n 2 , n 3 , ..., n L ] neurons per layer, yielding a function N G : R k×n → R k×m , where each forward pass is defined by:\nχ (l+1) = σ A • χ (l) W (l) + b\nwhere k = |V | is the number of nodes of the graph, χ (l) ∈ R k×n l and W (l) ∈ R n l-1 ×n l is a weight matrix. Finally, b ∈ R k×n l is a matrix of biases and A is a graph convolutional operator A ∈ R k×k , often taken to be an adjacency matrix or its Laplacian. 
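Before the graph-convolutional case is developed below, a rough numerical check of Theorem 1 (not from the original paper) may help: freezing the ReLU pattern at a point and pushing an affine map through the layers (algebraically the same as the product form of the weights and bias in the theorem) reproduces the network output exactly. Random weights stand in for a trained model and the output layer is taken to be linear.

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [3, 6, 6, 2]                       # input, two hidden layers, output
Ws = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [rng.normal(size=m) for m in sizes[1:]]

def network(x):
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.maximum(W @ h + b, 0.0)
    return Ws[-1] @ h + bs[-1]

def local_linear_model(x):
    """Freeze the ReLU pattern at x and push an affine map through the layers,
    which is equivalent to the product form of Theorem 1."""
    A, c, h = np.eye(len(x)), np.zeros(len(x)), x
    for W, b in zip(Ws[:-1], bs[:-1]):
        z = W @ h + b
        D = np.diag((z > 0).astype(float))   # D^(l) = diag(P^(l)(x))
        A, c = D @ W @ A, D @ (W @ c + b)
        h = np.maximum(z, 0.0)
    A, c = Ws[-1] @ A, Ws[-1] @ c + bs[-1]
    return A, c                              # locally, N(x) = A x + c

x = rng.normal(size=sizes[0])
A, c = local_linear_model(x)
print(np.allclose(network(x), A @ x + c))    # True: the local model matches exactly
```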
This definition underscores how the GCN can be viewed as a multilinear variant of the feedforward neural network. Indeed, two operations are applied to the activations of the previous layer: a left and right matrix multiplication. Viewing these as a single linear operation on a vectorised input allows us to decompose the network similarly to how we have done in the feedforward case. This leads us to our main theorem of the section." }, { "figure_ref": [], "heading": "Theorem 2", "publication_ref": [], "table_ref": [], "text": "The local linear model of a Graph Convolutional Network at a point z ∈ R n×m is given by\nη P(z) (X) = w P(z)T vec(X) + b P(z) , ∀X ∈ R P(z)\nwhere,\nw P(z) = L h=1 (W (L+1-h) ⊗ A (L+1-h) ) ⊙ P (L+1-h) (z) T W (0)\nand the bias parameter is given by b\nP(z) = L l=1 L+1-l h=1 (W (L+1-h) ⊗ A (L+1-h) ) ⊙ P (L+1-h) (z) T b (l-1) + b (L) ,\nwhere\nP (l) (z) ∈ {0, 1} n l-1 ×n l ,\nis the matrix encoding the activation pattern of the network at layer l.\nIn the above, ⊙ is the element-wise or Hadamard product. This construction leads to their main result, which we will extend to large classes of networks. We now proceed to a generalised version of this results that allows us to generate decomposition for networks that apply tensor contractions to a distinguished tensor: the output of the previous layer. This result too relies on the vectorisation of the neural network; which enables us to encode the weight sharing scheme in the Kronecker product of matrices.\nGiven a collection of matrix contractions on a tensor X ∈ R a1×...×a k , as represented by X; A 1 , A 2 , ..., A k , also known as Tucker product, with A i ∈ R ai×a ′ i acting on the ith mode of the tensor, a Tensor Neural Network is a composition of L parametrised layers, with N = (n 1 , n 2 , n 3 , ..., n L ) collection of mode vectors for each layer, each with k l modes and dimensionality given by a vector n l = [a\n(l) 1 , ..., a (l) k l ] yielding a function N T : R ×n1 → R ×nL , where we take ×n l = a (l) 1 × ... × a (l)\nk l each forward pass is defined by:\nχ (l+1) = σ χ (l) ; A (l) 1 , A (l) 2 , ...A (l) k l + b where k = |V | is the number of nodes of the graph, χ (l) ∈ R ×n l and A (l) i ∈ R a (l-1) i ×a (l) i is a weight matrix. Finally, b ∈ R ×n l+1 is a tensor of biases." }, { "figure_ref": [], "heading": "Theorem 3", "publication_ref": [ "b3" ], "table_ref": [], "text": "The local linear model of a Tensor Neural Network at a point z ∈ R ×n1 is given by\nη P(z) (X) = w P(z)T vec(X) + b P(z) , ∀X ∈ R P(z)\nwhere,\nw P(z) = L h=1     i∈[k h ] A (h) i   ⊙ P (L+1-h) (z) T   W (0)\nand the bias parameter is given by b\nP(z) = L l=1 L+1-l h=1     i∈[k h ] A (h)T i   ⊙ P (L+1-h) (z) T   b (l-1) + b (L) ,\nwhere P(z) ∈ {0, 1} ×n l is a tensor encoding the activation pattern of N at point z ∈ R ×n1 , and X = vec(z).\nThis result is be a stepping stone to generalise for arbitrary tensor contractions, beyond the Tucker product, whenever suitable matrizations apply. In particular, these networks and their decomposition can be mapped to transformations of type f : R n → R m , for suitable choices of n, m ∈ N. The significance is two-fold: on one side, we may understand all higher order architectures as special instances of feed-forward neural networks in which weights are constrained by the Kronecker product scheme. 
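The Kronecker-product weight constraint discussed here rests on the identity vec(AXW) = (W^T ⊗ A) vec(X) under column-major vectorisation; a quick numerical check (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))      # graph operator acting on the node dimension
X = rng.normal(size=(4, 3))      # node features
W = rng.normal(size=(3, 5))      # layer weight matrix

def vec(M):
    return M.flatten(order="F")  # column-major vectorisation

lhs = vec(A @ X @ W)
rhs = np.kron(W.T, A) @ vec(X)
print(np.allclose(lhs, rhs))     # True: vec(A X W) = (W^T kron A) vec(X)
```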
This in turn highlights how these architectures are at best as expressive as neural networks of suitable dimension.\nWhile many layers can be recovered as a special case of the graph neural network, there are certain layer types, such as the Long Short Term Memory cell Hochreiter and Schmidhuber (1997), which involve the point-wise multiplication of two layer outputs. To that end, we show how the decomposition of a multiplicative interaction leads to higher order forms, instead of linear models." }, { "figure_ref": [], "heading": "Corollary 1", "publication_ref": [], "table_ref": [], "text": "Let multiplicative interactions be defined as the element-wise multiplication of two forward pass layers of neural networks, in the form below:\nχ (l+1) = σ(W χ (l) 1 + b) ⊙ σ(V χ (l) 2 + c).\nFor a given pair χ\n(l) 1 , χ (l)\n2 , there exists a decomposition of the layer given by:\nχ (l+1) = D (l) 1 W χ (l) ⊙ D (l) 2 χ (l) 2 + b ⊙ D (l) 2 χ (l) 2 + c ⊙ D (l) 1 W χ (l) + b ⊙ c where D (l) 1 , D (l)\n2 are diagonal matrices storing the activation pattern in their diagonal." }, { "figure_ref": [], "heading": "Symbolic Representation of Neural Networks", "publication_ref": [ "b0", "b9" ], "table_ref": [], "text": "In this Section we explore the consequences of the decomposition for a symbolic interpretation of the neural network. Indeed, the decomposition opens many paths to inspect the inner workings of the network, but two analogies are particularly fitting. By viewing every activation pattern as a leaf on a tree-based model, we can generate a surrogate that mimicks the behaviour of the neural network exactly. There are several models that can be used, for example Aytekin (2022) uses general decision trees and Schlüter et al. (2023) use Algebraic Decision Structures. We decide to use Multivariate Regression Trees, as these are easiest to define and resemble most closely the propagation of information in the network. Importantly, all of these models are white-box: computing the tree-based alternative allows us to fully comprehend the global behaviour of the network.\nThe second observation is that the half-spaces of the neural network induced by the network form a Boolean algebra in the input space. There is a close link between Boolean algebras and logic, which entails that we can understand the network's functioning as the evaluation of propositions in a first order logic. We will state the formal result after stating the conditions for activations of a given neuron." }, { "figure_ref": [], "heading": "Half Space Conditions", "publication_ref": [], "table_ref": [], "text": "For a ReLU network, every neuron of the first hidden defines a half plane in the input space as follows. Let W ∈ R n and b ∈ R. H(W, b) is the half space defined by all x ∈ R n such that:\nW T • x + b > 0.\nWe can see this applied to neural networks. Let,\nχ (1) = σ(W (1) x + b (1) ) = max(W (1) x + b (1) , 0)\nthen, for all i ∈ {1, ..., n 1 } where n 1 is the number of neurons in the first layer, we have that\ns(χ (1) i ) = 1 ⇐⇒ x ∈ H(W (1) i , b (1) i ) and s(χ (1) i ) = 0 ⇐⇒ x ∈ H(W (1) i , b(1)\ni ). This implies that a given activation pattern for the first layer P (1) , there is an intersection of space, ω P (1) given by:\nω P (1) = n1 i=1 H(W (1) i • (2P (1) i -1), b (1) i ).\n(1)\nBy iterating the recursion χ (l+1) = max{W (l) χ (l) +b (l) , 0}, we provide rules for activation of each neuron P (l)\ni , i ∈ {1, ..., n l }. This results in the following lemma. 
Lemma 1 (Conditions for Activation) Given a neural network N : R n → R m , with L ∈ N layers, and an activation pattern P = {P (1) , P (2) , ..., P (L) }, the region ω P ⊂ R n defined by the activation pattern is given by:\nω P = L j=1 nj i=1 H w P[j] W (j) i p (j) i , b P[j] i\nwhere p (j) i is an identity matrix with the ith diagonal is replaced by 2 • P (j) i -1, which is the activation state of the ith neuron of the jth layer, D (h) is the diagonal matrix associated with the activation pattern P (h) , given by D (h) = diag(P (h) ) for the hth layer, and P[j] = {P (1) , P (2) , ..., P (j) } ⊂ P, is the set of all activation vectors until layer j, so that w P[j] and b P[j] represent the coefficients of a local linear model up to layer j, given by 1.\nNote that for the first layer each neuron represents a half-space, while for the second layer, each neuron can represent a collection of half-spaces, depending on the previous selection. We can understand this as a hierarchy of concepts. The first partition Ω (1) = {ω P[1] : P[1] ∈ {0, 1} n1 }, defines a set of concepts, which we refine through distinctions represented by the neurons of the second layer.\nIn fact, if there are 2 n2 possible activation patterns, that would define a partition for each of the 2 n1 contexts. However, some of these activation patterns may define empty regions, and this would depend by the context, i.e. the activation pattern of the previous layers." }, { "figure_ref": [], "heading": "Networks are Trees and Theories", "publication_ref": [ "b2", "b0", "b9" ], "table_ref": [], "text": "The hierarchical description implied by the recursive partitioning of the neural networks' layers motivate the relationship between neural networks and tree based models. In particular, we search a models that can replicate the behaviour of the neural network exactly: every path representing the conditions applied by a given activation pattern and each leaf containing the data of the linear model we will apply in that case. Indeed, this description refers to the Multivariate Regression Tree De'Ath (2002), which we define in the appendix. We fit the data of the neural network, empowered by the decomposition into local linear models.\nTheorem 4 (Tree for a Neural Network ) For every feedforward ReLU neural network N there exist a MRT (M, T, e, Θ) that represents exactly the behaviour of the neural network:\nM(x) = N (x) ∀x ∈ R n .\nThis result is important insofar as it allows us to represent a neural reasoner symbolically. In particular, it proves the observations of Aytekin (2022) formally. A challenge is to find and store all the partitions.\nWe can consider individual half-space divisions as atoms in a propositional logic. The following comments reflect the spirit of Schlüter et al. (2023), who prove a correspondence between ReLU networks and algebraic decision structure. We show instead that there is an internal logic to the neural network, which can be computed by the half-space algebra.\nCorollary 2 (Internal Logic of a Network ) A ReLU feedforward neural network N : R n → R m induces a Boolean algebra which is the Lidenbaum-Tarski algebra of a theory T in classical propositional logic given by:\n• A collection of propositional variables h (l) i , l ∈ [L], i ∈ [n l ] • A collection of terms determined by arbitrary meets P = {p : p = l∈[L],i∈[n l ] h (l) i },\n• Axioms and formulas pursuant the structure of the Boolean algebra. 
This follows directly from assigning to each variable the truth statement of x ∈ H w P\n[j] W (j) i p (j) i , b P[j] i\nas defined in Lemma 1, for all possible activation patterns. Then, the Boolean algebra spanned by the half spaces implied by the neural network activations returns the required theory.\nHalf-space conditions are the alphabet of the neural network's reasoning, meaning that propositions are then formed by taking arbitrary intersections of these conditions. There are two important consequences. Transformations of neural networks imply transformations of their underlying grammar: by applying backpropagation we obtain a morphism of trees and theories in adequate categories. This justifies the intention to use category theory as an instrument to analyse the interplay between architecture and representation system Spivak (2021)." }, { "figure_ref": [], "heading": "Explainability", "publication_ref": [], "table_ref": [], "text": "In this section we get the SHAP Values for ReLU neural networks explicitly. We can use the previous results to compute exact local Shapley values for an instance. Precisely, given that we have an explicit local model for each region R P(z) we can state the following." }, { "figure_ref": [], "heading": "SHAP Values", "publication_ref": [], "table_ref": [], "text": "We recall the definition of SHAP values from Lundberg and Lee (2017) on a given function f . Definition 2 (SHAP Values) Given a function f : R n → R m , the SHAP values for feature i ∈ [n] are given by:\nφ v (i) = S∈P([n])/{i} (N -|S| + 1)|S|! N ! ∆ v (S, i),\nwhere P([n]) is the set of a all subsets of [n], v : P([n]) → R is a value function, and ∆ v (S, i) is the marginal contribution of a feature i on a subset S ∈ P([n]), which we refer to as a coalition of features, is given by: ∆ v (i, S) = v({i} ∪ S)v(S).\nThese provide concrete examples of how a piece-wise linear theory of architecture can support development of XAI techniques.\nLemma 2 (Local Shapley Values of a ReLU Neural Network ) Given a neural network N : R n → R m , with hyperparameters L, the number of layers and N = [n 1 , ..., n L ] the number of neurons for each layer, given a Linear Local Model decomposition with η P (x) for x ∈ R P(z) with activation pattern P(z), the Shapley value is given by: φ f (i) j = wP(z) i,j (x ixi ) so long as xS ∈ R P(z) for all coalitions, , ∀S ∈ P([n]/i).\nThis theorem implies that given a neural network decomposition, exact SHAP values can be computed simply by finding the linear model for instance of interest and its masked counterparts. Summing the coefficients according to the above formula will return the desired value. This entails a reduction in computation time as we are no longer fitting local surrogates as in KernelSHAP, whenever the decomposition is readily available. In particular, these SHAP values are exact, meeting an increasing need for faithfulness of explanations, both in practice and for regulatory purposes. We also prove a global version of this Theorem, with weaker assumptions, which can be found in the Appendix.\nas imposing a further restriction on the polytope defined by ω P[l] , leading to a collection of half spaces given by:\nn l i=1 H(W (l+1) i p (l+1) i , b (l+1) i ) ⊂ R n l .\nHowever, we stress these half spaces live in R n l . To express the conditions on R n , the input space, we project back the half space into the domain of the previous layers recursively. In particular, let χ (l+1) be the post-activation of the lth layer. 
Then, expanding by the linear model decomposition, we obtain the following equivalent conditions.\nχ (l+1) ∈ H(W (l+1 p (l+1) i , b (l+1)i ) ⇐⇒ (W (l+1) i p (l+1) i ) T χ (l+1) + b (l+1) i > 0 ⇐⇒ (W (l+1) i p (l+1)T i )D (l) (W (l)T χ (l) + b (l) ) + b (l+1) i > 0 ⇐⇒ (W (l+1) i p (l+1) i ) T D (l) W (l)T χ (l) + (W (l+1) i p (l+1) i ) T D (l) b (l) + b (l+1) i > 0 ... ⇐⇒ (W (l+1) i p (l+1) i w P[l] ) T x + b P[l] i + b (l+1) i > 0 ⇐⇒ x ∈ H w P[j] W (j) i p (j) i , b P[j] i .\nTaking the intersection for all i ∈ {1, ..., n l } and across layers provides us with the desired result." }, { "figure_ref": [], "heading": "Appendix E Proof of Existence of Multivariate Regression Tree for Every Neural Network", "publication_ref": [], "table_ref": [], "text": "We define the Multivariate Regression tree and prove the statement of the theorem. imposed by each bifurcation of the tree, • Θ : Λ → R n×m × R m is a function that assigns to each leaf λ ∈ Λ ⊂ P(V ) (identified as the unique path from the root) parameters for a linear model, and • M : R n → R m is a function that applies for every x ∈ R n the linear model\nη λ = (W λ ) T • x + b λ , x ∈ R n ,\nwhenever x ∈ e∈λ H(e 1 (E), e 2 (E)), the collection of half-spaces imposed by the path, where e 1 , e 2 are the two components of e.\nProof N has L layers, each with n i , i ∈ [L] neurons. Each of these neurons provides a halfspace, as given by 1. Therefore, each architecture N dictates a tree T with V ∼ = i∈[L] [n i ], the is the Local Linear Model for the activation region R P(z) and x S is the vector with S ∪ {i} masked out and xS is the vector with S masked out, and i ∈ [n], j ∈ [m]." }, { "figure_ref": [], "heading": "Proof", "publication_ref": [], "table_ref": [], "text": "The theorem follows form substituting the respective linear models for each marginal contribution of a Shapley value. This entails that the marginal contribution can be written as ∆ f (i, S) j = f (x S∪{i} , x S∪{i} ) jf (x S , x S ) j = η P(x S ) (x S ) jη P(x S ) (x S ) j )x S k .\nAveraging over S ∈ P([n])/{i} ends the proof." }, { "figure_ref": [], "heading": "Appendix A Proof of GNN Decomposition", "publication_ref": [], "table_ref": [], "text": "Proof First we realise that, whenever A, X, B are compatible matrices, we have:\nalso known as the vec trick. The key in the computation of the weights is noticing that the layerwise propagation function can be represented by this vectorisation, χ (l) = σ(A (l-1) χ (l-1) W (l-1) ) = σ((W (l-1)T ⊗ A (l-1) )vec(χ (l-1) )).\nUsing this fact allows us to treat the matrix W (l-1) = (W (l-1) ⊗ A (l-1) ) as the weight matrix of a linear neural network. To encode the activation pattern of the graph neural network we take the activation pattern, which in this case is a matrix D (l) . Combining the fact that\nwith the vec trick results in: 0) , and this exact replacement produces the bias parameters." }, { "figure_ref": [], "heading": "Appendix B Proof of TCN Decomposition", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Proof", "publication_ref": [], "table_ref": [], "text": "The key step in this proof it to realise that the vector trick generalises to higher dimensional contractions between a tensor and a matrix.\nand with the vectorisation of the Hadamard product being preserved element-wise, the proof follows exactly the replacement in the equivalent proof for the Graph Convolutional Network." 
}, { "figure_ref": [], "heading": "Appendix C Proof of Multiplicative Interaction Decomposition", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Proof", "publication_ref": [], "table_ref": [], "text": "As before, we realise that there exist a diagonal matrices D 1 , D 2 that hold the activation pattern and such that preserve locally the behaviour of ReLU activation is replicated. We observe that:" }, { "figure_ref": [], "heading": "Appendix D Proof of Conditions of Activations for a Neural Network", "publication_ref": [], "table_ref": [], "text": "Proof Set an activation pattern given by P = {P (1) , P (2) , ..., P (L) } where P (l) ∈ {0, 1} n l , l ∈ {1, ..., L}. We prove the statement using mathematical induction on the number of layers L.\nFor the case L = 1, with P = {P (1) }, the region is given by equation 1, which is the only layer. For the inductive step, assuming that case L = l is true, we prove it implies the formula for L = l + 1. Recall that n l+1 ∈ N, P (l+1) ∈ {0, 1} nL+1 , the activation pattern for the l + 1th layer. We can think of each neuron in the subsequent l + 1th layer set of vertices is one-to-one with the set of all neurons, and\nwhere in particular we map the activation pattern P(z) to a path λ by building e(a\n), where a\n(1) i,j is the edge representing the ith neuron of the jth layer being active, and the components of the function are defined in the proof of 1. Finally, we choose Θ(λ) = (w P(z) , b P(z) ) whenever the path λ reflects the activation pattern P(z); meaning that at every edge of the tree x ∈ H(e 1 (a i,j ), e 2 (a i,j )) ⇐⇒ a i,j ∈ λ ⇐⇒ P (j) (z) i = 1, and x / ∈ H(e 1 (a i,j ), e 2 (a i,j )) ⇐⇒ a\ni,j ∈ λ ⇐⇒ P (j) (z) i = 0. M is then determined from the definition and is by construction equal to N everywhere." }, { "figure_ref": [], "heading": "Appendix F Proof of Local SHAP Values", "publication_ref": [], "table_ref": [], "text": "Proof For a given activation pattern P(z), if x, x ∈ R P(x) , this implies that the marginal contribution for a given coalition is given by:\nwhich results to the Shapley values of a linear model, given in Lundberg and Lee (2017)." }, { "figure_ref": [], "heading": "Appendix G Statement and Proof of Global SHAP Values", "publication_ref": [], "table_ref": [], "text": "In general, the masked value will not fall in the same activation region as the sample of interest. Most of the time, it is likely that masking a value will send it to a different activation region. This informs the proof of the next, more general, result.\nTheorem 5 (Global Shapley Values of a ReLU Neural Network ) Given a neural network N : R n → R m , with hyperparameters L, the number of layers and N = [n 1 , ..., n L ] the number of neurons for each layer, given a Linear Local Model decomposition with linear regions η P(z) (x) for x ∈ R P(z) with activation pattern P(z), the global Shapley value is given by: (z) " } ]
Deep ReLU networks can be decomposed into a collection of linear models, each defined on a region of a partition of the input space. This paper provides three results extending this theory. First, we extend the linear decomposition to Graph Neural Networks and tensor convolutional networks, as well as to networks with multiplicative interactions. Second, we provide proofs that neural networks can be understood as interpretable models such as multivariate decision trees and logical theories. Finally, we show how this model leads to computing cheap and exact SHAP values. We validate the theory through experiments on Graph Neural Networks.
Unwrapping All ReLU Networks
[ { "figure_caption": "Definition 3 (3Multivariate Regression Tree)For a learning problem D = X × Y, where X ⊂ R n , Y ⊂ R m , a Multivariate Regression Tree (MRT) is a tuple (M, T, e, Θ) where• T = (V, E) is a binary tree, • e : E → R n × Rare edge labels representing the half space conditions H(W, b), ¬H(W, b)", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "PSince x k = xk , ∀k = i, we collect the terms to get∆ f (i, S) j = b", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" } ]
Mattia Villani; Jacopo; Mcburney Peter
[ { "authors": "Caglar Aytekin", "journal": "", "ref_id": "b0", "title": "Neural networks are decision trees", "year": "2022" }, { "authors": "Randall Balestriero", "journal": "PMLR", "ref_id": "b1", "title": "A spline theory of deep learning", "year": "2018" }, { "authors": "Glenn De; ' Ath", "journal": "Ecology", "ref_id": "b2", "title": "Multivariate regression trees: a new technique for modeling species-environment relationships", "year": "2002" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural computation", "ref_id": "b3", "title": "Long short-term memory", "year": "1997" }, { "authors": "N Thomas; Max Kipf; Welling", "journal": "", "ref_id": "b4", "title": "Semi-supervised classification with graph convolutional networks", "year": "2016" }, { "authors": "Yann Lecun; Léon Bottou; Yoshua Bengio; Patrick Haffner", "journal": "", "ref_id": "b5", "title": "Gradient-based learning applied to document recognition", "year": "1998" }, { "authors": "M Scott; Su-In Lundberg; Lee", "journal": "Advances in neural information processing systems", "ref_id": "b6", "title": "A unified approach to interpreting model predictions", "year": "2017" }, { "authors": "Razvan Guido F Montufar; Kyunghyun Pascanu; Yoshua Cho; Bengio", "journal": "Advances in neural information processing systems", "ref_id": "b7", "title": "On the number of linear regions of deep neural networks", "year": "2014" }, { "authors": "Geoffrey E David E Rumelhart; Ronald J Hinton; Williams", "journal": "nature", "ref_id": "b8", "title": "Learning representations by back-propagating errors", "year": "1986" }, { "authors": "Maximilian Schlüter; Gerrit Nolte; Alnis Murtovi; Bernhard Steffen", "journal": "", "ref_id": "b9", "title": "Towards rigorous understanding of neural networks via semantics-preserving transformations", "year": "2023" }, { "authors": "I David; Spivak", "journal": "", "ref_id": "b10", "title": "Learners' languages", "year": "2021" }, { "authors": "Agus Sudjianto; William Knauth; Rahul Singh; Zebin Yang; Aijun Zhang", "journal": "", "ref_id": "b11", "title": "Unwrapping the black box of deep relu networks: interpretability, diagnostics, and simplification", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 152.52, 312.55, 178.68, 11.54 ], "formula_id": "formula_0", "formula_text": "χ (l) = σ(W (l-1) χ (l-1) + b (l-1) ) = σ(z (l) )." }, { "formula_coordinates": [ 2, 179.28, 376.27, 125.28, 14.64 ], "formula_id": "formula_1", "formula_text": "χ (l) i = σ(z (l) i ) = max{0, z (l) i }." }, { "formula_coordinates": [ 2, 50.64, 517.51, 272.82, 45.48 ], "formula_id": "formula_2", "formula_text": "P(x) = {P (1) (x), P (2) (x), ..., P (L) (x)} such that P (l) i (x) = s(χ (l) i ) ∈ S," }, { "formula_coordinates": [ 3, 170.4, 191.18, 160.55, 11.35 ], "formula_id": "formula_3", "formula_text": "η P (x) = w P(z)T x + b P(z) , ∀x ∈ R P(z)" }, { "formula_coordinates": [ 3, 172.08, 233.45, 157.68, 31.45 ], "formula_id": "formula_4", "formula_text": "w P(z) = L h=1 W (L+1-h) D L+1-h W (0) ," }, { "formula_coordinates": [ 3, 149.64, 284.57, 206.88, 31.45 ], "formula_id": "formula_5", "formula_text": "P(z) = L l=1 L+1-l h=1 W (L+1-h) D L+1-h b (l-1) + b (L) ," }, { "formula_coordinates": [ 3, 210.72, 336.07, 80.4, 11.54 ], "formula_id": "formula_6", "formula_text": "D (l) = diag(P(z))," }, { "formula_coordinates": [ 3, 188.28, 592.15, 119.35, 11.54 ], "formula_id": "formula_7", "formula_text": "χ (l+1) = σ A • χ (l) W (l) + b" }, { "formula_coordinates": [ 4, 137.4, 165.26, 208.31, 11.35 ], "formula_id": "formula_8", "formula_text": "η P(z) (X) = w P(z)T vec(X) + b P(z) , ∀X ∈ R P(z)" }, { "formula_coordinates": [ 4, 108.96, 196.61, 265.31, 31.33 ], "formula_id": "formula_9", "formula_text": "w P(z) = L h=1 (W (L+1-h) ⊗ A (L+1-h) ) ⊙ P (L+1-h) (z) T W (0)" }, { "formula_coordinates": [ 4, 85.2, 248.09, 317.64, 31.45 ], "formula_id": "formula_10", "formula_text": "P(z) = L l=1 L+1-l h=1 (W (L+1-h) ⊗ A (L+1-h) ) ⊙ P (L+1-h) (z) T b (l-1) + b (L) ," }, { "formula_coordinates": [ 4, 191.64, 299.81, 100.44, 11.8 ], "formula_id": "formula_11", "formula_text": "P (l) (z) ∈ {0, 1} n l-1 ×n l ," }, { "formula_coordinates": [ 4, 50.64, 463.39, 382.45, 30 ], "formula_id": "formula_12", "formula_text": "(l) 1 , ..., a (l) k l ] yielding a function N T : R ×n1 → R ×nL , where we take ×n l = a (l) 1 × ... × a (l)" }, { "formula_coordinates": [ 4, 50.64, 520.63, 381.41, 47.06 ], "formula_id": "formula_13", "formula_text": "χ (l+1) = σ χ (l) ; A (l) 1 , A (l) 2 , ...A (l) k l + b where k = |V | is the number of nodes of the graph, χ (l) ∈ R ×n l and A (l) i ∈ R a (l-1) i ×a (l) i is a weight matrix. Finally, b ∈ R ×n l+1 is a tensor of biases." }, { "formula_coordinates": [ 4, 137.4, 607.7, 208.31, 11.35 ], "formula_id": "formula_14", "formula_text": "η P(z) (X) = w P(z)T vec(X) + b P(z) , ∀X ∈ R P(z)" }, { "formula_coordinates": [ 5, 137.76, 67.42, 225.83, 35 ], "formula_id": "formula_15", "formula_text": "w P(z) = L h=1     i∈[k h ] A (h) i   ⊙ P (L+1-h) (z) T   W (0)" }, { "formula_coordinates": [ 5, 108.96, 122.74, 288.24, 35 ], "formula_id": "formula_16", "formula_text": "P(z) = L l=1 L+1-l h=1     i∈[k h ] A (h)T i   ⊙ P (L+1-h) (z) T   b (l-1) + b (L) ," }, { "formula_coordinates": [ 5, 168.84, 400.99, 164.16, 14.52 ], "formula_id": "formula_17", "formula_text": "χ (l+1) = σ(W χ (l) 1 + b) ⊙ σ(V χ (l) 2 + c)." 
}, { "formula_coordinates": [ 5, 138.6, 420.67, 28.67, 14.52 ], "formula_id": "formula_18", "formula_text": "(l) 1 , χ (l)" }, { "formula_coordinates": [ 5, 59.64, 449.59, 339.34, 31.56 ], "formula_id": "formula_19", "formula_text": "χ (l+1) = D (l) 1 W χ (l) ⊙ D (l) 2 χ (l) 2 + b ⊙ D (l) 2 χ (l) 2 + c ⊙ D (l) 1 W χ (l) + b ⊙ c where D (l) 1 , D (l)" }, { "formula_coordinates": [ 6, 208.2, 195.77, 67.44, 11.8 ], "formula_id": "formula_20", "formula_text": "W T • x + b > 0." }, { "formula_coordinates": [ 6, 141, 230.95, 201.75, 11.54 ], "formula_id": "formula_21", "formula_text": "χ (1) = σ(W (1) x + b (1) ) = max(W (1) x + b (1) , 0)" }, { "formula_coordinates": [ 6, 50.64, 277.63, 267.5, 46.55 ], "formula_id": "formula_22", "formula_text": "s(χ (1) i ) = 1 ⇐⇒ x ∈ H(W (1) i , b (1) i ) and s(χ (1) i ) = 0 ⇐⇒ x ∈ H(W (1) i , b(1)" }, { "formula_coordinates": [ 6, 158.88, 357.05, 165.96, 31.21 ], "formula_id": "formula_23", "formula_text": "ω P (1) = n1 i=1 H(W (1) i • (2P (1) i -1), b (1) i )." }, { "formula_coordinates": [ 6, 161.28, 482.57, 154.65, 32.17 ], "formula_id": "formula_24", "formula_text": "ω P = L j=1 nj i=1 H w P[j] W (j) i p (j) i , b P[j] i" }, { "formula_coordinates": [ 7, 197.28, 336.53, 107.28, 11.92 ], "formula_id": "formula_25", "formula_text": "M(x) = N (x) ∀x ∈ R n ." }, { "formula_coordinates": [ 7, 49.56, 496.63, 372.6, 29.39 ], "formula_id": "formula_26", "formula_text": "• A collection of propositional variables h (l) i , l ∈ [L], i ∈ [n l ] • A collection of terms determined by arbitrary meets P = {p : p = l∈[L],i∈[n l ] h (l) i }," }, { "formula_coordinates": [ 7, 87.23, 557.11, 71.02, 14.64 ], "formula_id": "formula_27", "formula_text": "[j] W (j) i p (j) i , b P[j] i" }, { "formula_coordinates": [ 8, 143.04, 276.56, 197.64, 28.54 ], "formula_id": "formula_28", "formula_text": "φ v (i) = S∈P([n])/{i} (N -|S| + 1)|S|! N ! ∆ v (S, i)," }, { "formula_coordinates": [ 11, 178.08, 82.73, 145.68, 31.33 ], "formula_id": "formula_29", "formula_text": "n l i=1 H(W (l+1) i p (l+1) i , b (l+1) i ) ⊂ R n l ." }, { "formula_coordinates": [ 11, 94.8, 188.83, 315.06, 123 ], "formula_id": "formula_30", "formula_text": "χ (l+1) ∈ H(W (l+1 p (l+1) i , b (l+1)i ) ⇐⇒ (W (l+1) i p (l+1) i ) T χ (l+1) + b (l+1) i > 0 ⇐⇒ (W (l+1) i p (l+1)T i )D (l) (W (l)T χ (l) + b (l) ) + b (l+1) i > 0 ⇐⇒ (W (l+1) i p (l+1) i ) T D (l) W (l)T χ (l) + (W (l+1) i p (l+1) i ) T D (l) b (l) + b (l+1) i > 0 ... ⇐⇒ (W (l+1) i p (l+1) i w P[l] ) T x + b P[l] i + b (l+1) i > 0 ⇐⇒ x ∈ H w P[j] W (j) i p (j) i , b P[j] i ." }, { "formula_coordinates": [ 11, 187.92, 552.53, 126, 11.8 ], "formula_id": "formula_31", "formula_text": "η λ = (W λ ) T • x + b λ , x ∈ R n ," } ]
2023-05-16
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b0", "b2", "b3", "b4", "b5" ], "table_ref": [], "text": "Structural health monitoring (SHM), is a predictive tool which provides an online damage detection and condition monitoring strategy using data recorded from an individual structure [1]. Data are often unavailable or incomplete, measurements can be limited [2] and, as a consequence, a data set which would be used to train a model could be insufficient to provide reliable results. Part of the increased motivation for SHM systems is from the growing number of structures which are reaching the end of their design life [1]; if the condition of the structures can be accurately assessed online for damage, then the structures can continue to operate. A key element to a SHM system is that it is accurate, as the cost of uninformative predictions is not just economical but could have safety implications too.\nMulti-task learning (MTL) refers to a suite of algorithms which learn tasks simultaneously, as opposed to in isolation from each other. MTL can be applied to improve generalisation of tasks, hence the accuracy of predictions, and therefore it could be applied to improve SHM systems. Caruana [3], developed one of the first forms of multi-task learning, a neural network (NN) with back propagation, to train tasks simultaneously and improve generalisation between tasks. Since the original work, multi-task learners have been applied to a lot of different machine-learning algorithms, from support vector machines [4] to decision trees [5,6]. Within this paper, multi-task learning will be discussed purely in relation to NNs. Improved generalisation may improve accuracy of predictions across multiple tasks and hence could be beneficial to SHM systems.\nThe purpose of this paper is to provide a discussion of the applicability of the different types of multi-task learning problem settings with respect to the field of SHM. To distinguish between whether an NN is multi-task or not, a non-MTL NN will be referred to as an independent learner and a multi-task NN will be referred to as MTL." }, { "figure_ref": [], "heading": "BACKGROUND", "publication_ref": [ "b6", "b7", "b8", "b9", "b10", "b11" ], "table_ref": [], "text": "It is useful to start with the architecture of a simple NN, as shown in Figure 1. Data entered via the input layer, where each node represents an individual feature/measurement that is added to the network. The hidden layer represents a set of latent features which are constructed from the measured features; there can be multiple hidden layers and hence multiple sets of latent features. Finally, the output layer generates the final output from the network.\nNNs have been applied within the context of SHM in varying applications. Elkordy et al. [7] is an early example of the use of NNs in the context of SHM for damage detection. Manson et al. [8] and Mustapha et al. [9] each use a multi-layer perceptron NN to classify the location of damage, on an aircraft wing and isotropic plate, respectively. Since these early examples of the use of NNs in SHM, there has been a plethora of work utilising NNs, from using auto-associative NNs to detect damage before a crack is visibly seen [10], to using a convolutional NN to classify wind turbine tower vibration health states [11], a systematic review of the use of convolutional NNs in SHM is given in [12].\nThe structure of an MTL NN is an extension of the independent NN, as shown by Figure 2. 
Just as with the independent formulations, the MTL NN has an input layer, hidden layers and an output layer. The main difference between Figure 1 and Figure 2 is the additional output nodes in the output layer. The additional output nodes represent different tasks, i.e. the multi-task nature of the network. The hidden layer could be more or less unchanged in the MTL setting, or it can take a very different structure. For a deep NN, the MTL NN may share all of the hidden layers or only several of the bottom hidden layers (an example of this structure is shown in Figure 3)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Figure 1. NN with single task." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b2" ], "table_ref": [], "text": "Figure 2. NN with multiple tasks.
The input layer may have the same number of input features as with the independent learner; however, it may also have more input nodes to represent more features being added to the model. The structure of the NN will be dependent on the mechanism of MTL which is being implemented. There are three main themes of multi-task learning discussed here, which have been inspired by Caruana's paper on MTL [3], all of which have applicability within SHM problem settings. Briefly, these are:
• Natural occurrence of multiple tasks - additional tasks which make sense to learn together.
• Using outputs as \"inputs\" - adding data, which cannot be added as an input, as an output, so that the data can influence the model. This can be used when the data are not accessible on the same time scale as the other data inputs.
• Additional loss functions to provide different insights - repeating the output of a NN but using a different loss function.
The following sections will detail each of the mechanisms and provide insight into potential problem settings for them." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Figure 3. Example of a deep NN with both a shared hidden layer and split hidden layer." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "PROBLEM SETTING 1: NATURAL OCCURRENCE OF MULTIPLE TASKS", "publication_ref": [ "b12", "b13", "b14", "b15", "b1", "b16", "b17", "b18", "b19" ], "table_ref": [], "text": "Arguably, one of the most intuitive uses of MTL is when there is a set of tasks which are related and therefore make sense to learn together. Often, the definition of MTL is given in relation to this problem setting; Zhang et al. [13] provide an overview of multi-task learning and define the aims of MTL as: to leverage useful information contained in multiple learning tasks to help learn a more accurate learner for each task. For this problem setting of MTL, learning the tasks together provides a synergy that improves the performance of several tasks compared to if an independent learner was applied to each task individually. The structure of an MTL network for this problem setting is given in Figure 4.
An example of the natural occurrence of multiple tasks in the context of SHM is the use of data from several nominally-identical structures to obtain information about their damage state, e.g. from wind turbines in a wind farm.
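To make the shared-hidden-layer idea concrete, the following is a minimal PyTorch sketch (not code from the paper) of a hard-parameter-sharing MTL network in the spirit of Figures 2-4: a shared trunk of hidden layers feeds one small output head per task, and the per-task losses are summed so that back-propagation updates the shared weights with gradients from every task. The layer sizes, the two-task setup and all names are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    """Hard-parameter-sharing MTL network: one shared hidden trunk, one head per task."""

    def __init__(self, n_features, n_hidden, n_tasks):
        super().__init__()
        # Shared hidden layers (latent features common to all tasks).
        self.trunk = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        )
        # One small task-specific output head per task.
        self.heads = nn.ModuleList([nn.Linear(n_hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        z = self.trunk(x)
        return [head(z) for head in self.heads]

# Joint training: summing the per-task losses means the shared trunk receives
# gradients from every task during back-propagation.
model = SharedTrunkMTL(n_features=8, n_hidden=32, n_tasks=2)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(16, 8)                             # a batch of measured features
targets = [torch.randn(16, 1) for _ in range(2)]   # one target per task

predictions = model(x)
loss = sum(loss_fn(p, t) for p, t in zip(predictions, targets))
optimiser.zero_grad()
loss.backward()
optimiser.step()
```

In an SHM setting of this kind, each head could correspond to one nominally-identical structure (e.g. one turbine in a farm), so the shared trunk is where information is leveraged across tasks.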
With multiple structures, generalisation may be improved such that physical changes may be more likely to be identified.\nObtaining information from damage states can be expensive; however, if data from different structures are combined, then damage-state information can be leveraged between structures and provide increased information about the structures. One methodology of obtaining damage states for a structure is to synthesise it. Synthesised structures could be used within MTL to improve the performance of the actual structures which feature in the model. Synthesised data has the benefit of being significantly cheaper to obtain than the cost associated with providing damage to actual structures.\nThe majority of MTL which has been conducted in the field of SHM takes this form. In [14], a multi-task Gaussian process regression is used for missing sensor data reconstruction across a dam sensor network. The data are not missing simultaneously; however, data points were missing from different sensors in the network and by looking at the data together, the reconstruction performance improved. In [15], multi-task sparse Bayesian learning was conducted on two damaged structures with supplementary information provided via simulations of the structures. Learning the two structures together, the damage patterns are more reliably detected. Finally, [16] used an artificial neural network using data from six aluminium plates to predict fatigue crack length and remaining fatigue life. Although not explicitly called MTL in the paper, the neural network has two output tasks which are learnt jointly.\nA recent area of research is into population-based SHM (PBSHM) [2,[17][18][19][20], which aims to utilise how data can be transferred and shared between populations of structures to allow inferences to be shared across the population. An applicable example of a population of structures that may benefit from PBSHM is one of offshore wind farms. There now exist a lot of wind farms and hence a lot of structures which require SHM from a safety perspective, a cost perspective but also from an efficiency-of-power-generated perspective. This problem setting of MTL fits within the remit of PBSHM and could be explored within this context. The second problem setting of MTL is using outputs as inputs, at first this may appear to be somewhat of an oxymoron; however, there are some features which are not available as inputs to a model, and therefore can only feature as outputs, see the structure in Figure 5. An output influences a NN during back-propagation, which occurs during training, by having an impact on the weights within the model. Hence, although added as an output, the additional task will influence the model and can be viewed as a form of input. There are several different, but linked, forms that this can take: transfer learning, non-operational features, and regression for classification. Each of the different forms is expanded below." }, { "figure_ref": [], "heading": "PROBLEM SETTING 2: USING OUTPUTS AS INPUTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Single Neural Network", "publication_ref": [ "b20", "b21", "b2" ], "table_ref": [], "text": "Certain problem settings of MTL may be labelled as transfer learning. Gardner et al. 
[21] broadly categorises transfer learning into two categories: training a model with data from an auxiliary task and fine-tuning it with the main task data, and performing domain adaptation such that two tasks can share a latent space. The latter category of transfer learning is applicable to SHM problem settings, with the aim of transferring knowledge between a source domain and a target domain [22]. Transfer learning does not require tasks to be learnt simultaneously; however, it is when tasks are learnt simultaneously that transfer learning is also multi-task learning.
An example of MTL transfer learning is given in [3]: in order to predict the medical risk of a patient, there are several features that can be measured as inputs (e.g. height); however, there may also be medical test results which would be a useful feature to add to the model. Medical test results take time to process and may not be available as an input for all patients. Therefore, the test results can be added as an output to the model and form an auxiliary task. During training, the test results help to inform the medical risk.
When tested, the target output is the risk and this can be calculated from the input features which are available. In this example, information from the medical test results has been transferred to inform the medical risk. An analogous example for SHM is using the results of modal analysis as an additional output for a neural network which is used for system identification.
The examples above are also useful when considering the use of non-operational features during training. A further problem setting of using outputs as inputs is to consider features that are available during training (which is likely offline, such as modal analysis), in comparison to features which are available during normal operation. The non-operational features that are available during training can be added as outputs to a NN, which could improve the generalisation of the model and therefore improve the performance of the model during online utilisation.
The final category for the problem setting of using outputs as inputs is using additional regression outputs to inform a classification task. For classification tasks, the classification is either True or False, and hence, there is a level of quantisation, whereas regression tasks are on a sliding scale which can contain a lot more information than simply True or False. Hence, there may be additional information in a regression task that would inform a classification task. It is a method of using a larger continuous space whilst solving a discrete problem. Additional classification tasks can also be used to improve the main classification task, as the quantisation may be different from that of the original task.
There is little research for this problem setting of MTL within the context of SHM. As mentioned above, there are strong links with transfer learning, and there are promising problem settings of using outputs as inputs within the context of SHM." }, { "figure_ref": [], "heading": "PROBLEM SETTING 3: ADDITIONAL LOSS FUNCTIONS TO PROVIDE DIFFERENT INSIGHTS", "publication_ref": [ "b22" ], "table_ref": [], "text": "When constructing a NN with a single task, the developer has to choose a loss function with which to train the network. During training, the weights and biases of the NN will be tuned to obtain the best performance of the NN, which is determined by minimising the loss function.
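As a small, hypothetical illustration of the mechanism used in this problem setting (not from the paper), the sketch below duplicates a single regression output and attaches a different loss function to each copy, so the shared weights are trained against both criteria at once. The network sizes, the choice of a mean-squared error and a fourth-power error, and all names are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# One network body, two copies of the same output: each copy is trained with a
# different loss function, so the shared weights are pulled towards a solution
# that satisfies both interpretations of the signal.
class DuplicatedOutputNet(nn.Module):
    def __init__(self, n_features, n_hidden):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.head_a = nn.Linear(n_hidden, 1)   # copy trained with an L2-type loss
        self.head_b = nn.Linear(n_hidden, 1)   # copy trained with a fourth-power loss

    def forward(self, x):
        z = self.trunk(x)
        return self.head_a(z), self.head_b(z)

def fourth_power_loss(pred, target):
    # Illustrative stand-in for an "L4-norm"-style error that emphasises extremes.
    return torch.mean((pred - target) ** 4)

model = DuplicatedOutputNet(n_features=4, n_hidden=16)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

x, y = torch.randn(32, 4), torch.randn(32, 1)   # the same target is used for both output copies
pred_a, pred_b = model(x)
loss = mse(pred_a, y) + fourth_power_loss(pred_b, y)   # joint loss over both copies
optimiser.zero_grad()
loss.backward()
optimiser.step()
```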
Different loss functions may have affinities to different values of the weights and biases. Hence, training the NN with different loss functions could lead to different solutions and could provide different insights from the network, even though the inputs into the network are exactly the same.
This problem setting of MTL duplicates outputs but uses a different loss function for each. Caruana [23] demonstrates how the rankprop error metric performs poorly at the lower end of a continuous spectrum. Following this, an MTL NN with both the rankprop error metric and the sum-of-square errors metric is used to improve the performance of the main task at the lower end of the spectrum. Adding the additional loss function, in this case, improved performance at a critical end of the spectrum.
A potential example of how this could be implemented within the field of SHM is in the context of fatigue testing. There are two parameters which would be of interest in fatigue testing: the number of cycles and the amplitude of the force experienced. To gauge the overall time signal, an L2-norm might be chosen; overall, it is anticipated that this would give a reasonable average response; however, it might not be sensitive enough to accurately detect the time when the loading switches from compression to tension. For this related task, a different regularisation parameter may be used, which focuses on when the response passes through 0. To understand the maximum load at the peak tension/compression, an L4-norm could be applied, which is useful for modelling extremities. As different regularisation may be used for different interpretations of the data, this is a good example of where MTL could improve the performance jointly over a range of tasks.
This problem setting could have interesting applications within SHM." }, { "figure_ref": [], "heading": "CONCLUDING REMARKS", "publication_ref": [], "table_ref": [], "text": "This paper discusses the different uses of MTL NNs and how they may be applicable in the field of SHM. MTL with multiple tasks arising naturally is the most explored of the problem settings; however, to date, there is still limited work using this approach. Transfer learning has been explored within PBSHM, but the intersection of transfer learning and MTL is yet to be explored with regard to NNs. Arguably, the least explored problem setting of MTL is the use of additional loss functions to provide different insights. As the cost of error in SHM can be quite high, this problem setting could be very beneficial for improving prediction accuracy in the field. Overall, there is a lot of potential for the use of MTL NNs in SHM, which remains to be researched." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENT", "publication_ref": [], "table_ref": [], "text": "This research was supported by grants from the Engineering and Physical Sciences Research Council (EPSRC), UK, and the Natural Environment Research Council, UK, via grant numbers EP/S023763/1 and EP/W005816/1 - Revolutionising Operational Safety and Economy for High-value Infrastructure using Population-based SHM (ROSEHIPS)." } ]
Multi-task neural networks learn tasks simultaneously to improve individual task performance. There are three mechanisms of multi-task learning (MTL) which are explored here for the context of structural health monitoring (SHM): (i) the natural occurrence of multiple tasks; (ii) using outputs as inputs (both linked to the recent research in population-based SHM (PBSHM)); and (iii) additional loss functions to provide different insights. Each of these problem settings for MTL is detailed and an example is given.
When is an SHM problem a Multi-Task-Learning problem?
[ { "figure_caption": "Figure 4 .4Figure 4. Problem setting 1: two tasks, A and B, put into one neural network.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Problem setting 2: Using additional inputs as an output.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Problem setting 3: Additional outputs for a NN.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" } ]
S C Bee; L A Bull; N Dervilis; K Worden
[ { "authors": "C R Farrar; K Worden", "journal": "John Wiley & Sons, Ltd", "ref_id": "b0", "title": "Structural Health Monitoring: A Machine Learning Perspective", "year": "2013" }, { "authors": "L A Bull; P A Gardner; J Gosliga; T J Rogers; N Dervilis; E J Cross; E Papatheou; A E Maguire; C Campos; K Worden", "journal": "Mechanical Systems and Signal Processing", "ref_id": "b1", "title": "Foundations of population-based SHM, Part I: Homogeneous populations and forms", "year": "2021" }, { "authors": "R Caruana", "journal": "Machine Learning", "ref_id": "b2", "title": "Multitask Learning", "year": "1997" }, { "authors": "H.-T Shiao; V Cherkassky", "journal": "IEEE", "ref_id": "b3", "title": "Implementation and Comparison of SVM-Based Multi-Task Learning Methods", "year": "2012" }, { "authors": "Q Wang; L Zhang; C Mingmin; ; ; J Guo", "journal": "IOS Press", "ref_id": "b4", "title": "MTForest: Ensemble Decision Trees based on Multi-Task Learning", "year": "2008" }, { "authors": "O Chapelle; P Shivaswamy; S Vadrevu; K Weinberger; Y Zhang; B Tseng", "journal": "Machine Learning", "ref_id": "b5", "title": "Boosted multi-task learning", "year": "2011" }, { "authors": "M F Elkordy; K C Chang; G C Lee", "journal": "Computer-Aided Civil and Infrastructure Engineering", "ref_id": "b6", "title": "A Structural Damage Neural Network Monitoring System", "year": "1994" }, { "authors": "G Manson; K Worden; D Allman", "journal": "Journal of Sound and Vibration", "ref_id": "b7", "title": "Experimental validation of a structural health monitoring methodology: Part III. Damage location on an aircraft wing", "year": "2003" }, { "authors": "F Mustapha; G Manson; K Worden; S G Pierce", "journal": "Mechanical Systems and Signal Processing", "ref_id": "b8", "title": "Damage location in an isotropic plate using a vector of novelty indices", "year": "2007" }, { "authors": "N Dervilis; M Choi; S G Taylor; R J Barthorpe; G Park; C R Farrar; K Worden", "journal": "Journal of Sound and Vibration", "ref_id": "b9", "title": "On damage diagnosis for a wind turbine blade using pattern recognition", "year": "2014" }, { "authors": "M Khazaee; P Derian; A Mouraud", "journal": "Renewable Energy", "ref_id": "b10", "title": "A comprehensive study on Structural Health Monitoring (SHM) of wind turbine blades by instrumenting tower using machine learning methods", "year": "2022" }, { "authors": "S Sony; K Dunphy; A Sadhu; M Capretz", "journal": "Engineering Structures", "ref_id": "b11", "title": "A systematic review of convolutional neural network-based structural condition assessment techniques", "year": "2021" }, { "authors": "Y Zhang; Q Yang", "journal": "National Science Review", "ref_id": "b12", "title": "An overview of multi-task learning", "year": "2018" }, { "authors": "Y Li; T Bao; Z Chen; Z Gao; X Shu; K Zhang", "journal": "Measurement: Journal of the International Measurement Confederation", "ref_id": "b13", "title": "A missing sensor measurement data reconstruction framework powered by multi-task Gaussian process regression for dam structural health monitoring systems", "year": "2021" }, { "authors": "Y Huang; J L Beck; H Li", "journal": "Computer-Aided Civil and Infrastructure Engineering", "ref_id": "b14", "title": "Multitask Sparse Bayesian Learning with Applications in Structural Health Monitoring", "year": "2019" }, { "authors": "H J Lim; H Sohn; Y Kim", "journal": "Mechanical Systems and Signal Processing", "ref_id": "b15", "title": "Data-driven fatigue crack quantification and prognosis using nonlinear ultrasonic 
modulation", "year": "2018" }, { "authors": "J Gosliga; P A Gardner; L A Bull; N Dervilis; K Worden", "journal": "Mechanical Systems and Signal Processing", "ref_id": "b16", "title": "Foundations of Population-based SHM, Part II: Heterogeneous populations -Graphs, networks, and communities", "year": "2021" }, { "authors": "P Gardner; L A Bull; J Gosliga; N Dervilis; K Worden", "journal": "Mechanical Systems and Signal Processing", "ref_id": "b17", "title": "Foundations of population-based SHM, Part III: Heterogeneous populations -Mapping and transfer", "year": "2021" }, { "authors": "G Tsialiamanis; C Mylonas; E Chatzi; N Dervilis; D J Wagg; K Worden", "journal": "Mechanical Systems and Signal Processing", "ref_id": "b18", "title": "Foundations of population-based SHM, Part IV: The geometry of spaces of structures and their feature spaces", "year": "2021" }, { "authors": "L A Bull; D D Francesco; M Dhada; O Steinert; T Lindgren; A K Parlikad; A B Duncan; M Girolami", "journal": "Computer-Aided Civil and Infrastructure Engineering", "ref_id": "b19", "title": "Hierarchical Bayesian modeling for knowledge transfer across engineering fleets via multitask learning", "year": "2023" }, { "authors": "P Gardner; L A Bull; N Dervilis; K Worden", "journal": "Mechanical Systems and Signal Processing", "ref_id": "b20", "title": "On the application of kernelised Bayesian transfer learning to population-based structural health monitoring", "year": "2022" }, { "authors": "P Gardner; X Liu; K Worden", "journal": "Mechanical Systems and Signal Processing", "ref_id": "b21", "title": "On the application of domain adaptation in structural health monitoring", "year": "2020" }, { "authors": "R Caruana", "journal": "", "ref_id": "b22", "title": "A Dozen Tricks with Multitask Learning", "year": "1998" } ]
[ { "formula_coordinates": [ 3, 154.13, 102.09, 122.78, 164.93 ], "formula_id": "formula_0", "formula_text": "L 1 L n I 1 I 2 I 3 I n O 1" }, { "formula_coordinates": [ 3, 370.61, 105.45, 122.78, 164.93 ], "formula_id": "formula_1", "formula_text": "L 1 L n I 1 I 2 I 3 I n O 1 O n" }, { "formula_coordinates": [ 4, 245.19, 104.98, 197.48, 222.04 ], "formula_id": "formula_2", "formula_text": "I 1 I 2 I 3 I n O 1 O m O m+1 O n" } ]
2023-05-17
[ { "figure_ref": [ "fig_0" ], "heading": "", "publication_ref": [ "b0", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b11", "b13" ], "table_ref": [], "text": "widely utilized in the detection of thyroid nodules, breast tumors, and gonadal tissues [1]. However, ultrasound images often suffer from noise, low contrast and resolution, which makes manual analysis a time-consuming and laborious process. Furthermore, subjective factors such as the experience and mental state of radiologists can lead to misdiagnosis. Automated medical image analysis can effectively overcome these limitations. Deep learning technology has made significant progress in various automated medical image analysis tasks, especially in medical image segmentation using convolutional neural networks (CNNs). However, training a robust model requires a large amount of labeled data, which is challenging in the case of medical images due to patient privacy protection and the need for domain experts for accurate annotation. In addition, ultrasound images can exhibit significant variability in appearance and shape due to differences in patient anatomy, imaging equipment, and imaging protocols. Therefore, performing medical image segmentation with a limited number of high-quality labeled ultrasound images is a challenging task.\nRecently, there has been significant research focused on training robust models with limited annotated images, which can be broadly categorized into two approaches: weaklysupervised learning and semi-supervised learning (SSL). Weakly-supervised learning involves using annotated images that are easy to collect but imprecise for model training, whereas SSL leverages a limited set of labeled images along with a large number of unlabeled images to train a powerful network. However, in some scenarios, the variation in style and content of images acquired from different ultrasound devices can negatively impact the effectiveness of SSL. Furthermore, obtaining a substantial amount of unlabeled medical images from various sources can prove to be unfeasible due to privacy concerns.\nFortunately, generative models offer a practical solution to the challenges mentioned above by generating a significant number of synthetic images from existing ones [3]. This approach effectively addresses ethical concerns surrounding the acquisition of future images [4] while also improving the performance of target tasks [5], [6]. While generative adversarial models (GANs) [7] have been extensively used for image generation, they suffer from limited pattern coverage and cannot effectively capture the true diversity of features. Moreover, GANs are prone to training instability and mode collapse [8]. Currently, the Denoising Diffusion Probability Model (DDPM) [9] and Latent Diffusion Model [10] are the most advanced generative models for image generation tasks, owing to their exceptional pattern coverage and quality of the generated samples [11]. In this study, we focus on using a large number of synthetic samples as unlabeled data for semi-supervised learning. This approach not only significantly reduces the burden of data labeling, but also overcomes the obstacles posed by collecting a large amount of unlabeled private data. 
Moreover, we believe that synthetic samples generated by LDM possess target domain probability distribution knowledge that is absent in CNNs, and we establish a connection between them using semi-supervised learning.\nTo effectively leverage both labeled and unlabeled synthetic and real data, while also exploring the impact of the relationship between labeled and unlabeled data on semisupervised models, we propose a multi-level global context cross-consistency framework (MGCC), as shown in Figure 1. In the first stage, we employ LDM to generate a substantial quantity of valuable synthetic samples from existing medical image data. In the second stage, inspired by ConvMixer [12] and cross-consistency training [13], we propose a fully convolutional semi-supervised medical image segmentation network with multi-level global context cross-consistency to effectively exploit the unlabeled synthetic samples.\nTo solve the limitation of ordinary convolution locality in fully convolutional networks, Trockman et al. proposed ConvMixer [12] which uses large convolutional kernels to mix remote spatial locations to obtain global context information. Compared to the Transformer, ConvMixer, which employs convolutional inductive bias, is better suited for computer vision tasks and has a lower computational overhead than the self-attention mechanism.\nTo leverage unlabeled synthetic and real data and improve the model's robustness to various noise perturbations in global context information, we introduce different level global context noise perturbation to the decoders and maintain consistency among multiple decoding outputs. Specifically, we introduce different length of ConvMixer with perturbation in shared encoder to obtain different level global context noise perturbation. And we use them as inputs for multiple auxiliary decoders and maintain consistency between the main decoder and auxiliary decoders outputs.\nMoreover, to suppress irrelevant features and enhance valuable encoder knowledge transfer, we propose a multi-scale attention gate in skip-connection stage to select significant features using different receptive fields. Our main contributions are as follows:\n• We propose a novel semi-supervised medical image segmentation framework that utilizes diffusion models to generate synthetic medical ultrasound images and employs these generated images as unlabeled data for semi-supervised learning. By utilizing synthetic samples generated by LDM, we address the challenges faced by collecting large amounts of unlabeled medical privacy samples. And by leveraging semi-supervised learning, we establish a connection between the diffusion probability distribution of the target domain and semantic representa-tions, enabling effective transfer of diffusion probability distribution knowledge from the target domain to the segmentation network. To our knowledge, this work is the first to utilize data generated by diffusion models as unlabeled samples for semi-supervised learning. • We propose a fully convolutional semi-supervised medical segmentation network with multi-level global context cross-consistency, where different levels of global context noise perturbation are introduced to the auxiliary decoder while maintaining consistency among decoder outputs. This approach improves the network's representational ability for segmented objects with varying positions and significant morphological differences. 
• Experiments on public medical datasets demonstrate the effectiveness of the proposed semi-supervised methods and confirm that our proposed method can maximize the ability of the segmentation model to learn diffusion probability knowledge. This work is an extension of our previous work published on ISBI-2023 [14]. Unlike our previous work, we significantly expand our proposed network to enable semi-supervised learning and investigate the feasibility of utilizing synthetic samples from LDM as unlabeled data." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Medical Image Generation", "publication_ref": [ "b14", "b15", "b17", "b8", "b9", "b18", "b19", "b5", "b20", "b9" ], "table_ref": [], "text": "Previous studies have utilized generative adversarial models (GANs) [15] to generate synthetic medical images [16]- [18]. However, due to their architecture, GANs have limitations in generating high-resolution medical images with diverse internal details. Recently, the Denoising Diffusion Probability Model (DDPM) [9], [10] has gained considerable attention for addressing various deep learning generation problems [19]. DDPM's flexible model architecture and ability to perform exact log-likelihood computations make it a promising alternative to GANs.\nDespite the impressive achievements of DDPM in generating various medical images, such as MRI [20], ophthalmology, lung CT, histopathology [6], and genomic information images [21], there has been limited research on utilizing DDPM for ultrasound image generation. In this study, we utilize the Latent Diffusion Model (LDM) [10] to generate ultrasound synthesis images. Compared to traditional DDPM, LDM substantially reduces the computational and inference costs while generating high-resolution ultrasound images." }, { "figure_ref": [], "heading": "B. Medical Image Segmentation", "publication_ref": [ "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b29", "b27", "b30", "b13" ], "table_ref": [], "text": "Convolutional neural networks (CNNs) have gained popularity in medical image segmentation due to their powerful deep learning capabilities, and among the various CNN variants, U-Net [22] stands out for its superior performance. U-Net is a pyramid-structured segmentation network based on an encoder-decoder architecture that transfers semantic information between the encoder and decoder through skipconnections. In recent years, several U-Net based medical segmentation networks have been proposed, such as U-Net++ [23], Attention U-Net [24], Unet3+ [25] and UNeXt [26].\nThe limitation of the convolution operation is its restricted ability in capturing global contextual information, which is essential for accurate medical image segmentation. Recently, several networks based on the Transformer model [27] have been applied to medical image segmentation [28]- [30] due to their ability to effectively extract global information from images. TransUnet [28] employs Vit [31] to obtain global context with CNN, but it requires massive medical images and computing overhead. Compared to the hybrid CNN and Transformer structure, we believe that a fully convolutional network can still perform medical ultrasound image segmentation efficiently and effectively. In our previous work, we proposed a fully convolutional medical image segmentation network, CMU-Net [14], which uses ConvMixer instead of Transformer to extract global context information." }, { "figure_ref": [], "heading": "C. 
Semi-Supervised Medical Image Segmentation", "publication_ref": [ "b12", "b31", "b37", "b31", "b32", "b32", "b33", "b36", "b37", "b37", "b12", "b36" ], "table_ref": [], "text": "Manual pixel-wise labeling by medical professionals is a time-consuming task. SSL based medical segmentation methods [13], [32]- [38] utilize a large amount of unlabeled data and a small number of high-quality labeled data to alleviate the labeling burden on medical professionals. Previous work on semi-supervised segmentation can be categorized into two major approaches: pseudo-label-based iterative learning [32] and consistency-based joint training [33]. Tarvainen and Valpola [33] proposed the mean teacher (MT) model, which leverages supervised loss and consistency loss for labeled and unlabeled data, respectively. Additionally, Yu et al. [34] applied the uncertainty estimation of Monte Carlo Dropout to the MT model. Chen et al. [37] performed cross-pseudo supervision through two differently initialized networks of the same structure. Luo et al. [38] extended it to crossteaching between CNN and Transformer. Furthermore, Luo et al. [38] built consistency by performing both segmentation and level set regression tasks. Ouali et al. [13] proposed a segmentation method based on cross-consistency training, which employs multiple perturbed encoder feature as auxiliary decoder inputs and ensures decoder output consistency to enhance the encoder's representations.\nMost existing semi-supervised segmentation methods use U-Net as the backbone, but due to the locality limitation of ordinary convolution, these networks cannot effectively extract global context. Although Luo et al. [37] introduced Transformer into the framework for cross teaching, training a powerful self-attention mechanism network requires a significant amount of data and computational resources. In addition, even though SSL reduces the time and labor involved in labeling, it still faces challenges in obtaining a large number of unlabeled images in medical scenarios. In contrastto traditional SSL, we use generated images as unlabeled samples and introduce ConvMixer in SSL to achieve multi-level global context consistency. We also establish a link between representation learning and generative learning and leverage SSL to obtain the diffusion probability distribution knowledge." }, { "figure_ref": [], "heading": "III. METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "A. Overview", "publication_ref": [], "table_ref": [], "text": "Our proposed multi-level global context cross-consistency framework is illustrated in Figure 1 and consists of two steps:\n(1) Medical image synthesis: Leverage the LDM to generate a large number of valuable synthetic samples. (2) Multi level global context cross-consistency network: To utilize synthesized unlabeled images, we input different levels of global context noise perturbation to different auxiliary decoders and maintain consistency between the output of the main and auxiliary decoders. We aim to establish the network's ability to produce consistent segmentation results on varying levels of global contextual noise perturbations.\nIn this work, the training set includes three subsets: labeled dataset D l N with N annotated samples, unlabeled dataset D u M with M unannotated samples and unlabeled synthetic dataset D u A with A unannotated images which generated by LDM from D l N and D u M . 
Thus, the entire unlabeled training set can be denoted as D^u_{M+A} = D^u_M ∪ D^u_A, and the entire training set is D_{N+M+A} = D^l_N ∪ D^u_M ∪ D^u_A. We presume that, for an image x_i, x_i has a ground truth y_i if x_i ∈ D^l_N; on the contrary, if x_i ∈ D^u_{M+A}, its label does not exist." }, { "figure_ref": [], "heading": "B. Medical Ultrasound Image Generation Based on Latent Diffusion Model", "publication_ref": [ "b9", "b1" ], "table_ref": [], "text": "The traditional diffusion model operates in pixel space and requires significant computational resources and time for training. The LDM [10] is utilized to generate latent space codes, which are then decoded into pixel space, significantly reducing computational and inference costs while producing high-resolution images. The implementation of the LDM is divided into two parts: Pixel-level Image Compression and Latent Denoising Diffusion.
1) Pixel-level Image Compression: Pixel-level image compression involves training an encoder E to generate latent codes that correspond to the images. Additionally, the decoder D restores the latent codes to a high-resolution image, as shown in formula (1):
z = E(x), x̂ = D(z).    (1)
2) Latent Denoising Diffusion: The latent code is gradually diffused into Gaussian noise. The entire denoising learning process corresponds to the inverse process of a fixed Markov chain of length T. The loss function of LDM can be formulated as formula (2):
L_LDM = E_{z, ε∼N(0,1), t}[ ||ε - ε_θ(z_t, t)||_2^2 ]    (2)
where ε_θ(z_t, t), t = 1...T, represents the denoising autoencoder which is used to predict the denoising distribution variable of z_t, and z_t is the result of diffusing the latent space code z_0 for t steps." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "C. Multi-Level Global Context Cross Consistency", "publication_ref": [ "b21", "b11", "b38", "b6", "b7", "b9", "b10", "b11", "b10", "b32", "b33" ], "table_ref": [], "text": "To implement multi-level global context cross-consistency learning for ultrasound image segmentation, we integrate multi-scale attention gates and a ConvMixer module into our proposed framework. The architecture is illustrated in Figure 2, which consists of a shared encoder E, a main decoder D_main, and K auxiliary decoders {D_aux1, D_aux2, ..., D_auxK}. We adopt the encoder-decoder structure of U-Net [22] in our work. In addition, we embed ConvMixer blocks of varying lengths between the shared encoder and the multiple decoders to extract different levels of global context information. Furthermore, we integrate multi-scale attention gates with the skip-connections to enhance the efficient transfer of valuable encoder features.
1) ConvMixer Module and Multi-scale Attention Gates: In this section, we present the implementation details of the ConvMixer block and the multi-scale attention gates employed in our method.
ConvMixer Module: The ConvMixer module [12] utilizes large convolution kernels to mix remote spatial locations to obtain global context information. As shown in Figure 2, the ConvMixer block is composed of l ConvMixer layers. A single ConvMixer layer consists of a depthwise convolution (kernel size k×k) and a pointwise convolution (kernel size 1×1).
The number of group channels of the depthwise convolution kernel is equal to the number of channels of the input feature map. Each convolution is followed by a GELU [39] activation and batch normalization.
The ConvMixer module is defined by formulas (3) and (4):
f_l = BN(σ_1{DepthwiseConv(f_{l-1})}) + f_{l-1}    (3)
f_l = BN(σ_1{PointwiseConv(f_l)})    (4)
where f_l represents the output feature map of layer l in the ConvMixer block, σ_1 represents the GELU activation, and BN represents batch normalization. It is worth mentioning that the feature maps from all layers in the ConvMixer module maintain the same resolution and size.
Multi-scale Attention Gate: Our proposed multi-scale attention gate is depicted in Figure 2. The multi-scale attention gate is integrated with the skip-connections and is used to suppress unimportant features and enhance valuable ones. Specifically, to select encoder features adaptively according to different resolutions, we use three types of convolutions with different receptive fields for feature extraction: pointwise convolution, ordinary convolution (kernel size 3×3, stride of 1 and padding of 1) and dilated convolution (kernel size of 3×3, stride of 1, padding of 2 and dilation rate of 2); all three convolutions generate feature maps of the same size. Each convolution has a batch normalization layer, and we concatenate the output feature maps before a ReLU activation. We then select valuable features by another pointwise convolution, as shown in formulas (5) and (6):
f_Concat = σ_2(Concat{BN{PointwiseConv(f)}, BN{OrdinaryConv(f)}, BN{DilationConv(f)}})    (5)
f_s = f × σ_3(PointwiseConv(f_Concat)) + f    (6)
where f represents the encoding features, f_Concat is the concatenated feature, f_s is the output feature from the multi-scale attention gate, and σ_2 and σ_3 denote the ReLU and Sigmoid activations, respectively.
2) Multi-Level Global Context Consistency: To ensure consistency in the predictions of the main decoder and the auxiliary decoders for different levels of global context information, we embed shared ConvMixer layers of varying lengths following the encoder and add different noise perturbation settings to the inputs of the auxiliary decoders, as shown in formulas (7) and (8):
ŷ_k = D_auxk(f^p_{l_k}),  f^p_{l_k} = P_k(f_{l_k})    (7)
ŷ = D_main(f_{l_K})    (8)
where P_k represents the kth noise perturbation, and ŷ_k and ŷ are the predicted outputs of the kth auxiliary decoder and the main decoder, respectively. In addition, f_{l_0} represents the encoder output and f^p_{l_k} represents the noise-perturbed version output by the shared ConvMixer layers with length l_k. f^p_{l_k} is the input to the kth auxiliary decoder. Moreover, f_{l_K} is the input to the main decoder, which is output by the full-length ConvMixer module. For each labeled sample x_i ∈ D^l_N and its ground truth y_i, the supervised loss L_s is calculated according to formula (9):
L_s = 1/((K + 1)·N) Σ_{i=1}^{N} [ Σ_{k=1}^{K} L(ŷ^i_k, y^i) + L(ŷ^i, y^i) ]    (9)
where K is the number of auxiliary decoders, and ŷ^i_k and ŷ^i represent the outputs of the kth auxiliary decoder and the main decoder for the ith labeled sample, respectively. L(.) is a standard supervised combination loss, defined between the ith prediction ŷ^i and the ground truth y^i as a combination of binary cross entropy (BCE) and dice loss (Dice), as shown in formula (10):
L(ŷ, y) = 0.5 · BCE(ŷ, y) + Dice(ŷ, y)    (10)
To leverage valuable knowledge from the unlabeled dataset, we ensure consistency of the output predictions between the main decoder and the auxiliary decoders. Specifically, we calculate and minimize the difference between the main and auxiliary decoder outputs on the unlabeled data D^u_{M+A}.
The unsupervised loss function L u is defined as formula (11):\nL u = 1 K • (M + A) M +A j=1 K k=1 ||ŷ j -ŷj k ||(11)\nwhere ŷj k and ŷj represents the jth unlabeled sample output of the kth auxiliary decoder and the main decoder, respectively. Then, we optimize the combined loss function L total to learn from both labeled and unlabeled data as formula (12) :\nL total = L s + λ • L u (12\n)\nwhere L s and L u are presented in formula ( 9) and (11), respectively. λ is the Gaussian warming up function [33,34],\nλ = w max • e (-5(1-t tmax ) 2 )\n, where w max represents regularization weight and t max is the maximal training step." }, { "figure_ref": [], "heading": "IV. EXPERIMENT", "publication_ref": [ "b39", "b25", "b42", "b40", "b41", "b46" ], "table_ref": [], "text": "A. Datasets 1) BUSI Dataset: The Breast UltraSound Images (BUSI) [40] open-source dataset consists of 780 breast ultrasound images from 600 female patients, covering 133 normal cases, 487 benign cases and 210 malignant cases with their corresponding ground truths. Following recent studies [26,43], we utilize all cases and randomly split the BUSI dataset into 70-30 ratios three times (i.e., 526 samples for training and 254 samples for validating) to ensure fair comparison.\n2) BUS Dataset: The breast ultrasound (BUS) [41] opensource dataset consists of 562 breast ultrasound images acquired from female patients aged 26 to 78 years, collected by six institutions using five ultrasound devices. BUS includes 306 benign cases and 256 malignant cases, and we use BUS as unlabeled data.\n3) B Dataset: The Dataset B (B) [42] consists of 163 breast ultrasound images from multiple female patients collected at the UDIAT Diagnostic Center of Parc Taulí in Sabadell, Spain. The B dataset includes 110 benign cases and 53 malignant cases, and we utilize B as unlabeled data.\n4) TUS Dataset: The private Thyroid UltraSound dataset (TUS) was collected using three different ultrasound machines from the Ultrasound Department of the Affiliated Hospital of Qingdao University. It includes 192 cases, with a total of 1942 thyroid ultrasound images and corresponding segmentation results by three experienced radiologists. To ensure fair comparison, we randomly split the TUS dataset into 70-30 ratios three times (i.e., 1359 samples for training and 583 samples for validation).\n5) TNSCUI2020 Dataset: The Thyroid Nodule Segmentation and Classification in Ultrasound Images 2020 (TNSCUI2020) [47] public dataset was collected using various ultrasound machines by the Chinese Artificial Intelligence Alliance for Thyroid and Breast Ultrasound (CAAU). The TNSCUI2020 dataset includes 3644 cases of different ages and genders, and we use it as unlabeled data." }, { "figure_ref": [], "heading": "B. Experimental Settings 1) Generative Network Training and Hyperparameter Setting:", "publication_ref": [ "b9", "b44", "b21", "b45", "b12", "b12", "b43", "b32", "b37", "b32", "b33", "b34", "b36", "b37", "b12", "b32", "b21" ], "table_ref": [], "text": "Our training methodology is based on the Stable Diffusion model [10], which is divided into two stages. In the first stage, we train the autoencoder using VAE [45]. To generate highresolution images, the input image is resized to 512×512 and mapped into a 64×64 latent space using the VAE encoder. The latent code is then directly decoded into pixel space, and the Mean Squared Error (MSE) is used as the reconstruction loss to minimize the difference between the input and reconstructed pixel images. 
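As a rough, self-contained sketch of this first training stage (not the authors' implementation), the snippet below pairs a toy convolutional encoder and decoder and trains them with an MSE reconstruction loss between the input image and its reconstruction. The architecture here is a plain autoencoder rather than the VAE used in the paper, and the channel counts, spatial sizes and module names are illustrative assumptions; only the Adam optimizer and the 1e-6 learning rate follow the settings stated above.

```python
import torch
import torch.nn as nn

# Toy encoder/decoder pair standing in for the first-stage autoencoder: the encoder
# maps an image to a low-resolution latent code, the decoder maps the code back to
# pixel space, and both are trained with an MSE reconstruction loss.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 4, kernel_size=3, stride=2, padding=1),           # latent code
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(4, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),
)

# Adam with learning rate 1e-6, as reported for this stage in the experimental settings.
optimiser = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-6)
mse = nn.MSELoss()

images = torch.rand(4, 1, 512, 512)      # a batch of single-channel ultrasound images
latent = encoder(images)                 # compressed latent representation
reconstruction = decoder(latent)         # decoded back to pixel space
loss = mse(reconstruction, images)       # stage-one reconstruction objective

optimiser.zero_grad()
loss.backward()
optimiser.step()
```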
In the second stage of training, the pretrained autoencoder (VAE) with frozen weights encodes the pixel image to the latent space, and the latent code is then diffused into Gaussian noise. Specifically, we utilize U-Net [22] for inverse denoising estimation.\nIn the generating experiment, we use Adam optimizer with a learning rate of 0.000001 for training the autoencoder. The batch size is set to 4, and the training epoch is 1000. For the latent diffusion model, we train it using the AdamW optimizer for 1000 epochs with a learning rate of 0.0001. Additionally, we diffuse Gaussian noise with 1000 steps. To reduce the cost of image generation, we utilize the Denoising Diffusion Implicit Model (DDIM) [46] for generating synthesis samples (t=100 denoising steps).\n2) Semi-Supervised Segmentation Network Training and Hyperparameter Setting: To reduce computational costs, we deploy three auxiliary decoder (K=3) and utilize three different noise perturbation functions on the input features of the decoders: F-Noise [13], F-Drop [13] and Dropout [44].\nFor the semi-supervised segmentation experiment in our work and the comparison method, we use the SGD optimizer with a weight decay of 0.0001 and momentum of 0.9 to train the networks for 300 epochs. The initial learning rate is set to 0.01, and use the poly learning rate strategy to adjust the learning rate. We set the batch size to 8, and each batch consists of 4 labeled and 4 unlabeled images. For the Gaussian warming up function , we follow previous work [33,38] and set t max to 0.1. Additionally, we resize all images to 256×256 and perform random rotation and flip for data augmentation.\n3) Evaluation Metrics and Comparison Methods: We adopt four commonly used metrics, namely Intersection over Union (IoU), Recall, Precision and F1-score, to quantitatively evaluate the performance of different segmentation models. Additionally, we compare our method with seven SSL methods for medical image segmentation, including MeanTeacher (MT) [33], Uncertainty-Aware Mean Teacher (UAMT) [34], Deep Co-Training (DCT) [35], Cross Pseudo Supervision (CPS) [37], Cross Teaching Between CNN and Transformer (CTBCT) [38] and Cross-Consistency training (CCT) [13]. Following previous work [33], we set the decay parameter of exponential moving average (EMA) to 0.9 in MT and UAMT. Moreover, all methods utilize U-Net [22] as the backbone and are implemented on an NVIDIA GeForce RTX4090 GPU using the Pytorch framework for a fair comparison." }, { "figure_ref": [ "fig_2" ], "heading": "C. Results of Generative Network", "publication_ref": [], "table_ref": [], "text": "In this experiment, we used LDM to generate images on the BUSI and TUS training sets. We observed a few low-quality synthetic samples in the generated results, we consulted professionals with domain knowledge and selected high-quality synthetic samples (i.e., 725 synthetic samples for BUSI and 3644 samples synthetic of TUS) to create unlabeled synthetic datasets. Some breast and thyroid ultrasound synthetic samples generated by LDM are shown in Figure 3. We found that LDM is capable of generating realistic and diverse breast and thyroid ultrasound synthetic images." }, { "figure_ref": [], "heading": "D. Results of Semi-Supervised Segmentation on Self-Domain", "publication_ref": [ "b13" ], "table_ref": [ "tab_0" ], "text": "In this study, we investigate the impact of using unlabeled samples from the self-domain (i.e., both labeled and unlabeled images are from the same dataset) on segmentation performance. 
We randomly selected 263 training samples (i.e., 50%) from the BUSI dataset as the labeled set, and the remaining 263 samples as the unlabeled set. Similarly, we randomly selected 679 training samples (i.e., 50%) from the TUS dataset as the labeled set, and the remaining 680 samples as the unlabeled set. The segmentation results of BUSI and TUS are presented in Table I. On the BUSI dataset, our proposed method achieves excellent results, with an IoU of 68.06%, which is 1.9% to 2.94% higher than other methods. Furthermore, our method also demonstrates competitive performance compared to fullysupervised methods that use 526 labeled images, with the IoU score being only 1.63% and 2.75% lower than U-Net and CMU-Net [14] respectively. In addition, our proposed method (MGCC) also achieved excellent results on the TUS dataset, with IoU and F1-score reaching 81.45% and 88.98%, respectively. These results demonstrates that our method is effective in capturing global context information to achieve accurate lesion localization." }, { "figure_ref": [], "heading": "E. Results of Semi-Supervised Segmentation on Similar Domains", "publication_ref": [], "table_ref": [ "tab_1", "tab_0" ], "text": "In this experiment, we analyzed the impact of using unlabeled samples from a similar domain (i.e., using the B, BUS and TNSCUI2020 datasets as unlabeled datasets for the BUSI and TUS labeled datasets, respectively) on segmentation performance. On the BUSI dataset, we randomly selected 263 of the 526 BUSI training samples (i.e., 50%) as the labeled set, and added the B dataset (163 images) and the BUS dataset (562 images) to the remaining 263 BUSI images to form an unlabeled set (988 unlabeled images in total). In the TUS experiment, we randomly selected 679 TUS training samples (i.e., 50%) as the labeled set and combined the remaining 680 TUS images with the TNSCUI2020 dataset (3644 images) as an unlabeled set (4324 unlabeled images in total).\nThe segmentation results are presented in Table II, and our proposed method achieves the best results on all four metrics for the BUSI and TUS datasets. However, compared to Table I, we observed that the IoU and F1 scores of most methods decreased to varying degrees after adding the similar domain unlabeled datasets (B and BUS datasets or TNSCUI2020 dataset). Our findings suggest that the similar domain has a negative impact on learning of the target domain (BUSI or TUS). Images obtained from different instruments exhibit variations in content and style due to the influence of acquisition operators and instrument parameters. These variations in content and style between different source images widen the domain gap and impede the transferability of knowledge from the B and BUS datasets in the BUSI experiment or the TNSCUI2020 dataset in the TUS experiment." }, { "figure_ref": [], "heading": "F. Results of Semi-Supervised Segmentation on Synthesis Dataset", "publication_ref": [], "table_ref": [ "tab_2", "tab_0", "tab_5" ], "text": "We investigate the effectiveness of utilizing synthetic samples generated by LDM as an unlabeled dataset for semisupervised segmentation. In the BUSI and TUS experiment, we randomly selected annotations from 263 BUSI samples (i.e., 50%) and 679 TUS samples (i.e., 50%) as the labeled set, respectively. For fair comparison, to form the BUSI unlabeled dataset, we combined 725 synthetic samples generated by LDM with the remaining 263 BUSI images (988 unlabeled images in total). 
We also added 3644 synthetic samples generated by the LDM to the remaining 680 TUS images to form an unlabeled set (4324 unlabeled images in total). Table III presents the comparison results, and our method outperforms the other methods on all metrics by a significant margin on both datasets. On the BUSI dataset, compared to the fully supervised models, we achieve an IoU score that is only 0.99% lower than U-Net. When compared to Table I and Table II, we observe that the IoU scores of most methods have improved to varying degrees. Additionally, for the TUS dataset, compared with Table I, the IoU and F1-score of most methods have improved to varying degrees; however, compared with Table II, the metrics of most methods in this experiment still declined. We believe that this is due to the small ratio of labeled to unlabeled samples (1:7), where the labeled samples are flooded by unlabeled samples, leading to overfitting. Furthermore, we conducted a deeper analysis of the semi-supervised segmentation performance on unlabeled synthetic images. We used all training images of BUSI and TUS as the labeled datasets and the synthesized images as the unlabeled sets, respectively. Surprisingly, as shown in Table IV, all semi-supervised learning methods achieved higher IoU scores than the fully supervised U-Net. These findings suggest that the segmentation network was capable of effectively utilizing semi-supervised learning to acquire knowledge of the diffusion probability in the LDM. In addition, compared to fully supervised and semi-supervised segmentation methods, our proposed method allows for the maximum possible transfer of diffusion probability knowledge to the model, achieving an IoU of 73.73% and 83.27%, and an F1-score of 64.96% and 90.33% on BUSI and TUS, respectively." }, { "figure_ref": [], "heading": "G. Ablation and Comparison Studies", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "In this study, we conduct comprehensive ablation and comparison experiments on our proposed method with the BUSI dataset and analyze the contribution of each part of our proposed method. We randomly selected 263 training samples (i.e., 50%) from the BUSI dataset as the labeled set and the remaining 263 samples as the unlabeled set for semi-supervised learning.
1) Ablation Study on Multi-Scale Attention Gates: In this ablation experiment, we placed the multi-scale attention gate in the skip-connections between different decoders and the shared encoder to explore its effect on the segmentation model. The results are shown in Table V; compared to adding multi-scale attention gates between only some of the decoders and the shared encoder, or not deploying them at all, deploying multi-scale attention gates between all decoders and the shared encoder achieves higher IoU and F1-scores. This shows that the multi-scale attention gates can effectively magnify the influence of helpful encoder features in knowledge transfer.
2) Comparison Study on ConvMixer: The multi-level global context of our proposed method is mainly embodied in the outputs of ConvMixer layers of different lengths, which are connected with the decoders. With the increasing length of ConvMixer layers, the level of global context information is upgraded. V. DISCUSSION" }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "A. Visualization of Results", "publication_ref": [], "table_ref": [], "text": "We validated the effectiveness of our proposed method by visualizing some segmentation results in Figure 4. On the BUSI dataset, it can be seen that MT, UAMT, DCT, CPS, and CCT produce poor segmentations. Although CTBCT improves the results by making use of the global context extracted by the Transformer for cross-teaching, it still performs poorly on small targets and generates disconnected regions. In comparison, our method gives more accurate spatial localization and lesion shape. 
Even for challenging examples with low contrast and unclear echo boundaries, we achieve more complete and convex segmentation results. On the TUS dataset, for lesions at different scales (rows 3 and 4 in Figure 4(b)), the other methods produce poor results; on the contrary, our proposed method achieves more accurate lesion areas and shapes by learning the global context information." }, { "figure_ref": [], "heading": "B. Computational Efficiency and Cost", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "This experiment analyzed the average inference time, parameter quantity, and GFLOPs of the different methods, using a batch size of 1 during testing. It is worth mentioning that our proposed method does not involve the auxiliary decoders during the testing phase, but only includes the main encoder, the ConvMixer module, the multi-scale attention gates, and the main decoder, while the other comparison methods are based on U-Net as the benchmark model. The results are shown in Table VII. When the proposed network does not add multi-scale attention gates, the inference time and GFLOPs are close to the baseline, and the parameter quantity is 1.28 times larger than the baseline. In addition, when multi-scale attention gates are added, the inference time, parameter quantity, and GFLOPs all increase significantly. The trade-off between accuracy and inference cost can therefore be balanced according to the specific application scenario." }, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In conclusion, we have proposed a novel multi-level global context cross-consistency (MGCC) framework for medical image segmentation. The framework utilizes the Latent Diffusion Model (LDM) to generate synthetic medical images, reducing the workload of data annotation and addressing privacy concerns associated with collecting medical data. It also includes a fully convolutional semi-supervised segmentation network with multi-level global context cross-consistency, enhancing the network's representational ability for lesions with unfixed positions and significant morphological differences. Through this framework, we successfully leveraged semi-supervised learning to establish a connection between the probability distribution of the target domain and its semantic representations, enabling effective knowledge transfer to the segmentation network.
Experiments on both public and private medical ultrasound image datasets demonstrate the effectiveness of the proposed method. Overall, our proposed method has the potential to be a valuable tool for assisting radiologists in the diagnosis and treatment of breast and thyroid diseases. In the future, our method can be applied to other segmentation tasks to improve segmentation accuracy in small-sample scenarios." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENT", "publication_ref": [], "table_ref": [], "text": "This work is supported by Shandong Natural Science Foundation of China (ZR2020MH290) and by the Joint Funds of the National Natural Science Foundation of China (U22A2033)." } ]
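As a closing technical note on the training objective used throughout Sec. IV (the supervised term, the cross-consistency term and the Gaussian warming-up weight, i.e., Eqs. 9-12), the following PyTorch-style sketch illustrates how the pieces fit together. The function names are ours, and details such as stopping gradients through the main prediction are assumptions rather than a description of the released implementation.

```python
import math
import torch
import torch.nn.functional as F

def gaussian_rampup(t, t_max, w_max=0.1):
    # lambda(t) = w_max * exp(-5 * (1 - t / t_max)^2) during the ramp-up phase, then w_max
    phase = max(0.0, 1.0 - float(t) / float(t_max))
    return w_max * math.exp(-5.0 * phase ** 2)

def seg_loss(logit, label):
    # L(.) = 0.5 * BCE + Dice (Eq. 10)
    prob = torch.sigmoid(logit)
    bce = F.binary_cross_entropy_with_logits(logit, label)
    inter = (prob * label).sum()
    dice = 1.0 - (2.0 * inter + 1.0) / (prob.sum() + label.sum() + 1.0)
    return 0.5 * bce + dice

def supervised_loss(main_logit, aux_logits, label):
    # L_s: average over the main decoder and the K auxiliary decoders (Eq. 9)
    losses = [seg_loss(main_logit, label)] + [seg_loss(a, label) for a in aux_logits]
    return sum(losses) / len(losses)

def consistency_loss(main_prob, aux_probs):
    # L_u: discrepancy between the main prediction and each perturbed auxiliary prediction (Eq. 11)
    return sum(F.mse_loss(p, main_prob.detach()) for p in aux_probs) / len(aux_probs)

# Total objective (Eq. 12): L_total = L_s (labeled batch) + lambda(t) * L_u (labeled + unlabeled batch)
```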
Medical image segmentation is a critical step in computer-aided diagnosis, and convolutional neural networks are popular segmentation networks nowadays. However, their inherently local operations make it difficult to focus on the global contextual information of lesions with different positions, shapes, and sizes. Semi-supervised learning can be used to learn from both labeled and unlabeled samples, alleviating the burden of manual labeling. However, obtaining a large number of unlabeled images in medical scenarios remains challenging. To address these issues, we propose a Multi-level Global Context Cross-consistency (MGCC) framework that uses images generated by a Latent Diffusion Model (LDM) as unlabeled images for semi-supervised learning. The framework involves two stages. In the first stage, an LDM is used to generate synthetic medical images, which reduces the workload of data annotation and addresses privacy concerns associated with collecting medical data. In the second stage, varying levels of global context noise perturbation are added to the inputs of the auxiliary decoders, and output consistency is maintained between decoders to improve the representation ability. Experiments conducted on open-source breast ultrasound and private thyroid ultrasound datasets demonstrate the effectiveness of our framework in bridging the probability distribution and the semantic representation of the medical image. Our approach enables the effective transfer of probability distribution knowledge to the segmentation network, resulting in improved segmentation accuracy. The code is available at https://github.com/FengheTan9/Multi-Level-Global-Context-Cross-Consistency.
Multi-Level Global Context Cross Consistency Model for Semi-Supervised Ultrasound Image Segmentation with Diffusion Model
[ { "figure_caption": "Fig. 1 .1Fig. 1. Overview of our proposed MGCC framework.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Overview of our proposed MGCC network.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Synthetic ultrasound images generated by LDM. The lesion area gradually increases from the first to third rows. (a) Real breast ultrasound images (BUSI). (b) Synthesis breast ultrasound images. (c) Real thyroid ultrasound images (TUS). (d) Synthesis thyroid ultrasound images.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Visual segmentation result comparison of state-of-the-art methods. (a) BUSI dataset segmentation result. The first row presents challenging segmentation examples with low contrast, the second row presents segmentation examples with unclear echo boundaries, and the third and fourth rows represent segmentation examples for larger and smaller lesions, respectively. (b) TUS dataset segmentation result. The first row presents challenging examples which thyroid nodule has blurred edges, the second row presents a lesion with multiple microcalcifications, and the third and fourth rows are examples of larger and smaller lesions.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "ON SELF-DOMAIN. WE REPORT THE MEAN AND STDEV WITH THREE RUNS.", "figure_data": "Metrics(%)MethodVenueBUSITUSIoURecallPrecisionF1IoURecallPrecisionF1FullyU-Net [22]MICCAI'1569.69±0.9562.84±3.4470.74±3.2262.96±3.5482.63±0.2390.27±0.4890.62±0.2489.81±0.05SupervisedCMU-Net [14]ISBI'2370.81±0.3964.00±2.7872.10±1.7664.14±2.4483.04±0.1290.24±0.2590.98±0.2290.08±0.11MT [33]NIPS'1765.95±2.3360.58±1.7169.49±3.5460.31±1.8180.10±0.6188.26±0.0789.25±0.5487.74±0.35UA-MT [34]MICCAI'1965.12±1.1361.32±1.3370.27±1.9960.55±1.3480.18±0.2288.18±0.1489.44±0.3287.79±0.38Semi SupervisedDCT [35] CPS [37] CTBCT [38]ECCV'18 CVPR'21 MIDL'2265.90±1.25 65.97±1.46 65.89±2.6761.39±2.00 61.06±1.73 62.17±1.7169.84±1.38 69.97±0.78 70.41±3.7460.81±2.06 60.59±1.62 61.52±1.9580.25±0.22 80.79±0.57 81.29±0.1588.63±0.50 88.94±0.32 89.34±0.3289.01±0.78 89.55±0.81 89.83±0.3887.91±0.28 88.42±0.49 88.83±0.09CCT [13]CVPR'2066.16±0.9161.84±1.2669.87±1.9760.97±1.6380.49±0.3888.88±0.4989.30±0.8988.14±0.29MGCC (Ours)This paper68.06±2.4561.49±2.8273.06±2.6362.53±2.9981.45±0.3589.35±0.9689.96±0.1888.98±0.45", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "ON SIMILAR DOMAINS. WE REPORT THE MEAN AND STDEV WITH THREE RUNS. 
↑ OR ↓ INDICATE AN INCREASE OR DECREASE RELATIVE TO THE CORRESPONDING INDICATORS IN TABLE I, AND PARENTHESES INDICATE THE DIFFRERNCE.", "figure_data": "Metrics(%)MethodVenueBUSIMTSIoURecallPrecisionF1IoURecallPrecisionF1MT [33]NIPS'17 65.54±0.96↓(0.41) 60.77±0.73↑(0.19) 69.25±2.30↓(0.24) 60.13±0.91↓(0.17) 80.16±0.33↑(0.06) 88.68±0.34↑(0.42) 89.12±0.60↓(0.12) 87.87±0.30↑(0.12)UA-MT [34] MICCAI'19 65.59±1.77↑(0.46) 59.86±1.27↓(1.46) 69.24±2.81↓(1.03) 59.77±1.55↓(0.78) 80.31±0.12↑(0.12) 88.44±0.06↑(0.25) 89.25±0.30↓(0.19) 87.86±0.20↑(0.06)DCT [35]ECCV'18 65.88±1.51↓(0.01) 60.87±0.61↓(0.52) 69.72±2.42↓(0.12) 60.36±0.96↓(0.45) 80.20±0.19↓(0.05) 88.80±0.61↑(0.16) 88.95±1.02↓(0.06) 87.92±0.19↑(0.01)CPS [37]CVPR'21 66.18±1.05↑(0.21) 60.55±1.49↓(0.51) 69.31±1.45↓(0.66) 60.30±1.40↓(0.28) 80.35±0.11↓(0.44) 88.57±0.14↓(0.37) 89.34±0.43↓(0.20) 88.07±0.21↓(0.35)CTBCT [38]MIDL'22 65.81±2.05↓(0.07) 62.22±0.25↑(0.04) 70.66±2.28↑(0.25) 61.51±0.65↓(0.01) 80.79±0.20↓(0.50) 89.23±0.35↓(0.10) 89.35±0.12↓(0.48) 88.46±0.19↓(0.37)CCT [13]CVPR'20 65.94±1.23↓(0.22) 62.02±1.06↑(0.18) 70.26±1.79↑(0.39) 61.06±1.71↑(0.08) 80.40±0.48↓(0.09) 88.62±0.36↓(0.26) 89.21±0.36↓(0.09) 87.97±0.46↓(0.17)MGCC (Ours) This paper 67.92±1.16↓(0.13) 62.48±1.54↑(0.98) 72.33±0.87↓(0.72) 62.66±1.74↑(0.12) 81.09±0.42↓(0.35) 88.85±0.49↓(0.50) 89.80±0.27↓(0.16) 88.56±0.40↓(0.41)", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "ON SYNTHESIS DATASET DOMAIN. WE REPORT THE MEAN AND STDEV WITH THREE RUNS. ↑ OR ↓ INDICATE AN INCREASE OR DECREASE RELATIVE TO THE CORRESPONDING INDICATORS IN", "figure_data": "", "figure_id": "tab_2", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "TABLE I, AND PARENTHESES INDICATE THE DIFFRERNCE. This paper 68.70±2.58↑(0.64) 62.59±1.72↑(1.09) 72.08±0.73↓(0.98) 62.88±1.70↑(0.34) 81.30±0.28↓(0.14) 89.32±0.78↓(0.03) 89.74±0.66↓(0.22) 88.83±0.34↓(0.14)", "figure_data": "Metrics(%)MethodVenueBUSIMTSIoURecallPrecisionF1IoURecallPrecisionF1MT [33]NIPS'17 66.22±1.42↑(0.26) 61.19±1.70↑(0.61) 69.00±2.65↓(0.48) 60.46±1.85↑(0.14) 80.18±0.06↑(0.08) 88.44±0.57↑(0.18) 89.04±0.73↓(0.21) 87.81±0.20↑(0.06)UA-MT [34] MICCAI'19 66.63±2.24↑(1.51) 60.81±1.40↓(0.51) 69.25±3.03↓(1.02) 60.56±1.50↑(0.01) 80.28±0.22↑(0.09) 88.84±0.14↑(0.66) 88.89±0.32↓(0.54) 87.91±0.38↑(0.12)DCT [35]ECCV'18 67.05±1.35↑(1.15) 60.25±1.45↓(1.51) 68.32±0.85↓(1.51) 60.22±1.61↓(0.59) 80.31±0.16↓(0.06) 88.57±0.52↑(0.06) 89.17±0.83↓(0.16) 87.91±0.18↑(0.01)CPS [37]CVPR'21 66.25±1.39↑(0.28) 59.96±2.72↓(1.10) 68.19±0.76↓(1.78) 59.70±2.27↓(0.89) 80.39±0.45↓(0.44) 88.31±0.44↓(0.37) 89.70±0.72↓(0.20) 88.05±0.32↓(0.35)CTBCT [38]MIDL'22 66.56±1.11↑(0.67) 62.24±2.14↑(0.06) 70.13±2.45↓(0.27) 61.60±2.01↑(0.08) 81.00±0.31↓(0.28) 88.98±0.32↓(0.36) 89.80±0.39↓(0.03) 88.56±0.27↓(0.27)CCT [13]CVPR'20 65.93±1.58↓(0.23) 60.65±2.06↓(1.18) 69.86±2.05↓(0.01) 60.39±1.75↓(0.58) 80.44±0.43↓(0.05) 88.47±0.34↓(0.41) 89.35±0.27↓(0.05) 87.98±0.34↓(0.15)MGCC (Ours)", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "and TableII, we observe that the IoU scores of most methods have improved to varying degrees. Additionally, for the TUS dataset, compared with TableI, the IoU and F1-score of most methods have improved to varying degrees. However, compared with TableII, the metrics of most methods in the experiment still declined.", "figure_data": "", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "ON SYNTHESIS DATASET. 
WE REPORT THE MEAN AND STDEV WITH THREE RUNS.", "figure_data": "Metrics(%)MethodVenueBUSITUSIoURecallPrecisionF1IoURecallPrecisionF1FullyU-Net [22]MICCAI'1569.69±0.9562.84±3.4470.74±3.2262.96±3.5482.63±0.2390.27±0.4890.62±0.2489.81±0.05SupervisedCMU-Net [14]ISBI'2370.81±0.3964.00±2.7872.10±1.7664.14±2.4483.04±0.1290.24±0.2590.98±0.2290.08±0.11MT [33]NIPS'1770.03±1.8762.91±2.2670.35±2.5462.83±2.7082.87±0.3990.51±0.7090.72±0.2890.01±0.41UA-MT [34]MICCAI'1970.18±1.1163.72±1.8371.18±3.2763.57±2.7682.77±0.2690.59±0.4390.43±0.3589.90±0.37Semi SupervisedDCT [35] CPS [37] CTBCT [38]ECCV'18 CVPR'21 MIDL'2269.86±1.23 70.29±0.91 70.87±1.5762.09±2.77 62.68±2.28 65.01±2.3170.59±2.19 70.45±2.27 71.85±1.8762.49±3.01 62.79±3.13 64.51±2.1782.89±0.34 82.54±0.57 83.10±0.2290.57±0.53 90.01±0.32 90.75±0.5590.70±0.06 90.87±0.81 90.72±0.3090.03±0.26 89.80±0.49 90.20±0.29CCT [13]CVPR'2070.59±0.3463.19±3.0470.97±2.3863.39±2.9882.88±0.1090.70±0.2890.75±0.1490.09±0.08MGCC (Ours)This paper73.73±1.1465.08±3.1070.04±1.9364.96±3.3283.27±0.3290.67±0.3690.96±0.4990.33±0.22", "figure_id": "tab_5", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "AND F1-VALUE OF OUR METHOD WITH MULTI-SCALE ATTENTION GATE ABLATION STUDY ON BUSI DATASET. WE REPORT THE MEAN AND STDEV WITH THREE RUNS.", "figure_data": "Multi-scale attention gate locationMetrics(%)Aux Decoder 1Aux Decoder 2Aux Decoder 3Main DecoderIoUF167.58±2.22 62.42±1.7067.72±1.32 62.28±2.1767.42±1.58 62.49±2.8468.06±2.45 62.53±2.99", "figure_id": "tab_6", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "AND F1-VALUE OF OUR METHOD WITH CONVMIXER COMPARISON STUDY ON BUSI DATASET. WE REPORT THE MEAN AND STDEV WITH THREE RUNS.", "figure_data": "ConvMixer layer Layer1 Layer2 Layer3Kernel SizeIoUMetrics(%)F1246767.38±1.0662.58±2.46259766.91±1.7162.75±2.51479766.86±0.8162.52±2.49149768.01±1.3662.43±1.86369768.06±2.4562.53±2.99Table VI lists the comparative experiments with different de-coders connecting with different lengths of ConvMixer layerswith various kernel sizes. The results show that connectingauxiliary decoder 2, 3 and the main decoder at the ConvMixerlayers with the lengths of 3, 6 and 9, respectively, achievesthe best performance.", "figure_id": "tab_7", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "AND INFERENCE TIME OF THE TESTED METHODS, FOR A BATCH SIZE OF 1.", "figure_data": "MethodInference Speed(ms)Params cost(M)GFLOPsU-Net(Baseline)20.4434.5365.44MGCC(w/o msag)22.9344.4767.98MGCC32.0752.1491.66", "figure_id": "tab_8", "figure_label": "VII", "figure_type": "table" } ]
Fenghe Tang; Jianrui Ding; Lingtao Wang; Min Xian; Chunping Ning
[ { "authors": "Q Huang; Z Fan; X Li", "journal": "BioMed Research International", "ref_id": "b0", "title": "Machine Learning in Ultrasound Computer-Aided Diagnostic Systems: A Survey", "year": "2018" }, { "authors": "T Lei; R Wang; Y Wan; X Du; H Meng; A K Nandi", "journal": "", "ref_id": "b1", "title": "Medical Image Segmentation Using Deep Learning: A Survey", "year": "2020" }, { "authors": "G M Harshvardhan", "journal": "Computer Science Review", "ref_id": "b2", "title": "A comprehensive survey and analysis of generative models in machine learning", "year": "2020" }, { "authors": "T Han", "journal": "Science Advances", "ref_id": "b3", "title": "Breaking medical data sharing boundaries by using synthesized radiographs", "year": "2020-12" }, { "authors": "J Krause", "journal": "The Journal of pathology", "ref_id": "b4", "title": "Deep learning detects genetic alterations in cancer histology generated by adversarial networks", "year": "2021" }, { "authors": "G Müller-Franzes", "journal": "", "ref_id": "b5", "title": "Diffusion Probabilistic Models beat GANs on Medical Images", "year": "2022" }, { "authors": "I Goodfellow", "journal": "Communications of the ACM", "ref_id": "b6", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "D Saxena; J Cao", "journal": "", "ref_id": "b7", "title": "Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions", "year": "2020-10" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b8", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b9", "title": "High resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "A Kazerouni", "journal": "", "ref_id": "b10", "title": "Diffusion models for medical image analysis: A comprehensive survey", "year": "2022" }, { "authors": "A Trockman; J Z Kolter", "journal": "", "ref_id": "b11", "title": "Patches are all you need?", "year": "2022" }, { "authors": "Y Ouali; C Hudelot; M Tami", "journal": "", "ref_id": "b12", "title": "Semi-supervised semantic segmentation with cross-consistency training", "year": "2020-06" }, { "authors": "F Tang; L Wang; C Ning; M Xian; J Ding", "journal": "", "ref_id": "b13", "title": "CMU-Net: A Strong ConvMixer-based Medical Ultrasound Image Segmentation Network", "year": "2022" }, { "authors": "I Goodfellow", "journal": "Communications of the ACM", "ref_id": "b14", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "M J Chuquicusma; S Hussein; J Burt; U Bagci", "journal": "", "ref_id": "b15", "title": "How to fool radiologists with generative adversarial networks? 
A visual turing test for lung cancer diagnosis", "year": "2018" }, { "authors": "C Baur; S Albarqouni; N Navab", "journal": "", "ref_id": "b16", "title": "MelanoGANs: high resolution skin lesion synthesis with GANs", "year": "2018" }, { "authors": "F Calimeri; A Marzullo; C Stamile; G Terracina", "journal": "", "ref_id": "b17", "title": "Biomedical data augmentation using generative adversarial neural networks", "year": "2017-09" }, { "authors": "A Kazerouni", "journal": "", "ref_id": "b18", "title": "Diffusion models for medical image analysis: A comprehensive survey", "year": "2022" }, { "authors": "W H Pinaya", "journal": "Springer Nature Switzerland", "ref_id": "b19", "title": "Brain Imaging Generation with Latent Diffusion Models", "year": "2022-09-22" }, { "authors": "P A Moghadam", "journal": "", "ref_id": "b20", "title": "A morphology focused diffusion probabilistic model for synthesis of histopathology images", "year": "2022" }, { "authors": "O Ronneberger; P Fischer; T Brox", "journal": "Springer", "ref_id": "b21", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Z Zhou; M M Rahman Siddiquee; N Tajbakhsh; J Liang", "journal": "Springer", "ref_id": "b22", "title": "Unet++: A nested u-net architecture for medical image segmentationDeep learning in medical image analysis and multimodal learning for clinical decision support", "year": "2018" }, { "authors": "O Oktay", "journal": "", "ref_id": "b23", "title": "Attention u-net: Learning where to look for the pancreas", "year": "2018" }, { "authors": "H Huang", "journal": "IEEE", "ref_id": "b24", "title": "Unet 3+: A full-scale connected unet for medical image segmentation", "year": "2020" }, { "authors": "J M J Valanarasu; V M Patel", "journal": "", "ref_id": "b25", "title": "UNeXt: MLP-Based Rapid Medical Image Segmentation Network", "year": "" }, { "authors": "A Vaswani", "journal": "Advances in neural information processing systems", "ref_id": "b26", "title": "Attention is all you need", "year": "2017" }, { "authors": "J Chen", "journal": "", "ref_id": "b27", "title": "Transunet: Transformers make strong encoders for medical image segmentation", "year": "2021" }, { "authors": "J M J Valanarasu; P Oza; I Hacihaliloglu; V M Patel", "journal": "", "ref_id": "b28", "title": "Medical transformer: Gated axial-attention for medical image segmentation", "year": "2021" }, { "authors": "W Wang; C Chen; M Ding; H Yu; S Zha; J Li", "journal": "", "ref_id": "b29", "title": "Transbts: Multimodal brain tumor segmentation using transformer", "year": "2021" }, { "authors": "A Dosovitskiy", "journal": "", "ref_id": "b30", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "D H Lee", "journal": "", "ref_id": "b31", "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "year": "2013" }, { "authors": "A Tarvainen; H Valpola", "journal": "Advances in neural information processing systems", "ref_id": "b32", "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "year": "2017" }, { "authors": "L Yu; S Wang; X Li; C W Fu; P A Heng", "journal": "Springer International Publishing", "ref_id": "b33", "title": "Uncertaintyaware self-ensembling model for semi-supervised 3D left atrium segmentation", "year": "2019" }, { "authors": "S Qiao; W Shen; Z Zhang; B Wang; A Yuille", "journal": "", "ref_id": "b34", 
"title": "Deep co-training for semi-supervised image recognition", "year": "2018" }, { "authors": "X Chen; Y Yuan; G Zeng; J Wang", "journal": "", "ref_id": "b35", "title": "Semi-supervised semantic segmentation with cross pseudo supervision", "year": "2021" }, { "authors": "X Luo; M Hu; T Song; G Wang; S Zhang", "journal": "PMLR", "ref_id": "b36", "title": "Semi-supervised medical image segmentation via cross teaching between cnn and transformer", "year": "2022" }, { "authors": "X Luo; J Chen; T Song; G Wang", "journal": "", "ref_id": "b37", "title": "Semi-supervised medical image segmentation through dual-task consistency", "year": "2021" }, { "authors": "D Hendrycks; K Gimpel", "journal": "", "ref_id": "b38", "title": "Gaussian error linear units (gelus)", "year": "2016" }, { "authors": "W Al-Dhabyani; M Gomaa; H Khaled; A Fahmy", "journal": "Data in brief", "ref_id": "b39", "title": "Dataset of breast ultrasound images", "year": "2020" }, { "authors": "Y Zhang", "journal": "Healthcare. MDPI", "ref_id": "b40", "title": "BUSIS: a benchmark for breast ultrasound image segmentation", "year": "2022" }, { "authors": "M H Yap", "journal": "IEEE Journal of Biomedical and Health Informatics", "ref_id": "b41", "title": "Automated Breast Ultrasound Lesions Detection Using Convolutional Neural Networks", "year": "2018-07" }, { "authors": "C Xue", "journal": "Medical image analysis", "ref_id": "b42", "title": "Global guidance network for breast lesion segmentation in ultrasound images", "year": "2021" }, { "authors": "N Srivastava; G Hinton; A Krizhevsky; I Sutskever; R Salakhutdinov", "journal": "The journal of machine learning research", "ref_id": "b43", "title": "Dropout: a simple way to prevent neural networks from overfitting", "year": "2014" }, { "authors": "D P Kingma; M Welling", "journal": "ICLR", "ref_id": "b44", "title": "Auto-Encoding Variational Bayes", "year": "2014" }, { "authors": "J Song; C Meng; S Ermon", "journal": "", "ref_id": "b45", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "J Zhou", "journal": "", "ref_id": "b46", "title": "Thyroid Nodule Segmentation and Classification in Ultrasound Images", "year": "2020-03" } ]
[ { "formula_coordinates": [ 3, 311.97, 264.51, 251.06, 47.17 ], "formula_id": "formula_0", "formula_text": "D u M +A = D u M ∪ D u A and the entire training set is D N +M +A = D l N ∪ D u M ∪ D u A . We presuming that an image is x i , x i has ground truth y i if x i ∈ D l N . On the contrary, if x i ∈ D u" }, { "formula_coordinates": [ 3, 396.91, 520.83, 162.25, 8.96 ], "formula_id": "formula_1", "formula_text": "z = E(x) x = D(z)(1" }, { "formula_coordinates": [ 4, 92.7, 385.63, 207.32, 12.69 ], "formula_id": "formula_2", "formula_text": "L LDM = E z, ∼N (0,1),t [|| -θ (z t , t)|| 2 2 ](2)" }, { "formula_coordinates": [ 4, 337.65, 404.28, 225.38, 28.66 ], "formula_id": "formula_3", "formula_text": "f l = BN (σ 1 {DepthwiseConv (f l-1 )}) + f l-1 (3) f l = BN (σ 1 {P ointwiseConv (f l )})(4)" }, { "formula_coordinates": [ 4, 327.39, 693.63, 235.64, 56.68 ], "formula_id": "formula_4", "formula_text": "f Concat = σ 2 (Concat{BN {P ointwiseConv(f )}, BN {OrdinaryConv(f )}, BN {DilationConv(f )}}) (5) f s = f ×σ 3 (P ointwiseConv(f Concat )) + f (6)" }, { "formula_coordinates": [ 5, 109.66, 190.43, 190.36, 14.58 ], "formula_id": "formula_5", "formula_text": "ŷk = D auxk (f p l k ), f p l k = P k (f l k )(7)" }, { "formula_coordinates": [ 5, 140.89, 215.53, 159.13, 10.27 ], "formula_id": "formula_6", "formula_text": "ŷ = D main (f l K )(8)" }, { "formula_coordinates": [ 5, 61.98, 356.78, 238.04, 30.55 ], "formula_id": "formula_7", "formula_text": "L s = 1 (K + 1) • N N i=1 K k=1 L (ŷ i k , y i ) + L (ŷ i , y i )(9)" }, { "formula_coordinates": [ 5, 95.59, 487.9, 204.43, 8.96 ], "formula_id": "formula_8", "formula_text": "L (.) = 0.5 • BCE (ŷ, y) + Dice (ŷ, y)(10)" }, { "formula_coordinates": [ 5, 90.71, 586.74, 209.31, 30.55 ], "formula_id": "formula_9", "formula_text": "L u = 1 K • (M + A) M +A j=1 K k=1 ||ŷ j -ŷj k ||(11)" }, { "formula_coordinates": [ 5, 131.55, 683.12, 164.32, 9.65 ], "formula_id": "formula_10", "formula_text": "L total = L s + λ • L u (12" }, { "formula_coordinates": [ 5, 295.87, 683.44, 4.15, 8.64 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 5, 69.77, 724.38, 118.85, 13.98 ], "formula_id": "formula_12", "formula_text": "λ = w max • e (-5(1-t tmax ) 2 )" } ]
2023-05-16
[ { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b10", "b53", "b34", "b14", "b30", "b57", "b43", "b59" ], "table_ref": [], "text": "In recent years, Label Distribution Learning (LDL) [Geng, 2016] has drawn much attention in machine learning, with its effectiveness demonstrated in various applications [Geng et al., 2013;Zhang et al., 2015;Qi et al., 2022]. Unlike single-label learning (SLL) and multi-label learning (MLL) [Gibaja and Ventura, 2014;Moyano et al., 2019;Zhao et al., 2022], LDL can provide information on how much each label describes a sample, which helps to deal with the problem of label ambiguity [Geng, 2016]. However, Obtaining label distributions is more challenging than logical labels, as it requires many annotators to manually indicate the degree to Figure 1: An example of label enhancement. Features contain the full information of samples with many redundancies, while logical labels possess significant information but are not comprehensive. The generation of label distributions makes full use of the important knowledge in logical labels and supplements the sample details according to the features.\nwhich each label describes an instance and accurately quantifying this degree remains difficult. Thus, [Xu et al., 2019] proposed Label Enhancement (LE), leveraging the topological information in the feature space and the correlation among the labels to recover label distributions from logical labels.\nMore specifically, LE can be seen as a preprocessing of LDL [Zheng et al., 2021], which takes the logically labeled datasets as inputs and outputs label distributions. As shown in Figure 1, this image reflects the complete information of the sample including some details. Meanwhile, its corresponding logical labels only highlight the most salient features, such as the sky, lake, mountain, and forest. Features contain comprehensive information about samples with many redundancies, while logical labels hold arresting information but are not allsided. Therefore, it is reasonable to assume that features and logical labels can be regarded as two descriptions of instances from different views, possessing complete and salient information of samples. The purpose of LE tasks can be simplified as enhancing the significant knowledge in logical labels by utilizing detailed features. Subsequently, each label is allocated a descriptive degree according to its importance.\nMost existing LE methods concentrate on establishing the mapping relationship between features and label distributions under the guidance of logical labels. Although these previous works have achieved good performance for LE problem, they neglect that features and labels are descriptions of two dif-ferent dimensions related to the same samples. Furthermore, logical labels can only indicate the conspicuous information of each sample without obtaining the label description ranking. The label distributions may appear to be quite different even if the logical labels present the same results.\nTo address these issues, we propose the ConLE method which fuses features and logic labels to generate the highlevel features of samples by contrastive learning strategy. More specifically, we elaborately train a representation learning model, which forces the features and logical labels of the same instance to be close in projection space, while those of different instances are farther away. 
By concatenating the representations of features and logical labels in projection space, we get high-level features including knowledge of logic labels and features. Accordingly, label distributions can be recovered from high-level features by the feature mapping network. Since it is expected that the properties of labels in the recovered label distributions should be consistent with those in the logical labels, we design a training strategy with label-level consistency to guide the learning of the feature mapping network.\nOur contributions can be delivered as follows:\n• Based on our analysis of label enhancement, we recognize that features and logical labels offer distinct perspectives on instances, with features providing comprehensive information and logical labels highlighting salient information. In order to leverage the intrinsic relevance between these two views, we propose the Contrastive Label Enhancement (ConLE) method, which unifies features and logical labels in a projection space to generate high-level features for label enhancement.\n• Since all possible labels should have similar properties in logical labels and label distributions, we design a training strategy to keep the consistency of label properties for the generation of label distributions. This strategy not only maintains the attributes of relevant and irrelevant labels but also minimizes the distance between logical labels and label distributions.\n• Extensive experiments are conducted on 13 benchmark datasets, experimental results validate the effectiveness and superiority of our ConLE compared with several state-of-the-art LE methods." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b45", "b39", "b47", "b5", "b17", "b20", "b7", "b15", "b22", "b24", "b2", "b41", "b55", "b36", "b0" ], "table_ref": [], "text": "In this section, we mainly introduce the related work of this paper from two research directions: label enhancement and contrastive learning.\nLabel Enhancement. Label enhancement is proposed to recover label distributions from logical labels and provide data preparation for LDL. For example, the Graph Laplacian LE (GLLE) method proposed by [Xu et al., 2021] makes the learned label distributions close to logical labels while accounting for learning label correlations, making similar samples have similar label distributions. The method LESC proposed by [Tang et al., 2020] uses low-rank representations to excavate the underlying information contained in the feature space. [Xu et al., 2022] proposed LEVI to infer label distributions from logical labels via variational inference. The method RLLE formulates label enhancement as a dynamic decision process and uses prior knowledge to define the target for LE [Gao et al., 2021]. The kernel-based label enhancement (KM) algorithm maps each instance to a highdimensional space and uses a kernel function to calculate the distance between samples and the center of the group, in order to obtain the label description. [Jiang et al., 2006]. The LE algorithm based on label propagation (LP) recovers label distributions from logical labels by using the iterative label propagation technique [Li et al., 2015]. Sequential label enhancement (Seq LE) formulates the LE task as a sequential decision procedure, which is more consistent with the process of annotating the label distributions in human brains [Gao et al., 2022]. However, these works neglect the essential connection between features and logical labels. 
In this paper, we regard features and logical labels as sample descriptions from different views, where we can create faithful high-level features for label enhancement by integrating them into the unified projection space.
Contrastive Learning. The basic idea of contrastive learning, an excellent representation learning method, is to map the original data to a feature space. Within this space, the objective is to maximize the similarities among positive pairs while minimizing those among negative pairs [Grill et al., 2020;Li et al., 2020]. Currently, contrastive learning has achieved good results in many machine learning domains [Li et al., 2021;Dai and Lin, 2017]. Here we primarily introduce several contrastive learning methods applied to multi-label learning. [Wang et al., 2022] designed a multi-label contrastive learning objective in the multi-label text classification task, which improves the retrieval process of their KNN-based method. [Zhang et al., 2022] present a hierarchical multi-label representation learning framework that can leverage all available labels and preserve the hierarchical relationship between classes. [Qian et al., 2022] propose two novel models to learn discriminative and modality-invariant representations for cross-modal retrieval. [Bai et al., 2022] propose a novel contrastive learning boosted multi-label prediction model based on a Gaussian mixture variational autoencoder (C-GMVAE), which learns a multimodal prior space and employs a contrastive loss. For ConLE, the two descriptions of one identical sample are regarded as a positive pair and those of different samples are negative pairs. We pull positive pairs close and push negative pairs farther away in the projection space by contrastive learning to obtain good high-level features, which is really beneficial for the LE process." }, { "figure_ref": [ "fig_0" ], "heading": "The ConLE Approach", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In this paper, we use the following notations. The set of instances is denoted by $X = \{x_1, x_2, \dots, x_n\} \in \mathbb{R}^{dim_1 \times n}$, where $dim_1$ is the dimensionality of each instance and $n$ is the number of instances. $Y = \{y_1, y_2, \dots, y_c\}$ denotes the complete set of labels, where $c$ is the number of classes. For an instance $x_i$, its logical label is represented by $L_i = (l_{x_i}^{y_1}, l_{x_i}^{y_2}, \dots, l_{x_i}^{y_c})^T$, where $l_{x_i}^{y_j}$ can only take values of 0 or 1. The label distribution for $x_i$ is denoted by 
$D_i = (d_{x_i}^{y_1}, d_{x_i}^{y_2}, \dots, d_{x_i}^{y_c})^T$, where $d_{x_i}^{y_j}$ depicts the degree to which $x_i$ belongs to label $y_j$. It is worth noting that the sum of all label description degrees for $x_i$ is equal to 1. The purpose of LE tasks is to recover the label distribution $D_i$ of $x_i$ from the logical label $L_i$ and transform the logically labeled dataset $S = \{(x_i, L_i) \mid 1 \leq i \leq n\}$ into the LDL training set $E = \{(x_i, D_i) \mid 1 \leq i \leq n\}$.
The proposed Contrastive Label Enhancement (ConLE) in this paper contains two important components: the generation of high-level features by contrastive learning and the training strategy with label-level consistency for LE. Overall, the loss function of ConLE can be formulated as follows:
$\mathcal{L}_{ConLE} = l_{con} + l_{att}$  (1)
where $l_{con}$ denotes the contrastive loss for high-level features and $l_{att}$ indicates the loss of the training strategy with label-level consistency. The framework of ConLE and the detailed procedure of these two parts are shown in Figure 2." }, { "figure_ref": [], "heading": "The Generation of High-Level Features by Contrastive Learning", "publication_ref": [ "b24" ], "table_ref": [], "text": "The first section provides a detailed analysis of the essence of LE tasks. We regard features and logic labels as two descriptions of samples: features contain complete information, while logic labels capture prominent details. Label distributions show the description degree of each label. We cannot simply focus on the salient information in logical labels, but should make good use of the salient information and supplement the detailed information according to the original features. To effectively excavate the knowledge of features and logical labels, we adopt contrastive learning with sample-level consistency.
To reduce the information loss induced by the contrastive loss, we do not directly conduct contrastive learning on the feature matrix [Li et al., 2021]. Instead, we project the features (X) and logical labels (L) of all samples into a unified projection space via two mapping networks ($F_1(\cdot; θ)$, $F_2(\cdot; φ)$), and then get the representations $Z$ and $Q$. Specifically, the representations of features and logic labels in the projection space can be obtained by the following formulas:
$Z_m = F_1(x_m; θ)$  (2)
$Q_m = F_2(L_m; φ)$  (3)
where $x_m$ and $L_m$ represent the features and logical labels of the $m$-th sample, and $Z_m$ and $Q_m$ denote their embedded representations in the $dim_2$-dimensional space. $θ$ and $φ$ refer to the corresponding network parameters.
Contrastive learning aims to maximize the similarities of positive pairs while minimizing those of negative ones. In this paper, we construct positive and negative pairs at the instance level with $Z$ and $Q$, where $\{Z_m, Q_m\}$ is a positive pair and the other $(n-1)$ pairs are negative. The cosine similarity is utilized to measure the closeness degree between pairs:
$h(Z_m, Q_m) = \frac{Z_m Q_m^T}{\|Z_m\|\,\|Q_m\|}$  (4)
To optimize pairwise similarities without losing generality, the instance-level contrastive loss between $Z_m$ and $Q_m$ is defined as:
$l_m = l_{Z_m} + l_{Q_m}$  (5)
where $l_{Z_m}$ denotes the contrastive loss for $Z_m$ and $l_{Q_m}$ indicates the loss for $Q_m$. Specifically, the item $l_{Z_m}$ is defined as:
$l_{Z_m} = -\log \frac{e^{h(Z_m, Q_m)/\tau_I}}{\sum_{s=1, s \neq m}^{n} \left[ e^{h(Z_m, Z_s)/\tau_I} + e^{h(Z_m, Q_s)/\tau_I} \right]}$  (6)
and the item $l_{Q_m}$ is formulated as:
$l_{Q_m} = -\log \frac{e^{h(Q_m, Z_m)/\tau_I}}{\sum_{s=1, s \neq m}^{n} \left[ e^{h(Q_m, Q_s)/\tau_I} + e^{h(Q_m, Z_s)/\tau_I} \right]}$  (7)
where $\tau_I$ is the instance-level temperature parameter that controls the softness. Further, the instance-level contrastive loss is computed across all samples as:
$l_{con} = \frac{1}{n} \sum_{m=1}^{n} l_m$  (8)
The representations $Z$ and $Q$ updated by the contrastive learning strategy are concatenated as high-level features $H$, which are taken as inputs of the feature mapping network to learn the label distributions:
$H = \mathrm{concat}(Z, Q)$  (9)
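Putting Eqs. (2)-(9) together, the instance-level contrastive objective can be sketched as follows in PyTorch. This is an illustrative sketch rather than the authors' released code: the function name and tensor layout are ours, and the denominators follow the summation range stated in Eqs. (6)-(7) (pairs with s ≠ m).

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(Z, Q, tau=0.5):
    """Instance-level contrastive loss l_con of Eqs. (5)-(8).

    Z, Q: (n, dim2) projections of features and logical labels; row m of Z and
    row m of Q form the positive pair of the m-th sample.
    """
    n = Z.size(0)
    Z = F.normalize(Z, dim=1)          # so that Z @ Q.t() gives the cosine similarity h(., .)
    Q = F.normalize(Q, dim=1)

    sim_zq = Z @ Q.t() / tau           # h(Z_m, Q_s) / tau
    sim_zz = Z @ Z.t() / tau           # h(Z_m, Z_s) / tau
    sim_qq = Q @ Q.t() / tau           # h(Q_m, Q_s) / tau

    eye = torch.eye(n, dtype=torch.bool, device=Z.device)
    pos = sim_zq.diagonal()            # h(Z_m, Q_m) / tau = h(Q_m, Z_m) / tau

    # Denominators of Eqs. (6) and (7): sums over s != m.
    denom_z = (sim_zz.masked_fill(eye, float('-inf')).exp().sum(dim=1)
               + sim_zq.masked_fill(eye, float('-inf')).exp().sum(dim=1))
    denom_q = (sim_qq.masked_fill(eye, float('-inf')).exp().sum(dim=1)
               + sim_zq.t().masked_fill(eye, float('-inf')).exp().sum(dim=1))

    l_z = -(pos - denom_z.log())       # Eq. (6)
    l_q = -(pos - denom_q.log())       # Eq. (7)
    return (l_z + l_q).mean()          # Eqs. (5) and (8)
```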
" }, { "figure_ref": [], "heading": "The Training Strategy With Label-Level Consistency for LE", "publication_ref": [ "b19", "b50" ], "table_ref": [], "text": "Based on the obtained high-level features, we introduce a feature mapping network $F_3$ to generate label distributions. In other words, we have the following formula:
$D_m = F_3(H_m; ϕ)$  (10)
where $D_m$ is the recovered label distribution of the $m$-th sample, $H_m$ is the high-level feature, and $ϕ$ denotes the parameters of the feature mapping network $F_3$.
In ConLE, we consider the consistency of label attributes in logical labels and label distributions. Firstly, because the recovered label distributions should be close to the existing logical labels, we expect to minimize the distance between the logical labels and the recovered label distributions, which are normalized by softmax. This criterion can be defined as:
$l_{dis} = \sum_{m=1}^{n} \| F_3(H_m; ϕ) - L_m \|^2$  (11)
where $D_m$ and $L_m$ represent the recovered label distribution and the logical label of the $m$-th sample. Moreover, logical labels divide all possible labels into relevant labels marked 1 and irrelevant labels marked 0 for each sample. We hope to ensure that the attributes of relevant and irrelevant labels are consistent in label distributions and logical labels. This idea is considered in many multi-label learning methods [Kanehira and Harada, 2016;Yan et al., 2016]. Under their inspiration, we apply a threshold strategy to ensure that the description degree of relevant labels is greater than that of irrelevant labels in the recovered label distributions. This strategy can be written as follows:
$d_{x_m}^{y^+} - d_{x_m}^{y^-} > 0, \quad \text{s.t.}\ y^+ \in P_m,\ y^- \in N_m$  (12)
where $P_m$ indicates the set of relevant labels of $x_m$, $N_m$ represents the set of irrelevant labels of $x_m$, and $d_{x_m}^{y^+}$ and $d_{x_m}^{y^-}$ are the prediction results of the LE process.
In this way, we can get the loss function of the threshold strategy:
$l_{thr} = \frac{1}{n} \sum_{m=1}^{n} \sum_{y^+ \in P_m} \sum_{y^- \in N_m} [\max(d_{x_m}^{y^-} - d_{x_m}^{y^+} + ε, 0)]$  (13)
where $ε$ is a hyperparameter that determines the threshold. The formula can be simplified to:
$l_{thr} = \frac{1}{n} \sum_{m=1}^{n} [\max(\max_{y^- \in N_m} d_{x_m}^{y^-} - \min_{y^+ \in P_m} d_{x_m}^{y^+} + ε, 0)]$  (14)
Finally, the loss function of the training strategy for label-level consistency can be formulated as follows:
$l_{att} = λ_1 l_{dis} + λ_2 l_{thr}$  (15)
where $λ_1$ and $λ_2$ are two trade-off parameters. This designed training strategy can guarantee that label attributes are the same in the logical labels and label distributions, thus obtaining a better feature mapping network to recover label distributions. The full optimization process of ConLE is summarized in Algorithm 1: after randomly initializing θ, φ and ϕ, in each iteration we obtain $\{Z_m, Q_m\}$ by Eqs. (2) and (3), obtain the high-level features H by Eq. (9), obtain the label distributions D by Eq. (10), and optimize θ, φ, ϕ through Eq. (1) until convergence, finally returning D.
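As with the contrastive term, the label-level consistency term can be sketched compactly. The snippet below is an illustration under our own naming: the trade-off weights default to the values reported in the experimental settings (λ1 = 0.5, λ2 = 1), while the margin eps is an assumption, since its value is not reported here.

```python
import torch
import torch.nn.functional as F

def label_consistency_loss(logits, logical, lam1=0.5, lam2=1.0, eps=0.1):
    """l_att = lam1 * l_dis + lam2 * l_thr (Eqs. (11), (14), (15)).

    logits:  (n, c) raw outputs of the feature mapping network F3 on H
    logical: (n, c) 0/1 logical labels; each row is assumed to contain at
             least one relevant and one irrelevant label
    """
    d = F.softmax(logits, dim=1)      # recovered label distributions D

    # l_dis: squared distance between the softmax-normalised prediction and the logical
    # label (Eq. (11) is a plain sum over samples; averaging is a common alternative)
    l_dis = ((d - logical) ** 2).sum(dim=1).sum()

    # l_thr: the largest irrelevant degree should stay below the smallest relevant
    # degree by a margin eps (Eq. (14))
    big = torch.finfo(d.dtype).max
    min_pos = d.masked_fill(logical == 0, big).min(dim=1).values    # min over relevant labels
    max_neg = d.masked_fill(logical == 1, -big).max(dim=1).values   # max over irrelevant labels
    l_thr = torch.clamp(max_neg - min_pos + eps, min=0).mean()

    return lam1 * l_dis + lam2 * l_thr
```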
" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b26", "b51" ], "table_ref": [], "text": "We conduct comprehensive experiments on 13 real-world datasets to verify the effectiveness of our method. To be specific, the SJAFFE dataset [Lyons et al., 1998] and the SBU-3DFE dataset [Yin et al., 2006] are obtained from the two facial expression databases, JAFFE and BU-3DFE. Each image in these two datasets is rated for six different emotions (i.e., happiness, sadness, surprise, fear, anger, and disgust) using a 5-level scale. The Natural Scene dataset is collected from 2000 natural scene images. The Movie dataset is about the user ratings for 7755 movies. The Yeast datasets are derived from biological experiments on the gene expression levels of budding yeast at different time points [Eisen et al., 1998]. The basic statistics of these datasets are shown in Table 1." }, { "figure_ref": [], "heading": "Measure Formula", "publication_ref": [ "b3" ], "table_ref": [ "tab_0" ], "text": "(Table 2) Kullback-Leibler↓: $Dis_1(D, \hat{D}) = \sum_{j=1}^{c} d_j \ln \frac{d_j}{\hat{d}_j}$; Chebyshev↓: $Dis_2(D, \hat{D}) = \max_j |d_j - \hat{d}_j|$; Clark↓: $Dis_3(D, \hat{D}) = \sqrt{\sum_{j=1}^{c} \frac{(d_j - \hat{d}_j)^2}{(d_j + \hat{d}_j)^2}}$; Canberra↓: $Dis_4(D, \hat{D}) = \sum_{j=1}^{c} \frac{|d_j - \hat{d}_j|}{d_j + \hat{d}_j}$; Cosine↑: $Sim_1(D, \hat{D}) = \frac{\sum_{j=1}^{c} d_j \hat{d}_j}{\sqrt{\sum_{j=1}^{c} d_j^2} \sqrt{\sum_{j=1}^{c} \hat{d}_j^2}}$; Intersection↑: $Sim_2(D, \hat{D}) = \sum_{j=1}^{c} \min(d_j, \hat{d}_j)$." }, { "figure_ref": [], "heading": "Evaluation Measures", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "The performance of an LE algorithm is usually calculated by the distance or similarity between the recovered label distributions and the real label distributions. According to [Geng, 2016], we select six measures to evaluate the recovery performance, i.e., Kullback-Leibler divergence (K-L)↓, Chebyshev distance (Cheb)↓, Clark distance (Clark)↓, Canberra metric (Canber)↓, Cosine coefficient (Cosine)↑ and Intersection similarity (Intersec)↑. The first four are distance measures and the last two are similarity measures. The formulae for these six measures are summarized in Table 2." }, { "figure_ref": [], "heading": "Comparison Methods", "publication_ref": [ "b9", "b17", "b20", "b45", "b47", "b39" ], "table_ref": [], "text": "We compare ConLE with six advanced LE methods, including FCM [Gayar et al., 2006], KM [Jiang et al., 2006], LP [Li et al., 2015], GLLE [Xu et al., 2021], LEVI-MLP [Xu et al., 2022] and LESC [Tang et al., 2020]. The following are the details of the comparison algorithms used in our experiments: 1) FCM: This method makes use of the membership degree to determine which cluster each instance belongs to according to fuzzy C-means clustering.
2) KM: It is a kernel-based algorithm that uses the fuzzy SVM to get the radius and center, obtaining the membership degree as the final label distribution.
3) LP: This approach applies label propagation (LP) in semi-supervised learning to label enhancement, employing graph models to construct a label propagation matrix and generate label distributions." }, { "figure_ref": [], "heading": "4) GLLE:", "publication_ref": [], "table_ref": [], "text": "The algorithm recovers label distributions in the feature space guided by the topological information.
5) LEVI-MLP: It regards label distributions as latent vectors and infers them from the logical labels in the training datasets by using variational inference.
6) LESC: This method utilizes the low-rank representation to capture the global relationship of samples and predict implicit label correlations to achieve label enhancement.
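For reproducibility, the six recovery measures summarized in Table 2 can be computed per sample as in the following sketch (NumPy; the function name is ours, and the small eps guard against division by zero or log of zero is an implementation choice, not something specified by the paper).

```python
import numpy as np

def ldl_measures(d_true, d_pred, eps=1e-12):
    """Six LDL recovery measures for one sample (both inputs sum to 1)."""
    d_true = np.asarray(d_true, dtype=float) + eps
    d_pred = np.asarray(d_pred, dtype=float) + eps
    kl = np.sum(d_true * np.log(d_true / d_pred))
    cheb = np.max(np.abs(d_true - d_pred))
    clark = np.sqrt(np.sum((d_true - d_pred) ** 2 / (d_true + d_pred) ** 2))
    canber = np.sum(np.abs(d_true - d_pred) / (d_true + d_pred))
    cosine = np.sum(d_true * d_pred) / (np.linalg.norm(d_true) * np.linalg.norm(d_pred))
    intersec = np.sum(np.minimum(d_true, d_pred))
    return {"K-L": kl, "Cheb": cheb, "Clark": clark,
            "Canber": canber, "Cosine": cosine, "Intersec": intersec}
```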
" }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [ "b38", "b28", "b32" ], "table_ref": [ "tab_3" ], "text": "Implementation Details. In ConLE, we adopt the SGD optimizer [Ruder, 2016] for optimization and utilize the LeakyReLU activation function [Maas et al., 2013] to implement the networks. The code of this method is implemented in PyTorch [Paszke et al., 2019] on one NVIDIA GeForce GTX 2080ti GPU with 11GB memory. All experiments for the selected comparison algorithms follow the optimal settings mentioned in their papers, and we run the programs using the code provided by their authors. All algorithms are evaluated by ten times ten-fold cross-validation for fairness. When comparing with other algorithms, the hyperparameters of ConLE are set as follows: $λ_1$ is set to 0.5, $λ_2$ is set to 1 and the temperature parameter $τ_I$ is 0.5.
Recovery Performance. The detailed comparison results are presented in Table 3, with the best performance on each dataset highlighted in bold. For each evaluation metric, ↓ indicates that smaller is better while ↑ indicates that larger is better. The average rankings of each algorithm across all the datasets are shown in the last row of each table.
The experimental results clearly indicate that our ConLE method exhibits superior recovery performance compared to the other six advanced LE algorithms. Specifically, ConLE achieves average rankings of 1.00, 1.23, 1.00, 1.07, 1.15 and 1.00, respectively, on the six evaluation metrics. ConLE obtains excellent performance both on large-scale datasets such as Movie and on small-scale datasets such as SJAFFE. ConLE attains significant improvements over both algorithm-adaptation and specialized algorithms by exploring the description consistency of features and logical labels of the same sample. We integrate features and logical labels into the unified projection space to generate high-level features and keep the consistency of label attributes in the process of label enhancement.
Ablation Studies. Our ConLE method consists of two main components: generating high-level features by contrastive learning and a training strategy with label-level consistency for LE. Therefore, we first remove the part of ConLE that generates high-level features and get a comparison algorithm ConLE$_h$, whose loss function can be written as:
$\mathcal{L}_{ConLE_h} = λ_1 l_{dis} + λ_2 l_{thr}$  (16)
In ConLE$_h$, we only explore the consistency information of label attributes without considering the description consistency of features and labels. Secondly, we remove the strategy that ensures the consistency of label attributes. To ensure a normal training process, we still keep the strategy of minimizing the distance between label distributions and logical labels. The loss function of the comparison algorithm ConLE$_l$ is:
$\mathcal{L}_{ConLE_l} = λ_1 l_{dis} + l_{con}$  (17)" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose Contrastive Label Enhancement (ConLE), a novel method to cope with the label enhancement (LE) problem. ConLE regards features and logic labels as descriptions from different views, and then elegantly integrates them to generate high-level features by contrastive learning. Additionally, ConLE employs a training strategy that considers the consistency of label attributes to estimate the label distributions from high-level features. Experimental results on 13 datasets demonstrate its superior performance over other state-of-the-art methods." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Key R&D Program of China under Grant 2020AAA0109602." } ]
Label distribution learning (LDL) is a new machine learning paradigm for solving label ambiguity. Since it is difficult to directly obtain label distributions, many studies focus on how to recover label distributions from logical labels, dubbed label enhancement (LE). Existing LE methods estimate label distributions by simply building a mapping relationship between features and label distributions under the supervision of logical labels. They typically overlook the fact that both features and logical labels are descriptions of the instance from different views. Therefore, we propose a novel method called Contrastive Label Enhancement (ConLE), which integrates features and logical labels into a unified projection space to generate high-level features by a contrastive learning strategy. In this approach, features and logical labels belonging to the same sample are pulled closer, while those of different samples are projected farther away from each other in the projection space. Subsequently, we leverage the obtained high-level features to gain label distributions through a well-designed training strategy that considers the consistency of label attributes. Extensive experiments on LDL benchmark datasets demonstrate the effectiveness and superiority of our method.
Contrastive Label Enhancement
[ { "figure_caption": "Figure 2 :2Figure2: Framework of the proposed ConLE. ConLE approaches the LE problem by regarding features (X) and logical labels (L) as sample descriptions from two views. It uses two mapping networks (F1 and F2) to project X and L into a unified projection space, which results in two representations Z and Q. These representations are then concatenated into high-level features (H). To obtain good high-level features, ConLE utilizes a contrastive learning strategy that brings two representations of the same sample closer together while pushing representations of different samples farther apart from each other. Additionally, ConLE employs a reliable training strategy to generate label distributions D from high-level features H by the feature mapping network F3. This strategy minimizes the distance between logical labels and label distributions, ensuring that the restored label distributions are close to the existing logical labels. Meanwhile, it also demands the description degree of relevant labels marked as 1 in the logical labels is larger than that of the irrelevant labels marked as 0. In this way, ConLE can guarantee the consistency of label attributes in logical labels and label distributions.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 11The optimization of ConLE Input: Training instances X = {x 1 , x 2 , ..., x n }; Logical labels L = {L 1 , L 2 , ..., L n }; Temperature parameter τ I Output: label distributions D = {D 1 , D 2 , ..., D n } 1: Random Initialize θ, φ and ϕ; 2: while not converged do 3:Obtain {Z m , Q m } n m=1 by Eq. (2) and Eq. (3); 4:", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Influence of parameters λ1 and λ2 on dataset SBU-3DFE.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Convergence curve on dataset Movie.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "are obtained from the two facial expression databases, JAFFE and BU-3DFE. Each image in Statistics of the 13 datasets.", "figure_data": "No.DatasetExamples Features Labels1SJAFFE21324362SBU-3DFE250024363Natural-Scene200029494Movie7755186955Yeast-alpha246524186Yeast-cdc246524157Yeast-elu246524148Yeast-diau24652479Yeast-dtt246524410Yeast-heat246524611Yeast-cold246524412Yeast-spo246524613Yeast-spo52465243", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Introduction to evalution measures.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Recovery results evaluated by six measures. Ablation studies are conducted to verify the effectiveness of the two modules in our method.Therefore, we first remove the part of ConLE that generates high-level features and get a comparison algorithm ConLE h , whose loss function can be written as:", "figure_data": "and keep the consistency of label attributes in the process oflabel enhancement.Ablation Studies. Our ConLE method consists of two maincomponents: generating high-level features by contrastivelearning and a training strategy with label-level consistencyfor LE.", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "provides the recovery results of ConLE h , ConLE l and ConLE. 
Due to the limitation of space, only the representative results measured on Kullback-Leibler, Clark, Canberra and Intersection are shown in the table. From the experimental results, we can observe that ConLE is superior to ConLE h and ConLE l in all cases. Compared with ConLE h , ConLE considers the inherent relationship between features and logical labels. It grasps the description consistency of samples and constructs high-level features for training. Compared with ConLE l , ConLE considers label-level consistency of logical labels and label distributions. It makes that each relevant label in the logical labels has a greater description degree in the label distributions. Therefore, our experimental results have verified that both modules of ConLE play essential roles in achieving excellent recovery performance. The integration of these modules in the complete ConLE method has been demonstrated to be highly effective.", "figure_data": "MetricsKullback-Leibler ↓Clark ↓Canberra ↓Intersection ↑MethodsConLE hConLE lConLEConLE hConLE lConLEConLE hConLE lConLEConLE hConLE lConLESJAFFE0.3990.0440.0280.3200.3050.2690.6510.7130.5450.8880.8920.907SBU-3DFE0.0510.0600.0390.3650.4050.2970.7670.8500.6700.8670.8420.886Natural-Scene0.7950.7730.7572.4632.4432.4506.8026.6956.7080.4970.5030.537Movie0.0730.0680.0600.5170.4910.4630.9230.8770.8370.8580.8660.871Yeast-alpha0.0070.0100.0050.2440.3420.2140.7280.7990.6960.9200.8910.961Yeast-cdc0.0060.0060.0040.2100.2310.1780.6180.6090.5050.9590.9600.966Yeast-elu0.0060.0070.0040.1990.2040.1650.5820.5990.4800.9590.9550.966Yeast-diau0.0180.0140.0090.2480.1980.1750.5090.4050.3650.9300.9370.949Yeast-dtt0.0130.0150.0090.1560.2010.1140.2980.3490.1990.9420.9300.950Yeast-heat0.0160.0120.0070.3020.2670.1360.4120.3700.2680.9290.9410.956Yeast-cold0.0120.0110.0090.1900.1620.1190.3310.2860.2030.9390.9310.950Yeast-spo0.0190.0160.0130.2850.2460.1770.4430.4060.3530.9140.9270.942Yeast-spo50.0140.0150.0130.1570.1720.1270.2480.2300.1920.9230.9290.939", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Recovery results of ConLE h , ConLE l and ConLE on 13 real-world datasets.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" } ]
Yifei Wang; Yiyang Zhou; Jihua Zhu; Xinyuan Liu; Wenbiao Yan; Zhiqiang Tian
[ { "authors": " Bai", "journal": "", "ref_id": "b0", "title": "", "year": "2022" }, { "authors": "Junwen Bai; Shufeng Kong; Carla P Gomes", "journal": "PMLR", "ref_id": "b1", "title": "Gaussian mixture variational autoencoder with contrastive learning for multi-label classification", "year": "2022" }, { "authors": "Lin ; Dai; Bo Dai; Dahua Lin", "journal": "", "ref_id": "b2", "title": "Contrastive learning for image captioning", "year": "2017" }, { "authors": " Eisen", "journal": "", "ref_id": "b3", "title": "", "year": "1998" }, { "authors": "Paul T Michael B Eisen; Patrick O Spellman; David Brown; Botstein", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b4", "title": "Cluster analysis and display of genome-wide expression patterns", "year": "1998" }, { "authors": " Gao", "journal": "", "ref_id": "b5", "title": "", "year": "2021" }, { "authors": "Yongbiao Gao; Yu Zhang; Xin Geng", "journal": "", "ref_id": "b6", "title": "Label enhancement for label distribution learning via prior knowledge", "year": "2021" }, { "authors": " Gao", "journal": "", "ref_id": "b7", "title": "", "year": "2022" }, { "authors": "Yongbiao Gao; Ke Wang; Xin Geng", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b8", "title": "Sequential label enhancement", "year": "2022" }, { "authors": " Gayar", "journal": "Springer", "ref_id": "b9", "title": "A study of the robustness of knn classifiers trained using soft labels", "year": "2006" }, { "authors": " Geng", "journal": "", "ref_id": "b10", "title": "", "year": "2013" }, { "authors": "Xin Geng; Zhi-Hua Chao Yin; Zhou", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b11", "title": "Facial age estimation by learning from label distributions", "year": "2013" }, { "authors": " Geng", "journal": "", "ref_id": "b12", "title": "", "year": "2016" }, { "authors": "Xin Geng", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b13", "title": "Label distribution learning", "year": "2016" }, { "authors": "Ventura Gibaja", "journal": "Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery", "ref_id": "b14", "title": "Eva Gibaja and Sebastián Ventura. 
Multi-label learning: a review of the state of the art and ongoing research", "year": "2014" }, { "authors": " Grill", "journal": "", "ref_id": "b15", "title": "", "year": "2020" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altché; Corentin Tallec; Pierre Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Avila Pires; Zhaohan Guo; Mohammad Gheshlaghi Azar", "journal": "Advances in neural information processing systems", "ref_id": "b16", "title": "Bootstrap your own latent-a new approach to self-supervised learning", "year": "2020" }, { "authors": " Jiang", "journal": "", "ref_id": "b17", "title": "", "year": "2006" }, { "authors": "Xiufeng Jiang; Zhang Yi; Jian Cheng; Lv ", "journal": "Neural Computing & Applications", "ref_id": "b18", "title": "Fuzzy svm with a new fuzzy membership function", "year": "2006" }, { "authors": "Harada Kanehira; Atsushi Kanehira; Tatsuya Harada", "journal": "", "ref_id": "b19", "title": "Multi-label ranking from positive and unlabeled data", "year": "2016" }, { "authors": " Li", "journal": "", "ref_id": "b20", "title": "", "year": "2015" }, { "authors": "Yu-Kun Li; Min-Ling Zhang; Xin Geng", "journal": "IEEE", "ref_id": "b21", "title": "Leveraging implicit relative labeling-importance information for effective multi-label learning", "year": "2015" }, { "authors": " Li", "journal": "", "ref_id": "b22", "title": "", "year": "2020" }, { "authors": "Junnan Li; Pan Zhou; Caiming Xiong; Steven Ch Hoi", "journal": "", "ref_id": "b23", "title": "Prototypical contrastive learning of unsupervised representations", "year": "2020" }, { "authors": " Li", "journal": "", "ref_id": "b24", "title": "", "year": "2021" }, { "authors": "Yunfan Li; Peng Hu; Zitao Liu; Dezhong Peng; Joey Tianyi Zhou; Xi Peng", "journal": "", "ref_id": "b25", "title": "Contrastive clustering", "year": "2021" }, { "authors": " Lyons", "journal": "", "ref_id": "b26", "title": "", "year": "1998" }, { "authors": "Michael Lyons; Shigeru Akamatsu; Miyuki Kamachi; Jiro Gyoba", "journal": "IEEE", "ref_id": "b27", "title": "Coding facial expressions with gabor wavelets", "year": "1998" }, { "authors": " Maas", "journal": "", "ref_id": "b28", "title": "", "year": "2013" }, { "authors": " Andrew L Maas; Andrew Y Awni Y Hannun; Ng", "journal": "", "ref_id": "b29", "title": "Rectifier nonlinearities improve neural network acoustic models", "year": "2013" }, { "authors": " Moyano", "journal": "", "ref_id": "b30", "title": "", "year": "2019" }, { "authors": "Eva L Jose M Moyano; Krzysztof J Gibaja; Sebastián Cios; Ventura", "journal": "Information Fusion", "ref_id": "b31", "title": "An evolutionary approach to build ensembles of multi-label classifiers", "year": "2019" }, { "authors": " Paszke", "journal": "", "ref_id": "b32", "title": "", "year": "2019" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga", "journal": "", "ref_id": "b33", "title": "Pytorch: An imperative style, highperformance deep learning library", "year": "2019" }, { "authors": " Qi", "journal": "", "ref_id": "b34", "title": "", "year": "2022" }, { "authors": "Lei Qi; Jiaying Shen; Jiaqi Liu; Yinghuan Shi; Xin Geng", "journal": "", "ref_id": "b35", "title": "Label distribution learning for generalizable multi-source person re-identification", "year": "2022" }, { "authors": " Qian", "journal": "", "ref_id": "b36", "title": "", "year": "2022" }, { "authors": "Shengsheng Qian; Dizhan Xue; Quan Fang; Changsheng Xu", 
"journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b37", "title": "Integrating multi-label contrastive learning with dual adversarial graph neural networks for cross-modal retrieval", "year": "2022" }, { "authors": "Sebastian Ruder; Ruder", "journal": "", "ref_id": "b38", "title": "An overview of gradient descent optimization algorithms", "year": "2016" }, { "authors": " Tang", "journal": "", "ref_id": "b39", "title": "", "year": "2020" }, { "authors": "Haoyu Tang; Jihua Zhu; Qinghai Zheng; Jun Wang; Shanmin Pang; Zhongyu Li", "journal": "", "ref_id": "b40", "title": "Label enhancement with sample correlations via low-rank representation", "year": "2020" }, { "authors": " Wang", "journal": "", "ref_id": "b41", "title": "", "year": "2022" }, { "authors": "Ran Wang; Xinyu Dai", "journal": "", "ref_id": "b42", "title": "Contrastive learning-enhanced nearest neighbor mechanism for multilabel text classification", "year": "2022" }, { "authors": " Xu", "journal": "", "ref_id": "b43", "title": "", "year": "2019" }, { "authors": "Ning Xu; Yun-Peng Liu; Xin Geng", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b44", "title": "Label enhancement for label distribution learning", "year": "2019" }, { "authors": " Xu", "journal": "", "ref_id": "b45", "title": "", "year": "2021" }, { "authors": "N Xu; Y Liu; X Geng", "journal": "IEEE Transactions on Knowledge; Data Engineering", "ref_id": "b46", "title": "Label enhancement for label distribution learning", "year": "2021-04" }, { "authors": " Xu", "journal": "", "ref_id": "b47", "title": "", "year": "2022" }, { "authors": "Ning Xu; Jun Shu; Renyi Zheng; Xin Geng; Deyu Meng; Min-Ling Zhang", "journal": "IEEE Transactions on Pattern Analysis & Machine Intelligence", "ref_id": "b48", "title": "Variational label enhancement", "year": "2022" }, { "authors": "Yan ", "journal": "", "ref_id": "b49", "title": "", "year": "2016" }, { "authors": "Yan Yan; Xu-Cheng Yin; Chun Yang; Bo-Wen Zhang; Hong-Wei Hao", "journal": "Springer", "ref_id": "b50", "title": "Multi-label ranking with lstm 2 for document classification", "year": "2016" }, { "authors": " Yin", "journal": "", "ref_id": "b51", "title": "", "year": "2006" }, { "authors": "Lijun Yin; Xiaozhou Wei; Yi Sun; Jun Wang; Matthew J Rosato", "journal": "IEEE", "ref_id": "b52", "title": "A 3d facial expression database for facial behavior research", "year": "2006" }, { "authors": " Zhang", "journal": "", "ref_id": "b53", "title": "", "year": "2015" }, { "authors": "Zhaoxiang Zhang; Mo Wang; Xin Geng", "journal": "Neurocomputing", "ref_id": "b54", "title": "Crowd counting in public video surveillance by label distribution learning", "year": "2015" }, { "authors": " Zhang", "journal": "", "ref_id": "b55", "title": "", "year": "2022" }, { "authors": "Shu Zhang; Ran Xu; Caiming Xiong; Chetan Ramaiah", "journal": "", "ref_id": "b56", "title": "Use all the labels: A hierarchical multi-label contrastive learning framework", "year": "2022" }, { "authors": " Zhao", "journal": "", "ref_id": "b57", "title": "", "year": "2022" }, { "authors": "Xingyu Zhao; Yuexuan An; Ning Xu; Xin Geng", "journal": "", "ref_id": "b58", "title": "Fusion label enhancement for multi-label learning", "year": "2022" }, { "authors": " Zheng", "journal": "", "ref_id": "b59", "title": "", "year": "2021" }, { "authors": "Qinghai Zheng; Jihua Zhu; Haoyu Tang; Xinyuan Liu; Zhongyu Li; Huimin Lu", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b60", "title": 
"Generalized label enhancement with sample correlations", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 408.68, 627.87, 149.32, 11.23 ], "formula_id": "formula_0", "formula_text": "X = {x 1 , x 2 , ..., x n } ∈ R dim1×n ," }, { "formula_coordinates": [ 3, 54, 409.05, 103.06, 12.19 ], "formula_id": "formula_1", "formula_text": "D i = (d y1 xi , d y2 xi , . . . , d yc xi )" }, { "formula_coordinates": [ 3, 54, 465.42, 243, 20.61 ], "formula_id": "formula_2", "formula_text": "S = {(x i , L i )|1 ≤ i ≤ n} into the LDL training set E = {(x i , D i )|1 ≤ i ≤ n}." }, { "formula_coordinates": [ 3, 129.52, 547.05, 91.96, 9.65 ], "formula_id": "formula_3", "formula_text": "L ConLE = l con + l att ." }, { "formula_coordinates": [ 3, 400.66, 552.75, 157.34, 9.65 ], "formula_id": "formula_4", "formula_text": "Z m = F 1 (x m ; θ),(2)" }, { "formula_coordinates": [ 3, 399.09, 587.72, 158.92, 9.65 ], "formula_id": "formula_5", "formula_text": "Q m = F 2 (L m ; φ),(3)" }, { "formula_coordinates": [ 4, 115.61, 73.68, 177.52, 24.58 ], "formula_id": "formula_6", "formula_text": "h(Z m , Q m ) = (Z m )(Q m ) T ||Z m || ||Q m || . (4" }, { "formula_coordinates": [ 4, 293.13, 82.09, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 4, 140.18, 144.64, 156.82, 9.65 ], "formula_id": "formula_8", "formula_text": "l m = l Zm + l Qm ,(5)" }, { "formula_coordinates": [ 4, 59.04, 198.67, 228.95, 27.21 ], "formula_id": "formula_9", "formula_text": "l Zm = -log e (h(Zm,Qm)/τ I ) n s=1,s =m [e (h(Zm,Zs)/τ I ) + e (h(Zm,Qs)/τ I ) ]" }, { "formula_coordinates": [ 4, 57.77, 263.7, 50.15, 9.65 ], "formula_id": "formula_10", "formula_text": "l Qm = -log" }, { "formula_coordinates": [ 4, 135.99, 332.48, 161.01, 22.31 ], "formula_id": "formula_11", "formula_text": "l con = 1 n n m=1 l m .(8)" }, { "formula_coordinates": [ 4, 135.28, 410.58, 80.45, 8.74 ], "formula_id": "formula_12", "formula_text": "H = concat(Z, Q)." }, { "formula_coordinates": [ 4, 136.86, 495.41, 160.14, 9.65 ], "formula_id": "formula_13", "formula_text": "D m = F 3 (H m ; ϕ),(10)" }, { "formula_coordinates": [ 4, 107.23, 625.03, 185.62, 30.2 ], "formula_id": "formula_14", "formula_text": "l dis = n m=1 ||F 3 (H m ; ϕ) -L m || 2 , (11" }, { "formula_coordinates": [ 4, 292.85, 635.76, 4.15, 8.64 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 4, 384.64, 303.58, 173.36, 29.25 ], "formula_id": "formula_16", "formula_text": "d y + xm -d y - xm > 0 s.t. y + ∈ P m , y -∈ N m (12)" }, { "formula_coordinates": [ 4, 319.98, 401.79, 238.02, 32.04 ], "formula_id": "formula_17", "formula_text": "l thr = 1 n n m=1 y + ∈Pm y -∈Nm [max(d y - xm -d y + xm + , 0)],(13)" }, { "formula_coordinates": [ 4, 323.51, 469.36, 234.49, 22.31 ], "formula_id": "formula_18", "formula_text": "l thr = 1 n n m=1 [max(max d y - xm -min d y + xm + , 0)],(14)" }, { "formula_coordinates": [ 4, 390.57, 525.67, 167.43, 9.65 ], "formula_id": "formula_19", "formula_text": "l att = λ 1 l dis + λ 2 l thr ,(15)" }, { "formula_coordinates": [ 5, 79.94, 292.97, 196.7, 103.93 ], "formula_id": "formula_20", "formula_text": "d j dj Chebyshev↓ Dis2(D, D) = maxj|dj -dj| Clark↓ Dis3(D, D) = c j=1 (d j -dj ) 2 (d j + dj ) 2 Canberra↓ Dis4(D, D) = c j=1 |d j -dj | 2 d j + dj Cosine↑ Sim1(D, D) =" }, { "formula_coordinates": [ 6, 117.34, 611.17, 179.66, 10.32 ], "formula_id": "formula_21", "formula_text": "L ConLE h = λ 1 l dis + λ 2 l thr ,(16)" }, { "formula_coordinates": [ 6, 383.93, 504.89, 174.07, 10.32 ], "formula_id": "formula_22", "formula_text": "L ConLE l = λ 1 l dis + l con .(17)" } ]
10.1109/FUZZ45933.2021.9494444
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b0", "b1", "b2", "b3", "b4" ], "table_ref": [], "text": "Processes constitute a useful way of representing and structuring the activities and resources involved in organization Information Systems (IS) from almost any domain. Daily, more event data is being produced and recorded, making it necessary to provide organizations with tools capable of processing such vast amounts of data and extracting the valuable knowledge hidden in it.\nProcess data is recorded as event logs, but its behavior is usually represented as process models (in a variety of notations [1]) that represent in a graphical manner the activities that take place in a process as well as the dependencies between them. Other relevant properties of the process tend to be included in the model such as temporal properties, process executionrelated statistics, etc. Apart from process models, information about these properties is conveyed to users through visual analytics, as they are commonly used when providing advanced analytics [1]. However, in real life scenarios process models are very complex, with a high number of relations between activities. Furthermore, the amount of information that can be added to enhance the process model is very high, and the visual analytics related to said process information are quite difficult to be understood and related to the underlying process for users, as deep knowledge of process modeling and analytics is required. This research was funded by the Spanish Ministry for Science, Innovation and Universities, the Galician Ministry of Education, University and Professional Training and the ERDF/FEDER program (grants TIN2017-84796-C2-1-R, ED431C2018/29 and ED431G2019/04).\nIn the Natural Language Generation (NLG) [2] and Linguistic Descriptions of Data (LDD) [3] fields, different methods for generating insights on data through natural language have been under development. Through different techniques, they aim to provide users with natural language texts that capture or summarize the most characteristic aspects of some data. This information can be easily consumed by users, as i) natural language is the inherent way of communicating for humans, therefore it does not rely on their capabilities to identify or understand patterns, trends, etc. from visual representations; and ii) it may include uncertain terms or expressions, which are very effective for communication. In this sense, research suggests that in some domains knowledge and expertise are required to understand graphical information [4] and proves that domain experts can take better decisions based on textual descriptions than on graphical displays [5]. Therefore, natural language descriptions seem a good approach to enable or enhance the understanding of processes and its analytics as they can summarize, combine and communicate information in ways it would not be possible with visual representations.\nIn this paper, we investigate a real-life use case of a process in the health-care domain which could potentially benefit from natural language descriptions in order to achieve a better understanding of what is really happening in it.\nWe propose a series of fuzzy temporal protoforms (fuzzy linguistic descriptions of data) in the framework of the automatic generation of quantitative and qualitative natural language descriptions of processes. 
With a general model that includes temporal and causal information from processes and its attributes we are able to recall causal relations and temporal distances between events, among other features. The use of fuzzy linguistic descriptions of data allows for modeling and managing the inherent imprecision of linguistic terms, which is very useful when summarizing temporal and other data. By introducing imprecision in descriptions related to frequency and temporal characteristics of processes the expressiveness of the approximation is enhanced. As fuzzy linguistic variables represent a language abstraction that compacts information and relations about sets of data, fuzzy quantified statements provide a more human-friendly interface than process models or visualization techniques. This approach also introduces the description of causal and temporal relationships between activities of a process, including both frequency and temporal characteristics.\nSection II gives a deeper background in NLG, LDD and its applications on process data and event logs. It also contains basic concepts of fuzzy quantified statements and process mining used in the proposed solution. Section III introduces the proposed protoforms and an overview of the generation process. Section IV presents an evaluation of the proposal and some concluding remarks." }, { "figure_ref": [], "heading": "II. BACKGROUND AND RELATED WORK", "publication_ref": [ "b1", "b5", "b6", "b7" ], "table_ref": [], "text": "The generation of natural language texts from data is a task which originated within the NLG field [2]. Particularly, the generation of natural language descriptions over data has been traditionally a task tackled by the Data-to-Text (D2T) research community [6]. Parallel to NLG and D2T systems, in the fuzzy logic realm, the paradigms of Computing with Words and Linguistic Descriptions (or summaries) of Data (LDD) emerge for modelling and managing uncertainty in natural language with the use of fuzzy sets [7]. These paradigms use the concepts of linguistic descriptions of data and protoform [8], which aim on providing summaries involving linguistic terms with some degree of uncertainty or ambiguity present on them." }, { "figure_ref": [], "heading": "A. Linguistic Descriptions of Data", "publication_ref": [ "b6", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b13", "b18", "b13", "b18" ], "table_ref": [ "tab_0" ], "text": "Linguistic summaries or descriptions of data have been investigated by many researchers and applied in multiple domains. Classically, fuzzy quantified sentences of type-I and type-II have been the most used ones in the literature since their inception in early 1980's [7]. From generating weather forecasts [9] or data-base summarization [10] to temporal series summarization [11]. However, they have been only investigated succinctly on process data [12], [13].\nLinguistic descriptions of data are understood as sets of instanced fuzzy quantified statements that are computed according to a dataset and a knowledge base for a given application domain for summarizing knowledge about variables and their values [14]. Fuzzy quantified statements follow predefined formal structures or templates, that are referred to as protoforms which are composed mainly of four aspects:\n• A referential X is a set of objects for which certain property or set of properties holds (e.g. the set of cases from an event log). • A summarizer A used to indicate some property or aggregation of properties (e.g. 
\"long waiting time\" or \"long waiting time and high number of medical tests\") of the object or referential of interest. • A (fuzzy) quantifier Q (e.g. \"several\") used to express the quantity or proportion of data from the referential which fulfills the properties indicated by the summarizer. • The degree of truth T used to relate the validity of the protoform. Instancing a protoform involves assigning values to its elements (referential, quantifier and summarizer) and computing its truth degree. The truth degree can be calculated using any valid quantification model [15], [16].\nCombining these elements, a sentence like \"In several cases there was a long waiting time between the Medical Surgical (MS) session of a patient and its surgery\" can be created from type-I protoform:\nQ X s are A(1)\nIn some cases, one may want a finer-grained description. A qualifier can be added to the description to better define the scope of the sentence, giving place to type-II protoforms.\n• A qualifier B, can make reference to any property or aggregation of properties of the referential. It defines a subset of it which fulfills a property to a certain degree and will be evaluated against summarizer and quantifier. Sentences like \"In most cases where patients were males, there was a long waiting time between the MS session of a patient and its surgery\" can be created from a type-II protoform:\nQ BX s are A(2)\nBoth summarizer, qualifier and quantifier take the form of a linguistic variable. Linguistic variables model the partitioning of the domain of a numeric or categorical variable into several properties (e.g., waiting time = {really short, short, as expected, long, extremely long}), where each property is known as a linguistic value and is associated to a membership function that measures the degree in which different values of the original variable fulfill that property. These membership functions are usually represented as trapezoid functions T [a, b, c, d]. This way, the degree to which a value fulfills a property can be computed with its membership function as follows:\nµ T [a,b,c,d] (x) =          0, (x ≤ a) or (x > d) x-a b-a a < x ≤ b 1, b < x ≤ c d-x d-c c < x ≤ d(3)\nSome limitations exist when using linguistic summaries of data. In the literature mostly type-I and type-II protoforms are used without diving in a deeper natural language realization, however, presenting the user with a linguistic summary composed of multiple isolated type-I and type-II descriptions is not the most appropriate solution due to their lack of expressiveness and limited semantics [14], [19]. One direction which has been followed recently in order to improve this limited semantics is their extension with additional elements; as the temporal dimension (due to great availability of time series data) or other domain specific information. Table I includes some of the type-I and type-II protoforms that have been proposed in the literature on recent times that may be of inspiration in our case. On [14], [19] a more extensive review of protoforms and applications can be found." }, { "figure_ref": [ "fig_0" ], "heading": "B. Process Mining", "publication_ref": [ "b16", "b10", "b17", "b11", "b12", "b0", "b11", "b12", "b0", "b0", "b20" ], "table_ref": [ "tab_0" ], "text": "Process execution is recorded in event logs. Process mining goal is to exploit that recorded event data, by automatically discovering the underlying process model, to extract with it valuable, process related information in a meaningful way. 
Table I. Types of fuzzy protoforms described in the LDD literature:
Authors | Year | Protoform | Verbalized example
Cariñena [17] | 1999 | X was A in T | Temperature was high in the last minutes
Cariñena [17] | 1999 | In T, X was A | Shortly after the increase in pressure, temperature was high
Castillo-Ortega et al. [11] | 2001 | Q of D are A | Most days of the cold season patient inflow was high
Almeida et al. [18] | 2013 | Q Y's are P Qt times | Most patients have high blood pressure most of the time.
Almeida et al. [18] | 2013 | Q Y's with C are P | Most patients with disease X have low blood pressure.
Wilbik and Dijkman [12], [13] | 2015, 2017 | In Q cases there was P | In most cases there was a short throughput time
Wilbik and Dijkman [12], [13] | 2015, 2017 | In Q cases, when condition R was fulfilled there was P | In most cases when \"Registration\" was short, there was a short throughput time.
This information can be used to understand what is really happening in a process by providing insights which help to anticipate problems and streamline and improve processes [1]. Process mining serves as a bridge between classical business process model analysis and data mining or data science techniques. On the one hand, classical process model analysis is a model-centric discipline; it puts all its emphasis on theoretic process models without giving much attention to the real execution data. However, the value of a process model is limited if too little attention is paid to the alignment between the model and reality (recorded event data). On the other hand, data mining techniques focus completely on the data without paying any attention to the model (or end-to-end processes). These techniques are able to recall frequencies of events, number of events per case, and basic case statistics, but cannot be used to analyze bottlenecks, expected behaviors, deviations, etc., so they are not able to answer the most frequent questions when dealing with processes. Current linguistic summarization techniques for process data [12], [13] focus solely on data mining techniques (they only use event log data) without paying attention to the underlying process model. This makes evident the need to propose a new series of protoforms which do take into account both aspects of a process and are based on process-mining techniques.
In order to better describe the protoforms presented in this paper it is necessary to introduce some of the basic elements in a process and in an event log.
An activity α ∈ A, being A the set of all activities, is each well-defined step in a process. Events e represent the execution of an activity α at a particular instant. They are characterized by two mandatory attributes: the executed activity α and the timestamp of the event; but they can have additional attributes such as their associated resources, their time duration, etc.
A trace is an ordered list of events where each event occurs at a certain instant relative to the other events in it, i.e., it represents the sequence of events a case follows. A case c ∈ C, being C the set of all cases in the process, then represents a particular execution of the process and, as with events, cases have attributes. The mandatory attributes of a case are its corresponding trace and its identifier. Other attributes may be its throughput time, the customer involved in the case, the country of an order, etc. Table II shows an example of an event log, a multiset of cases L = [ĉ_1, ..., ĉ_n].
By applying discovery algorithms [1] the model of a process can be extracted from an event log without any additional a priori information. The discovered model shows which activities take place (as nodes) and their ordering and relations by describing causal dependencies between them (as arcs connecting the nodes). 
Figure 1 shows a simplified process model (only the top 0.03% most common behavior is represented) of the use case here presented. Establishing a relation between a process model and the event log the process model is extracted from is a key element in process mining. This can be done by replaying [1] an event log over its corresponding process model, and allows to exploit the four perspectives of process mining: organizational (information about resources), control-flow (ordering of activities), case (attributes of the cases) and time (timing and frequency of events) perspectives. In this proposal we will focus mainly on the control-flow, case and time perspectives, since these are the perspectives where most questions are posed by experts in domains such as healthcare [21]. Putting special emphasis on the time perspective and providing ways to relate the three perspectives through new proposed protoforms." }, { "figure_ref": [], "heading": "III. PROPOSAL", "publication_ref": [ "b21" ], "table_ref": [], "text": "In this section we propose a new series of protoforms for processes, using as a guide a case study in the healthcare medical domain: the process related to the patients' management in the Valvulopathy Unit of the Cardiology Department of the University Hospital of Santiago de Compostela. In this Unit, consultations and medical examinations, such as echocardiograms or Computed Tomography scans are performed to patients with aortic stenosis [22] in order to decide their treatment (including surgery). Other information like unexpected events (e.g. non-programmed admissions) and patient management activities (e.g. inclusion in the process) are also recorded in the event log.\nMedical experts show real interest in applying process mining techniques to this process, since it allows to extract valuable knowledge like, relationships between patients attributes (case attributes), relationships and delays between crucial activities (timing and frequency of events) or different paths of the process patients with different attributes follow (control-flow perspective). However, understanding this information is highly difficult for non process-expert users, thus why medical experts show interest in natural language descriptions of health-care medical process. The protoforms presented in this paper are derived from experts needs in order to fulfill their information requirements." }, { "figure_ref": [], "heading": "A. Temporal contextualization of attributes", "publication_ref": [ "b2", "b2", "b16", "b14", "b15" ], "table_ref": [], "text": "This first protoform aims at describing how attributes of patients behave during different stages of the process e.g. \"In year 2020, most patients had emergency admittance\". The extended protoform, also allows to see if any type of relation between attributes holds e.g. \"In year 2020, most patients who underwent a surgery where older than 80\". We extend protoforms ( 1) and ( 2) to the following ones, respectively:\nIn T i, Q patients had attribute P(4)\nIn T i, Q patients with attribute C had attribute P (5\n)\nWhere Q is a quantifier, T i is a time interval and C and P make reference to any of the attributes of a case that are present on the event log, or any additional attributes computed with expert knowledge after the process model is discovered. This process of computing new attributes can be seen as the Feature Engineering task of Machine Learning. 
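As a small illustration of this feature-engineering step, the sketch below derives one such case attribute (the waiting time between two activities, which the next paragraph discusses) from a toy event log. The pandas representation, column names and example activities are assumptions for the sake of the example, not the tooling or schema used in the actual system.

```python
import pandas as pd

# Toy event log: one row per event (case id, activity, end timestamp).
log = pd.DataFrame({
    "case_id":   [1, 1, 1, 2, 2],
    "activity":  ["MS session", "Surgery", "Discharge", "MS session", "Surgery"],
    "timestamp": pd.to_datetime(["2020-01-02", "2020-02-10", "2020-02-15",
                                 "2020-03-01", "2020-03-05"]),
})

def waiting_time(df, a, b):
    """Days between the first occurrence of activity a and of activity b in each case."""
    t_a = df[df.activity == a].groupby("case_id").timestamp.min()
    t_b = df[df.activity == b].groupby("case_id").timestamp.min()
    return (t_b - t_a).dt.days

# A derived case attribute usable as P or C in protoforms (4)-(5)
print(waiting_time(log, "MS session", "Surgery"))
```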
Process mining techniques allow us to compute new attributes that domain experts show high interest in, such as the waiting time between two particular activities, the number of times one activity triggers the execution of another, etc.
Attributes will be represented as linguistic variables against which the elements of the referential (patients or cases) will be evaluated, as in (3). In cases where the summarizer or qualifier are categorical, as well as for the property T i (defined by its start and end dates), we propose the use of crisp linguistic values. The membership of a value to these properties can be computed straightforwardly by taking a = b and c = d in expression (3).
The truth value of (4) is computed directly following [17]; furthermore, we propose an extension of it to calculate the truth value of (5) as:
T = \mu_Q\left(\frac{\sum_{i=1}^{n} \mu_{Ti}(p_i) \wedge \mu_P(p_i) \wedge \mu_C(p_i)}{\sum_{i=1}^{n} \mu_{Ti}(p_i) \wedge \mu_C(p_i)}\right)    (6)
where n is the number of patients (cases), p represents a patient (case), and \wedge represents the t-norm minimum, which is used as conjunction. In (5) Zadeh's quantification model is used, although any valid quantification model [15], [16] could be considered." }, { "figure_ref": [], "heading": "B. Causality and temporal relationship between events", "publication_ref": [ "b0", "b22", "b8", "b16" ], "table_ref": [], "text": "The time perspective is highly relevant in health-care medical processes, as wait times between activities can have a heavy impact on whether the treatment of a patient is successful or not. Descriptions such as \"In year 2020, in most cases, patient evaluation takes place shortly after its inclusion\" or, in an extended form, \"In year 2020, in most cases where patients had emergency admittance, patient intervention takes place shortly after its MS session\" can be generated thanks to the causal relationships between events that process mining is able to extract. Each event is characterized by its relationships with \"input\" and \"output\" events (i.e. events that happen consecutively before and consecutively after it) and the waiting time between their execution. These relationships are the ones described by the arcs in a process model [1]. Thus, a set of relationships and wait times between all activities in a process can be computed, allowing for the generation of the following proposed protoforms:
In T i, in Q cases R    (7)
In T i, in Q cases where patient had attribute C, R    (8)
Where T i, Q and C are as before and R is a temporal relation between two activities A and B following the algebra proposed by Allen [23]. In this case, as the proposed process only has registered end timestamps, only precedence (after, before) relationships can be expressed. The relationship between two events a and b can be computed as:
r_{a,b}(x) = \begin{cases} 0, & \text{activities not causally related} \\ T_a - T_b, & \text{activities are causally related} \end{cases}    (9)
So r_{a,b} is zero if the events are not causally related, positive if the origin event precedes the destination event, and negative if the destination event precedes the origin event. R can be computed for each pair of activities (for all the executions of each activity) in the process (in both directions: A before B, B before A). This way, the linguistic variable \"after\" could be described as a series of positive, non-monotonic fuzzy sets after = [immediately after, shortly after, after, long after, at some point after], and \"before\" in a similar way but with negative, non-monotonic sets. 
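Putting expressions (3)-(6) together, the following sketch evaluates the truth degree of a type-II statement in the style of protoform (5). It is only an illustration: the cut points for \"long waiting time\" and \"most\", and the crisp qualifier, are invented values, not the partitions elicited from the medical experts.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Membership function of a trapezoidal fuzzy set T[a, b, c, d], as in expression (3)."""
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (b - a), 0.0, 1.0) if b > a else (x >= a).astype(float)
    right = np.clip((d - x) / (d - c), 0.0, 1.0) if d > c else (x <= d).astype(float)
    return np.minimum(left, right)

# Illustrative partitions (assumed, not taken from the use case)
mu_long_wait = lambda days: trapezoid(days, 20, 30, 90, 120)   # summarizer P: "long waiting time"
mu_emergency = lambda adm: (adm == "emergency").astype(float)  # crisp qualifier C
mu_most = lambda r: trapezoid(r, 0.3, 0.75, 1.0, 1.0)          # quantifier Q: "most"

def truth_type2(wait_days, admittance, in_interval):
    """Truth degree of 'In Ti, Q patients with attribute C had attribute P', following Eq. (6)."""
    p_and_c = np.minimum(mu_long_wait(wait_days), mu_emergency(admittance))  # t-norm minimum
    num = np.sum(np.minimum(in_interval, p_and_c))
    den = np.sum(np.minimum(in_interval, mu_emergency(admittance)))
    return mu_most(num / den) if den > 0 else 0.0

wait = np.array([25, 80, 100, 10])
adm = np.array(["emergency", "emergency", "ambulatory", "emergency"])
ti = np.array([1.0, 1.0, 1.0, 1.0])   # crisp membership to the time interval Ti
print(truth_type2(wait, adm, ti))
```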
This makes for a similar truth evaluation process as before, where truth degree for ( 7) is computed following [17] and for (8) with (6) substituting P for R in both cases.\nThese descriptions give an easy understanding of the behaviour and different paths (control-flow perspective) patients follow in the process, plus, the addition of the wait time between events (time perspective) allows medical experts to determine whether the behavior of the process is being as expected, where excessive wait times are happening and how activities relate among them." }, { "figure_ref": [], "heading": "C. Deviance protoforms", "publication_ref": [ "b13", "b9", "b10", "b9", "b4", "b10" ], "table_ref": [], "text": "These protoforms aim at putting in relevance attributes which may be causing deviance over other attributes. For example \"In year 2019, most patients had a normal waiting time between the MS session of the patient and its intervention. However, several patients with emergency admittance had a short waiting time\". This protoform is indeed a composite protoform, obtained by composing two protoforms: a type-I general summary protoform with a type-II contrasting protoform, through a semantic relation [14]. And it is defined as:\nIn T, Q 1 patients had attribute P 1 . However, Q 2 patients with attribute C had attribute P 2 (10) A particular case of this protoform is one in which the first statement is made over the expected value of some attribute, defined by experts i.e. \"In year 2020, the waiting time between the MS session of a patient and its intervention is expected to be around 25 days. However, most patients from ambulatory admittance had a longer waiting time\":\nIn T, P 1 is expected. However, Q 2 patients with attribute C had attribute P 2 (11) For these protoforms, the truth value may be derived from the aggregation of the truth values of its constituents through any t-norm. For simplicity and consistency we propose the use of the t-norm minimum. If we refer to the general protoform as S1 and to the contrasting protoform as S2 the truth value of the deviance protoform (10) can be computed as:\nT = T (S1) ∧ T (S2)(12)\nwhere T (S1) and T (S2) can be computed with ( 4) and (5).\nFor protoform (11) where P 1 is the expected value defined by the experts, there is no need to assess the truth degree of S1, as it will be always maximal. In this case, the truth degree of the composite protoform is T (S2).\nOther protoform combinations are possible. Behavior deviance descriptions like \"In year 2020, in most cases, the intervention of a patient takes place shortly after its MS session. However, most patients with a low number of medical tests a second MS session takes place after the first MS session is performed\" are of high interest for medical experts, as they allow them to detect bottlenecks and unexpected behaviours that would otherwise remain unknown. Also, type-II protoforms could be used for both protoforms, allowing to compare different categories of patients for some attribute, e.g. \"In year 2020, most male patients had a short waiting time between the MS session of the patient and its intervention. However, most female patients had a normal waiting time between the MS session of the patient and its intervention\". These descriptions allow medical experts to easily grasp if differences between groups of patients exist for certain attribute; as it may be their sex, type of admittance, treatment patient is being given, etc." }, { "figure_ref": [], "heading": "D. 
Generation Pipeline", "publication_ref": [], "table_ref": [], "text": "On the one hand, in LDD approaches, linguistic summaries are generated by a search (exhaustive or non-exhaustive) through the semantic space; the power set of all protoform instances that can be built using the defined quantifiers, qualifiers and summarizers guided by quality measures (truth value, strength of relation, etc.). On the other hand, D2T and NLG systems, as our proposal, follow a pipeline where the main stages of the generation process related to handling of data (data interpretation and document planning) use expert knowledge to determine which messages must be included and realized into the final text. This expert knowledge usually takes the form of sets of rules, but other approaches as machine learning, or statistical tests which in our case are used to determine whether a deviance protoform may present relevant information to the user, can be used." }, { "figure_ref": [], "heading": "IV. VALIDATION AND CONCLUSIONS", "publication_ref": [ "b23", "b11" ], "table_ref": [], "text": "In this section we present the assessment of the proposed model in a real domain of application (activities and attributes of the patients of a cardiac valvulopathy unit). The assessment was conducted by medical experts of the Cardiology Department, University Hospital of Santiago de Compostela through a questionnaire, created by taking general ideas of the Technology Acceptance Model (TAM) [24] adapted to linguistic summarization that was used in [12].\nInstances of each type of protoform are presented to the experts to assess the degree to which protoforms provide useful information in a comprehensible way (examples can be found in previous sections). Furthermore, the process model is also included to assess whether natural language descriptions are preferable or not to graphical representations. Finally, general questions are asked in order to determine f the use of natural language descriptions of processes in the health-care domain is found useful. All questions are asked on a five-level Likert scale ranging from 'strongly disagree' to 'strongly agree'.\nResults show most protoforms are found to provide interesting information, except those cases where routine information was given. However, by proposing the same protoforms with different data, we were able to recall that the perceived usefulness is only found lower when protoforms convey information medical experts already know and not because the proposed protoforms were not correct. When the information conveyed in the description is data unknown to medical experts, descriptions were labelled as really interesting. All protoforms, except one case where the realization was not suitable, were found comprehensible and easy to understand. In all cases where natural language descriptions were confronted with graphical representations, a clear preference was shown for natural language descriptions. From the general questions, medical experts clearly stated that natural language descriptions are useful, give them a better understanding of what is happening in the process, allow them to complete tasks quicker, increase the quality of their work and increase their effectiveness.\nIn this paper we present an approach to obtain natural language descriptions of health-care processes. 
We propose a series of protoforms which include temporal and causal information from processes as well as patient attributes, that are able to quantify attributes in time during a process life-span, recall causal relations and temporal distances between events, and describe whether differences exist in attributes between different groups of patients. By introducing the temporal dimension through imprecise descriptions of frequency and temporal characteristics of attributes and activities and through the composition of protoforms, the semantics and expressiveness of our proposal is greatly enhanced. We propose to generate the descriptions using a novel approach based the D2T pipeline using process-mining techniques and expert knowledge. A real health-care use case is presented, showing the potential of the proposed protoforms for providing natural language descriptions addressed to cardiology specialists about activities and attributes of the patients of a cardiac valvulopathy unit." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "Thanks to Dr. Carlos Peña and Dr. Violeta González from the Department of Cardiology, University Clinical Hospital of Santiago de Compostela, SERGAS, Biomedical Research Center in the Cardiovascular Diseases Network (CIBER-CV), for providing the anonymized data and validating the proposal." } ]
In this paper, we propose a series of fuzzy temporal protoforms in the framework of the automatic generation of quantitative and qualitative natural language descriptions of processes. The model includes temporal and causal information from processes and attributes, quantifies attributes in time during the process life-span and recalls causal relations and temporal distances between events, among other features. Through integrating process mining techniques and fuzzy sets within the usual Data-to-Text architecture, our framework is able to extract relevant quantitative temporal as well as structural information from a process and describe it in natural language involving uncertain terms. A real use-case in the cardiology domain is presented, showing the potential of our model for providing natural language explanations addressed to domain experts.
Fuzzy Temporal Protoforms for the Quantitative Description of Processes in Natural Language
[ { "figure_caption": "Figure 1 .1Figure 1. Simplified model of the valvulopathy process represented with the InVerbis Analytics visualization tool [20].", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "TYPES OF FUZZY PROTOFORMS DESCRIBED IN THE LDD LITERATURE", "figure_data": "AuthorsYearProtoformVerbalized exampleCariñena", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" } ]
Yago Fontenla-Seco; Alberto Bugarín; Manuel Lama
[ { "authors": "W M P Van Der Aalst", "journal": "Springer", "ref_id": "b0", "title": "Process Mining: Data Science in Action", "year": "2016" }, { "authors": "E Reiter; R Dale", "journal": "Cambridge University Press", "ref_id": "b1", "title": "Building Natural Language Generation Systems", "year": "2000" }, { "authors": "A Ramos-Soto", "journal": "Fuzzy Sets and Systems", "ref_id": "b2", "title": "On the role of linguistic descriptions of data in the building of natural language generation systems", "year": "2016" }, { "authors": "M Petre", "journal": "Commun. ACM", "ref_id": "b3", "title": "Why looking isn't always seeing: Readership skills and graphical programming", "year": "1995-06" }, { "authors": "A S Law", "journal": "Jour. Clinical Monitoring Computing", "ref_id": "b4", "title": "A comparison of graphical and textual presentations of time series data to support medical decision making in the neonatal intensive care unit", "year": "2005" }, { "authors": "E Reiter", "journal": "ACL", "ref_id": "b5", "title": "An Architecture for Data-to-Text Systems", "year": "2007" }, { "authors": "R R Yager", "journal": "Information Sciences", "ref_id": "b6", "title": "A new approach to the summarization of data", "year": "1982" }, { "authors": "L A Zadeh", "journal": "", "ref_id": "b7", "title": "A prototype-centered approach to adding deduction capability to search engines-the concept of protoform", "year": "2002" }, { "authors": "A Ramos-Soto", "journal": "IEEE Transactions on Fuzzy Systems", "ref_id": "b8", "title": "Linguistic descriptions for automatic generation of textual short-term weather forecasts on real prediction data", "year": "2015" }, { "authors": "J Kacprzyk", "journal": "Springer", "ref_id": "b9", "title": "Fuzzy Linguistic Summaries of Databases for an Efficient Business Data Analysis and Decision Support", "year": "2002" }, { "authors": "R Castillo-Ortega", "journal": "Multiple-Valued Logic and Soft Computing", "ref_id": "b10", "title": "A fuzzy approach to the linguistic summarization of time series", "year": "2011" }, { "authors": "R M Dijkman; A Wilbik", "journal": "Inf. 
Syst", "ref_id": "b11", "title": "Linguistic summarization of event logs -A practical approach", "year": "2017" }, { "authors": "A Wilbik; R M Dijkman", "journal": "", "ref_id": "b12", "title": "Linguistic summaries of process data", "year": "2015" }, { "authors": "A Ramos-Soto; P Martín-Rodilla", "journal": "Fuzzy Sets Syst", "ref_id": "b13", "title": "Enriching linguistic descriptions of data: A framework for composite protoforms", "year": "2021" }, { "authors": "M Delgado", "journal": "Fuzzy Sets and Systems", "ref_id": "b14", "title": "Fuzzy quantification: a state of the art", "year": "2014" }, { "authors": "A Cascallar-Fuentes", "journal": "", "ref_id": "b15", "title": "An experimental study on the use of fuzzy quantification models for linguistic descriptions of data", "year": "2020" }, { "authors": "P Cariñena", "journal": "", "ref_id": "b16", "title": "A language for expressing expert knowledge using fuzzy temporal rules", "year": "1999" }, { "authors": "R J Almeida", "journal": "", "ref_id": "b17", "title": "Linguistic summaries of categorical time series for septic shock patient data", "year": "2013" }, { "authors": "N Marín; D Sánchez", "journal": "Fuzzy Sets Syst", "ref_id": "b18", "title": "On generating linguistic descriptions of time series", "year": "2016" }, { "authors": "", "journal": "", "ref_id": "b19", "title": "InVerbis Analytics", "year": "2021-05" }, { "authors": "R S Mans", "journal": "Springer", "ref_id": "b20", "title": "Process mining in healthcare: Data challenges when answering frequently posed questions", "year": "2013" }, { "authors": "B A Carabello; W J Paulus", "journal": "The Lancet", "ref_id": "b21", "title": "Aortic stenosis", "year": "2009" }, { "authors": "J F Allen", "journal": "", "ref_id": "b22", "title": "Maintaining knowledge about temporal intervals", "year": "1983" }, { "authors": "F D Davis", "journal": "MIS Quarterly", "ref_id": "b23", "title": "Perceived usefulness, perceived ease of use, and user acceptance of information technology", "year": "1989" } ]
[ { "formula_coordinates": [ 2, 409.01, 99.93, 154.02, 8.96 ], "formula_id": "formula_0", "formula_text": "Q X s are A(1)" }, { "formula_coordinates": [ 2, 404.99, 247.13, 158.05, 8.96 ], "formula_id": "formula_1", "formula_text": "Q BX s are A(2)" }, { "formula_coordinates": [ 2, 345.46, 411.29, 217.58, 56.47 ], "formula_id": "formula_2", "formula_text": "µ T [a,b,c,d] (x) =          0, (x ≤ a) or (x > d) x-a b-a a < x ≤ b 1, b < x ≤ c d-x d-c c < x ≤ d(3)" }, { "formula_coordinates": [ 4, 97.47, 582.65, 202.56, 8.96 ], "formula_id": "formula_3", "formula_text": "In T i, Q patients had attribute P(4)" }, { "formula_coordinates": [ 4, 296.15, 605.34, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 343.32, 500.42, 219.72, 27.9 ], "formula_id": "formula_5", "formula_text": "T = µ Q n i=1 µ T i (p i ) ∧ µ P (p i ) ∧ µ C (p i ) n i=1 µ T (p i ) ∧ µ C (p i ) (6)" }, { "formula_coordinates": [ 5, 128.43, 154.47, 171.59, 8.96 ], "formula_id": "formula_6", "formula_text": "In T i, in Q cases R(7)" }, { "formula_coordinates": [ 5, 296.15, 173.46, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 5, 61.1, 271.11, 57.68, 16.45 ], "formula_id": "formula_8", "formula_text": "r a,b (x) = 0," }, { "formula_coordinates": [ 5, 395.11, 226.09, 167.93, 8.96 ], "formula_id": "formula_9", "formula_text": "T = T (S1) ∧ T (S2)(12)" } ]
10.18653/v1/2020.acl-main.582
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b17", "b43", "b2", "b19", "b23", "b35", "b5", "b0", "b22", "b5", "b30", "b7", "b39", "b11", "b5", "b30", "b7", "b39", "b39", "b11", "b10", "b27", "b44", "b21", "b22" ], "table_ref": [], "text": "Aspect-based sentiment analysis (ABSA) is the task of analyzing people's sentiments at the aspect level. It often involves several sentiment elements, including aspects, opinions, and sentiments (Liu, 2012;Zhang et al., 2022). For instance, given the sentence \"The apple is sweet.\", the aspect is apple, its opinion is sweet, and the corresponding sentiment polarity is Positive. ABSA has attracted increasing attention in the last decade, and various tasks have been proposed to extract either single or multiple sentiment elements under different scenarios. For example, aspect sentiment classification (ASC) predicts the sentiment polarity of a given aspect target (Chen et al., 2017;Li et al., 2018a;Xu et al., 2020a) and aspect term extraction (ATE) extracts aspects given the sentence (Li et al., 2018b;Liu et al., 2015), while aspect sentiment triplet extraction (ASTE) predicts all three elements in the triplet format (Peng et al., 2020;Xu et al., 2021).\nThe main research line of ABSA focuses on solving various tasks within a specific domain. However, in real-world applications, such as Ecommerce websites, there often exist a wide variety of domains. Existing methods often struggle when applying models trained in one domain to unseen domains, due to the variability of aspect and opinion expressions across different domains (Ding et al., 2017;Wang andPan, 2018, 2019). Moreover, manually labeling data for each domain can be costly and time-consuming, particularly for ABSA requiring fine-grained aspect-level annotation. This motivates the task of cross-domain ABSA, where only labeled data in the source domain is available and the knowledge is expected to be transferable to the target domain that only has unlabeled data.\nTo enable effective cross-domain ABSA, domain adaptation techniques (Blitzer et al., 2006;Pan and Yang, 2010) are employed to transfer learnt knowledge from the labeled source domain to the unlabeled target domain. They either focus on learning domain-agnostic features (Ding et al., 2017;Wang and Pan, 2018;Li et al., 2019c), or adapt the training distribution to the target domain (Gong et al., 2020;Yu et al., 2021;Li et al., 2022). However, the majority of these works are based on discriminative models and need task-specific designs, making a cross-domain model designed for one ABSA task difficult to be extended for other tasks (Ding et al., 2017;Wang and Pan, 2018;Li et al., 2019c;Gong et al., 2020). In addition, some methods further require external resources, such as domain-specific opinion lexicons (Yu et al., 2021), or extra models for augmenting pseudo-labeled target domain data (Yu et al., 2021;Li et al., 2022), which narrows their application scenarios.\nIn a recent research line, pre-trained generative models like BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) have demonstrated impressive power in unifying various ABSA tasks without any task-specific design and external resources. By formulating each task as a sequence-to-sequence problem and producing the desired label words, i.e., the desired sentiment elements, they achieve substantial improvements on various ABSA tasks (Zhang et al., 2021b,c;Yan et al., 2021;Mao et al., 2022). 
Despite their success in supervised in-domain settings, their effectiveness has yet to be verified in the cross-domain setting. Moreover, unlabeled data of the target domain, which is usually easy to collect, has shown to be of great importance for bringing in domain-specific knowledge (Pan and Yang, 2010). How to exploit such data with the generative formulation remains a challenge.\nTowards this end, we propose a Bidirectional Generative Cross-domain ABSA (BGCA) framework to fully exploit generative methods for various cross-domain ABSA tasks. BGCA employs a unified sequence-to-sequence format but contains two reverse directions: text-to-label and label-to-text. The text-to-label direction converts an ABSA task into a text generation problem, using the original sentence as input and a sequence of sentiment tuples as output. After training on the source labeled data D S , the model can then directly conduct inference on the unlabeled data x T of the target domain D T to get the prediction ŷT . The prediction can be used as pseudo-labeled data to continue-train the text-to-label model. However, ŷT is inevitably less accurate due to the domain gap between the source and target domains. This is where the reverse direction, i.e., label-to-text, plays its role. Specifically, we first reverse the order of input and output from the text-to-label stage of the source domain to train a label-to-text model. Then this model takes the prediction ŷT as input and generates a coherent natural language text xT that contains the label words of ŷT . Note that even though the prediction ŷT could be inaccurate regarding the original unlabeled data x T , the generated sentence xT can plausibly well match with ŷT . This is because the label-to-text model was trained to generate an output text that can appropriately describe the input labels. Consequently, ŷT , drawn from the target domain, is able to introduce in-domain knowledge, thereby enhancing the overall understanding of the domain-specific information. In addition, xT aligns more closely with ŷT compared to x T , which effectively minimizes the prediction noise. As such, they can be paired together to create a more accurate and reliable generated dataset. Finally, the generated target data D G and the labeled source data D S can be combined to train the model in the text-to-label direction, which effectively enriches the model knowledge in the target domain.\nOur proposed BGCA framework exhibits some unique advantages. Firstly, it effectively utilizes the unlabeled target domain data by capturing important domain-specific words (i.e., sentiment elements) of the target domain in the first text-to-label stage. In the meantime, it bypasses the issue from the domain gap since it takes the noisy prediction as input and obtains more accurate text-label pairs in the label-to-text stage. Secondly, we fully leverage generative models' encoding and generating capabilities to predict labels and generate natural sentences within a unified framework, which is infeasible for discriminative models. This allows the model to seamlessly switch between the roles of predictor and generator. 
Finally, BGCA utilizes a shared model to perform training in both directions, allowing for a more comprehensive understanding of the association between sentences and labels.\nIn summary, our main contributions are: (1) We evaluate generative methods on four crossdomain ABSA tasks, including aspect term extraction (ATE), unified ABSA (UABSA), aspect opinion pair extraction (AOPE), and aspect sentiment triplet extraction (ASTE), and find that the generative approach is an effective solution. Without any unlabeled target domain data, it can already achieve better performance than previous discriminative methods. (2) We propose a novel BGCA framework to effectively utilize unlabeled target domain data and train a shared model in reverse directions. It can provide high-quality augmented data by generating coherent sentences given noisy labels and a unified solution to learn the association between sentences and labels thoroughly. (3) Our proposed method achieves new state-of-the-art results on all tasks, which validate the effectiveness and generality of our framework.\nThe manager was [rude] NEG and handled the situation extremely poorly." }, { "figure_ref": [], "heading": "T5 text-to-label", "publication_ref": [], "table_ref": [], "text": "The team also assists you very [nicely] POS when choosing which computer is right for you. Figure 1: Overview of our proposed BGCA framework, which includes text-to-label and label-to-text directions.\nWe take examples from the ASTE task for illustration. Underlining and square brackets indicate gold aspects and gold opinions, respectively. The gold labels for the target domain are shown for demonstration only. The generated dataset will be combined with the labeled source dataset to conduct final training in a text-to-label manner." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b43", "b9", "b4", "b5", "b38", "b44", "b32", "b7", "b39", "b11", "b21", "b44", "b18" ], "table_ref": [], "text": "Cross-domain ABSA Cross-domain ABSA aims to utilize labeled data from a source domain to gain knowledge that can be applied to a target domain where only unlabeled data is available. The main research line of cross-domain ABSA involves two paradigms: feature-based adaptation and data-based adaptation (Zhang et al., 2022). Feature-based adaptation focus on learning domain-invariant features. Some have utilized domain-independent syntactic rules to minimize domain gap (Jakob and Gurevych, 2010;Chernyshevich, 2014;Ding et al., 2017;Wang andPan, 2018, 2019), while others have employed domain discriminators to encourage the learning of universal features (Li et al., 2019c;Yang et al., 2021;Zhou et al., 2021;Zhang et al., 2021a). On the other hand, data-based adaptation aims to adapt the training data distribution to the target domain.\nThey either adjust the importance of individual training instances through re-weighting (Xia et al., 2014;Gong et al., 2020), or generate additional training data using another pre-trained model (Yu et al., 2021;Li et al., 2022). Despite their effectiveness, most of these works require task-specific design or external resources, preventing easy extensions to other cross-domain ABSA tasks.\nGenerative ABSA Recently, generative models have obtained remarkable results in unifying various ABSA tasks. 
By formulating each ABSA task as a sequence-to-sequence problem, generative models can output the desired sentiment element words (Zhang et al., 2021c;Mao et al., 2022)\nTask Output Tuple Example Output ATE (a) (apple) UABSA (a, s) (apple, positive) AOPE (a, o) (apple, sweet) ASTE (a, o, s) (apple, sweet, positive)\nTable 1: Output tuple of various ABSA tasks, and example output given the sentence \"The apple is sweet.\", where a, o and s denote aspect, opinion and sentiment.\nor their indexes (Yan et al., 2021) directly. In addition, some works successfully adopt the generative model on single ABSA tasks by converting the task to a natural language generation or paraphrase generation problem (Liu et al., 2021;Zhang et al., 2021b). Nevertheless, their potential is not explored under the cross-domain setting." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "To examine the generality of our proposed framework, we consider four ABSA tasks, including ATE, UABSA, AOPE, and ASTE. Given a sentence x = [w 1 , w 2 , ..., w n ] with n words, the task is to predict a set of sentiment tuples denoted as\ny = {t i } |t| i=1\n, where each tuple t i may include a single element from aspect (a), opinion (o), and sentiment (s), or multiple elements in pair or triplet format. The element within each tuple depends on the specific ABSA task, detailed in Table 1.\nUnder the cross-domain ABSA setting, the training dataset consists of a set of labeled sentences from a source domain\nD S = x S i , y S i N S\ni=1 and a set of unlabeled sentences from a target domain\nD T = {x T j } N T j=1 .\nThe goal is to leverage both D S and D T to train a model, which can predict the label of test data from the target domain." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "We introduce our Bidirectional Generative Crossdomain ABSA (BGCA) framework in this section.\nAs shown in Figure 1, it contains two sequential stages, namely text-to-label, and label-to-text, to obtain high-quality augmented data. The text-tolabel direction (on the top part) converts various tasks into a unified format and can produce noisy predictions on the unlabeled target data, whereas the label-to-text direction (on the bottom part) utilizes such noisy predictions to generate natural sentences containing the given labels so as to augment high-quality training data and enriches model knowledge of the target domain." }, { "figure_ref": [], "heading": "Text-to-label", "publication_ref": [ "b1" ], "table_ref": [], "text": "The text-to-label direction unifies different ABSA tasks into a sequence-to-sequence format. It takes a sentence as input and outputs a sequence of sentiment tuples extracted from the sentence. We annotate the output sequence with predefined tagger tokens to ensure a valid format, which can prevent decoding ambiguity. The tagger tokens are k continuous tokens { m j } k j=1 initialized by embedding of the words {m j } k j=1 . Specifically, we use aspect , opinion to mark aspect and opinion terms, and pos , neu , neg to annotate positive, neutral and negative sentiments. The output formats with the continuous taggers for different tasks are: ATE :\nx ⇒ aspect a UABSA : x ⇒ pos a AOPE :\nx ⇒ aspect a opinion o ASTE :\nx ⇒ pos a opinion o\n(1)\nwhere a and o denote the aspect and the opinion terms, respectively. 
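For concreteness, the following minimal sketch in Python shows how a set of sentiment tuples can be linearized into the tagged target sequence of equation (1) and recovered from it. The literal tagger strings, helper names, and the regular expression are illustrative assumptions rather than the exact tokens or code used by BGCA, whose taggers are trainable continuous embeddings.

```python
import re

# Placeholder surface forms for the special tagger tokens described above.
ASPECT, OPINION = "<aspect>", "<opinion>"
SENTIMENT = {"positive": "<pos>", "neutral": "<neu>", "negative": "<neg>"}

def linearize(tuples, task):
    """Turn sentiment tuples into the target sequence of equation (1) for one ABSA task."""
    parts = []
    for t in tuples:
        if task == "ATE":                      # (a)
            parts += [ASPECT, t[0]]
        elif task == "UABSA":                  # (a, s)
            parts += [SENTIMENT[t[1]], t[0]]
        elif task == "AOPE":                   # (a, o)
            parts += [ASPECT, t[0], OPINION, t[1]]
        elif task == "ASTE":                   # (a, o, s)
            parts += [SENTIMENT[t[2]], t[0], OPINION, t[1]]
    return " ".join(parts)

def parse_aste(sequence):
    """Recover (aspect, opinion, sentiment) triplets with a simple regular expression."""
    inv = {v: k for k, v in SENTIMENT.items()}
    pattern = r"(<pos>|<neu>|<neg>)\s+(.*?)\s+<opinion>\s+(.*?)(?=\s+<pos>|\s+<neu>|\s+<neg>|$)"
    return [(a.strip(), o.strip(), inv[s]) for s, a, o in re.findall(pattern, sequence)]

print(linearize([("apple", "sweet", "positive")], "ASTE"))  # -> "<pos> apple <opinion> sweet"
print(parse_aste("<pos> apple <opinion> sweet"))            # -> [('apple', 'sweet', 'positive')]
```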
Taking ASTE as an example, we use the format of pos followed by the extracted aspect word(s), and opinion followed by the extracted opinion word(s) to annotate the positive opinion term expressed on the corresponding aspect term in a sentence. Based on this format, we are able to extract the aspect, opinion, and sentiment from the output sequence to form a complete sentiment tuple through simple regular expressions.\nThe text-to-label direction is trained on {x, y} pairs from D S by minimizing the standard maximum likelihood loss:\nL = - l i=-1 log p (y i | x; y ≤i-1 ) ,(2)\nwhere l denotes the sequence length.\nAfter training on the source labeled data D S , we can directly conduct inference on the target domain D T to extract the sentiment tuples ŷT . During the inference, we employ constrained decoding (Cao et al., 2021) to ensure each generated token ŷT i of the output sequence is selected from the input sentence or the predefined tagger tokens, in order to prevent invalid output sequences and ensure that the output is relevant to the specific domain:\nŷT i = argmax y j ∈U p y j | x T ; ŷT ≤i-1 ,(3)\nwhere\nU = {w i } n i=1 ∪ { m j } k j=1 ." }, { "figure_ref": [], "heading": "Label-to-text", "publication_ref": [ "b22" ], "table_ref": [], "text": "Although the text-to-label model can be directly applied for prediction on the target domain, it does not exploit the unlabeled target domain data in the training process, which has been proven to be crucial for incorporating target-domain knowledge (Pan and Yang, 2010). One straightforward way to eliminate this problem is to use (x T , ŷT ) as pseudo-labeled data to continue training the above text-to-label model. However, such naive self-training suffers from the noise of ŷT . Our label-to-text stage alleviates this weakness by pairing the label ŷT with a new sentence that matches this label better. Specifically, we continue to train the above model using the labeled dataset from D S . Nevertheless, the training pairs are reversed into the label-to-text direction, where the input is now the sequence y with sentiment tuples, and the output is the original sentence x:\nATE : aspect a ⇒ x UABSA : pos a ⇒ x AOPE : aspect a opinion o ⇒ x ASTE : pos a opinion o ⇒ x (4)\nSimilarly, the label-to-text direction is trained on {y, x} pairs from D S by minimizing the standard maximum likelihood loss:\nL = - l i=-1 log p (x i | y; x ≤i-1 ) ,(5)\nand l refers to the sequence length.\nAfter training, we use the sentiment tuples ŷT , extracted from a target domain unlabeled data x T , \nxT i = argmax x j ∈V p x j | ŷT ; xT ≤i-1 ,(6)\nwhere V denotes the vocabulary of the model. The label-to-text stage thus augments a generated dataset\nD G = xT i , ŷT i N T i=1\n. By considering each natural sentence as a combination of context and sentiment elements, we can find that the generated sentence's context is produced by a model pre-trained on large-scale corpora and fine-tuned on the labeled source domain, while its sentiment elements such as aspects and opinions come from the target domain. Therefore, D G can play the role of an intermediary which connects the source and target domains through the generated sentences.\nAs previously mentioned, due to the gap between source and target domains, the text-to-label model's prediction on unlabeled target data is noisy. Instead of improving the text-to-label model, which may be difficult, our label-to-text stage creates a sentence xT that is generated specifically for describing ŷT . 
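To make the contrast between the two decoding regimes explicit, the sketch below applies the vocabulary constraint of equation (3) to a single greedy step: candidate tokens are restricted to the union U of source tokens and tagger tokens, whereas the label-to-text direction of equation (6) searches the full vocabulary V. This is only a schematic illustration; the function name and tensor layout are assumptions, not the constrained-decoding implementation of Cao et al. (2021).

```python
import torch

def constrained_step(logits, input_token_ids, tagger_token_ids):
    """One greedy decoding step under the constraint of equation (3):
    the next token must come from the source sentence or the tagger tokens.
    `logits` has shape [vocab_size]; the id arguments are plain Python iterables of ints."""
    allowed = torch.tensor(sorted(set(input_token_ids) | set(tagger_token_ids)))
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed] = 0.0                       # keep allowed ids, forbid everything else
    return int(torch.argmax(logits + mask))   # argmax restricted to U
```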
Thus, even with the presence of noise in the extracted labels ŷT , the label-to-text stage offers a means of minimizing the negative impact and ultimately yields a more accurate pseudo-training sample. Finally, since these two stages train a shared model based on sentences and labels from two directions, it gives the model a more comprehensive understanding of the association between sentences and labels, leading to a more accurate prediction of labels for given sentences." }, { "figure_ref": [], "heading": "Training", "publication_ref": [], "table_ref": [], "text": "Ideally, the generated dataset D G should fulfil the following requirements: 1) the natural sentence should exclusively contain sentiment elements that are labeled in the sentiment tuples, and should not include any additional sentiment elements; 2) the natural sentence should accurately convey all the sentiment elements as specified in the sentiment tuples without any omissions; 3) the sentiment tuples should be in a valid format and can be mapped back to the original labels; Therefore, we post-process {x t , ŷt } pairs from D G by: 1) filtering out pairs with ŷt in invalid format or contains words not present in xt ; 2) utilizing the text-to-label model to eliminate pairs where ŷt is different from the model's prediction on xt . In the end, we combine the source domain D S , and the generated dataset D G as the ultimate training dataset and continue to train the same model in a text-to-label manner as outlined in Section 4.1." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b26", "b26", "b25", "b24", "b28", "b8", "b39", "b6", "b7", "b39", "b27", "b29" ], "table_ref": [ "tab_1", "tab_2", "tab_3" ], "text": "Datasets We evaluate the proposed framework on four cross-domain ABSA tasks, including ATE, UABSA, AOPE, and ASTE. Datasets of these tasks mainly consist of four different domains, which are Laptop (L), Restaurant (R), Device (D), and Service (S). L, also referred to as L14, contains laptop reviews from SemEval ABSA challenge 2014 (Pontiki et al., 2014). R is a set of restaurant reviews based on SemEval ABSA challenges 2014, 2015, and 2016 (Pontiki et al., 2014(Pontiki et al., , 2015(Pontiki et al., , 2016)), denoted as R14, R15, R16 for the AOPE and ASTE tasks. D contains digital device reviews provided by Toprak et al. (2010). S includes reviews from web service, introduced by Hu and Liu (2004). Specifically, we can perform the ATE and UABSA tasks on all four domains, whereas the AOPE and ASTE tasks can be conducted on L and R domains, with R being further divided into R14, R15, and R16. We follow the dataset setting provided by Yu et al. (2021) for the ATE and UABSA task, and Fan et al. (2019), Xu et al. (2020b) for the AOPE, ASTE task respectively. We show the statistics in Table 2.\nSettings We consider all possible transfers between each pair of domains for each task. Following previous work (Li et al., 2019a,b;Gong et al., 2020;Yu et al., 2021), we remove D→L and L→D for the ATE and UABSA tasks due to their domain similarity. Additionally, we exclude transfer pairs between R14, R15, and R16 for the AOPE and ASTE tasks since they come from the same restaurant domain. As a result, there are ten transfer pairs for the ATE and UABSA tasks, and six transfer pairs for the AOPE and ASTE tasks, detailed in Table 3 and4. 
We denote our proposed framework as BGCA label-to-text , which includes the bidirectional augmentation and utilizes the augmented data for training the final model. To investigate the effectiveness of the generative framework for cross-domain ABSA tasks, we also report the results with a single text-to-label direction, denoted as BGCA text-to-label , which is essentially a zero-shot cross-domain method.\nMetrics We choose the Micro-F1 score as the evaluation metric for all tasks. A prediction is counted as correct if and only if all the predicted elements are exactly matched with gold labels.\nImplementation Details We choose T5 (Raffel et al., 2020) as our backbone model and use T5base checkpoint from huggingface 1 . It is a transformer model (Vaswani et al., 2017) that utilizes the encoder-decoder architecture where all the pre-1 https://github.com/huggingface/ training tasks are in sequence-to-sequence format.\nFor simplicity, we use the Adam optimizer with a learning rate of 3e-4, a fixed batch size of 16, and a fixed gradient accumulation step of 2 for all tasks. Regarding training epochs for text-to-label, label-to-text, and final training, we search within a range in {15, 20, 25, 30} using the validation set of the source domain for selection. We train our model on a single NVIDIA V100 GPU." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b5", "b30", "b44", "b7", "b39", "b33", "b3", "b20", "b35" ], "table_ref": [], "text": "For cross-domain ATE and UABSA tasks, we follow previous works to compare with established baselines including Hier-Joint (Ding et al., 2017), RNSCN (Wang and Pan, 2018), AD-SAL (Li et al., 2019c), AHF (Zhou et al., 2021), BERT B/E -UDA (Gong et al., 2020), and BERT B/E -CDRG (Yu et al., 2021) where BERT B and BERT E refer to models based on the original BERT and the continually trained BERT on large-scale E-commerce data containing around 3.8 million reviews (Xu et al., 2019) For cross-domain AOPE and ASTE tasks, since there is no existing work on these two tasks under the cross-domain setting, we leverage the indomain state-of-the-art models in a zero-shot manner for comparisons, including SDRN (Chen et al., 2020) for AOPE, and RoBMRC (Liu et al., 2022), SpanASTE (Xu et al., 2021) for ASTE task. In addition, we also refine RoBMRC and SpanASTE to work for the AOPE task by simply omitting the prediction of sentiment polarity.\nMost of the above baselines are discriminative methods based on the pre-trained BERT model. To enable a fair comparison, we also employ GAS (Zhang et al., 2021c) for all four ABSA tasks, which is a strong unified generation method based on the same pre-trained generative model, i.e., T5base, as our proposed BGCA method." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "We report the main results for the ATE and UABSA tasks in Table 3 and the AOPE and ASTE tasks in Table 4. We have the following observations: 1) Our method with a single text-to-label direction (BGCA text-to-label ) establishes a strong baseline for cross-domain ABSA tasks. Compared to discriminative baseline methods without external resources, it shows an improvement of 0.24%, 2.26%, 4.5%, and 5.4% on the cross-domain ATE, UABSA, AOPE, and ASTE tasks, respectively. This demonstrates that generative models can actually generalize well across different domains with our designed continuous tagger to indicate the desired sentiment elements. 
2) Our proposed framework BGCA label-to-text with bidiretional augmentations achieves new state-of-the-art results on all four cross-domain ABSA tasks. It outperforms the previous best models by 2.15% and 2.61% on the ATE and UABSA tasks and by 3.28% and 2.07% on AOPE and ASTE. Notably, it requires no external resources and can be seamlessly applied to all crossdomain ABSA tasks. This verifies the generalizability and effectiveness of our proposed bidirectional generation-based augmentation method. 3) Compared to other generation-based methods such as GAS and BGCA text-to-label , BGCA label-to-text outperforms all of them on four tasks, indicating that the label-to-text direction can effectively utilize the unlabeled target data and leverage the potential of generative models." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b33", "b7", "b39" ], "table_ref": [ "tab_4", "tab_4", "tab_4" ], "text": "We conduct ablation studies to analyze the effectiveness of each component in BGCA. Results of different model variants are reported in Table 5.\nAblation on label-to-text generation To investigate the effectiveness of the label-to-text direction, and verify our assumption that it can fix the noisy prediction issue, we replace it with the selftraining method and denote it as \"self-training\" in Table 5. Specifically, we use the pseudo labels of the unlabeled target domain data extracted by the text-to-label stage to replace our augmented data. As shown in Ablation on unlabeled data utilization Continue training has shown to be an effective method to leverage unlabeled data by conducting pretraining tasks on relevant corpora to capture domain-specific knowledge (Xu et al., 2019;Gong et al., 2020;Yu et al., 2021). We compare it with our method to discuss how to utilize unlabeled data for generative cross-domain ABSA and denote it as \"continue\" in 5. Results on four tasks show that our approach outperforms the non-shared method by an average of 0.6%, suggesting that a shared model owns a better understanding of the association between sentences and labels." }, { "figure_ref": [], "heading": "Further Analysis", "publication_ref": [], "table_ref": [], "text": "Analysis on number of generated samples Figure 2 shows the comparison results over four tasks with different numbers of generated samples. To better analyze the effect of the number of generations, we exclude the source training data and solely use the generated samples for final training. There is an apparent trend of performance improvement with the increasing number of generated samples, revealing that the generated samples can boost cross-domain ability." }, { "figure_ref": [], "heading": "Analysis on improvement types", "publication_ref": [], "table_ref": [], "text": "To understand what types of cases our method improved, we cate-gorize sentences from the test set into three groups: without any aspect, with a single aspect, and with multiple aspects. We conduct our analysis on the cross-domain ATE and UABSA tasks since they contain sentences without any aspect, and evaluate the performance of both the text-to-label and label-to-text settings for each group. We choose sentence-level accuracy as the evaluation metric, i.e., a sentence is counted as correct if and only if all of its sentiment elements are correctly predicted. We present the average accuracy across all transfer pairs in Table 7. 
The text-to-label model has less knowledge of the target domain and thus tends to predict sentences as no aspect, leading to high accuracy in the group without any aspect. However, it also misses many sentiment elements in the other two groups. On the other hand, although label-to-text lies behind text-to-label in the group without any aspect, it significantly improves the performance of sentences with single or multiple aspects. This indicates that the label-to-text model has obtained more target domain knowledge than the text-to-label setting, and thus can identify more sentiment elements." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this work, we extend the generative method to cross-domain ABSA tasks and propose a novel BGCA framework to boost the generative model's cross-domain ability. Specifically, we train a shared generative model in reverse directions, allowing high-quality target domain augmentation and a unified solution to comprehend sentences and labels fully. Experiments on four cross-domain ABSA tasks verify the effectiveness of our method." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "In this paper, we present a bidirectional generative framework for cross-domain ABSA that has achieved outstanding results on four cross-domain ABSA tasks. Although there is only one stage during inference, our method involves multiple training stages, including text-to-label, label-totext, and final training. These additional training stages not only lengthen the training time but also require additional computational resources, which may hinder scalability for large-scale data and result in a burden for the environment." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements S. J. Pan thanks for the support of the Hong Kong Global STEM Professorship. Y. Deng is supported by Alibaba Group through Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (JRI), Nanyang Technological University, Singapore." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Our data and code are publicly available at https://github.com/DAMO-NLP-SG/BGCA." } ]
Cross-domain aspect-based sentiment analysis (ABSA) aims to perform various fine-grained sentiment analysis tasks on a target domain by transferring knowledge from a source domain. Since labeled data only exists in the source domain, a model is expected to bridge the domain gap for tackling cross-domain ABSA. Though domain adaptation methods have proven to be effective, most of them are based on a discriminative model, which needs to be specifically designed for different ABSA tasks. To offer a more general solution, we propose a unified bidirectional generative framework to tackle various cross-domain ABSA tasks. Specifically, our framework trains a generative model in both text-to-label and label-to-text directions. The former transforms each task into a unified format to learn domain-agnostic features, and the latter generates natural sentences from noisy labels for data augmentation, with which a more accurate model can be trained. To investigate the effectiveness and generality of our framework, we conduct extensive experiments on four cross-domain ABSA tasks and present new state-of-the-art results on all tasks.
Bidirectional Generative Framework for Cross-domain Aspect-based Sentiment Analysis
[ { "figure_caption": "The statistics of ATE and UABSA to generate a natural sentence xT incorporating the sentiment information in ŷT . To ensure fluency and naturalness, we decode the whole vocabulary set:", "figure_data": "TaskLATE&UABSA R DSAOPE L14 R14 R15 R16 L14 R14 R15 R16 ASTETrain 3045 3877 2557 1492 1035 1462 678 971 906 1266 605 857Dev30438725514911616376108 219 310 148 210Test800 2158 1279 747343500 325 328 328 492 322 326", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "CDRG † * 53.09 57.96 54.39 40.85 42.96 38.83 45.66 35.06 31.62 34.22 43.46 BGCA text-to-label 54.12 48.08 52.65 33.26 30.67 35.26 44.57 36.01 41.19 36.55 41.24 BGCA label-to-text 56.39 61.69 59.12 43.20 39.76 47.94 45.52 36.40 34.16 36.57 46.07 Results on cross-domain ATE and UABSA tasks. The best results are in bold. Results are the average F1 scores over 5 runs.", "figure_data": "MethodsS→R L→R D→R R→S L→S D→S R→L S→L R→D S→D Avg.ATEHier-Joint †46.39 48.61 42.96 27.18 25.22 29.28 34.11 33.02 34.81 35.00 35.66RNSCN †48.89 52.19 50.39 30.41 31.21 35.50 47.23 34.03 46.16 32.41 40.84AD-SAL †52.05 56.12 51.55 39.02 38.26 36.11 45.01 35.99 43.76 41.21 43.91BERT B -UDA †56.08 51.91 50.54 34.62 32.49 34.52 46.87 43.98 40.34 38.36 42.97BERT B -CDRG †56.26 60.03 52.71 42.36 47.08 41.85 46.65 39.51 32.60 36.97 45.60GAS61.24 53.02 56.44 31.19 32.14 35.72 52.24 43.76 42.24 37.77 44.58BERT E -UDA † *59.07 55.24 56.40 34.21 30.68 38.25 54.00 44.25 42.40 40.83 45.53BERT E -CDRG † * 59.17 68.62 58.85 47.61 54.29 42.20 55.56 41.77 35.43 36.53 50.00BGCA text-to-label60.03 55.39 55.83 36.02 35.43 37.73 54.18 43.45 42.49 37.89 45.84BGCA label-to-text63.20 69.53 65.33 45.86 44.85 54.07 57.13 46.15 37.15 38.24 52.15UABSAHier-Joint †31.10 33.54 32.87 15.56 13.90 19.04 20.72 22.65 24.53 23.24 23.72RNSCN †33.21 35.65 34.60 20.04 16.59 20.03 26.63 18.87 33.26 22.00 26.09AD-SAL †41.03 43.04 41.01 28.01 27.20 26.62 34.13 27.04 35.44 33.56 33.71AHF46.55 43.49 44.57 33.23 33.05 34.96 34.89 29.01 37.33 39.61 37.67BERT B -UDA †47.09 45.46 42.68 33.12 27.89 28.03 33.68 34.77 34.93 32.10 35.98BERT B -CDRG †47.92 49.79 47.64 35.14 38.14 37.22 38.68 33.69 27.46 34.08 38.98GAS54.61 49.06 53.40 30.99 29.64 33.34 43.50 35.12 39.29 35.81 40.48BERT E -UDA † *53.97 49.52 51.84 30.67 27.78 34.41 43.95 35.76 40.35 38.05 40.63BERT E -", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results on cross-domain AOPE and ASTE tasks. The best results are in bold. Results are the average F1 scores over 5 runs.", "figure_data": "MethodsR14→L14 R15→L14 R16→L14 L14→R14 L14→R15 L14→R16 Avg.AOPESDRN45.3937.4538.6647.6341.3446.3642.81RoBMRC52.3646.4443.6154.7048.6855.9750.29SpanASTE51.9048.1547.3061.9755.5863.2654.69GAS57.5853.2352.1764.6060.2666.6959.09BGCA text-to-label58.5454.0651.9964.6158.7467.1959.19BGCA label-to-text60.8255.2254.4868.0465.3170.3462.37ASTERoBMRC43.9040.1937.8157.1345.6252.0546.12SpanASTE45.8342.5040.5757.2449.0255.7748.49GAS49.5743.7845.2464.4056.2663.1453.73BGCA text-to-label52.5545.8546.8661.5255.4361.1553.89BGCA label-to-text53.6445.6947.2865.2758.9564.0055.80MethodsATEUABSA AOPE ASTEAvg.BGCA †52.1546.0762.3755.8054.10-self-training*46.1341.5661.3355.9951.25-continue*46.6342.2258.5654.7050.53-w/o sharing52.0844.7261.6455.7653.55", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation Study. BGCA † represents our BGCA label-to-text setting. 
* denotes replacing the labelto-text stage with the corresponding training method.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Table 5, the performance drops POS was good to excellent along with the [attitude] POS . (service, POS) The [service] POS I received from Toshiba was excellent. [Bottles of wine] POS are cheap and good. (bottles, POS) I love the [bottles] POS they are made out of. Our [waitress] NEU wasn't mean, but not especially warm or attentive either. (waitress, NEG) The [waitress] NEG didn't even answer my question.", "figure_data": "Sentence from RPredictionLabel-to-text GenerationThe [service]", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Examples on L→R from the UABSA task. Gold aspects are marked by square brackets. POS, NEU and NEG denote positive, neutral and negative sentiment.", "figure_data": "55Average F1 (%)35 40 45 50100200 # Generated Samples 300 400500 UABSA ATE AOPE ASTEFigure 2: Comparison results of our method with a dif-ferent number of generations.about three points on average for four tasks. Thisindicates that the pseudo-labeled samples from thetext-to-label model contain more noise. Addinglabel-to-text generation could effectively addressthis issue by generating pseudo-training data withless noise. To further investigate the effectivenessof generated samples, we manually check somesamples on L→R from the UABSA task and showsome representative samples in Table 6. Note thatthe gold labels for the target domain are not avail-able during training, and we display them here forinvestigation only. The first two example's predic-tions either omit an aspect or gives an incompleteaspect, while the third example's prediction givesthe wrong sentiment. However, the label-to-textmodel can generate a correct sentence that appropri-ately describes the prediction, although it is inaccu-rate regarding to the original input sentence. Theseexamples demonstrate how the label-to-text stagecan resolve noisy prediction issues and producehigh-quality target domain data.", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Specifically, we replace", "figure_data": "GroupATE text→label label→text text→label label→text UABSAZero45.3136.4850.0239.18Single41.5347.9935.0243.17Multiple26.6137.2021.9929.59Table 7: Comparison results on cross-domain ATE andUABSA tasks over different sentence groups contain-ing zero, single, or multiple aspects respectively.the label-to-text stage with conducting continue-train on the unlabeled data of the target domain,with the span reconstruction objective as originalT5 pre-training (Raffel et al., 2020). The resultsshow that continue training lies behind our pro-posed method and demonstrate that our frameworkcan effectively utilize unlabeled target domain data.The possible reason may be that continue trainingrequires many training samples, which is infeasiblein cross-domain ABSA scenarios.Ablation on model sharing To demonstrate theadvantages of training a shared model in both direc-tions, we compare it to a method where a model isnewly initialized before each stage of training anddenote it as \"w/o sharing\" in Table", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" } ]
Yue Deng; Wenxuan Zhang; Sinno Jialin Pan; Lidong Bing
[ { "authors": "John Blitzer; Ryan T Mcdonald; Fernando Pereira", "journal": "", "ref_id": "b0", "title": "Domain adaptation with structural correspondence learning", "year": "2006-07-23" }, { "authors": "Nicola De Cao; Gautier Izacard; Sebastian Riedel; Fabio Petroni", "journal": "", "ref_id": "b1", "title": "Autoregressive entity retrieval", "year": "2021-05-03" }, { "authors": "Peng Chen; Zhongqian Sun; Lidong Bing; Wei Yang", "journal": "", "ref_id": "b2", "title": "Recurrent attention network on memory for aspect sentiment analysis", "year": "2017" }, { "authors": "Shaowei Chen; Jie Liu; Yu Wang; Wenzheng Zhang; Ziming Chi", "journal": "", "ref_id": "b3", "title": "Synchronous doublechannel recurrent network for aspect-opinion pair extraction", "year": "2020-07-05" }, { "authors": "Maryna Chernyshevich", "journal": "", "ref_id": "b4", "title": "IHS r&d belarus: Crossdomain extraction of product features using CRF", "year": "2014-08-23" }, { "authors": "Ying Ding; Jianfei Yu; Jing Jiang", "journal": "", "ref_id": "b5", "title": "Recurrent neural networks with auxiliary labels for crossdomain opinion target extraction", "year": "2017-02-04" }, { "authors": "Zhifang Fan; Zhen Wu; Xin-Yu Dai; Shujian Huang; Jiajun Chen", "journal": "", "ref_id": "b6", "title": "Target-oriented opinion words extraction with target-fused neural sequence labeling", "year": "2019" }, { "authors": "Chenggong Gong; Jianfei Yu; Rui Xia", "journal": "", "ref_id": "b7", "title": "Unified feature and instance based domain adaptation for aspect-based sentiment analysis", "year": "2020-11-16" }, { "authors": "Minqing Hu; Bing Liu", "journal": "", "ref_id": "b8", "title": "Mining and summarizing customer reviews", "year": "2004" }, { "authors": "Niklas Jakob; Iryna Gurevych", "journal": "", "ref_id": "b9", "title": "Extracting opinion targets in a single and cross-domain setting with conditional random fields", "year": "2010-10" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "BART: denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "year": "2020-07-05" }, { "authors": "Junjie Li; Jianfei Yu; Rui Xia", "journal": "", "ref_id": "b11", "title": "Generative cross-domain data augmentation for aspect and opinion co-extraction", "year": "2022-07-10" }, { "authors": "Xin Li; Lidong Bing; Wai Lam; Bei Shi; ; ", "journal": "", "ref_id": "b12", "title": "Transformation networks for target-oriented sentiment classification", "year": "2018" }, { "authors": "Xin Li; Lidong Bing; Piji Li; Wai Lam", "journal": "", "ref_id": "b13", "title": "a. 
A unified model for opinion target extraction and target sentiment prediction", "year": "2019" }, { "authors": "Xin Li; Lidong Bing; Piji Li; Wai Lam; Zhimou Yang", "journal": "", "ref_id": "b14", "title": "Aspect term extraction with history attention and selective transformation", "year": "2018" }, { "authors": "Xin Li; Lidong Bing; Wenxuan Zhang; Wai Lam", "journal": "", "ref_id": "b15", "title": "Exploiting BERT for end-to-end aspectbased sentiment analysis", "year": "2019" }, { "authors": "Zheng Li; Xin Li; Ying Wei; Lidong Bing; Yu Zhang; Qiang Yang", "journal": "", "ref_id": "b16", "title": "Transferable end-to-end aspect-based sentiment analysis with selective adversarial learning", "year": "2019" }, { "authors": "Bing Liu", "journal": "Morgan & Claypool Publishers", "ref_id": "b17", "title": "Sentiment Analysis and Opinion Mining", "year": "2012" }, { "authors": "Jian Liu; Zhiyang Teng; Leyang Cui; Hanmeng Liu; Yue Zhang", "journal": "", "ref_id": "b18", "title": "Solving aspect category sentiment analysis as a text generation task", "year": "2021-07-11" }, { "authors": "Pengfei Liu; R Shafiq; Helen M Joty; Meng", "journal": "", "ref_id": "b19", "title": "Fine-grained opinion mining with recurrent neural networks and word embeddings", "year": "2015" }, { "authors": "Shu Liu; Kaiwen Li; Zuhe Li", "journal": "", "ref_id": "b20", "title": "A robustly optimized BMRC for aspect sentiment triplet extraction", "year": "2022-07-10" }, { "authors": "Yue Mao; Yi Shen; Jingchao Yang; Xiaoying Zhu; Longjun Cai", "journal": "", "ref_id": "b21", "title": "Seq2path: Generating sentiment tuples as paths of a tree", "year": "2022-05-22" }, { "authors": "Jialin Sinno; Qiang Pan; Yang", "journal": "IEEE Trans. Knowl. Data Eng", "ref_id": "b22", "title": "A survey on transfer learning", "year": "2010" }, { "authors": "Haiyun Peng; Lu Xu; Lidong Bing; Fei Huang; Wei Lu; Luo Si", "journal": "", "ref_id": "b23", "title": "Knowing what, how and why: A near complete solution for aspect-based sentiment analysis", "year": "2020-02-07" }, { "authors": "Maria Pontiki; Dimitris Galanis; Haris Papageorgiou; Ion Androutsopoulos; Suresh Manandhar; Al-Smadi Mohammad; Mahmoud Al-Ayyoub; Yanyan Zhao; Bing Qin; Orphée De Clercq; Véronique Hoste; Marianna Apidianaki; Xavier Tannier; Natalia Loukachevitch; Evgeniy Kotelnikov; Nuria Bel; Salud María Jiménez-Zafra; Gülşen Eryigit", "journal": "", "ref_id": "b24", "title": "SemEval-2016 task 5: Aspect based sentiment analysis", "year": "2016" }, { "authors": "Maria Pontiki; Dimitris Galanis; Haris Papageorgiou; Suresh Manandhar; Ion Androutsopoulos", "journal": "", "ref_id": "b25", "title": "SemEval-2015 task 12: Aspect based sentiment analysis", "year": "2015" }, { "authors": "Maria Pontiki; Dimitris Galanis; John Pavlopoulos; Harris Papageorgiou; Ion Androutsopoulos; Suresh Manandhar", "journal": "", "ref_id": "b26", "title": "SemEval-2014 task 4: Aspect based sentiment analysis", "year": "2014" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. 
Res", "ref_id": "b27", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Cigdem Toprak; Niklas Jakob; Iryna Gurevych", "journal": "", "ref_id": "b28", "title": "Sentence and expression level annotation of opinions in user-generated discourse", "year": "2010" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b29", "title": "Attention is all you need", "year": "2017-09" }, { "authors": "Wenya Wang; Sinno Jialin Pan", "journal": "", "ref_id": "b30", "title": "Recursive neural structural correspondence network for crossdomain aspect and opinion co-extraction", "year": "2018-07-15" }, { "authors": "Wenya Wang; Sinno Jialin Pan", "journal": "", "ref_id": "b31", "title": "Transferable interactive memory network for domain adaptation in fine-grained opinion extraction", "year": "2019-01-27" }, { "authors": "Rui Xia; Jianfei Yu; Feng Xu; Shumei Wang", "journal": "", "ref_id": "b32", "title": "Instance-based domain adaptation in NLP via intarget-domain logistic approximation", "year": "2014-07-27" }, { "authors": "Hu Xu; Bing Liu; Lei Shu; Philip S Yu", "journal": "", "ref_id": "b33", "title": "BERT post-training for review reading comprehension and aspect-based sentiment analysis", "year": "2019-06-02" }, { "authors": "Lu Xu; Lidong Bing; Wei Lu; Fei Huang; ; ", "journal": "", "ref_id": "b34", "title": "Aspect sentiment classification with aspect-specific opinion spans", "year": "2020" }, { "authors": "Lu Xu; Yew ; Ken Chia; Lidong Bing", "journal": "", "ref_id": "b35", "title": "Learning span-level interactions for aspect sentiment triplet extraction", "year": "2021-08-01" }, { "authors": "Lu Xu; Hao Li; Wei Lu; Lidong Bing", "journal": "", "ref_id": "b36", "title": "Position-aware tagging for aspect sentiment triplet extraction", "year": "2020" }, { "authors": "Hang Yan; Junqi Dai; Tuo Ji; Xipeng Qiu; Zheng Zhang", "journal": "", "ref_id": "b37", "title": "A unified generative framework for aspect-based sentiment analysis", "year": "2021-08-01" }, { "authors": "Min Yang; Wenpeng Yin; Qiang Qu; Wenting Tu; Ying Shen; Xiaojun Chen", "journal": "IEEE Trans. Affect. 
Comput", "ref_id": "b38", "title": "Neural attentive network for cross-domain aspect-level sentiment classification", "year": "2021" }, { "authors": "Jianfei Yu; Chenggong Gong; Rui Xia", "journal": "", "ref_id": "b39", "title": "Cross-domain review generation for aspect-based sentiment analysis", "year": "2021" }, { "authors": "Kai Zhang; Qi Liu; Biao Hao Qian; Qing Xiang; Jun Cui; Enhong Zhou; ; Chen", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b40", "title": "Eatn: An efficient adaptive transfer network for aspect-level sentiment analysis", "year": "2021" }, { "authors": "Wenxuan Zhang; Yang Deng; Xin Li; Yifei Yuan; Lidong Bing; Wai Lam", "journal": "", "ref_id": "b41", "title": "Aspect sentiment quad prediction as paraphrase generation", "year": "2021-07-11" }, { "authors": "Wenxuan Zhang; Xin Li; Yang Deng; Lidong Bing; Wai Lam", "journal": "", "ref_id": "b42", "title": "Towards generative aspectbased sentiment analysis", "year": "2021-08-01" }, { "authors": "Wenxuan Zhang; Xin Li; Yang Deng; Lidong Bing; Wai Lam", "journal": "", "ref_id": "b43", "title": "A survey on aspect-based sentiment analysis: Tasks, methods, and challenges", "year": "2022" }, { "authors": "Yan Zhou; Fuqing Zhu; Pu Song; Jizhong Han; Tao Guo; Songlin Hu", "journal": "", "ref_id": "b44", "title": "An adaptive hybrid framework for cross-domain aspect-based sentiment analysis", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 314.05, 312.82, 202.46, 58.03 ], "formula_id": "formula_0", "formula_text": "Task Output Tuple Example Output ATE (a) (apple) UABSA (a, s) (apple, positive) AOPE (a, o) (apple, sweet) ASTE (a, o, s) (apple, sweet, positive)" }, { "formula_coordinates": [ 3, 305.82, 635.62, 56.3, 16 ], "formula_id": "formula_1", "formula_text": "y = {t i } |t| i=1" }, { "formula_coordinates": [ 3, 408.15, 732.23, 85.81, 16.3 ], "formula_id": "formula_2", "formula_text": "D S = x S i , y S i N S" }, { "formula_coordinates": [ 3, 306.14, 760.48, 73.35, 15.34 ], "formula_id": "formula_3", "formula_text": "D T = {x T j } N T j=1 ." }, { "formula_coordinates": [ 4, 342.05, 72.05, 182.36, 33.71 ], "formula_id": "formula_4", "formula_text": "L = - l i=-1 log p (y i | x; y ≤i-1 ) ,(2)" }, { "formula_coordinates": [ 4, 340.09, 257.19, 184.32, 21.71 ], "formula_id": "formula_5", "formula_text": "ŷT i = argmax y j ∈U p y j | x T ; ŷT ≤i-1 ,(3)" }, { "formula_coordinates": [ 4, 335.13, 291.19, 122.16, 14 ], "formula_id": "formula_6", "formula_text": "U = {w i } n i=1 ∪ { m j } k j=1 ." }, { "formula_coordinates": [ 4, 320.86, 582.84, 203.55, 50.49 ], "formula_id": "formula_7", "formula_text": "ATE : aspect a ⇒ x UABSA : pos a ⇒ x AOPE : aspect a opinion o ⇒ x ASTE : pos a opinion o ⇒ x (4)" }, { "formula_coordinates": [ 4, 341.07, 690.35, 183.34, 33.71 ], "formula_id": "formula_8", "formula_text": "L = - l i=-1 log p (x i | y; x ≤i-1 ) ,(5)" }, { "formula_coordinates": [ 5, 103.54, 234.97, 185.59, 19.36 ], "formula_id": "formula_9", "formula_text": "xT i = argmax x j ∈V p x j | ŷT ; xT ≤i-1 ,(6)" }, { "formula_coordinates": [ 5, 125.96, 286.72, 88.49, 17.56 ], "formula_id": "formula_10", "formula_text": "D G = xT i , ŷT i N T i=1" } ]
10.48550/arXiv.2303.08774
2023-12-13
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b2", "b5", "b6", "b8", "b9", "b10", "b11", "b12", "b13", "b41", "b17", "b10", "b18", "b10", "b18" ], "table_ref": [], "text": "Text generation is a fundamental task within the field of natural language processing (NLP). Pretrained language models like GPT-4 [OpenAI, 2023], LLaMA [Touvron et al., 2023], and Alpaca [Taori et al., 2023] have garnered significant attention with their ability to generate fluent and humanlike textual content. These models utilize the auto-regressive (AR) Transformer decoders [Vaswani et al., 2017] to emit generated tokens one-by-one in sequential order from left to right. By leveraging the power of position dependency, AR models are able to enhance the naturalness, coherence, and adherence to human language conventions in the generated text [Brown et al., 2020].\nRecent studies have shown the remarkable performance of diffusion models in image generation [Ho et al., 2020], motivating researchers to extend diffusion to text generation [Li et al., 2022a, Gong et al., 2022, Dieleman et al., 2022, Yuan et al., 2022, Ye et al., 2023]. By introducing timestep, these methods progressively regulate the interpolation between the original tokens and Gaussian noise, then Figure 1: Model behaviors illustrated on a two-dimensional coordinate system, where the horizontal axis stands for the position and the vertical axis represents the diffusion timestep. In the inference stage, different models will behave differently. (a) For the typical Diffusion-LM [Li et al., 2022a], each token share the identical movement speed v(n 1 , t i , t i+1 ) = v(n 2 , t i , t i+1 ) = |t i+1 -t i |. (b) For AR from the perspective of diffusion models, the tokens have two states based on the degree of interpolation between the original tokens and Gaussian noise: to be decoded (at timestep t = T ) and already decoded (at timestep t = 0). Specifically, we have v(n 1 , t i , t i+1 ) = 0 and v(n 2 , t i , t i+1 ) = T . (c) In AR-DIFFUSION, (n e , t e ) is the coordinate of anchor point. Tokens in different positions exhibit varying movement speeds, such as v(n 1 , t i , t i+1 ) > v(n 2 , t i , t i+1 ) when n 1 < n 2 . iteratively denoise for text generation. At each timestep, the diffusion-based text generator predicts all tokens simultaneously following Non-Auto-Regression (NAR) [Lewis et al., 2020, Qi et al., 2020, 2021, Li et al., 2022b], leading to faster decoding speed compared to AR. However, it also inherits the drawback of NAR, namely the sacrifice of inter-token position dependency [Li et al., 2022c] and the drop of generation performance [Bao et al., 2021].\nTo conduct a comprehensive analysis, we introduce a two-dimensional coordinate system to track the diffusion timestep of tokens f (•) positioned at various locations. As illustrated in Figure 1, the system assigns the token position n ∈ [1, N ] to the horizontal axis and the diffusion timestep t ∈ [0, T ] to the vertical axis. Diffusion-LM [Li et al., 2022a], which is followed by existing diffusion-based text generation models, is shown in Figure 1(a). It assigns a uniform timestep t to all tokens. In contrast, tokens in the AR model depicted in Figure 1(b) exhibit distinct timesteps within a generation step (t i ). For instance, the already decoded token at position n 1 has a timestep of 0, while the to-be-decoded token at position n 2 has a timestep of T . This approach effectively captures the sequential dependency. 
Motivated by this observation, we introduce AR-DIFFUSION, an auto-regressive diffusion method, for the disparity in token positions and the principle of sequential token identification.\nIn AR-DIFFUSION, we propose a multi-level diffusion strategy that includes both sentence-level and token-level diffusion. We randomly choose a sentence-level timestep t, and assign dynamic movement speeds v(•) by determining position-sensitive token-level timestep f (n, t) for each token. This enables tokens at the left of a sentence to undergo faster movement from random Gaussian noise to token embedding, while those at the right of the sentence experience slower movement to better utilize information from previously denoised tokens. During inference, to reduce the significant number of inference steps (e.g., 2,000) required in Diffusion-LM [Li et al., 2022a], SeqDiffSeq [Yuan et al., 2022] and GENIE [Lin et al., 2023], we introduce a skipping mechanism that collaborates with the multi-level diffusion strategy to accelerate the process.\nExperimental results across various text generation tasks, such as text summarization, machine translation, and common sense generation, have consistently demonstrated that AR-DIFFUSION surpasses existing text diffusion models, including AR methods in terms of both quality and diversity. Moreover, our verification reveals that AR-DIFFUSION requires fewer resources during decoding while maintaining superior performance. It achieves 100× faster than SeqDiffSeq [Yuan et al., 2022] in machine translation and 600× faster than GENIE [Lin et al., 2023] in text summarization while delivering comparable results. Furthermore, it demonstrates promising results even in a challenging scenario where decoding is limited to only two steps." }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conditional Generative Language Models", "publication_ref": [ "b19" ], "table_ref": [], "text": "In the field of natural language generation, conditional generative models are commonly implemented using either auto-regressive (AR) or non-auto-regressive (NAR) methods. In AR [Vaswani et al., 2017], tokens on the right are predicted based on visible left tokens. The likelihood is given by p AR (y|x) = N i=1 p(y i |y 1:i-1 ; x), where y i denotes the i-th token of y. On the other hand, NAR [Gu et al., 2017] assumes conditional independence among tokens and generates them uniformly without distinction during decoding, resulting in the likelihood p NAR (y|x) = N i=1 p(y i |x). This parallel generation approach is of lower quality compared to AR, although it offers a substantial speed advantage." }, { "figure_ref": [], "heading": "Diffusion Models for Text Generation", "publication_ref": [ "b18" ], "table_ref": [], "text": "Recently, Li et al. [2022a] propose a natural language generation model based on the diffusion process, which is typically divided into a forward noising process and a reverse denoising process. Specifically, the forward process is a fixed linear Gaussian model, which gradually perturbs the random variable z 0 until it becomes the standard Gaussian distribution. 
This can be formalized as:\nq(zt | z0; x) = N (zt; √ ᾱtz0, (1 -ᾱt)I),(1)\nwhere, ᾱt = t i=1 α i , and α i is a coefficient that monotonically decreases with timestep t, z t is the latent state at timestep t.\nThe reverse process is to initiate from standard Gaussian noise and progressively utilize the denoising transition p θ (z t-1 |z t ; x) for generation.\np θ (zt-1 | zt; x) = N zt-1; µ θ (zt, t; x), Σ θ (zt, t; x) ,(2)\nwhere the mean µ θ and variance Σ θ are learned from the model. In particular, we follow Li et al.\n[2022a]'s approach of using predefined variance without trainable parameters.\nTo extend the continuous diffusion process to discrete text generation, Li et al. [2022a] introduce an additional Markov transition from the discrete tokens y to the latent variable z 0 . In practice, we add an embedding step q ϕ (z 0 |y) = N (z 0 ; Emb(y), (1 -α 0 )I) in the forward process, and use a trainable rounding step which is parametrized by p θ (y|z 0 ; x) = N i=1 p θ (y i |z i 0 ; x) in the reverse process. In each timestep, we utilize an encoder-decoder model g θ (z t , t; x) to approximate z 0 [Lin et al., 2023] in a NAR manner and then estimate µ θ (z t , t; x).\nIn consequence, combined with maximizing the evidence lower bound (ELBO) of log p θ (y|x), our training objective of the conditional diffusion language model is:\nL = E q ϕ (z 0:T |y) -log p θ (y | z0; x) + T t=1 ∥z0 -g θ (zt, t; x)∥ 2 .\n(3)\n3 Methodology" }, { "figure_ref": [], "heading": "Multi-Level Diffusion", "publication_ref": [], "table_ref": [], "text": "In the typical diffusion process, every token in the text sequence has the same diffusion timestep.\nIn order to leverage the sequential nature of language, we enable tokens to have different diffusion timesteps during the forward and reverse pass. To accomplish this, we propose a multi-level diffusion strategy that includes both sentence-level and token-level diffusion.\nFirstly, at the sentence-level, we follow Diffusion-LM [Li et al., 2022a] to randomly select a timestep t.\nSecondly, at the token-level, we incorporate positional information n ∈ [1, N ] based on the sentencelevel timestep to regulate the diffusion timestep for the current token. The procedure is illustrated as:\nzt = z 1 f (1,t) , z 2 f (2,t) , • • • , z N f (N,t) , (4\n)\nwhere N is the given target sentence length, z t is the sentence representation at timestep4 t,\nz n f (n,t)\nis the latent representation for the n-th token at sentence-level timestep t, and f (n, t) is a token-level timestep function that denotes the token-level diffusion timestep determined by token position n and sentence-level timestep t.\nWe visualize the token-level timestep n, f (n, t) onto a two-dimensional coordinate system as Figure 1 , which takes the token position as the horizontal axis and the sentence-level timestep as the vertical axis. Furthermore, to provide a more profound description of the characteristics of movement, we define the speed of movement as the following equation.\nv(n, ti, ti+1) = f (n, ti+1) -f (n, ti),(5)\nwhere t i and t i+1 are the start and end sentence-level diffusion timesteps. It can be observed that tokens in Diffusion-LM share the same movement speed, while those in AR exhibit different speeds." 
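As a concrete reading of equations (1), (4) and (5), the sketch below diffuses each position to its own token-level timestep and measures the resulting movement speed. The noise schedule array and the callable f are placeholders, since the specific f used by AR-DIFFUSION is only fixed by the anchor-point construction of the next subsection.

```python
import numpy as np

def forward_noise(z0, f, t, alpha_bar):
    """Multi-level forward process (equation (4)): position n is diffused to its own
    token-level timestep f(n, t), each via q(z_t | z_0) of equation (1).
    z0: [N, d] token embeddings; f: callable (n, t) -> integer timestep; alpha_bar: [T + 1] schedule."""
    N = z0.shape[0]
    noised = np.empty_like(z0)
    for n in range(1, N + 1):                  # positions are 1-indexed in the paper
        ab = alpha_bar[f(n, t)]
        eps = np.random.randn(*z0[n - 1].shape)
        noised[n - 1] = np.sqrt(ab) * z0[n - 1] + np.sqrt(1.0 - ab) * eps
    return noised

def movement_speed(f, n, t_i, t_next):
    """Equation (5): v(n, t_i, t_{i+1}) = f(n, t_{i+1}) - f(n, t_i)."""
    return f(n, t_next) - f(n, t_i)
```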
}, { "figure_ref": [], "heading": "Token-Level Diffusion with Dynamic Movement Speed", "publication_ref": [], "table_ref": [], "text": "Based on the speed of movement, we propose a fundamental principle, dynamic movement speed, for designing the token-level diffusion timestep function f (n, t) to take advantage of AR in diffusion. Specifically, elements on the left side of a sentence undergo higher movement speed from random Gaussian noise to token embedding, while those on the right side experience lower movement speed, thereby they can be generated in the later sentence-level timestep and utilize information from previously generated tokens more effectively.\nAlgorithm 1 Training Process of AR-DIFFUSION.\nInput: Dataset {(x, y)}, maximum timestep number T and maximum target length N .\nOutput: Optimized model parameters θ. 1: Define an anchor point (ne, te)5 . 2: repeat 3:\nSample (x, y) from the dataset and embed y into z0." }, { "figure_ref": [], "heading": "4:", "publication_ref": [], "table_ref": [], "text": "Sample a sentence-level timestep t from the interval [0, N + T ], then the start point is determined by the following equation:\n(ns, ts) = clip(N -t, 0, N ), clip(t -N, 0, T )(6)\n5:\nUse the point-slope linear function to determine the token-level timestep f (n, t) in position n:\nf (n, t) = clip te -ts ne -ns (n -ns) + ts, 0, T(7)\n6: Sample z n f (n,t) for each n in different positions with Gaussian reparameterization. 7:\nAccording to equation (3) and equation ( 9), employ gradient descent to optimize the objective:\nmin θ -log p θ (y | z0; x) + N n=1 g θ (z n f (n,t) , f (n, t); x) -z0 2 (8) 8: until converged\nFollowing the guidance of the principle, we develop a token-level diffusion strategy with the linear function, which is shown in Figure 1(c). In particular, the procedure is illustrated in Algorithm 1, where clip(x, min, max) function is to clamp all elements in x into the range [min, max]. Specifically, in the forward process of diffusion, the start point goes to the left from (N, 0) to (0, 0) along the horizontal axis and then moves up to (0, T ) along the vertical axis. Therefore, the entire range of sentence-level timestep is extended to [0, N + T ].\nIn the reverse diffusion process, the multi-level diffusion follows the formula:\ng θ zt, t; x = g θ z 1 f (1,t) , f (1, t) , z 2 f (2,t) , f (2, t) , • • • , z N f (N,t) , f (N, t) ; x ,(9)\nwhere g θ (z n f (n,t) , f (n, t); x) denotes the n-th element." }, { "figure_ref": [], "heading": "Inference with Skipping", "publication_ref": [], "table_ref": [], "text": "Typically, the generation process needs to go through all the sentence-level timesteps from T + N to 0. To reduce the decoding time, we introduce a skipping mechanism that allows us to traverse a subset of timesteps.\nTo ensure consistency between training and inference, we also need to calculate the timestep for each token during the inference process. Therefore, we first establish an anchor point, and then uniformly select a decreasing subsequence {t i } M i=0 from all timesteps (T + N to 0). The count of this sequence is the total decoding steps M (M ≪ T + N ). For example, assuming the interval is 500 and T + N is 2500, then M is 5, and the subsequence is [2500,2000,1500,1000,500,0].\nEach element of this subsequence represents the sentence-level timesteps t, and we can use equation (6) to calculate (n s , t s ). Then, based on equation ( 7), we calculate the token-level timesteps corresponding to each position. 
We take the current sentence-level timestep t i and the next sentencelevel timestep t i+1 , and calculate the token-level timesteps f (n, t i ) and f (n, t i+1 ) for each position.\nSince M ≪ T + N , t i+1 ≪ t i , implying that f (n, t i ) ≪ f (n, t i+1 ).\nThe essence of Skipping is reflected in the fact that each token undergoes significant span during denoising (from f (n,\nt i ) to f (n, t i+1 )).\nAlgorithm 2 Inference Process of AR-DIFFUSION with the Skipping Mechanism.\nInput: Source condition x, number of decoding steps M and model parameters θ. Output: Predicted target embedding ŷ. 1: Define an anchor point (ne, te). 2: Uniformly select a decreasing subsequence of timesteps {ti} M i=0 ranging from T + N to 0, where M ≪ T + N . 3: Sample zt 0 ∼ N (0, I). 4: for i = 0 to M -1 do 5:\nCalculate the start point (ns, ts) using equation ( 6). 6:\nBased on the current sentence-level inference steps ti and the next one ti+1, assign token-level timesteps f (n, ti) and f (n, ti+1) to token in position n using equation ( 7). 7:\nReverse sample\nzt i+1 = z 1 f (1,t i+1 ) , z 2 f (2,t i+1 ) , • • • , z N f (N,t i+1 ) from p θ (zt i+1 | zt i ;\nx) with the following formulas:\np θ (zt i+1 | zt i ; x) = N n=1 p θ z n f (n,t i+1 ) | z n f (n,t i ) ; x (10) p θ z n f (n,t i+1 ) | z n f (n,t i ) ; x ∼ N z n f (n,t i+1 ) ; λz n f (n,t i ) + µg θ (z n f (n,t) , f (n, t); x), σI(11\n) 8: end for 9: Map zt M to the nearest embedding ŷ.\nIn practice, we propose an algorithm for the inference, illustrated in Algorithm 2.\nλ = ᾱf(n,t i ) ᾱf(n,t i+1 ) (1 -ᾱf(n,t i+1 ) ) 1 -ᾱf(n,t i ) , µ = ᾱf(n,t i+1 ) (1 -ᾱf(n,t i ) ᾱf(n,t i+1 ) ) 1 -ᾱf(n,t i ) , σ = (1 -α f (n,t i ) )(1 -ᾱf(n,t i+1 ) ) 1 -ᾱf(n,t i )(12)\nIn equation ( 10), the conditional distribution of z ti+1 is inferred by p θ (z ti+1 |z ti ; x), and then we decompose it by positions due to the independent forward process of elements at different positions.\nFrom equation (11) to equation ( 12), we establish the relationship between tokens at different timesteps, and the detailed derivation can be found in Appendix A." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Tasks and Datasets", "publication_ref": [ "b20" ], "table_ref": [], "text": "Text Summarization This task involves taking a long document as input and generating a concise sentence as output. This requires models with the ability to identify important content and rewrite it in a condensed form. In our experiments, we use the publicly available XSUM [Narayan et al., 2018] and CNN/DAILYMAIL Hermann et al. [2015] on GLGE6 , which is also named as GLGE-Easy.\nMachine Translation Translation is a widely used sequence-to-sequence task. The input is a sequence of words in the source language, and the output is a sequence of corresponding words in the target language. We choose the IWSLT 2014 dataset and the data processing method is to follow the scripts provided by fairseq7 .\nCommon Sense Generation In this task, the model is provided with a concept set consisting of objects and actions as input. The objective is to generate a sentence that incorporates these concepts and describes a realistic scenario. We use COMMONGEN8 dataset for evaluation." 
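Before turning to the experimental details, the following sketch ties together the pieces introduced above: the start point of equation (6), the clipped linear token-level timestep of equation (7), and the uniformly spaced skipping subsequence of Algorithm 2. It is a schematic reading of those formulas rather than the released implementation; in particular, the anchor point (n_e, t_e) = (2N, T) chosen here is an assumption for illustration, and the Gaussian update of equations (11) and (12) is only indicated in a comment.

```python
import numpy as np

def start_point(t, N, T):
    """Equation (6): start point (n_s, t_s) for a sentence-level timestep t in [0, N + T]."""
    return int(np.clip(N - t, 0, N)), int(np.clip(t - N, 0, T))

def token_timestep(n, t, N, T, n_e, t_e):
    """Equation (7): point-slope line through (n_s, t_s) and the anchor (n_e, t_e), clipped to [0, T]."""
    n_s, t_s = start_point(t, N, T)
    value = (t_e - t_s) / (n_e - n_s) * (n - n_s) + t_s
    return int(np.clip(round(value), 0, T))

def skipping_schedule(M, N, T):
    """Algorithm 2, step 2: M + 1 uniformly spaced, decreasing sentence-level timesteps."""
    return np.linspace(T + N, 0, M + 1).astype(int).tolist()

# Illustration with N = 20 target positions, T = 2000 diffusion steps and M = 5 decoding steps.
N, T, M = 20, 2000, 5
n_e, t_e = 2 * N, T                      # assumed anchor point, for illustration only
schedule = skipping_schedule(M, N, T)    # e.g. [2020, 1616, 1212, 808, 404, 0]
for t_i, t_next in zip(schedule[:-1], schedule[1:]):
    cur = [token_timestep(n, t_i, N, T, n_e, t_e) for n in range(1, N + 1)]
    nxt = [token_timestep(n, t_next, N, T, n_e, t_e) for n in range(1, N + 1)]
    # A real decoding step would now draw z_{t_{i+1}} position by position with the Gaussian
    # transition of equations (11)-(12), using cur[n-1] and nxt[n-1] as f(n, t_i) and f(n, t_{i+1}).
    print(f"t={t_i:4d} -> {t_next:4d}: leftmost token {cur[0]:4d} -> {nxt[0]:4d}, rightmost {cur[-1]:4d} -> {nxt[-1]:4d}")
```

Running the loop shows that the leftmost position drops toward timestep 0 faster than the rightmost one, which is exactly the dynamic-movement-speed behaviour the schedule is designed to produce.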
}, { "figure_ref": [], "heading": "Experimental Details", "publication_ref": [ "b23", "b19", "b24", "b25", "b26", "b17", "b27", "b24", "b25", "b26", "b28", "b29", "b9", "b10", "b11", "b18", "b13", "b10", "b11" ], "table_ref": [], "text": "Model Setup Our model configuration is implemented based on Transformer-base [Vaswani et al., 2017]. In particular, For XSUM and CNN/DAILYMAIL, we set the diffusion embedding dimension to 128. For IWSLT14, we use 64-dimensional diffusion embedding, 4 attention heads and 1024-dimensional feed-forward layers. For COMMONGEN, we adopt 64-dimensional diffusion embedding, 8 attention heads and 512-dimensional feed-forward layers.\nTraining and Inference In the training phase, we employ a square-root noise schedule and 2,000 diffusion steps [Li et al., 2022a]. Specially, we use the tokenizer and vocabulary constructed by Byte Pair Encoding (BPE)9 [Kudo and Richardson, 2018] for translation tasks. For other tasks, we adopt the tokenizer and vocabulary of bert-base-uncased.\nBaselines We set four groups of baselines:\n• NAR: NAT [Gu et al., 2017], iNAT [Lee et al., 2018], CMLM [Ghazvininejad et al., 2019], LevT [Gu et al., 2019] and CNAT [Bao et al., 2021];\n• Semi-NAR: InsT [Stern et al., 2019], iNAT [Lee et al., 2018], CMLM [Ghazvininejad et al., 2019] and LevT [Gu et al., 2019];\n• AR: bRNN [Gu et al., 2016], LSTM [Greff et al., 2017] and Transformer [Vaswani et al., 2017];\n• Diffusion: DiffusionLM [Li et al., 2022a], CDCD [Dieleman et al., 2022], SeqDiffuSeq [Yuan et al., 2022], DINOISER [Ye et al., 2023] and GENIE [Lin et al., 2023].\nMetrics We follow the approach of Qi et al. [2020] 10 to evaluate the ROUGE-1/2/L of the summarization task. For the evaluation of translation tasks, we adopt the setting of SeqDiffuSeq [Yuan et al., 2022] to report BLEU score. In addition, we also calculate the SacreBLEU score according to the setting of DINOISER [Ye et al., 2023] " }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b31", "b14", "b32", "b19", "b24", "b25", "b26", "b27", "b24", "b25", "b26", "b29", "b18", "b30", "b26" ], "table_ref": [ "tab_1", "tab_2", "tab_4", "tab_2", "tab_3" ], "text": "The results on different datasets are shown in Table 2, Table 3, Table 4 and Table 6. The best result is bolded and the second-best result is underlined . \"k\" indicates the number of generated candidate samples. It can be seen from the results in each table that AR-DIFFUSION achieves the best performance.\nDuring the inference process, we utilize 20 inference steps and employ Minimum Bayes Risk (MBR) [Kumar and Byrne, 2004] decoding to select the best sample, following [Li et al., 2022a]. We choose MBR instead of the selection approach in GENIE, as GENIE picks up the best sample by calculating the maximum score for each generated one using ground truth, which introduces unfairness. To ensure a fair comparison, we re-implement GENIE using our configuration and perform inference with 20 steps.\nTable 2: Results on XSUM test set. The results of NAR and Semi-NAR are from Qi et al. 
[2021], and the results of AR are from GLGE [Liu et al., 2021].
Methods  Pattern  ROUGE-1  ROUGE-2  ROUGE-L
NAT [Gu et al., 2017]  NAR  24.0  3.9  20.3
iNAT [Lee et al., 2018]  NAR  24.0  4.0  20.4
CMLM [Ghazvininejad et al., 2019]  NAR  23.8  3.6  20.2
LevT [Gu et al., 2019]  NAR  24.8  4.2  20.9
InsT [Stern et al., 2019]  Semi-NAR  17.7  5.2  16.1
iNAT [Lee et al., 2018]  Semi-NAR  27.0  6.9  22.4
CMLM [Ghazvininejad et al., 2019]  Semi-NAR  29.1  7.7  23.0
LevT [Gu et al., 2019]  Semi-NAR  25.3  7.4  21.5
LSTM [Greff et al., 2017]  AR  25.1  6.9  19.9
Transformer [Vaswani et al., 2017]  AR  30.5  10.4  24.2
GENIE [Lin et al., 2023] (k = 50)  Diffusion  29.3  8.3  21.9
AR-DIFFUSION (k = 50)  Diffusion  31.7  10.1  24.7
AR-DIFFUSION (k = 500)  Diffusion  32.2  10.6  25.2
Machine Translation Table 4 presents the BLEU scores under the SeqDiffuSeq setting. AR-DIFFUSION outperforms the non-auto-regressive CNAT in greedy search for a single sample, and achieves a substantial gain. Moreover, the BLEU score of AR-DIFFUSION surpasses GENIE by a large margin and is slightly better than that of the AR Transformer. Notably, AR-DIFFUSION achieves an even stronger result at k = 500.
In Table 5 we give the SacreBLEU scores under the setting of DINOISER. AR-DIFFUSION shows notable improvements over the non-auto-regressive CMLM. Besides, AR-DIFFUSION achieves excellent performance among text diffusion models on both EN→DE and DE→EN tasks. Specifically, AR-DIFFUSION is far superior to GENIE and comparable to the newly proposed DINOISER at k = 50. Nevertheless, its performance is stronger than DINOISER when k = 500.
Table 6 rows (COMMONGEN dev set; ROUGE-2/L, BLEU-3/4, METEOR, SPICE):
Trans-CopyNet [Lin et al., 2020]  11.08  32.57  17.20  10.60  18.80  18.00
MeanPooling-CopyNet [Lin et al., 2020]  11.36  34.63  14.80  8.90  19.20  20.20
LevT [Gu et al., 2019] " }, { "figure_ref": [], "heading": "Inference Efficiency", "publication_ref": [ "b11" ], "table_ref": [ "tab_2", "tab_5" ], "text": "First, we use the number of function evaluations (NFE) as a measure to compare inference efficiency [Ye et al., 2023] in machine translation. From Table 4, it is evident that even when the NFE is reduced to 1% of that of SeqDiffuSeq (equivalent to 100× faster), AR-DIFFUSION still outperforms SeqDiffuSeq. Moreover, increasing the number of generated candidate samples (k = 500) leads to further performance improvements, albeit with increased time consumption.
Second, we conduct experiments with an extremely limited number of inference steps (2 and 3) and compare the performance with that of GENIE on XSUM. The results are presented in Table 7.
When reducing the number of steps to 2, GENIE experiences a significant decline, with an average drop of 4.20 in the AVG Drop column, while AR-DIFFUSION exhibits a comparatively smaller decrease of 1.34. Furthermore, with 3 steps, although the performance deterioration of GENIE is somewhat reduced, the average score still shows a decline of 2.81. In contrast, AR-DIFFUSION maintains a high performance level, with an average score differing from the 20-step result by only 0.64. Notably, the results of AR-DIFFUSION at 3 steps are comparable to the results of GENIE at 2,000 steps. Therefore, compared to GENIE, the inference speed of AR-DIFFUSION can be accelerated by up to 600×. " }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_2" ], "heading": "Analysis", "publication_ref": [ "b34", "b18", "b40" ], "table_ref": [ "tab_6" ], "text": "Diversity of Samples Diversity is a key advantage of diffusion models. To measure the diversity of generated samples, we adopt the SELF-BLEU [Zhu et al., 2018] metric, in which a lower score indicates higher diversity. In Lin et al.
[2023], various sampling methods were applied to the pretrained auto-regressive model BART, including Greedy Search, Beam Search Xiao et al. Specifically, greedy search is to select the token with the highest probability at each step. Beam search is to select the largest token from among the beams with higher probability at each step. Diverse beam search is to divide the beams into multiple groups at each step and ensure the difference between groups by calculating the diversity score between groups. Typical sampling selects samples through a discrete random process. Top-k sampling is to randomly select one of the k candidate tokens with the highest probability at each step. Nucleus sampling is to randomly select one token at each step from the candidate tokens whose probability density is greater than p.\nAs shown in Table 8, AR-DIFFUSION achieves significantly higher diversity compared to the autoregressive model. Furthermore, the diversity can be comparable to GENIE with a better performance. Ablation Study To demonstrate the effectiveness of our proposed method, we perform ablation experiments on the XSUM dataset. Our results show that both our proposed multi-level diffusion and skipping mechanism are essential for achieving the high performance of AR-DIFFUSION.\nMaintaining the skipping inference method, we remove the token-level diffusion during the training process, which degenerates to GENIE w/ skipping. The comparison results are shown in Figure 2(a). It can be observed that after removing, the AVG-ROUGE score is greatly lower after 2 steps.\nThe performance of applying our proposed skipping mechanism and DDIM [Song et al., 2021] to AR-DIFFUSION is shown in Figure 2(b). The results demonstrate that the skipping mechanism consistently outperforms DDIM in various inference steps. Additionally, the skipping mechanism can be easily applied to GENIE. As depicted in Figure 2(c), DDIM suffers a significant drop in performance when the number of inference steps is less than 40. In contrast, the skipping mechanism consistently maintains good performance across all inference steps. Case Study By mapping the state to the token with the highest logits, we visualize the intermediate states of AR-DIFFUSION. As depicted in Figure 3, AR-DIFFUSION undergoes a denoising process, transforming the random Gaussian noise into a coherent sentence over 20 steps, and we present 5 of them. With the progression of each timestep, compared to the tokens on the right side of the sentence, the tokens on the left side demonstrate faster determination and a rapid increase in the corresponding logits. This behavior is consistent with our principle of dynamic movement speed from left to right." }, { "figure_ref": [ "fig_3" ], "heading": "Impact of Minimum Bayes Risk and Anchor Point", "publication_ref": [ "b0", "b1", "b42", "b19", "b26", "b25", "b17" ], "table_ref": [ "tab_7" ], "text": "Minimum Bayes Risk To investigate the relationship between the number of generated candidate samples (k) and the quality of generation, we generate varying numbers of samples, ranging up to 1,000, on the IWSLT14 De→En test set and present the results in Figure 4. The curve demonstrates an initial gain of approximately 0.5 SacreBLEU within the first 200 samples, after which the gain becomes insignificant with generating more samples.\nAnchor Point We conduct experiments on AR-DIFFUSION using different anchor points (n e , t e ). 
These anchor points vary in terms of n e values, namely 1.0 × N , 2.0 × N and 3.0 × N , where N denotes the target sentence length. Additionally, they share a common t e value of T , which represents the total time step of diffusion. We present the results in Table 9, and determine that the best result is achieved at (n e , t e ) = (2.0 × N, T ). AR and NAR Language Models AR models have been the dominant approach for text generation [OpenAI, 2023, Touvron et al., 2023, Dong et al., 2023], but their token-by-token generation nature often leads to unsatisfactory inference speed. To address this issue, NAR models have been developed in recent years. The NAR method is initially proposed by Gu et al. [2017], its objective is generate the entire output sequence in parallel, thereby improving generation speed and efficiency. Subsequently, LevT [Gu et al., 2019] adopts insertion and deletion to address the lack of flexibility in NAR generation, CMLM [Ghazvininejad et al., 2019] utilizes a masked language model to improve the quality of NAR generation through a constant number of iterations, and CNAT [Bao et al., 2021] introduces latent variables to represent the category information of the target word to make full use of the latent representation. However, these NAR methods are hard to model inter-token position dependency and deficient in generation performance." }, { "figure_ref": [], "heading": "Continuous Text Diffusion", "publication_ref": [ "b8", "b9", "b10", "b18", "b11", "b43", "b44" ], "table_ref": [], "text": "The application of diffusion models to continuous text space is first introduced by Li et al. [2022a]. Through the embedding and rounding processes, the direct integration of continuous noise into word embeddings was accomplished. After that, more people attempt to adopt continuous text diffusion model to solve sequence-to-sequence tasks. DiffuSeq [Gong et al., 2022] divides the input into two parts, utilizing one part as a condition, and perturbs the other part with noise. CDCD [Dieleman et al., 2022] proposes score interpolation and time warping to allow diffusion model and Euclidean embedding to share the same loss function for training. SeqDiffuSeq [Yuan et al., 2022], GENIE [Lin et al., 2023] and DINOISER [Ye et al., 2023] incorporate diffusion model into the encoder-decoder structure through cross-attention mechanisms.\nIt is important to highlight the differences between our method and both ARDMs [Hoogeboom et al., 2022] and TimeGrad [Rasul et al., 2021], despite the common references to autoregression and diffusion in all these. ARDMs employ an order-agnostic technique, leveraging masking and prediction for generation in arbitrary orders. On the other hand, TimeGrad integrates RNN and diffusion to model the conditional distribution of future steps of multivariate time series. In contrast, our research focuses on implementing the diffusion process within a continuous embedding space, with the primary aim of generating text in a left-to-right sequence." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper introduces AR-DIFFUSION, which exhibits AR-like generation behavior but enables efficient parallel decoding. Embracing the inherent sequential nature of language, we propose a multilevel diffusion model, consisting of sentence-level and token-level components, to assign dynamic movement speeds to tokens. 
Consequently, compared to those on the right, the tokens on the left undergo fewer denoising steps and are generated earlier, subsequently influencing the later ones. Furthermore, we introduce a skipping mechanism to facilitate parallel generation within the multi-level diffusion framework. Experimental results across various tasks demonstrate that AR-DIFFUSION surpasses existing diffusion models in terms of quality while maintaining diversity. Additionally, compared to existing diffusion language models, AR-DIFFUSION achieves comparable results while being 100× ∼ 600× faster." }, { "figure_ref": [], "heading": "Limitation", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "A primary limitation of our work lies in the requirement of generating a large number of candidate samples for optimal performance. As an illustration, in Table 3 on the CNN/DAILYMAIL dataset, AR-DIFFUSION (k = 50) achieves a 0.8 lower ROUGE-2 score than AR-DIFFUSION (k = 500). We anticipate exploring more efficient sampling strategies to minimize the number of generated samples without a performance drop." }, { "figure_ref": [], "heading": "A Proof of Inference with Skipping", "publication_ref": [ "b45" ], "table_ref": [], "text": "During inference, the skipping strategy requires the model g_θ to infer the state z^{n_2}_{t_{i+1}} at a far-off timestep t_{i+1} from the current state z^{n_2}_{t_i}, where t_{i+1} ≪ t_i. In our model, due to the dynamic speed setting, a token z^{n_1}_{t_{i+1}} at an earlier position n_1 ≤ n_2 carries a smaller token-level timestep, i.e., it is closer to the fully denoised state, and can therefore provide stronger auxiliary information than z^{n_1}_{t_i}. This reduces the difficulty of inferring the states of tokens near the end of the sequence, making our multi-level diffusion model particularly suitable for accelerating the generation process.
By maximizing the evidence lower bound (ELBO) of p(z_0), the training objective is equivalent to minimizing the divergence between q(z_{t-1} | z_t, z_0) and p_θ(z_{t-1} | z_t), following [Luo, 2022].
By converting the joint probability distribution into a product of conditional probability distributions, we obtain the following formula, which expresses q(z_{t_{i+1}} | z_{t_i}, z_0) through the chain of single-step posteriors:
$$q(z_{t_{i+1}:t_i-1} \mid z_{t_i}, z_0) = q(z_{t_{i+1}} \mid z_{t_{i+1}+1:t_i-1}, z_{t_i}, z_0)\, q(z_{t_{i+1}+1:t_i-1} \mid z_{t_i}, z_0) = \cdots = \prod_{k=1}^{t_i - t_{i+1}} q(z_{t_i-k} \mid z_{t_i-k+1}, z_0) \tag{13}$$
Similarly, we reach the same conclusion for p_θ(z_{t_{i+1}} | z_{t_i}).
Based on equation (13), which consists of the single-step terms q(z_{t-1} | z_t, z_0), and the interchangeability between q(z_{t-1} | z_t, z_0) and p_θ(z_{t-1} | z_t), we can decompose q(z_{t_{i+1}} | z_{t_i}, z_0) by incorporating z_{t_i} and z_0, and use our estimated z_0 to determine the expression of p_θ(z_{t_{i+1}} | z_{t_i}):
$$q(z_{t_{i+1}} \mid z_{t_i}, z_0) = \prod_{n=1}^{N} q\!\big(z^n_{f(n,t_{i+1})} \mid z^n_{f(n,t_i)}, z^n_0\big) \tag{14}$$
Next, we obtain the explicit expression of q(z^n_{f(n,t_{i+1})} | z^n_{f(n,t_i)}, z^n_0), a linear interpolation between z^n_{f(n,t_i)} and z^n_0, via Bayes' rule over the forward marginals:
$$q\!\big(z^n_{f(n,t_{i+1})} \mid z^n_{f(n,t_i)}, z^n_0\big) = \frac{q\!\big(z^n_{f(n,t_i)} \mid z^n_{f(n,t_{i+1})}, z^n_0\big)\, q\!\big(z^n_{f(n,t_{i+1})} \mid z^n_0\big)}{q\!\big(z^n_{f(n,t_i)} \mid z^n_0\big)} = \mathcal{N}\!\big(z^n_{f(n,t_{i+1})};\ \lambda z^n_{f(n,t_i)} + \mu z^n_0,\ \sigma I\big) \tag{15}$$
where we have the following notations for simplification.
$$\lambda = \sqrt{\frac{\bar{\alpha}_{f(n,t_i)}}{\bar{\alpha}_{f(n,t_{i+1})}}}\cdot\frac{1-\bar{\alpha}_{f(n,t_{i+1})}}{1-\bar{\alpha}_{f(n,t_i)}},\qquad \mu = \sqrt{\bar{\alpha}_{f(n,t_{i+1})}}\cdot\frac{1-\bar{\alpha}_{f(n,t_i)}/\bar{\alpha}_{f(n,t_{i+1})}}{1-\bar{\alpha}_{f(n,t_i)}},\qquad \sigma = \frac{\big(1-\alpha_{f(n,t_i)}\big)\big(1-\bar{\alpha}_{f(n,t_{i+1})}\big)}{1-\bar{\alpha}_{f(n,t_i)}}$$
Building upon equation (15), we substitute z^n_0 with g_θ(z^n_{f(n,t)}, f(n,t); x), yielding the final formula for p_θ(z^n_{f(n,t_{i+1})} | z^n_{f(n,t_i)}; x):
$$p_\theta\!\big(z^n_{f(n,t_{i+1})} \mid z^n_{f(n,t_i)}; x\big) \sim \mathcal{N}\!\big(z^n_{f(n,t_{i+1})};\ \lambda z^n_{f(n,t_i)} + \mu\, g_\theta(z^n_{f(n,t)}, f(n,t); x),\ \sigma I\big)$$
B More Cases" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Figure (Appendix): panels labelled with the token-level timesteps f(n_1, t_{i+1}), f(n_2, t_i) and f(n_2, t_{i+1}) for two positions n_1 and n_2 (panel (a) and a second panel)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Figure (Appendix B): additional intermediate denoising states of AR-DIFFUSION on XSUM. Early, still-noisy drafts such as 'a british soldier who was killed by an army in ash has been named by the ministry of crucial of championships' are progressively refined into fluent outputs such as 'a british soldier who was killed by an army in afghanistan has been named by the ministry of defence'; likewise for 'china's prime minister says it is the \"emissions\" in the country's economic crisis, the engine of parliament.'" } ]
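The λ, μ and σ above can also be checked numerically: composing the forward marginal q(z_s | z_0) with the skipping posterior q(z_{s'} | z_s, z_0) should reproduce the marginal q(z_{s'} | z_0), i.e. mean √(ᾱ_{s'}) z_0 and variance 1 − ᾱ_{s'}. The snippet below is a quick Monte-Carlo sanity check under an assumed toy noise schedule, not part of the paper's released code; the variance is written in the ratio form that appears in the Bayes-rule expansion above.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
betas = np.linspace(1e-4, 2e-2, T + 1)      # assumed toy noise schedule
abar = np.cumprod(1.0 - betas)              # cumulative product, i.e. alpha-bar_t

s, sp = 1500, 300                           # s = f(n, t_i), sp = f(n, t_{i+1}), sp << s
z0 = 0.7                                    # one ground-truth embedding coordinate

lam = np.sqrt(abar[s] / abar[sp]) * (1 - abar[sp]) / (1 - abar[s])
mu = np.sqrt(abar[sp]) * (1 - abar[s] / abar[sp]) / (1 - abar[s])
sig = (1 - abar[s] / abar[sp]) * (1 - abar[sp]) / (1 - abar[s])

# forward sample z_s ~ q(z_s | z_0), then jump to z_{s'} with the skipping posterior
zs = np.sqrt(abar[s]) * z0 + np.sqrt(1 - abar[s]) * rng.standard_normal(1_000_000)
zsp = lam * zs + mu * z0 + np.sqrt(sig) * rng.standard_normal(zs.shape)

print(zsp.mean(), np.sqrt(abar[sp]) * z0)   # the two values should agree closely
print(zsp.var(), 1 - abar[sp])              # likewise for the variance
```

This identity is what allows Algorithm 2 to jump over many token-level timesteps in a single reverse step while staying consistent with the forward process.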
Diffusion models have gained significant attention in the realm of image generation due to their exceptional performance. Their success has been recently expanded to text generation via generating all tokens within a sequence concurrently. However, natural language exhibits a far more pronounced sequential dependency in comparison to images, and the majority of existing language models are trained with a left-to-right auto-regressive approach. To account for the inherent sequential characteristic of natural language, we introduce Auto-Regressive Diffusion (AR-DIFFUSION). AR-DIFFUSION ensures that the generation of tokens on the right depends on the generated ones on the left, a mechanism achieved through employing a dynamic number of denoising steps that vary based on token position. This results in tokens on the left undergoing fewer denoising steps than those on the right, thereby enabling them to generate earlier and subsequently influence the generation of tokens on the right. In a series of experiments on various text generation tasks, including text summarization, machine translation, and common sense generation, AR-DIFFUSION clearly demonstrated its superiority over existing diffusion language models and that it can be 100× ∼ 600× faster when achieving comparable results. Our code is available at https: //github.com/microsoft/ProphetNet/tree/master/AR-diffusion.
AR-DIFFUSION: Auto-Regressive Diffusion Model for Text Generation
[ { "figure_caption": "[2022], Diverse Beam Search(diversity strength = 0.8)Vijayakumar et al. [2016], Typical Sample (τ = 1.2)Meister et al. [2022], Top-k Sample (k = 50)Fan et al. [2018] and Nucleus Sample (p = 0.92)Holtzman et al. [2020].", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Ablation experiments on XSUM test set and taking k = 5. The horizontal axis is the number of inference steps and the vertical axis is AVG-ROUGE = (ROUGE-1 + ROUGE-2 + ROUGE-L) / 3.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The intermediate state of AR-DIFFUSION gradually generating real text from a standard Gaussian noise through 20 steps. The brightness of the color represents the magnitude of the logits, darker colors indicating larger logits. More cases are shown in the supplementary materials B.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Relationship between the number of candidate samples for applying MBR and SacreBLEU on IWSLT14 DE→EN test set.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "for comparison. For COMMONGEN, we employ ROUGE-2/L, BLEU-3/4, METEOR and SPICE under the evaluation methods ofLin et al. [2020] 11 .Training Parameters Our training parameters on different datasets are shown in Table1. Our linear schedule warm up steps is 4,000 ×N gc , where N gc denotes gradient accumulation number. In addition, we use the AdamW (weight decay = 0.0) optimizer and dropout is 0.2. All experiments are implemented on 8 Tesla V100-32G. It takes about 20 hours to train XSUM and CNN/DAILYMAIL, about 5 hours to train IWSLT14, and about 2 hours to train COMMENGEN. Training Parameter Settings. Batch Size = mini batch size × N gc × GPU number, Optimized Steps = total steps / N gc , and N gc is gradient accumulation number.", "figure_data": "DatasetLr & ScheduleBatch Size Optimized Steps Target LengthXSUM8e-4 & Cosine128×3×880,000 / 350CNN/DAILYMAIL8e-4 & Cosine80×5×8100,000 / 5180IWSLT14 DE→EN2e-3 & Cosine192×2×8160,000 / 290IWSLT14 EN→DE 1.8e-3 & Cosine 768×1×860,00090COMMONGEN3e-4 & Constant 384×1×840,00054", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on CNN/DAILYMAIL test set. The results of AR are fromGLGE Liu et al. [2021].Furthermore, in comparison to Transformer, AR-DIFFUSION outperforms it on both ROUGE-1 and ROUGE-L, while achieving comparable performance in terms of ROUGE-2. Notably, when the sample number is 500, AR-DIFFUSION demonstrates superiority over Transformer across all the measures.", "figure_data": "(k = 50)29.38.321.9AR-DIFFUSION (k = 50)Diffusion31.710.124.7AR-DIFFUSION (k = 500)32.210.625.2MethodsPatternROUGE-1ROUGE-2ROUGE-LLSTM [Greff et al., 2017] Transformer [Vaswani et al., 2017]AR37.3 39.515.7 16.734.4 36.7GENIE [Lin et al., 2023] (k = 50)34.412.832.1AR-DIFFUSION (k = 50)Diffusion39.616.337.1AR-DIFFUSION (k = 500)40.217.137.7", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results on IWSLT14 DE→EN test set following the setting of SEQDIFFUSEQ. 
\"NFE\" indicates the Number of Function Evaluations[Ye et al., 2023].", "figure_data": "MethodsPatternBLEUStepsNFE (Steps×k)Transformer [Vaswani et al., 2017]AR34.74--CNAT [Bao et al., 2021]NAR29.81--SeqDiffuSeq [Yuan et al., 2022] (k = 1) AR-DIFFUSION (k = 1)Diffusion29.83 30.192,000 202,000 (2,000 × 1) 20 (20 × 1)GENIE [Lin et al., 2023] (k = 50)30.08201,000 (20 × 50)AR-DIFFUSION (k = 50)Diffusion34.95201,000 (20 × 50)AR-DIFFUSION (k = 500)35.622010,000 (20 × 500)", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "SacreBLEU on the IWSLT14 test set. This result follows the setting of DINOISER.", "figure_data": "Methods", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results on COMMONGEN dev set. Results of NAR and AR are from Lin et al. [2020].", "figure_data": "MethodsPatternROUGE-2/LBLEU-3/4METEOR SPICEbRNN-CopyNet [Gu et al., 2016]9.23 30.57 13.60 7.8017.4016.90Trans-CopyNetAR", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "As depicted in Table6, AR-DIFFUSION achieves superior performance compared to the current AR, NAR, and other diffusion methods across all the metrics on the COMMONGEN dataset. Experimental results of GENIE and AR-DIFFUSION with inference steps of 2 and 3 on XSUM test set. Take k = 10 to apply the MBR decoding strategy. (•) indicates the drop score compared to the 20-step.", "figure_data": "] ConstLeven [Susanto et al., 2020]NAR12.22 35.42 23.10 15.00 13.47 35.19 21.30 12.3022.10 25.0021.40 23.20GENIE [Lin et al., 2023] (k = 50) AR-DIFFUSION (k = 50)Diffusion12.89 35.21 22.00 13.30 13.93 37.36 25.60 16.4024.30 25.0023.00 24.20", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Diversity of 10 generated samples on XSUM test set and average of 10 results. The results of BART and GENIE are quoted fromLin et al. [2023].", "figure_data": "MethodsBARTGENIE AR-DIFFUSIONSamplingGreedy SearchBeam SearchDiverse Beam SearchTypical SampleTop-k SampleNucleus SampleDiffusionSELF-BLEU ↓ 100.093.475.676.980.279.129.330.4", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Effect of anchor point at different positions on the IWSLT14 DE→EN test set. \"N \" indicates the target sequence length and \"T\" represents the total time step of diffusion.", "figure_data": "", "figure_id": "tab_7", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "I N z n f (n,t i+1 ) ; ᾱf(n,t i+1 ) z n 0 , (1 -ᾱf(n,t i+1 ) )I", "figure_data": "n 0 )=N z n f (n,t i ) ;ᾱf(n,t i ) ᾱf(n,t i+1 )z n f (n,t i+1 ) , 1 -ᾱf(n,t i ) ᾱf(n,t i+1 N z n f (n,t i ) ; √ ᾱf(n,t i ) z n 0 , (1 -ᾱt i )I∝ exp -z n f (n,t i ) -2(1 -ᾱf(n,t i ) ᾱf(n,t i ) ᾱf(n,t i+1 ) z n f (n,t i+1 ) ᾱf(n,t i+1 ) )2-z n f (n,t i+1 ) -ᾱt i+1 z n 0 1 -ᾱf(n,t i+1 )2+z n f (n,t i ) -1 -ᾱf(n,t i ) √ ᾱf(n,t i ) z n 02= exp -1 -ᾱt i 2(1 -ᾱt i ᾱt i+1 )(1 -ᾱt i+1 )z n f (n,t i+1 )2 -2ᾱf(n,t i ) ᾱf(n,t i+1 ) 1 -ᾱf(n,t i ) (1 -ᾱf(n,t i+1 ) )z n f (n,t i )+ᾱf(n,t i+1 ) (1 -ᾱf(n,t i ) ᾱf(n,t i+1 ) 1 -ᾱf(n,t i ))z n 0 z n f (n,t i+1 )", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" } ]
Tong Wu; Zhihao Fan; Xiao Liu; Yeyun Gong; Yelong Shen; Jian Jiao; Hai-Tao Zheng; Juntao Li; Zhongyu Wei; Jian Guo; Nan Duan; Weizhu Chen; Timestep Timestep; 𝟏
[ { "authors": " Openai", "journal": "", "ref_id": "b0", "title": "GPT-4 technical report", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurélien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b1", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b2", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b3", "title": "Attention is all you need", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b4", "title": "", "year": "2017" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b6", "title": "Denoising diffusion probabilistic models", "year": "2020-12-06" }, { "authors": "Lisa Xiang; John Li; Ishaan Thickstun; Percy Gulrajani; Tatsunori Liang; Hashimoto", "journal": "", "ref_id": "b7", "title": "Diffusion-LM improves controllable text generation", "year": "2022" }, { "authors": "Shansan Gong; Mukai Li; Jiangtao Feng; Zhiyong Wu; Lingpeng Kong", "journal": "", "ref_id": "b8", "title": "Diffuseq: Sequence to sequence text generation with diffusion models", "year": "2022" }, { "authors": "Laurent Sander Dieleman; Arman Sartran; Nikolay Roshannai; Yaroslav Savinov; Ganin; Arnaud Pierre H Richemond; Robin Doucet; Chris Strudel; Conor Dyer; Durkan", "journal": "", "ref_id": "b9", "title": "Continuous diffusion for categorical data", "year": "2022" }, { "authors": "Hongyi Yuan; Zheng Yuan; Chuanqi Tan; Fei Huang; Songfang Huang", "journal": "", "ref_id": "b10", "title": "Seqdiffuseq: Text diffusion with encoder-decoder transformers", "year": "2022" }, { "authors": "Jiasheng Ye; Zaixiang Zheng; Yu Bao; Lihua Qian; Mingxuan Wang", "journal": "", "ref_id": "b11", "title": "Dinoiser: Diffused conditional sequence learning by manipulating noises", "year": "2023" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Weizhen Qi; Yu Yan; Yeyun Gong; Dayiheng Liu; Nan Duan; Jiusheng Chen; Ruofei Zhang; Ming Zhou", "journal": "", "ref_id": "b13", "title": "ProphetNet: Predicting future n-gram for sequence-to-SequencePre-training", "year": "2020-11" }, { "authors": "Weizhen Qi; Yeyun Gong; Jian Jiao; Yu Yan; Weizhu Chen; Dayiheng Liu; Kewen Tang; Houqiang Li; Jiusheng Chen; Ruofei Zhang", "journal": "PMLR", "ref_id": "b14", "title": "Bang: Bridging autoregressive and non-autoregressive generation with large scale pretraining", "year": "2021" }, { 
"authors": "Junyi Li; Tianyi Tang; Wayne Xin Zhao; Jian-Yun Nie; Ji-Rong Wen", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "ELMER: A nonautoregressive pre-trained language model for efficient and effective text generation", "year": "2022" }, { "authors": "Yafu Li; Leyang Cui; Yongjing Yin; Yue Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Multi-granularity optimization for nonautoregressive translation", "year": "2022" }, { "authors": "Yu Bao; Shujian Huang; Tong Xiao; Dongqi Wang; Xinyu Dai; Jiajun Chen", "journal": "", "ref_id": "b17", "title": "Non-autoregressive translation by learning target categorical codes", "year": "2021-06" }, { "authors": "Zhenghao Lin; Yeyun Gong; Yelong Shen; Tong Wu; Zhihao Fan; Chen Lin; Nan Duan; Weizhu Chen", "journal": "", "ref_id": "b18", "title": "Text generation with diffusion language models: A pre-training approach with continuous paragraph denoise", "year": "2023" }, { "authors": "Jiatao Gu; James Bradbury; Caiming Xiong; O K Victor; Richard Li; Socher", "journal": "", "ref_id": "b19", "title": "Non-autoregressive neural machine translation", "year": "2017" }, { "authors": "Shashi Narayan; Shay B Cohen; Mirella Lapata", "journal": "", "ref_id": "b20", "title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization", "year": "2018-11" }, { "authors": "Karl Moritz Hermann; Tomas Kocisky; Edward Grefenstette; Lasse Espeholt; Will Kay; Mustafa Suleyman; Phil Blunsom", "journal": "", "ref_id": "b21", "title": "Teaching machines to read and comprehend", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b22", "title": "", "year": "2015" }, { "authors": "Taku Kudo; John Richardson", "journal": "", "ref_id": "b23", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018-11" }, { "authors": "Jason Lee; Elman Mansimov; Kyunghyun Cho", "journal": "", "ref_id": "b24", "title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement", "year": "2018" }, { "authors": "Marjan Ghazvininejad; Omer Levy; Yinhan Liu; Luke Zettlemoyer", "journal": "", "ref_id": "b25", "title": "Mask-predict: Parallel decoding of conditional masked language models", "year": "2019-11" }, { "authors": "Jiatao Gu; Changhan Wang; Junbo Zhao", "journal": "", "ref_id": "b26", "title": "Levenshtein transformer", "year": "2019" }, { "authors": "Mitchell Stern; William Chan; Jamie Kiros; Jakob Uszkoreit", "journal": "", "ref_id": "b27", "title": "Insertion transformer: Flexible sequence generation via insertion operations", "year": "2019" }, { "authors": "Jiatao Gu; Zhengdong Lu; Hang Li; O K Victor; Li", "journal": "The Association for Computer Linguistics", "ref_id": "b28", "title": "Incorporating copying mechanism in sequence-to-sequence learning", "year": "2016" }, { "authors": "Klaus Greff; Rupesh Kumar Srivastava; Jan Koutník; R Bas; Jürgen Steunebrink; Schmidhuber", "journal": "IEEE Trans. Neural Networks Learn. 
Syst", "ref_id": "b29", "title": "LSTM: A search space odyssey", "year": "2017" }, { "authors": "Wangchunshu Bill Yuchen Lin; Ming Zhou; Pei Shen; Chandra Zhou; Yejin Bhagavatula; Xiang Choi; Ren", "journal": "", "ref_id": "b30", "title": "CommonGen: A constrained text generation challenge for generative commonsense reasoning", "year": "2020-11" }, { "authors": "Shankar Kumar; William Byrne", "journal": "JOHNS HOPKINS UNIV BALTIMORE MD CENTER FOR LANGUAGE AND SPEECH PROCESSING (CLSP", "ref_id": "b31", "title": "Minimum bayes-risk decoding for statistical machine translation", "year": "2004" }, { "authors": "Dayiheng Liu; Yu Yan; Yeyun Gong; Weizhen Qi; Hang Zhang; Jian Jiao; Weizhu Chen; Jie Fu; Linjun Shou; Ming Gong", "journal": "", "ref_id": "b32", "title": "Glge: A new general language generation evaluation benchmark", "year": "2021" }, { "authors": "Raymond Hendy Susanto; Shamil Chollampatt; Liling Tan", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Lexically constrained neural machine translation with levenshtein transformer", "year": "2020" }, { "authors": "Yaoming Zhu; Sidi Lu; Lei Zheng; Jiaxian Guo; Weinan Zhang; Jun Wang; Yong Yu", "journal": "", "ref_id": "b34", "title": "Texygen: A benchmarking platform for text generation models", "year": "2018" }, { "authors": "Yisheng Xiao; Lijun Wu; Junliang Guo; Juntao Li; Min Zhang; Tao Qin; Tie-Yan Liu", "journal": "", "ref_id": "b35", "title": "A survey on non-autoregressive generation for neural machine translation and beyond", "year": "2022" }, { "authors": "K Ashwin; Michael Vijayakumar; Ramprasaath R Cogswell; Qing Selvaraju; Stefan Sun; David J Lee; Dhruv Crandall; Batra", "journal": "", "ref_id": "b36", "title": "Diverse beam search: Decoding diverse solutions from neural sequence models", "year": "2016" }, { "authors": "Clara Meister; Tiago Pimentel; Gian Wiher; Ryan Cotterell", "journal": "", "ref_id": "b37", "title": "Typical decoding for natural language generation", "year": "2022" }, { "authors": "Angela Fan; Mike Lewis; Yann N Dauphin", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Hierarchical neural story generation", "year": "2018" }, { "authors": "Ari Holtzman; Jan Buys; Li Du; Maxwell Forbes; Yejin Choi", "journal": "", "ref_id": "b39", "title": "The curious case of neural text degeneration", "year": "2020" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b40", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": " Openreview", "journal": "", "ref_id": "b41", "title": "", "year": "2021" }, { "authors": "Chenhe Dong; Yinghui Li; Haifan Gong; Miaoxin Chen; Junxin Li; Ying Shen; Min Yang", "journal": "ACM Comput. Surv", "ref_id": "b42", "title": "A survey of natural language generation", "year": "2023" }, { "authors": "Emiel Hoogeboom; Alexey A Gritsenko; Jasmijn Bastings; Ben Poole; Rianne Van Den; Tim Berg; Salimans", "journal": "", "ref_id": "b43", "title": "Autoregressive diffusion models", "year": "2022" }, { "authors": "Kashif Rasul; Calvin Seward; Ingmar Schuster; Roland Vollgraf", "journal": "PMLR", "ref_id": "b44", "title": "Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting", "year": "2021-07-24" }, { "authors": "Calvin Luo", "journal": "", "ref_id": "b45", "title": "Understanding diffusion models: A unified perspective", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 227.62, 274.99, 276.98, 15.09 ], "formula_id": "formula_0", "formula_text": "q(zt | z0; x) = N (zt; √ ᾱtz0, (1 -ᾱt)I),(1)" }, { "formula_coordinates": [ 3, 202.38, 348.92, 302.22, 8.37 ], "formula_id": "formula_1", "formula_text": "p θ (zt-1 | zt; x) = N zt-1; µ θ (zt, t; x), Σ θ (zt, t; x) ,(2)" }, { "formula_coordinates": [ 3, 180.33, 487.07, 251.34, 26.81 ], "formula_id": "formula_2", "formula_text": "L = E q ϕ (z 0:T |y) -log p θ (y | z0; x) + T t=1 ∥z0 -g θ (zt, t; x)∥ 2 ." }, { "formula_coordinates": [ 3, 239.23, 655.56, 261.88, 11.24 ], "formula_id": "formula_3", "formula_text": "zt = z 1 f (1,t) , z 2 f (2,t) , • • • , z N f (N,t) , (4" }, { "formula_coordinates": [ 3, 501.12, 658.11, 3.48, 7.77 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 3, 476.78, 670.64, 26.72, 12.94 ], "formula_id": "formula_5", "formula_text": "z n f (n,t)" }, { "formula_coordinates": [ 4, 235.59, 155.17, 269.01, 8.06 ], "formula_id": "formula_6", "formula_text": "v(n, ti, ti+1) = f (n, ti+1) -f (n, ti),(5)" }, { "formula_coordinates": [ 4, 227.65, 405.42, 276.95, 8.06 ], "formula_id": "formula_7", "formula_text": "(ns, ts) = clip(N -t, 0, N ), clip(t -N, 0, T )(6)" }, { "formula_coordinates": [ 4, 236.09, 443.06, 268.51, 19.74 ], "formula_id": "formula_8", "formula_text": "f (n, t) = clip te -ts ne -ns (n -ns) + ts, 0, T(7)" }, { "formula_coordinates": [ 4, 111.78, 500.76, 392.82, 41.73 ], "formula_id": "formula_9", "formula_text": "min θ -log p θ (y | z0; x) + N n=1 g θ (z n f (n,t) , f (n, t); x) -z0 2 (8) 8: until converged" }, { "formula_coordinates": [ 4, 152.67, 657.22, 351.93, 11.24 ], "formula_id": "formula_10", "formula_text": "g θ zt, t; x = g θ z 1 f (1,t) , f (1, t) , z 2 f (2,t) , f (2, t) , • • • , z N f (N,t) , f (N, t) ; x ,(9)" }, { "formula_coordinates": [ 5, 108, 238.57, 281.99, 9.65 ], "formula_id": "formula_11", "formula_text": "Since M ≪ T + N , t i+1 ≪ t i , implying that f (n, t i ) ≪ f (n, t i+1 )." }, { "formula_coordinates": [ 5, 108, 249.48, 396, 20.56 ], "formula_id": "formula_12", "formula_text": "t i ) to f (n, t i+1 ))." }, { "formula_coordinates": [ 5, 194.73, 403.4, 264.11, 12.31 ], "formula_id": "formula_13", "formula_text": "zt i+1 = z 1 f (1,t i+1 ) , z 2 f (2,t i+1 ) , • • • , z N f (N,t i+1 ) from p θ (zt i+1 | zt i ;" }, { "formula_coordinates": [ 5, 153.08, 432.56, 351.52, 45.16 ], "formula_id": "formula_14", "formula_text": "p θ (zt i+1 | zt i ; x) = N n=1 p θ z n f (n,t i+1 ) | z n f (n,t i ) ; x (10) p θ z n f (n,t i+1 ) | z n f (n,t i ) ; x ∼ N z n f (n,t i+1 ) ; λz n f (n,t i ) + µg θ (z n f (n,t) , f (n, t); x), σI(11" }, { "formula_coordinates": [ 5, 108, 541.74, 401.86, 40.86 ], "formula_id": "formula_15", "formula_text": "λ = ᾱf(n,t i ) ᾱf(n,t i+1 ) (1 -ᾱf(n,t i+1 ) ) 1 -ᾱf(n,t i ) , µ = ᾱf(n,t i+1 ) (1 -ᾱf(n,t i ) ᾱf(n,t i+1 ) ) 1 -ᾱf(n,t i ) , σ = (1 -α f (n,t i ) )(1 -ᾱf(n,t i+1 ) ) 1 -ᾱf(n,t i )(12)" }, { "formula_coordinates": [ 7, 122.63, 377.95, 364.5, 8.18 ], "formula_id": "formula_16", "formula_text": "Methods Pattern ROUGE-1 ROUGE-2 ROUGE-L" }, { "formula_coordinates": [ 16, 489.66, 407.31, 14.94, 7.77 ], "formula_id": "formula_17", "formula_text": "(13)" }, { "formula_coordinates": [ 16, 210.79, 698.34, 293.81, 26.81 ], "formula_id": "formula_18", "formula_text": "q(zt i+1 | zt i , z0) = N n=1 q z n f (n,t i+1 ) | z n f (n,t i ) , z n 0 (14)" } ]
10.18653/v1/2022.acl-long.589
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b3", "b21", "b12", "b23", "b34", "b2", "b30", "b19", "b33", "b45", "b21", "b44", "b6", "b0", "b5", "b12", "b27", "b1", "b43", "b7", "b40", "b8", "b11", "b39", "b38", "b29", "b4", "b20", "b32", "b12", "b19", "b15" ], "table_ref": [], "text": "Documents are basic units of the organization of natural language (Buckland, 1997). Understanding the structures and the semantics of documents is the foundation for understanding news articles (Kiesel et al., 2019), scientific papers (Dasigi et al., 2021), government reports (Huang et al., 2021b) , stories (Kočiskỳ et al., 2018), etc. Evaluating how a machine intelligence system can read, analyze and generate documents is an important part of evaluating its natural language abilities, which has long been a critical direction in NLP field.\nWhile standard benchmarks like GLUE (Shanahan et al., 2016) and SuperGLUE (Wang et al., 2019) have become a critical part of NLP community, they primarily focused on short utterances like sentences or paragraphs. However, documents are much more than bag-of-sentences, which usually focus on the central theme (Benamara et al., 2017), with underlying structure bound by complex linguistic elements (Parsing, 2009) and dependent knowledge dispersed across the whole text (Huang et al., 2021a). Therefore, these benchmarks cannot be used to evaluate document understanding due to its unique challenges: First, documents usually have lengthy content, i.e., they usually are much longer than sentences/paragraphs thus it is difficult to process them due to the computational memory/runtime limitation of current NN models. Second, documents have underlying structures, which play a critical role during understanding. For example, mining arguments from a document needs to model both the local coherence between sentences and the global interactions between claims and arguments, which cannot be accomplished by only exploiting sentence-or paragraph-level information. Third, the knowledge in a document is usually dispersed beyond sentences/paragraphs, which makes it necessary to model and explore document-arXiv:2305.09520v1 [cs.CL] 16 May 2023 level context. For example, long-distance withindocument coreference resolution needs to integrate global information to find all expressions that refer to the same entity distributed in the full text.\nRecently, an increasing number of researches have paid attention to the evaluation of document understanding. Mostafazadeh Davani et al. (2021) proposes the Long Range Arena (LRA), which contains two synthetic text-related tasks and evaluates model quality of understanding multi-modal long contexts. Hudson and Moubayed (2022) proposes MuLD, which concentrates on merged sequences over 10,000 tokens. SCROLLS (Shaham et al., 2022) is a recently proposed benchmark that contains NLI, QA and summarization tasks and focuses on long language sequences. However, these benchmarks mainly focus on the lengthy content challenge, while ignoring other important challenges. As a result, almost all tasks in these benchmarks can be resolved via an retrieval-answering paradigm, i.e., retrieving a very limited number of sentences that contains critical information and then resolving the task. 
Furthermore, these benchmarks only cover limited tasks, which makes them unable to thoroughly evaluate the document understanding abilities of models.\nTo systematically evaluate document language understanding abilities, this paper proposes Document Language Understanding Evaluation -DLUE, a new task suite which covers a wide-range of tasks in various forms, different domains and document genres. Figure 1 shows the overview of DLUE. Specifically, we summarize 4 kinds of document understanding abilities, including 1) Document Classification, which evaluates whether a model can understand the overall semantics of a document, e.g., its topic (Zhang et al., 2015) and standpoint (Kiesel et al., 2019); 2) Document Structural Analysis, which evaluates whether a model can analyze and leverage the underlying structure of a document, e.g., its discourse structure (Zeldes, 2017) and argument structure (Cheng et al., 2020); 3) Document Information Extraction, which evaluates whether a model can recognize and aggregate associated information spanning cross whole document, e.g., long-distance within-document coreference (Bamman et al., 2020); 4) Document Transcription, which evaluates whether a model can capture and transcript important information of a document, e.g., summarization (Huang et al., 2021b;Chen et al., 2022) and abstractive QA (Dasigi et al., 2021). Then we collect 10 datasets and align them with the above abilities. Datasets of the same group are converted to a unified format. In this way, DLUE provides a comprehensive benchmark for document language understanding, which enables the research community to fairly compare and measure the progress of this field.\nTo better understand the challenges of document language understanding and analyze the performance of current approaches, we conduct experiments on DLUE using several state-of-theart document understanding models, including 1) Memory-based approaches, which includes XLNet (Yang et al., 2019); 2) Pattern-based approaches, which includes Longformer (Beltagy et al., 2020), BigBird (Zaheer et al., 2020), Sparse transformer (Child et al., 2019); 3) Lowrank/Kernel-based approaches, which includes Linformer (Wang et al., 2020) and Performer (Choromanski et al., 2020). Experiments show that document understanding is still far from being solved due to lengthy content, complicated underlying structure and dispersed knowledge, and currently there is no neural architecture that dominates all tasks, raising requirements for a universal document understanding architecture.\nGenerally, the contributions of this paper are: 2 Background NLP Benchmarks The development of natural language understanding (NLU) evaluation benchmarks has helped drive the progress of pretraining and transfer learning in NLP. Benchmarks proposed in the early stage mostly aim at general tasks, such as SentEval (Conneau and Kiela, 2018) for universal sentence representations, De-caNLP (McCann et al., 2018) for ten diversified NLP tasks cast as a general question-answering format, GLUE (Wang et al., 2018) for NLU in the English language and SuperGLUE (Wang et al., 2019) as a harder counterpart of GLUE. 
Besides, benchmarks for more specific tasks have also been proposed, such as DialoGLUE (Mehri et al., 2020) for task-oriented dialogue, DiscoEval (Chen et al., 2019) for discourse-aware sentence representations, GLUECoS (Khanuja et al., 2020) for code-switched NLP, and KILT (Petroni et al., 2020) for knowledge-intensive language tasks.
The above benchmarks mostly focus on sentences or short texts. However, documents are also very common in complex tasks and real-world textual data. Single-task benchmarks for document understanding mostly use summarization tasks (Cohan et al., 2018a) or QA tasks (Dasigi et al., 2021).
Due to the single nature of the task and data distribution, it is difficult for these benchmarks to comprehensively evaluate models' ability to model documents. There are also some multi-task benchmarks for document understanding, such as the Long Range Arena (LRA) (Tay et al., 2020a), SCROLLS (Shaham et al., 2022), MuLD (Hudson and Moubayed, 2022) and LOT (Guan et al., 2022). The long inputs of LRA and MuLD are either automatically generated or artificially lengthened. Tasks in SCROLLS all focus on a few sentences or paragraphs and can be solved by retrieval-based or chunk-based approaches. LOT only focuses on Chinese long text understanding and generation. In this paper, compared with existing benchmarks that focus on long sequences instead of documents, we focus on the challenges posed by document understanding, including lengthy content, complicated underlying structure and dispersed knowledge." }, { "figure_ref": [], "heading": "Document Understanding Models", "publication_ref": [ "b27", "b1", "b43", "b7", "b22", "b40", "b8" ], "table_ref": [], "text": "There have been numerous attempts to improve both the memory footprint and the computational cost of transformers, thus allowing the use of longer inputs. A natural way is to connect blocks via recurrence, as in XLNet (Yang et al., 2019). Another way of tackling the high complexity of full attention is to sparsify the attention matrix. Longformer (Beltagy et al., 2020), BigBird (Zaheer et al., 2020) and Sparse Transformer (Child et al., 2019) restrict the attention field of view to fixed, predefined patterns such as local windows and block patterns of fixed strides, while Reformer (Kitaev et al., 2020) uses learnable patterns, an extension of the fixed, pre-determined ones. Besides, low-rank approximations or kernelization of the self-attention matrix can be used to reduce the complexity; Linformer (Wang et al., 2020) and Performer (Choromanski et al., 2020) are representative low-rank/kernel-based transformers. In this paper, we conduct experiments on the above three classes of document understanding architectures to explore the challenges posed by document understanding." }, { "figure_ref": [], "heading": "DLUE: Document Language Understanding Evaluation Benchmark", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "This section describes the DLUE benchmark, which is used to evaluate the 4 representative abilities of document understanding. Specifically, DLUE is centered on 10 English document understanding datasets, which cover a wide range of tasks in various forms, different domains and document genres.
Table 1 provides an overview of the datasets included in the benchmark. In the following, we describe the details of DLUE."
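To make the pattern-based sparsification discussed above concrete, the sketch below builds the kind of fixed attention mask used by Longformer-style models: a sliding local window plus a few global positions. It is a schematic illustration only (a plain boolean mask, not the optimized kernels of the actual libraries), and the function name and parameters are ours.

```python
import numpy as np

def sparse_attention_mask(seq_len, window, global_positions) -> np.ndarray:
    """Boolean mask: True where attention is allowed (local window + global tokens)."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):                          # sliding local window of width 2*window+1
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True
    for g in global_positions:                        # global tokens (e.g. a [CLS] at position 0)
        mask[g, :] = True                             # global token attends to everything
        mask[:, g] = True                             # and everything attends to it
    return mask

mask = sparse_attention_mask(seq_len=4096, window=128, global_positions=[0])
print(mask.sum() / mask.size)                         # fraction of attended pairs, well below 1.0
```

With a window of 128 over a 4,096-token document, only a small fraction of the full N × N attention matrix is ever computed, which is what lets such models read whole documents under a fixed memory budget.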
}, { "figure_ref": [ "fig_0" ], "heading": "Overview", "publication_ref": [ "b13" ], "table_ref": [ "tab_1" ], "text": "As described above, a document understanding system should resolve the lengthy content, complicated underlying structure, and dispersed knowledge challenges. To effectively evaluate the above abilities and challenges, DLUE selects datasets according to the following several desiderata: First, the documents in the benchmark should have lengthy content. We select datasets with an average token number of more than 512, considering the fact that most existing state-of-the-art NLP systems (e.g., pretrained models) are limited to 512 to 1024 tokens (Devlin et al., 2018). Second, the tasks must be solved using the dispersed knowledge in a document. Therefore, we don't select a document-level dataset if most of them can be resolved through chunk-based or retrieval-based approaches. Third, the documents in the benchmark must be natural, such as literature works, scientific articles, government reports and so on. Synthesized documents don't have structure information and relation links among different sections. Fourth, the selected tasks should be beyond the scope of current state-of-theart systems, but solvable by most college-educated English speakers. Based on the above desiderata and with the permission of licenses, we collect as diverse datasets as possible to increase the coverage on capabilities. The overview of DLUE is shown in Figure 1, and their statistics are shown in Table 1. In the following, we describe all datasets according to the their target ability." }, { "figure_ref": [], "heading": "Document Classification", "publication_ref": [ "b2", "b21", "b24" ], "table_ref": [], "text": "A document usually narrow focus on a single central theme (Benamara et al., 2017). We aim to evaluate document classification ability, specifically the ability to understand the overall semantics of documents in this section. To do this, we select two datasets that rely on full-text to make judgements and reformulate every dataset as document classification tasks. Specifically, given single sequence s or sequence pairs (s 1 , s 2 ), the goal is to classify the input into a single label l.\nHyperpartisan (Kiesel et al., 2019) is a document classification dataset which aims to automatically detect news that takes an left-wing or right-wing standpoint. A few words or sentences are not enough to determine the political leanings of news, which are toned by the full text. This task provides two datasets, one is labeled manually and the other is labeled in a semi-automated manner via distant supervision at the publisher level. We use the first one to pursue higher evaluation accuracy and keep the same train/test split as the original work.\nContractNLI (Koreeda and Manning, 2021) is a natural language inference dataset in the legal domain, with non-disclosure agreements (NDAs) as premises and legal statements as hypothesizes. NDAs are collected from Internet search engines and Electronic Data Gathering, Analysis, and Retrieval system (EDGAR). To correctly predict whether the hypothesis is entailed, neutral, or contradictory from the contract, we need to refer to not necessarily continuous sentences across the contract with hundreds of tokens. The dataset contains 607 contracts and 17 unique hypothesizes, which we combine to produce 10319 instances." 
}, { "figure_ref": [], "heading": "Document Structure Analysis", "publication_ref": [ "b41", "b6", "b44", "b25" ], "table_ref": [], "text": "A document is composed of structured groups of sentences, paragraphs and sections. Analyzing document structure can be very useful in indexing and organizing the information contained in the document. Tasks in this section aim to evaluate document structure analysis ability, specifically the ability to capture and leverage structure information. We select three datasets as follows and reformulate every dataset as sentence-level sequence labeling tasks. Specifically, given a document d = {s 1 , s 2 , ..., s n }, the goal is to output a tag sequence t = {t 1 , t 2 , ..., t n } for sentences.\nECOM (Xu et al., 2022) is an event-centric opinion mining corpus in which a system takes in an event descriptor and related news articles to extract event-related opinion segments from articles. An opinion segment is composed of continuous sentences targeting at the same argument. We select the dataset to evaluate the ability to utilize local structure information, which is important for identifying opinion boundaries unambiguously.\nRR (Cheng et al., 2020) is an argument mining corpus for extracting arguments from reviews and rebuttals, which are collected from ICLR 2013 -2020 (except for 2015 that is unavailable) from openreview.net. Peer reviews and rebuttals on scientific works are a data source of rich structures and long passages. We think it's a suitable dataset because experiments in the original paper show that the internal structure information is important for this task.\nGUM (Zeldes, 2017) is a multi-layer corpus collected and edited via classroom annotation. We focus on its Rhetorical Structure Theory analysis annotation. We consider the task of predicting annotated discourse relations among sentences, as it's the most direct way to probe structure knowledge. The problem is framed to a sequence labeling task as Koto et al. (2021), where the goal is to iteratively find a segmentation boundary to split a sequence of discourse units into two sub-sequences of discourse units." }, { "figure_ref": [], "heading": "Document Information Extraction", "publication_ref": [ "b0", "b23" ], "table_ref": [], "text": "Dependent knowledge in a document is usually dispersed across the full text, which plays an important role in the transmission of the intended meaning. Tasks in this section aim to evaluate document information extraction ability, specifically the ability to identify long-distance related mentions and relations. We select two datasets as follows and reformulate every dataset as multi-answer question answering tasks. Specifically, given a document d and a question q, the goal is to extract correct answer spans a = {a 1 , a 2 , ..., a n } from d for q.\nLitBank (Bamman et al., 2020) is a coreference resolution dataset on English literature works. The documents in LitBank are several times longer than those in other benchmark datasets (e.g. 463.7 tokens for OntoNotes) and thus are abundant with long-distance within-document coreference. For each coreference link, we transform the sentence of one mention into a question, take all mentions as answers, and then can get 7214 question-answer pairs.\nNarrativeQA (Kočiskỳ et al., 2018) is a reading comprehension dataset on books and movie scripts. The questions in NarrativeQA are written based on summaries. 
Therefore, both understanding and answering the questions require recognizing long-distance associated information from several parts or a larger span of the context document." }, { "figure_ref": [], "heading": "Document Transcription", "publication_ref": [ "b5", "b12" ], "table_ref": [], "text": "Tasks in this section aim to evaluate document transcription ability, specifically the ability to capture and transcribe the key information of documents. We select three datasets that require contextualizing across different sections and reformulate each of them as a sequence-to-sequence task. Specifically, given a sequence s, the goal is to output a concise and fluent new sequence s_N.
GOVREPORT (Huang et al., 2021b) is a summarization dataset of long reports on various national policy issues, paired with expert-written summaries, published by the U.S. Government Accountability Office (GAO) and the Congressional Research Service (CRS). Documents and summaries in GOVREPORT are significantly longer than in prior datasets, e.g., 1.5 times longer than Arxiv (Cohan et al., 2018b). Moreover, new salient bigrams are steadily added as more content is consumed, which indicates that information is spread throughout the documents in this dataset.
SummScreen (Chen et al., 2022) is a summarization dataset comprised of pairs of TV series transcripts and human-written recaps. Different from official documentation like GOVREPORT (Huang et al., 2021b), the language in this dataset is more informal and the structure is less clear. Models need to combine the whole document to understand plots that are often expressed indirectly in character dialogues and scattered across the entirety of the transcript.
Qasper (Dasigi et al., 2021) is a QA dataset in the research domain focusing on entire papers, in which both questions and answers are hand-written by NLP practitioners. Over half of the questions require multiple paragraphs as evidence to answer. We prepend the query to the document, using two newlines as a natural separator, to construct the input sequence." }, { "figure_ref": [], "heading": "Experiments and Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Benchmarking Architectures", "publication_ref": [ "b27", "b1", "b43", "b7", "b40", "b8" ], "table_ref": [], "text": "This section describes the models and architectures we evaluate on DLUE. Following the general taxonomy of efficient transformer models (Tay et al., 2020b), we conduct experiments on six well-established transformer variants to represent a diverse cross-section of document understanding models. Specifically, aside from the standard vanilla transformer, we compare three approaches:
• Memory-based models, including XLNet (Yang et al., 2019).
• Pattern-based models, including Longformer (Beltagy et al., 2020), BigBird (Zaheer et al., 2020) and Sparse Transformer (Child et al., 2019).
• Low-rank/Kernel-based models, including Linformer (Wang et al., 2020) and Performer (Choromanski et al., 2020).
For all task formulations described in Section 3, we implement unified model architectures. For document classification tasks, we use the special classification token ([CLS]) for prediction. Specifically, we concatenate a [CLS] token in front of each sequence and then input the sequence into the encoder. The final hidden vector of the [CLS] token is taken as the aggregate representation and passed into a two-layered MLP with ReLU activations for classification.
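A minimal PyTorch sketch of the classification architecture just described is given below: the final hidden vector of the prepended [CLS] token feeds a two-layered MLP with ReLU. It assumes a HuggingFace-style encoder that returns `last_hidden_state`; the module and argument names are illustrative rather than the DLUE reference implementation.

```python
import torch
import torch.nn as nn

class DocClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_size: int, num_labels: int):
        super().__init__()
        self.encoder = encoder                      # any of the benchmarked long-document encoders
        self.head = nn.Sequential(                  # two-layered MLP with ReLU activations
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_labels),
        )

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        cls_vec = hidden[:, 0]                      # [CLS] is assumed to sit at position 0
        return self.head(cls_vec)                   # logits over document labels
```

For sequence-pair tasks such as ContractNLI, the two sequences are simply concatenated (with the model's separator token) before being fed to the same classifier.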
The document structure analysis tasks are reformulated into sentence-level sequence labeling tasks. We use the classical Transformer-CRF architecture as in named entity recognition (Devlin et al., 2018). Specifically, we insert external [CLS] tokens at the start of each sentence, and each [CLS] symbol collects the features of the sentence preceding it (Liu and Lapata, 2019). The sentence representations are then input into a Conditional Random Field (Lafferty et al., 2001) to obtain the sentence-level labels. The document information extraction tasks are reformulated into multi-span question answering tasks. Following Hu et al. (2019), we expand the traditional MRC architecture by adding a span-number prediction and search component. For transcription tasks, we use the basic encoder-decoder architecture (Vaswani et al., 2017)." }, { "figure_ref": [], "heading": "Implementations", "publication_ref": [], "table_ref": [], "text": "Our models are implemented with the PyTorch framework. For transformers with public pretrained models, we use the base version, including XLNet-base, Longformer-base and BigBird-base. The learning rate is 1e-5 for pretrained models and 1e-3 for classifier heads. For other models, we follow the same setup as Long Range Arena (Tay et al., 2020a), a widely recognized benchmark for efficient transformers, to minimize the influence of hyper-parameter settings. These transformer models are parameterized with the same number of layers, heads and hidden dimensions, namely 6 layers, 8 heads, 512 hidden dimensions and d = 2048 for the position-wise FFN layers. We use Adam with warmup.
All models are trained for 10 epochs. Across datasets and models, we run three repetitions with different random seeds and report averaged scores." }, { "figure_ref": [], "heading": "Overall Results", "publication_ref": [ "b41", "b12" ], "table_ref": [ "tab_2", "tab_2", "tab_2", "tab_2", "tab_1", "tab_2" ], "text": "Table 2 shows the overall results on DLUE. From this table, we can see that:
1) Document understanding is far from being solved. From Table 2, we can see that the best benchmark system only achieves an average score of 44.5. While it is difficult to establish an accurate human performance ceiling on DLUE, several indicators show that the gap between human and model performance is large. For example, human agreement on ECOM was measured at 80.8% F1 (Xu et al., 2022), much higher than our best baseline of 39.1% F1. Likewise, Dasigi et al. (2021) study a subset of Qasper that has multiple annotated answers, and find their overlap to be 60.9% F1, more than double our best baseline. This indicates that contemporary off-the-shelf models struggle with documents, challenging future work to make progress on DLUE.
2) Different tasks have different advantageous architectures, raising the need for a universal document understanding architecture that can dominate all tasks. From Table 2, we can see that different model architectures seem to be good at processing different tasks. Specifically, XLNet ranks first on the structure analysis tasks, while Longformer and BigBird perform better on the other tasks. Linformer and Performer do well on document classification tasks. This suggests that recurrence-based models may have advantages on hierarchically structured data and pattern-based models may be more effective on flat data.
Contrary to the other tasks, fast low-rank/kernel-based models do better on document classification tasks. No architecture dominates all tasks, which indicates that more universal models are needed.

3) Lengthy content is the critical, but not the only, challenge for document understanding. From Table 2 and Table 1, we can see that models perform poorly on overly long inputs; for example, the best F1 score on the NarrativeQA dataset, whose average input length is 51,790 words, is only 18.5. However, even for the structure analysis and extraction tasks where documents can be taken in completely by long-range transformer models, the model performances still fail to meet expectations. Obviously, there exist other challenges for document understanding apart from lengthy input, such as complex structures and dispersed knowledge.

4) It is critical to take global context into consideration. From Table 2, we can see that long-range transformers that can take in more context achieve higher scores than the vanilla transformer on most datasets. This demonstrates that longer contexts are necessary to understand documents. Document-level tasks cannot be solved in the same way as short-text tasks." }, { "figure_ref": [], "heading": "Computational Efficiency", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "The last column of Table 2 shows the inference speeds of the models. For a fair comparison, we use the standard test datasets of DLUE as the testbed. Based on our implementation, the low-rank/kernel-based models are the fastest. Performer is the fastest model with 6.7 steps per second, which is close to the inference speed of Linformer with 6.4 steps per second. The results are consistent with model complexity, which has a significant impact on inference speed. The low-rank/kernel-based models decompose the N × N self-attention matrix into a lower-dimensional representation and thus usually have an O(N) time complexity. Pattern-based models sparsify the attention matrix according to predefined or learnable patterns, and their time complexity is usually between O(N) and O(N^2). Recurrence-based models connect multiple segments and blocks via recurrence, and the representative XLNet has an O(N^2) time complexity." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Effect Of Document Length", "publication_ref": [], "table_ref": [], "text": "To investigate how the document length impacts the performance, we cluster the documents of each task into buckets according to their document lengths and run the evaluation on each bucket. The breakdown analysis is shown in Figure 2.

On the whole, understanding longer documents poses more challenges. We notice that the performance on most datasets decreases when document lengths increase, with the ContractNLI dataset as an exception. This may be because there exists a label bias related to document length in the ContractNLI dataset. We find that a longer contract tends to entail a hypothesis, with 34% probability for documents shorter than 1000 words and 76% probability for documents longer than 5000 words.

The performance of pattern-based models seems to be more stable when the document lengths increase. We can see that Longformer and BigBird obtain a greater advantage when documents get longer. We think there are two reasons. First, the global token mechanism in Longformer and BigBird could help models focus on important information and be less distracted by noise in long contexts.
Second, the maximum input length of XLNet is smaller due to the segment-level recurrence mechanism.

The performance is relatively stable on datasets where document lengths far exceed the input limits. Figure 2(f) shows the performance on the NarrativeQA dataset. When the document length exceeds 20,000 tokens, the result remains around 18 F1 for Longformer and BigBird, and 15 F1 for XLNet. This indicates that the ability of efficient transformers to understand long documents is limited." }, { "figure_ref": [ "fig_3" ], "heading": "Effect of Dispersed Knowledge Exploitation", "publication_ref": [ "b14", "b31" ], "table_ref": [ "tab_3" ], "text": "Our goal in this section is to validate that recognizing and aggregating dispersed knowledge is crucial to document understanding, and that there is still much room for current models to improve. We analyze from two perspectives: 1) the effect of mention distance, which can be viewed as a measure of dispersion; 2) a performance comparison between long-range transformers and short-text models without global information.

Effect of Mention Distance To quantify the impact of dispersed knowledge on document understanding, we analyze the performance of coreference resolution with different mention distances on the LitBank dataset. From Figure 3, we can see that the performance of all models decreases sharply when the mention distances increase. This indicates that long-distance coreference is more challenging than within-sentence coreference, which is intuitive because it places higher demands on the ability to capture and aggregate dispersed information. We can also notice the huge performance gap between short and long mention distances, which indicates there is still much room for further improvements in models' ability to integrate global information.

Comparison with Short-text Models To verify the importance of global information to document understanding, we compare the performance of long-range transformers with two existing short-text models, CogLTX (Ding et al., 2020) and ToBERT (Pappagari et al., 2019). CogLTX jointly trains two BERT models to select key sentences from documents. ToBERT divides documents into smaller chunks and uses a transformer layer over BERT-based chunk representations. We select the Hyperpartisan and Qasper datasets, whose tasks can be solved by CogLTX and ToBERT, and in which documents can be completely taken in by long-range transformers, to eliminate interference caused by extra context.

From Table 3, we can see that long-range transformers do have advantages over IR-based and chunking-based methods. Intuitively, the reason is that the performance of long-range transformers benefits from contextual representations with a broader view of the document. These findings emphasize the need for future studies in document understanding to integrate global information. The results also indicate that DLUE effectively covers the assessment of the ability to recognize and aggregate dispersed knowledge across the whole text." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We propose a new benchmark DLUE that puts the spotlight on documents and the challenges of their lengthy content, complex underlying structure and dispersed knowledge. DLUE covers diverse document-level tasks to evaluate four basic abilities required by document understanding, including document classification, document structure analysis, document information extraction and document transcription.
Based on DLUE, we conduct an extensive side-by-side comparison of three families of document understanding architectures. Experiments demonstrate that document understanding is far from being solved, and that there is a need for a universal architecture that can dominate all tasks.

Limitations DLUE currently focuses on plain text documents, while the documents one encounters, e.g., scientific articles, company announcements, or even personal notes, may also contain multi-modal information and have non-sequential structure. In future work, we intend to integrate such multi-modal and complex-structure information into our document understanding benchmark.

Besides, due to the huge cost of computing resources, we did not pretrain transformer models specialized for document understanding, but directly used the public pretrained versions or trained from scratch. We believe a unified pretraining that also incorporates document-related tasks will further enhance understanding performance." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "In consideration of ethical concerns, we provide the following detailed description:

1. We believe that this work is beneficial for developing universal document understanding architectures, which can help people quickly get information from business documents, legal statements and so on, saving time and money.

2. We standardize and put together ten datasets, which are all already publicly available under CC-BY-(NC-)SA-4.0 licenses. For all the datasets, we have referenced the original work and encourage DLUE users to do so.

3. All DLUE benchmark datasets have low ethical risks and do not expose any sensitive or personally identifiable information." } ]
Understanding documents is central to many real-world tasks but remains a challenging topic. Unfortunately, there is no well-established consensus on how to comprehensively evaluate document understanding abilities, which significantly hinders fair comparison and the measurement of progress in the field. To benchmark document understanding research, this paper summarizes four representative abilities, i.e., document classification, document structure analysis, document information extraction, and document transcription. Under this new evaluation framework, we propose Document Language Understanding Evaluation -DLUE, a new task suite which covers a wide range of tasks in various forms, domains and document genres. We also systematically evaluate six well-established transformer models on DLUE, and find that, due to the lengthy content, complicated underlying structure and dispersed knowledge, document understanding is still far from being solved, and that currently no neural architecture dominates all tasks, raising the need for a universal document understanding architecture.
DLUE: Benchmarking Document Language Understanding
[ { "figure_caption": "Figure 1 :1Figure 1: Overview of DLUE, which covers a widerange of tasks and datasets in diverse domains to evaluate four representative document understanding abilities.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Performance (y axis) on DLUE datasets with different document lengths (x axis).", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance on LitBank dataset with different mention distances. Mention distances can reflect the degree of knowledge dispersion in a document.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "simply sparsify the attention matrix by limiting the field Task descriptions and statistics of DLUE.", "figure_data": "CorpusTaskDomainMetricAvg #Words Input Output#ExamplesClassificationHyperpartisan Classifi.Newsacc.58811273ContractNLINLILegalacc.1708110319Structure AnalysisECOMOPNewsF1488202000RRAMScienceF1793474764GUMDPMultiF1939119175ExtractionLitBankCoref.LiteratureF121157.47214NarrativeQAQALiteratureF1517904.671187TranscriptionGovReportSumm. Government ROUGE 7897492.719402SummScreen Summ.TVROUGE 5639100.04348QasperQAScienceF1367111.55692", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Overall experimental results on DLUE. Best model is in boldface. \"-\" denotes the model can't handle this task.", "figure_data": "Model#param pretrainClassification Hyper CNLI ECOM RR GUM LitBank NrQA SummScr GovRep Qasper Structure Analysis Extraction TranscriptionAvgInference Speed (steps per sec)Vanilla TransformerBERT110Myes80.172.337.357.3-34.114.5-----Memory-basedXLNet110Myes81.480.239.174.0 65.478.115.218.922.724.242.80.95Pattern-basedLongformer148Myes83.871.637.972.9 58.479.118.320.925.726.443.22.0BigBird127Myes85.982.837.071.1 67.677.818.220.627.326.244.51.6Sparse Trans.46Mno64.667.721.945.6 47.556.711.121.417.617.631.73.8Low-rank/Kernel-basedLinformer33Mno67.165.522.644.3 53.463.812.418.925.817.532.16.4Performer51Mno67.969.518.648.6 56.851.610.120.115.621.533.06.7", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison with Short-text Models To verify Performance comparison between long-range transformers and short-text models.", "figure_data": "ModelHyperpartisan QasperXLNet81.424.2Longformer83.826.4BigBird85.926.2CogLTX82.918.9ToBERT78.416.6", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
Ruoxi Xu; Hongyu Lin; Xinyan Guan; Xianpei Han; Yingfei Sun; Le Sun
[ { "authors": "David Bamman; Olivia Lewke; Anya Mansoor", "journal": "European Language Resources Association", "ref_id": "b0", "title": "An annotated dataset of coreference in English literature", "year": "2020" }, { "authors": "Iz Beltagy; Matthew E Peters; Arman Cohan", "journal": "", "ref_id": "b1", "title": "Longformer: The long-document transformer", "year": "2020" }, { "authors": "Farah Benamara; Maite Taboada; Yannick Mathieu", "journal": "Computational Linguistics", "ref_id": "b2", "title": "Evaluative language beyond bags of words: Linguistic insights and computational applications", "year": "2017" }, { "authors": "K Michael; Buckland", "journal": "Journal of the American society for information science", "ref_id": "b3", "title": "What is a \"document", "year": "1997" }, { "authors": "Mingda Chen; Zewei Chu; Kevin Gimpel", "journal": "", "ref_id": "b4", "title": "Evaluation benchmarks and learning criteria for discourse-aware sentence representations", "year": "2019" }, { "authors": "Mingda Chen; Zewei Chu; Sam Wiseman; Kevin Gimpel", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "SummScreen: A dataset for abstractive screenplay summarization", "year": "2022" }, { "authors": "Liying Cheng; Lidong Bing; Qian Yu; Wei Lu; Luo Si", "journal": "", "ref_id": "b6", "title": "Ape: argument pair extraction from peer review and rebuttal via multi-task learning", "year": "2020" }, { "authors": "Rewon Child; Scott Gray; Alec Radford; Ilya Sutskever", "journal": "", "ref_id": "b7", "title": "Generating long sequences with sparse transformers", "year": "2019" }, { "authors": "Krzysztof Choromanski; Valerii Likhosherstov; David Dohan; Xingyou Song; Andreea Gane; Tamas Sarlos; Peter Hawkins; Jared Davis; Afroz Mohiuddin; Lukasz Kaiser", "journal": "", "ref_id": "b8", "title": "Rethinking attention with performers", "year": "2020" }, { "authors": "Arman Cohan; Franck Dernoncourt; Soon Doo; Trung Kim; Seokhwan Bui; Walter Kim; Nazli Chang; Goharian", "journal": "", "ref_id": "b9", "title": "A discourse-aware attention model for abstractive summarization of long documents", "year": "2018" }, { "authors": "Arman Cohan; Franck Dernoncourt; Soon Doo; Trung Kim; Seokhwan Bui; Walter Kim; Nazli Chang; Goharian", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "A discourse-aware attention model for abstractive summarization of long documents", "year": "2018" }, { "authors": "Alexis Conneau; Douwe Kiela", "journal": "", "ref_id": "b11", "title": "Senteval: An evaluation toolkit for universal sentence representations", "year": "2018" }, { "authors": "Pradeep Dasigi; Kyle Lo; Iz Beltagy; Arman Cohan; Noah A Smith; Matt Gardner", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "A dataset of information-seeking questions and answers anchored in research papers", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b13", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Ming Ding; Chang Zhou; Hongxia Yang; Jie Tang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b14", "title": "Cogltx: Applying bert to long texts", "year": "2020" }, { "authors": "Jian Guan; Zhuoer Feng; Yamei Chen; Ruilin He; Xiaoxi Mao; Changjie Fan; Minlie Huang", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b15", "title": "Lot: 
A story-centric benchmark for evaluating chinese long text understanding and generation", "year": "2022" }, { "authors": "Minghao Hu; Yuxing Peng; Zhen Huang; Dongsheng Li", "journal": "", "ref_id": "b16", "title": "A multi-type multi-span network for reading comprehension that requires discrete reasoning", "year": "2019" }, { "authors": "Kung-Hsiang Huang; Sam Tang; Nanyun Peng", "journal": "", "ref_id": "b17", "title": "Document-level entity-based extraction as template generation", "year": "2021" }, { "authors": "Luyang Huang; Shuyang Cao; Nikolaus Parulian; Ji Heng; Lu Wang", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Efficient attentions for long document summarization", "year": "2021" }, { "authors": "Thomas Hudson; Noura Al; Moubayed ", "journal": "", "ref_id": "b19", "title": "Muld: The multitask long document benchmark", "year": "2022" }, { "authors": "Simran Khanuja; Sandipan Dandapat; Anirudh Srinivasan; Sunayana Sitaram; Monojit Choudhury", "journal": "", "ref_id": "b20", "title": "Gluecos: An evaluation benchmark for codeswitched nlp", "year": "2020" }, { "authors": "Johannes Kiesel; Maria Mestre; Rishabh Shukla; Emmanuel Vincent; Payam Adineh; David Corney; Benno Stein; Martin Potthast", "journal": "", "ref_id": "b21", "title": "Semeval-2019 task 4: Hyperpartisan news detection", "year": "2019" }, { "authors": "Nikita Kitaev; Łukasz Kaiser; Anselm Levskaya", "journal": "", "ref_id": "b22", "title": "Reformer: The efficient transformer", "year": "2020" }, { "authors": "Tomáš Kočiskỳ; Jonathan Schwarz; Phil Blunsom; Chris Dyer; Karl Moritz Hermann; Gábor Melis; Edward Grefenstette", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b23", "title": "The narrativeqa reading comprehension challenge", "year": "2018" }, { "authors": "Yuta Koreeda; Christopher D Manning", "journal": "", "ref_id": "b24", "title": "Contractnli: A dataset for document-level natural language inference for contracts", "year": "2021" }, { "authors": "Fajri Koto; Jey Han Lau; Timothy Baldwin", "journal": "", "ref_id": "b25", "title": "Top-down discourse parsing via sequence labelling", "year": "2021" }, { "authors": "John Lafferty; Andrew Mccallum; Fernando Cn Pereira", "journal": "", "ref_id": "b26", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "year": "2001" }, { "authors": "Yang Liu; Mirella Lapata", "journal": "", "ref_id": "b27", "title": "Text summarization with pretrained encoders", "year": "2019" }, { "authors": "Bryan Mccann; Nitish Shirish Keskar; Caiming Xiong; Richard Socher", "journal": "", "ref_id": "b28", "title": "The natural language decathlon: Multitask learning as question answering", "year": "2018" }, { "authors": "Shikib Mehri; Mihail Eric; Dilek Hakkani-Tur", "journal": "", "ref_id": "b29", "title": "Dialoglue: A natural language understanding benchmark for task-oriented dialogue", "year": "2020" }, { "authors": "Aida Mostafazadeh Davani; Douwe Kiela; Mathias Lambert; Bertie Vidgen; Zeerak Vinodkumar Prabhakaran; Waseem", "journal": "", "ref_id": "b30", "title": "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)", "year": "2021" }, { "authors": "Raghavendra Pappagari; Piotr Zelasko; Jesús Villalba; Yishay Carmiel; Najim Dehak", "journal": "IEEE. 
Constituency Parsing", "ref_id": "b31", "title": "Hierarchical transformers for long document classification", "year": "2009" }, { "authors": "Fabio Petroni; Aleksandra Piktus; Angela Fan; Patrick Lewis; Majid Yazdani; Nicola De Cao; James Thorne; Yacine Jernite; Vladimir Karpukhin; Jean Maillard", "journal": "", "ref_id": "b32", "title": "Kilt: a benchmark for knowledge intensive language tasks", "year": "2020" }, { "authors": "Uri Shaham; Elad Segal; Maor Ivgi; Avia Efrat; Ori Yoran; Adi Haviv; Ankit Gupta; Wenhan Xiong; Mor Geva; Jonathan Berant", "journal": "", "ref_id": "b33", "title": "Scrolls: Standardized comparison over long language sequences", "year": "2022" }, { "authors": "Timothy Shanahan; Douglas Fisher; Nancy Frey", "journal": "", "ref_id": "b34", "title": "The challenge of challenging text", "year": "2016" }, { "authors": "Yi Tay; Mostafa Dehghani; Samira Abnar; Yikang Shen; Dara Bahri; Philip Pham; Jinfeng Rao; Liu Yang; Sebastian Ruder; Donald Metzler", "journal": "", "ref_id": "b35", "title": "Long range arena: A benchmark for efficient transformers", "year": "2020" }, { "authors": "Yi Tay; Mostafa Dehghani; Dara Bahri; Donald Metzler", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b36", "title": "Efficient transformers: A survey", "year": "2020" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b37", "title": "Attention is all you need", "year": "2017" }, { "authors": "Alex Wang; Yada Pruksachatkun; Nikita Nangia; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel Bowman", "journal": "Advances in neural information processing systems", "ref_id": "b38", "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "year": "2019" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "", "ref_id": "b39", "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "Sinong Wang; Belinda Z Li; Madian Khabsa; Han Fang; Hao Ma", "journal": "", "ref_id": "b40", "title": "Linformer: Selfattention with linear complexity", "year": "2020" }, { "authors": "Ruoxi Xu; Hongyu Lin; Meng Liao; Xianpei Han; Jin Xu; Wei Tan; Yingfei Sun; Le Sun", "journal": "", "ref_id": "b41", "title": "Eco v1: Towards event-centric opinion mining", "year": "2022" }, { "authors": "Zhilin Yang; Zihang Dai; Yiming Yang; Jaime Carbonell; Russ R Salakhutdinov; Quoc V Le", "journal": "Advances in neural information processing systems", "ref_id": "b42", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "year": "2019" }, { "authors": "Manzil Zaheer; Guru Guruganesh; Avinava Kumar; Joshua Dubey; Chris Ainslie; Santiago Alberti; Philip Ontanon; Anirudh Pham; Qifan Ravula; Li Wang; Yang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b43", "title": "Big bird: Transformers for longer sequences", "year": "2020" }, { "authors": "Amir Zeldes", "journal": "Lang. Resour. Eval", "ref_id": "b44", "title": "The gum corpus: Creating multilayer resources in the classroom", "year": "2017" }, { "authors": "Xiang Zhang; Junbo Zhao; Yann Lecun", "journal": "Advances in neural information processing systems", "ref_id": "b45", "title": "Character-level convolutional networks for text classification", "year": "2015" } ]
[]
2023-05-16
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b4", "b5", "b6", "b7", "b11", "b12", "b13" ], "table_ref": [], "text": "This work was supported by the National Natural Science Foundation of China (Grant Nos. 61871132).

Huan Mao, Yulin Chen, Zongtan Li, Feng Chen, and Pingping Chen are with the College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China (email: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]).

Multi-object tracking is widely used in video surveillance, behavior analysis, traffic monitoring and other fields [1]. Therefore, it has important theoretical research significance and practical application value. According to the different sensors used, it can be divided into visual multi-object tracking, radar multi-object tracking, multi-object tracking with multi-sensor fusion, and so on. Among them, visual multi-object tracking is the mainstream direction in the field of multi-object tracking. Visual multi-object tracking aims to detect multiple objects in the video sequence and assign identification numbers to them. With the rapid development of deep learning and multi-object detection, multi-object tracking based on detection has become a research hotspot in recent years [2]- [5]. Multi-object tracking based on detection is mainly composed of a detector and data association. First, a well trained detector is used to obtain detection results. In the data association stage, detections are associated with tracks according to some specific measures and association strategies. When the same detector is used, the data association algorithm determines whether the multi-object tracking algorithm can assign the correct ID number to the target in different scenarios.

Most research on multi-object tracking is divided into detection-based tracking and end-to-end tracking. The performance of detection-based tracking depends largely on the performance of the detector. In methods that use Re-identification (ReID) features, the ReID feature extraction network brings additional training and inference costs. Some methods [6], [7] try to integrate ReID feature extraction into the detector for joint training, but there is a problem of imbalance between the detection branch and the feature extraction branch. Recently, end-to-end multi-object tracking methods [8]- [12] based on Transformer [13] have attracted more attention. End-to-end tracking needs to realize detection and tracking in one network, but its performance still lags behind that of detection-based tracking.

Current target association methods mainly measure the distance between the track and the detection box using spatial information and re-identification features. These methods use the Hungarian algorithm to assign the most similar detection box to each track, and finally complete the update of the track. However, due to the limited discriminative power of the Intersection of Union (IoU) distance measurement, the position information becomes unreliable when dealing with occlusions and low confidence detection boxes. Although re-identification features can effectively recover objects that have disappeared for a long time, the spatial motion information is ignored, and the feature extraction network also brings additional training and time costs. In addition, how to balance the feature and location information is also a problem brought by the introduction of re-identification features.
To solve the above problems and challenges, this paper proposes a multi-object tracker based on shape constraint and confidence named SCTracker. SCTracker uses an Intersection of Union distance with shape constraints (Shape-IoU), including a height constraint and an area constraint, to calculate the cost matrix. The measurement between tracks and detections that have similar IoU but different shapes is refined to improve the accuracy of data association. The Kalman filter parameters of the tracklet are updated with the confidence of the detection in order to improve the tracking performance when the detection result is poor or the bounding box changes abruptly.

The main contributions of this paper are summarized as follows:

1) An Intersection of Union distance with shape constraints (Shape-IoU) is proposed to distinguish detection results that have different shapes but the same IoU distance to a track. 2) A new track state update based on the confidence of detections is proposed in order to better describe the motion state of a tracklet associated with a low quality detection result. 3) Compared with other advanced methods on the MOT 17 dataset [14], SCTracker shows better tracking performance, which verifies the effectiveness of the proposed method." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [ "b1", "b2", "b3", "b4", "b14", "b5", "b15", "b6", "b16", "b12", "b7", "b11", "b17", "b18" ], "table_ref": [], "text": "With the development of deep learning in recent years, the improvement of multi-object detection performance enables detection-based tracking methods to obtain strong tracking performance. The detection-based tracking method describes the state of the track through feature extraction, motion prediction and other methods. IoU Tracker [2] directly associates objects by the IoU distance of detection results between adjacent frames and achieves a high running speed, but its tracking performance is easily affected by occlusion. SORT [3] uses the Kalman Filter for motion prediction, effectively overcoming the problem of track loss in short-time occlusion scenes. On this basis, Deep SORT [4] adds appearance features and a cascade matching strategy to reduce track loss under long-term occlusion. ByteTrack [5] retains detections with low scores for a second association stage, and only uses the Kalman Filter to predict the motion state of the track, achieving state-of-the-art tracking performance with a stronger detector. QDTrack [15] densely samples the region around the target to obtain rich feature information. In the inference stage, only this feature information is used to carry out bidirectional matching between two frames to achieve tracking.

Zhongdao et al. [6] added an appearance feature branch to the single-stage detector YOLO v3 [16], jointly learning detection and appearance embedding in a multi-task manner and reducing the complexity of the algorithm. However, there is a conflict between detection and feature learning: learning a high-dimensional feature embedding causes an imbalance between the detection branch and the feature branch. FairMOT [7] uses CenterNet [17], a detector based on the target center point, and extends it with a low-dimensional feature branch to balance the multi-task learning of detection and feature extraction, which improves the tracking performance while maintaining a high inference speed.

End-to-end tracking methods based on Transformer [13] have attracted attention in some recent studies [8]- [12].
They build on the DETR paradigm [18], [19] and use track query vectors to implicitly model object trajectories, achieving end-to-end multi-object tracking. However, the performance of end-to-end tracking still lags behind that of detection-based tracking, and the computation speed is slow.

All the above multi-object tracking methods have achieved excellent tracking performance. However, the tracking performance in complex scenes and under low quality detection results still needs to be improved. To solve these problems, we propose SCTracker, which follows the detection-based tracking paradigm and improves both the simple distance measurement and the track update under low-confidence detection results, so as to improve the tracking performance while maintaining a high speed." }, { "figure_ref": [ "fig_0" ], "heading": "III. PROPOSED METHOD A. Overall framework", "publication_ref": [ "b4" ], "table_ref": [], "text": "The proposed SCTracker only uses the spatial information of detections in the data association stage, without using re-identification features, in order to avoid extra training cost and improve the efficiency of the algorithm. The Kalman Filter is applied to predict the motion state of a track, which is described as an 8-dimensional vector [x, y, a, h, d_x, d_y, d_a, d_h], where (x, y, a, h) are the top-left coordinates of the track's bounding box, its aspect ratio and its height, respectively, and the last four dimensions represent the velocities of the first four dimensions of the state vector. The overall process of the proposed tracker based on shape constraint and confidence is shown in Fig. 1.

Specifically, the image of frame t is input into the detector to obtain a set of detections Detections(t) = {B^t_1, ..., B^t_N} with a total number of N, in which each detection contains four components (x, y, a, h), which are the top-left coordinates of the detection, the aspect ratio and the height of the box, respectively. The set of tracks in the current frame participating in the data association is represented as Tracks(t) = {S^t_1, ..., S^t_M}, where M is the number of tracks currently participating in the association.

The association strategy is built on ByteTrack [5], which considers the information of detections with low confidence: detections are divided into a set of high confidence detections Detections_high(t) ⊆ Detections(t) and a set of low confidence detections Detections_low(t) ⊆ Detections(t). Specifically, for each 8-dimensional state vector, the Kalman Filter is applied to predict the state distribution in the current frame, and the bounding box of the tracklet in the current frame is set to the first four-dimensional component (x', y', a', h') of the predicted state vector.

In the first association, the distance between the set of tracks Tracks(t) and the set of high confidence detections Detections_high(t) is calculated to obtain a cost matrix C ∈ R^{M×N}. According to this cost matrix, the Hungarian algorithm is used to assign the corresponding detection to each track. Unmatched detections are created as new tracks, and unmatched tracks, represented as Tracks_remain(t), participate in the second association.

In the second association, the distance between Tracks_remain(t) and the detections with low confidence is calculated, and the assignment is again computed by the Hungarian algorithm.
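A minimal sketch of this two-stage association is given below. It assumes SciPy's linear_sum_assignment as the Hungarian solver, uses a plain IoU distance as a placeholder for the Shape-IoU distance introduced in the next subsection, and represents boxes in corner format (x1, y1, x2, y2) for simplicity; the function names and the gating threshold are illustrative assumptions, not the reference implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian solver

def iou_distance(tracks, dets):
    """Pairwise 1 - IoU between predicted track boxes and detection boxes.

    Boxes are (x1, y1, x2, y2); tracks has shape (M, 4), dets has shape (N, 4).
    """
    cost = np.ones((len(tracks), len(dets)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(dets):
            x1, y1 = np.maximum(t[:2], d[:2])
            x2, y2 = np.minimum(t[2:], d[2:])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area_t = (t[2] - t[0]) * (t[3] - t[1])
            area_d = (d[2] - d[0]) * (d[3] - d[1])
            cost[i, j] = 1.0 - inter / (area_t + area_d - inter + 1e-7)
    return cost

def associate(cost, max_dist=0.9):
    """Hungarian assignment; pairs above the distance gate stay unmatched."""
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    um_tracks = set(range(cost.shape[0])) - {r for r, _ in matches}
    um_dets = set(range(cost.shape[1])) - {c for _, c in matches}
    return matches, um_tracks, um_dets

# Two-stage association: high-confidence detections first, then the
# remaining tracks are matched against the low-confidence detections.
# matches1, remain, _ = associate(iou_distance(track_boxes, dets_high))
# matches2, lost, _ = associate(iou_distance(track_boxes[list(remain)], dets_low))
```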
The track that is not matched in the second association is marked as a lost state, and a track whose lost state exceeds a certain number of frames is marked as a removed state and does not participate in the tracking of subsequent frames. Finally, the Kalman filter parameters of matched tracks are updated and the tracking result of the current frame is returned. The calculation of the cost matrix in the above two associations needs to measure the spatial distance between tracks and detections, so the cost matrix computed from M tracks and N detections is formulated as:

C = \begin{bmatrix} d_{11} & \cdots & d_{1N} \\ \vdots & \ddots & \vdots \\ d_{M1} & \cdots & d_{MN} \end{bmatrix}_{M \times N}, \quad (1)

where d_{ab} = d(B_a, B_b) is the spatial distance between bounding box B_a and bounding box B_b. In this paper, Shape-IoU is used to calculate the distance d(B_a, B_b). For the matched tracks, the Kalman filter parameters are updated based on the confidence of the corresponding detection." }, { "figure_ref": [], "heading": "B. IoU with shape constraints", "publication_ref": [], "table_ref": [], "text": "In the process of data association, it is necessary to measure the distance between detections and tracks to obtain the cost matrix, and then use the Hungarian algorithm to assign to each track the detection with the minimum cost. In most detection-based tracking methods, the IoU distance is used to measure the overlap between a detection and a track. Specifically, IoU is the ratio of the intersection area to the union area of two bounding boxes, which measures the degree of overlap between the two boxes. The IoU is expressed as follows:

IoU(B_a, B_b) = \frac{|B_a \cap B_b|}{|B_a \cup B_b|}. \quad (2)

The IoU between two bounding boxes is 1 when they completely overlap, and 0 when they do not overlap at all. Therefore, the IoU distance is expressed as follows:

d_{IoU}(B_a, B_b) = 1 - IoU(B_a, B_b). \quad (3)

In multi-object tracking, the predicted bounding boxes of tracks and the detection boxes of the current frame are involved in the calculation of the IoU distance. However, detections with different shapes and sizes may have the same IoU distance to the same predicted box of a track. As shown in Fig. 2, green bounding boxes of different shapes have the same intersection area and union area with the same blue bounding box, but the shapes of the two green bounding boxes are completely different. Tracking targets are mainly objects with relatively fixed shapes, such as pedestrians or vehicles, and detections with inconsistent shapes are mostly false positive detection results in multi-object tracking. Therefore, it is difficult to accurately measure the distance between detections and tracks using only the IoU distance, which may cause wrong matching results of the tracker." }, { "figure_ref": [], "heading": "Fig. 2. Schematic diagram of different shape bounding boxes intersecting the same bounding box", "publication_ref": [], "table_ref": [], "text": "In order to solve the above problems in multi-object tracking, we propose an Intersection of Union distance with shape constraints named Shape-IoU. By adding constraints on the shape of the detection, the distance between the track and a false positive detection box with inconsistent shape but large IoU becomes larger, and the distance between the track and a detection with consistent shape and large IoU is reduced. The shape constraint term contains a height constraint term ρ_h and an area constraint term ρ_s. Intuitively, the height of the target is less affected by complex scenes, occlusion and other factors, so with an added height constraint term the shape of a detection can be characterized by both its area and its height.
The Shape-IoU is formulated as:

d(B_a, B_b) = 1 - IoU(B_a, B_b) + \rho_h(B_a, B_b) + \rho_s(B_a, B_b), \quad (4)

\rho_h(B_a, B_b) = \frac{(h_a - h_b)^2}{(h_u + \epsilon)^2}, \quad (5)

\rho_s(B_a, B_b) = \frac{(S_a - S_b)^2}{(S_u + \epsilon)^2}, \quad (6)

where h_a, h_b and h_u are the heights of the two bounding boxes and of their minimum enclosing rectangle, respectively, S_a, S_b and S_u are the areas of the two bounding boxes and of their minimum enclosing rectangle, respectively, and \epsilon is a small constant that ensures the validity of the constraint terms, set to 10^{-7} in the experiments." }, { "figure_ref": [], "heading": "C. Track update based on detection confidence", "publication_ref": [ "b4" ], "table_ref": [], "text": "In the tracking process, the Kalman Filter is used to build a motion model and predict the distribution of the next position of the tracklet, which can effectively avoid the tracking loss caused by short-time occlusion and enhance the tracking performance. Although ByteTrack [5] takes into account the useful information of low score detection boxes, the quality of the detector output deteriorates due to occlusions, environmental interference and other factors, and detection results with low confidence deviate from the ground truth, which results in prediction errors of the updated state vector for the next frame and error accumulation in subsequent frames. As shown in Fig. 3, when the person behind is occluded, the size and shape of their bounding box are inaccurate and deviate from the actual size and shape.

In view of the above problems, we propose a track update strategy based on detection confidence. Specifically, the measurement noise covariance matrix R in the update equations describes the uncertainty of the measurement process, which, in the tracking task, is related to the confidence of the detection results output by the detector. Therefore, the confidence of the detection is added as a weighting factor:

R_c = R \cdot (1 - score^2), \quad (7)

where score is the confidence of the detection result, and the resulting measurement noise covariance matrix R_c is negatively correlated with score. In addition, in the case of short-time occlusion, the shape of the detection output by the detector is biased. To avoid the influence of incorrect shape changes on the state vector of the Kalman filter, the confidence of the detection is introduced to correct the velocity component of the state vector when the original state vector m is updated to a new state vector m' by the Kalman filter, as shown in the following formula:

m'_{vel} \leftarrow score \cdot m'_{vel} + (1 - score) \cdot m_{vel}, \quad (8)

where m_{vel} and m'_{vel} are the last four-dimensional (velocity) components of the original state vector m and the updated state vector m', respectively. When the confidence is high, the result leans towards the updated velocity component. On the contrary, when the confidence is low, the updated velocity component is considered to have a larger error and the result leans towards the velocity component of the original state vector." }, { "figure_ref": [], "heading": "IV. EXPERIMENTS AND RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Datasets description", "publication_ref": [ "b13" ], "table_ref": [], "text": "In order to evaluate the performance of the proposed method, MOT 17 [14] is used as a unified benchmark dataset for the experiments.
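Before turning to the experimental details, the two components introduced above can be summarized in code. The sketch below illustrates Eqs. (4)-(8) only; it is not the reference implementation: boxes are assumed to be in corner format (x1, y1, x2, y2) rather than the (x, y, a, h) parameterization of the state vector, and the function names are made up for this example.

```python
import numpy as np

EPS = 1e-7  # the small constant epsilon from Eqs. (5)-(6)

def shape_iou_distance(box_a, box_b):
    """Shape-IoU distance of Eq. (4) for two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = np.maximum(box_a[:2], box_b[:2])
    x2, y2 = np.minimum(box_a[2:], box_b[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter + EPS)
    # minimum enclosing rectangle of the two boxes
    ex1, ey1 = np.minimum(box_a[:2], box_b[:2])
    ex2, ey2 = np.maximum(box_a[2:], box_b[2:])
    h_u, area_u = ey2 - ey1, (ex2 - ex1) * (ey2 - ey1)
    h_a, h_b = box_a[3] - box_a[1], box_b[3] - box_b[1]
    rho_h = (h_a - h_b) ** 2 / (h_u + EPS) ** 2            # Eq. (5)
    rho_s = (area_a - area_b) ** 2 / (area_u + EPS) ** 2   # Eq. (6)
    return 1.0 - iou + rho_h + rho_s                       # Eq. (4)

def confidence_weighted_update(R, m_old_vel, m_new_vel, score):
    """Confidence-weighted measurement noise and velocity blending, Eqs. (7)-(8)."""
    R_c = R * (1.0 - score ** 2)                                  # Eq. (7)
    blended_vel = score * m_new_vel + (1.0 - score) * m_old_vel   # Eq. (8)
    return R_c, blended_vel
```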
MOT 17 is a widely used multi-object tracking dataset with large scale, authenticity and high quality. The dataset comprises 17 video sequences of real-world scenes shot by different cameras, covering different scenes such as shopping malls, streets and subway stations. Each video sequence has a resolution of 720p, a frame rate of 30 frames per second, and contains a different number and variety of targets. In order to facilitate the evaluation, the first half of each video sequence in the training set of MOT 17 is used for training, and the second half is used as the validation set to test the performance of the model. Fig. 4 shows an example image of the MOT 17 dataset." }, { "figure_ref": [], "heading": "B. Evaluation metrics", "publication_ref": [ "b19" ], "table_ref": [], "text": "In order to make the performance of this algorithm comparable with other algorithms, all experiments follow the CLEAR multi-object metrics [20], including MOTA, IDF1 and IDSW. MOTA is the multiple object tracking accuracy, which reflects the accuracy of determining the number of targets and their related attributes. It accounts for the error accumulation in tracking, including FP, FN and IDSW. IDF1 is a comprehensive indicator that considers both the accuracy and the stability of the tracker; it is the harmonic mean of identification precision (IDP) and identification recall (IDR). IDSW is the number of identity switches." }, { "figure_ref": [], "heading": "C. Experiment setting and result", "publication_ref": [ "b20", "b21", "b4" ], "table_ref": [], "text": "The detector in the proposed method is YOLOX [21]. The detector uses a pre-trained model trained on the COCO dataset [22] as initialization weights. The input image size is 1440×800. The SGD optimizer is used to train for 80 epochs. The weight decay and momentum are set to 0.0005 and 0.9, respectively. The initial learning rate is set to 0.001, and the learning rate schedule uses warm-up and cosine annealing. The inference of the model is performed on an NVIDIA GeForce RTX 3090 GPU, and the inference time consists of the forward propagation time, including post-processing, and the tracking time.

1) Ablation experiment: Since there are no learnable parameters in the data association, in the ablation experiment the detector uses a model of the same size (YOLOX-X) with the same training weights. The proposed method is built on ByteTrack [5], and ByteTrack is chosen as the baseline for comparison. The Intersection of Union distance with shape constraints (Shape-IoU) is denoted as Shape, and the track update based on detection confidence is denoted as Conf. The experimental results are shown in Table I. Bold indicates the best result in each column. With the same detection results, Shape-IoU improves the IDF1 and MOTA metrics compared with the baseline, and IDSW decreases. Compared with the baseline, the track update based on detection confidence improves IDF1 by 0.2%.

When measuring the distance between tracks and detections, adding different shape constraints affects the cost matrix differently. Therefore, it is necessary to ablate different shape constraints and choose the appropriate terms to more accurately describe the distance between tracks and detections. This group of experiments sets up the height constraint ρ_h and the area constraint ρ_s, representing the height distance and the area distance of two boxes, respectively.
The experimental results for different shape constraints are shown in Table II, where bold indicates the best result in each column. As shown in Table II, using only the height constraint or only the area constraint improves IDF1 and MOTA compared with not using any shape constraint, and IDSW is also reduced. Using both the height and the area constraint maximizes the improvement across the tracking metrics, fully demonstrating the effectiveness of combining shape constraints to improve multi-object tracking performance. 2) Comparative experiment: The proposed method is compared with other excellent multi-object tracking methods. The performance of each method on the MOT 17 validation set is evaluated, and the advantages of the proposed algorithm are illustrated through intuitive experimental results. The results for each performance metric are presented in tabular form, where bold in each column represents the best value. Table III shows the experimental results of different multi-object tracking methods on the MOT 17 validation set. It can be seen that the proposed SCTracker achieves better overall tracking performance than the compared methods." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, a multi-object tracker based on shape constraint and confidence named SCTracker is proposed to optimize the data association of detections with different shapes and to improve tracking under low quality detection results. First, in view of the shape similarity between detections and tracks, constraints on height and area are added to the calculation of the IoU distance to avoid false associations with detection boxes that have similar positions but inconsistent shapes. Second, the confidence of the detections is introduced into the Kalman Filter update of tracklets: the measurement noise covariance matrix is weighted by the detection confidence, and the velocity component of the state vector is corrected according to the confidence, improving the tracking performance under low quality detection results. Finally, the performance of the proposed method and the ablation of each component are evaluated on the validation set of MOT 17. The experimental results fully show that the algorithm achieves advanced tracking performance." } ]
Detection-based tracking is one of the main approaches to multi-object tracking. It can obtain good tracking results when using excellent detectors, but it may associate wrong targets when facing overlapping and low-confidence detections. To address this issue, this paper proposes a multi-object tracker based on shape constraint and confidence named SCTracker. In the data association stage, an Intersection of Union distance with shape constraints is applied to calculate the cost matrix between tracks and detections, which effectively prevents a track from being associated with a wrong target that has a similar position but an inconsistent shape, and thus improves the accuracy of data association. Additionally, a Kalman Filter update based on the detection confidence is used to refine the motion state, improving the tracking performance when the detection has low confidence. Experimental results on the MOT 17 dataset show that the proposed method can effectively improve the performance of multi-object tracking.
SCTracker: Multi-object tracking with shape and confidence constraints
[ { "figure_caption": "Fig. 1. The pipeline of the proposed method", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 3. Visualization results of low confidence detection box", "figure_data": "", "figure_id": "fig_3", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Fig. 5. Visualization results of the proposed method on the MOT 17 dataset", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" } ]
Huan Mao; Yulin Chen; Zongtan Li; Feng Chen; Pingping Chen
[ { "authors": "W Luo; J Xing; A Milan; X Zhang; W Liu; T.-K Kim", "journal": "Artificial intelligence", "ref_id": "b0", "title": "Multiple object tracking: A literature review", "year": "2021" }, { "authors": "E Bochinski; V Eiselein; T Sikora", "journal": "IEEE", "ref_id": "b1", "title": "High-speed tracking-bydetection without using image information", "year": "2017" }, { "authors": "A Bewley; Z Ge; L Ott; F Ramos; B Upcroft", "journal": "IEEE", "ref_id": "b2", "title": "Simple online and realtime tracking", "year": "2016" }, { "authors": "N Wojke; A Bewley; D Paulus", "journal": "IEEE", "ref_id": "b3", "title": "Simple online and realtime tracking with a deep association metric", "year": "2017" }, { "authors": "Y Zhang; P Sun; Y Jiang; D Yu; F Weng; Z Yuan; P Luo; W Liu; X Wang", "journal": "Springer", "ref_id": "b4", "title": "Bytetrack: Multi-object tracking by associating every detection box", "year": "2022" }, { "authors": "Z Wang; L Zheng; Y Liu; Y Li; S Wang", "journal": "Springer", "ref_id": "b5", "title": "Towards real-time multi-object tracking", "year": "2020" }, { "authors": "Y Zhang; C Wang; X Wang; W Zeng; W Liu", "journal": "International Journal of Computer Vision", "ref_id": "b6", "title": "Fairmot: On the fairness of detection and re-identification in multiple object tracking", "year": "2021" }, { "authors": "P Chu; J Wang; Q You; H Ling; Z Liu", "journal": "", "ref_id": "b7", "title": "Transmot: Spatialtemporal graph transformer for multiple object tracking", "year": "2023" }, { "authors": "P Sun; J Cao; Y Jiang; R Zhang; E Xie; Z Yuan; C Wang; P Luo", "journal": "", "ref_id": "b8", "title": "Transtrack: Multiple object tracking with transformer", "year": "2020" }, { "authors": "Y Xu; Y Ban; G Delorme; C Gan; D Rus; X Alameda-Pineda", "journal": "", "ref_id": "b9", "title": "Transcenter: Transformers with dense queries for multipleobject tracking", "year": "2021" }, { "authors": "T Meinhardt; A Kirillov; L Leal-Taixe; C Feichtenhofer", "journal": "", "ref_id": "b10", "title": "Trackformer: Multi-object tracking with transformers", "year": "2022" }, { "authors": "F Zeng; B Dong; Y Zhang; T Wang; X Zhang; Y Wei", "journal": "Springer", "ref_id": "b11", "title": "Motr: Endto-end multiple-object tracking with transformer", "year": "2022" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "Attention is all you need", "year": "2017" }, { "authors": "A Milan; L Leal-Taixé; I Reid; S Roth; K Schindler", "journal": "", "ref_id": "b13", "title": "Mot16: A benchmark for multi-object tracking", "year": "2016" }, { "authors": "J Pang; L Qiu; X Li; H Chen; Q Li; T Darrell; F Yu", "journal": "", "ref_id": "b14", "title": "Quasidense similarity learning for multiple object tracking", "year": "2021" }, { "authors": "J Redmon; A Farhadi", "journal": "", "ref_id": "b15", "title": "Yolov3: An incremental improvement", "year": "2018" }, { "authors": "X Zhou; D Wang; P Krähenbühl", "journal": "", "ref_id": "b16", "title": "Objects as points", "year": "2019" }, { "authors": "N Carion; F Massa; G Synnaeve; N Usunier; A Kirillov; S Zagoruyko", "journal": "Springer", "ref_id": "b17", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "X Zhu; W Su; L Lu; B Li; X Wang; J Dai", "journal": "", "ref_id": "b18", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2020" }, { 
"authors": "K Bernardin; R Stiefelhagen", "journal": "EURASIP Journal on Image and Video Processing", "ref_id": "b19", "title": "Evaluating multiple object tracking performance: the clear mot metrics", "year": "2008" }, { "authors": "Z Ge; S Liu; F Wang; Z Li; J Sun", "journal": "", "ref_id": "b20", "title": "Yolox: Exceeding yolo series in 2021", "year": "2021" }, { "authors": "T.-Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "Springer", "ref_id": "b21", "title": "Microsoft coco: Common objects in context", "year": "2014" } ]
[ { "formula_coordinates": [ 2, 461.54, 323.58, 101.49, 9.65 ], "formula_id": "formula_0", "formula_text": "[x, y, a, h, d x , d y , d a , d h ]," }, { "formula_coordinates": [ 3, 107.86, 394.28, 192.16, 44 ], "formula_id": "formula_1", "formula_text": "C =    d 11 . . . d 1N . . . . . . . . . d M 1 . . . d M N    M ×N ,(1)" }, { "formula_coordinates": [ 3, 76.12, 445.22, 5.19, 8.74 ], "formula_id": "formula_2", "formula_text": "d" }, { "formula_coordinates": [ 3, 118, 663.08, 178.14, 23.23 ], "formula_id": "formula_3", "formula_text": "IoU (B a , B b ) = |B a ∩ B b | |B a ∪ B b | . (2" }, { "formula_coordinates": [ 3, 296.15, 670.14, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 3, 102.9, 740.66, 197.12, 9.65 ], "formula_id": "formula_5", "formula_text": "d IoU (B a , B b ) = 1 -IoU (B a , B b ).(3)" }, { "formula_coordinates": [ 3, 315.11, 728.71, 247.93, 20.91 ], "formula_id": "formula_6", "formula_text": "d(B a , B b ) = 1 -IoU (B a , B b ) + ρ h (B a , B b ) + ρ s (B a , B b ),(4)" }, { "formula_coordinates": [ 4, 119.34, 53.8, 180.68, 24.8 ], "formula_id": "formula_7", "formula_text": "ρ h (B a , B b ) = (h a -h b ) 2 (h u + ) 2 ,(5)" }, { "formula_coordinates": [ 4, 120.24, 80.96, 179.78, 24.8 ], "formula_id": "formula_8", "formula_text": "ρ s (B a , B b ) = (S a -S b ) 2 (S u + ) 2 ,(6)" }, { "formula_coordinates": [ 4, 126.94, 498.47, 169.21, 11.72 ], "formula_id": "formula_9", "formula_text": "R c = R • (1 -score 2 ), (7" }, { "formula_coordinates": [ 4, 296.15, 500.86, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 4, 87.78, 650.47, 212.24, 10.62 ], "formula_id": "formula_11", "formula_text": "m vel = score • m vel + (1 -score) • m vel ,(8)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Estimating the relative pose between two images given mutual feature correspondences is a fundamental problem in computer vision. It is a key component of structure from motion (SfM) and visual odometry (VO) methods which in turn fuel a plethora of applications from autonomous vehicles or robots to augmented and virtual reality." }, { "figure_ref": [], "heading": "Project Page", "publication_ref": [ "b41", "b23", "b41", "b17", "b23", "b33", "b10", "b11", "b55", "b62", "b39", "b6", "b47", "b14", "b47", "b47", "b47", "b66", "b14", "b11", "b56", "b57", "b47", "b11" ], "table_ref": [], "text": "Estimating the relative pose -rotation and translation -between two images is often formulated as a geometric problem that can be solved by estimating the essential matrix [42] for calibrated cameras, or the fundamental matrix [24] for uncalibrated cameras. Related algorithms like the eight-point algorithm [23,42] provide fast solutions. However, essential matrix based approaches suffer from issues such as solution multiplicity [18,24] and planar degeneracy [33]. The normal epipolar constraint (NEC) [34] addresses these issues by estimating the rotation independently of the translation, leading to more accurate relative poses [33].

Neither of the aforementioned algorithms takes into account the quality of feature correspondences -an important cue that potentially improves pose estimation accuracy. Instead, feature correspondences are classified into inliers and outliers through a RANSAC scheme [11]. However, keypoint detectors [12,56] for feature correspondences or tracking algorithms [63] yield imperfect points [40] that exhibit a richer family of error distributions, as opposed to an inlier-outlier distribution family. Algorithms that make use of feature correspondence quality have been proposed for essential/fundamental matrix estimation [7,53] and for the NEC [48], respectively.

Figure 2. Comparison between covariances used in [48] (first row) and our learned covariances (second row): (a) covariances from [48] per pixel, (b) points and covariances [48], (c) learned covariances per pixel (Ours), (d) points and learned covariances (Ours). The first column shows a dense color coded (s, α, β mapped to HLS with γ correction) representation for each pixel, while the second column shows subsampled keypoints and their corresponding (enlarged) covariances. The higher saturation in (a) shows that the covariances are more anisotropic. The learned covariances (c) show a more fine-grained detail in the scale (brightness) and less blurring than the covariances in (a).

While estimating the relative pose can be formulated as a classical optimization problem [15,33], the rise in popularity of deep learning has led to several works augmenting VO or visual simultaneous localisation and mapping (VSLAM) pipelines with learned components. GN-Net [67] learns robust feature representations for direct methods like DSO [15]. For feature based methods, SuperPoint [12] provides learned features, while SuperGlue [57] uses graph neural networks to find corresponding matches between feature points in two images. DSAC introduces a differentiable relaxation to RANSAC that allows gradient flow through the otherwise non-differentiable operation. In [53] a network learns to re-weight correspondences for estimating the fundamental matrix.
PixLoc [58] estimates the pose from an image and a 3D model based on direct alignment.\nIn this work we combine the predictive power of deep learning with the precision of geometric modeling for highly accurate relative pose estimation. Estimating the noise distributions for the feature positions of different feature extractors allows us to incorporate this information into relative pose estimation. Instead of modeling the noise for each feature extractor explicitly, we present a method to learn these distributions from data, using the same domain that the feature extractors work with -images. We achieve this based on the following technical contributions:\n• We introduce a symmetric version of the probabilistic normal epipolar constraint (PNEC), that more accurately models the geometry of relative pose estimation with uncertain feature positions. • We propose a learning strategy to minimize the relative pose error by learning feature position uncertainty through differentiable nonlinear least squares (DNLS), see Fig. 1. • We show with synthetic experiments, that using the gradient from the relative pose error leads to meaningful estimates of the positional uncertainty that reflect the correct error distribution. • We validate our approach on real-world data in a visual odometry setting and compare our method to non-probabilistic relative pose estimation algorithms, namely Nistér 5pt [50], and NEC [33], as well as to the PNEC with non-learned covariances [48]. • We show that our method is able to generalize to different feature extraction algorithms such as SuperPoint [12] and feature tracking approaches on real-world data. • We release the code for all experiments and the training setup to facilitate future research." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b9", "b64", "b23", "b33", "b15", "b14", "b35", "b37", "b41", "b60", "b41", "b50", "b16", "b33", "b5", "b21", "b43", "b62", "b58", "b59", "b71", "b13", "b70", "b42", "b21", "b18", "b7", "b26", "b6", "b6", "b47", "b3", "b53", "b40", "b46", "b68", "b67", "b66", "b11", "b69", "b56", "b1", "b57", "b63", "b29", "b24", "b39", "b25", "b36", "b44" ], "table_ref": [], "text": "This work is on deep learning for improving frame-toframe relative pose estimation by incorporating feature position uncertainty with applications to visual odometry. We therefore restrict our discussion of related work to relative pose estimation in visual odometry, weighting correspondences for relative pose estimation, and deep learning in the context of VSLAM. For a broader overview over VS-LAM we refer the reader to more topic-specific overview papers [10,65] and to the excellent books by Hartley and Zisserman [24] and by Szeliski [62].\nRelative Pose Estimation in Visual Odometry. Finding the relative pose between two images has a long history in computer vision, with the first solution for perspective images reaching back to 1913 by Kruppa [35]. Modern methods for solving this problem can be classified into feature-based and direct methods. The former rely on feature points extracted in the images together with geometric constraints like the epipolar constraint or the normal epipolar constraint [34] to calculate the relative pose. The latter optimize the pose by directly considering the intensity differences between the two images and rose to popularity with LSD-SLAM [16] and DSO [15]. Since direct methods work on the assumption of brightness or irradiance constancy they require the appearance to be somewhat similar across images. 
In turn, keypoint based methods rely on suitable feature extractors which can exhibit significant amounts of noise and uncertainty. In this paper we propose a method to learn the intrinsic noise of keypoint detectorstherefore, the following will focus on feature based relative pose estimation.\nOne of the most widely used parameterizations for reconstructing the relative pose from feature correspondences is the essential matrix, given calibrated cameras, or the fundamental matrix in the general setting. Several solutions based on the essential matrix have been proposed [36,38,42,50,61]. They include the linear solver by Longuet-Higgins [42], requiring 8 correspondences, or the solver by Nistér et al. [51] requiring the minimal number of 5 correspondences. However, due to their construction, essential matrix methods deteriorate for purely rotational motion with noise-free correspondences [33]. As an alternative, methods that do not use the essential matrix have been proposed -they either estimate the relative pose using quaternions [17] or make use of the normal epipolar constraint (NEC) by Kneip and Lynen [33,34]. The latter addresses the problems of the essential matrix by estimating rotation independent of the translation. [6] shows how to obtain the global minimum for the NEC. Further work, that disentangles rotation and translation can be found in [39].\nWeighting of Feature Correspondences. Keypoints in images can exhibit significant noise, deteriorating the performance for pose estimation significantly [22]. The noise characteristics of the keypoint positions depend on the feature extractor. For Kanade-Lucas-Tomasi (KLT) tracking [44,63] approaches, the position uncertainty has been investigated in several works [20,59,60,72]. The uncertainty was directly integrated into the tracking in [14]. [71] proposed a method to obtain anisotropic and inhomogeneous covariances for SIFT [43] and SURF [3].\nGiven the imperfect keypoint positions, not all correspondences are equally well suited for estimating the relative pose. [22] showed the effect of the noise level on the accuracy of the pose estimation. Limiting the influence of bad feature correspondences has been studied from a geometrical and a probabilistic perspective. random sample consensus (RANSAC) [19] is a popular method to classify datapoints into inliers and outliers that can be easily integrated into feature based relative pose estimation pipelines. Ranftl et al. [53] relax the hard classification for inlier and outlier and use deep learning to find a robust fundamental matrix estimator in the presence of outliers in an iteratively reweighted least squares (IRLS) fashion. DSAC [5] models RANSAC as a probabilistic process to make it differentiable. Other lines of work integrate information about position uncertainty directly into the alignment problem. For radar based SLAM, [8] incorporates keypoint uncertainty in radar images, with a deep network predicting the uncertainty. Image based position uncertainty was investigated from the statistical, [27,28], the photogrammetry [46] and the computer vision perspective [7,29]. [7] and [29] debated the benefit of incorporating position uncertainty into fundamental matrix estimation. We base our method on the probabilistic normal epipolar constraint (PNEC) [48], that improved on the NEC by extending it to a probabilistic view. It achieved better results on real-world data with covariances approximated using the Boltzmann distribution [4]. 
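The IRLS scheme mentioned above can be written generically as alternating weighted solves and weight updates; the sketch below uses Huber weights on a linear system purely for illustration and is not the learned reweighting of [53].

```python
import numpy as np

def irls(A, b, iters=20, delta=1.0):
    """Minimise sum_i rho(a_i^T x - b_i) with a Huber rho via IRLS."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]                      # unweighted start
    for _ in range(iters):
        r = A @ x - b                                             # current residuals
        w = np.where(np.abs(r) <= delta,
                     1.0, delta / np.maximum(np.abs(r), 1e-12))   # Huber weights
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x
```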
We expand on this idea by learning covariances (see Fig. 2) agnostic of the keypoints extractor used to further improve pose estimation.\nDeep Learning in VSLAM. Deep Learning has transformed computer vision in the last decade. While deep networks have been successfully used for tasks like detection [54], semantic segmentation [41], and recently novel view synthesis [47], they have also found application in VS-LAM pipelines. DVSO [69] and D3VO [68] leveraged deep learning to improve the precision for direct methods, while GN-Net [67] predicts robust and dense feature maps. Several works proposed to learn keypoint extractors, for feature based pose estimation, such as SuperPoint [12] and LIFT [70]. SuperGlue [57] enabled feature matching with graph neural networks. Other lines of work leverage deep learning for localization by making parts of the pose estimation pipeline differentiable [2,5,58,64]. Works, that directly predicting the pose include PoseNet [30] and CTCNet [25] that uses self-supervised learning with a cycle-consistency loss for VO. [40] learns image representations by refining keypoint positions and camera poses in a post-processing step of a structure-from-motion pipeline. ∇SLAM [26] presents a differentiable dense SLAM system with several components (e.g., the Levenberg-Marquardt [37,45] optimizer)." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b47" ], "table_ref": [], "text": "In the following, we present our framework to estimate positonal uncertainty of feature points by leveraging DNLS. We learn the noise covariances through a forward and backward step. In the forward step, the covariances are used in a probabilistic pose estimation optimization, namely the PNEC. In the backward step, the gradient from the pose error is back-propagated through the optimization to the covariances. From there we can train a neural network to predict the keypoint position uncertainty from the images. We start by summarizing the asymmetric PNEC [48] and for the first time introduce its symmetric counterpart." }, { "figure_ref": [], "heading": "Prerequisites", "publication_ref": [ "b47", "b54", "b47" ], "table_ref": [], "text": "Notation. We follow the notation of [48]. Bold lowercase letters (e.g. f ) denote vectors, whereas bold uppercase letters (e.g. Σ) denote matrices. û ∈ R 3×3 represents the skew-symmetric matrix of the vector u ∈ R 3 such that the cross product between two vectors can be rewritten as a matrix-vector operation, i.e. u × v = ûv. The transpose is Figure 3. Architecture: We extract the uncertainty per image for every pixel using a UNet [55] backbone. Using keypoint locations from a keypoint detector, we obtain the keypoints with their estimated positional uncertainty. The relative pose is then estimated using a DNLS optimization. The UNet is updated by backpropagating the gradient (obtained by implicit differentiation) to the network output.\ndenoted by the superscript . We deviate from [48] in the following: variables of the second frame are marked with the superscript, while variables of the first frame do not have a superscript. We represent the relative pose between images as a rigid-body transformation consisting of a rotation matrix R ∈ SO(3) and a unit length translation t ∈ R 3 ( t = 1 is imposed due to scale-invariance)." 
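For reference, the hat operator of the notation above amounts to the following small helper; this is a generic sketch of the convention just described, with the function name chosen for illustration.

```python
import numpy as np

def hat(u):
    """Skew-symmetric matrix u^ such that u x v == hat(u) @ v."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

u, v = np.random.randn(3), np.random.randn(3)
assert np.allclose(np.cross(u, v), hat(u) @ v)
```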
}, { "figure_ref": [], "heading": "The Probabilistic Normal Epipolar Constraint", "publication_ref": [ "b47", "b47", "b47", "b55", "b11", "b47" ], "table_ref": [], "text": "The asymetric probabilistic normal epipolar constraint (PNEC) estimates the relative pose, give two images I, I of the same scene under the assumption of uncertain feature positions in the second image. A feature correspondences is given by p i , p i in the image plane, where the uncertainty of p i is represented by the corresponding covariance Σ 2D,i . To get the epipolar geometry for the PNEC the feature points are unprojected using the camera intrinsics, giving unit length bearing vectors f i , f i . The uncertainty of f i is now represented by Σ i . Estimating the relative pose is done by minimizing the PNEC cost function as defined in [48]. For convenience we recap the energy function\nE(R, t) = i e 2 i σ 2 i = i |t (f i × Rf i )| 2 t f i RΣ i R f i t ,(1)\nin our notation. As mentioned previously, this asymmetric PNEC in [48] only considers uncertainties Σ in the second frame. While this assumption might hold for the KLT tracking [66] used in [48], this leaves out important information when using other keypoint detectors like ORB [56] or SuperPoint [12]. Therefore, we will introduce a symmetric version of the PNEC that is more suitable for our task in the following.\nMaking the PNEC symmetric. As in [48] we assume the covariance of the bearing vectors f i and f i to be gaussian, their covariance matrices denoted by Σ i and Σ i , respectively. The new variance can be approximated as\nσ 2 s,i = t ( (Rf i )Σ i (Rf i ) + f i RΣ i R f i )t . (2\n)\nIn the supplementary material (see App. C), we derive the variance and show the validity of this approximation given the geometry of the problem. This new variance now gives us the new symmetric PNEC with its following energy function\nE s (R, t) = i e 2 i σ2\ns,i(3)" }, { "figure_ref": [], "heading": "DNLS for Learning Covariances", "publication_ref": [ "b12", "b24", "b24", "b24" ], "table_ref": [], "text": "We want to estimate covariances Σ 2D and Σ 2D (in the following collectively denoted as Σ 2D for better readability) in the image plane\nΣ 2D = arg min Σ2D L ,(4)\nsuch that they minimize a loss function L of the estimated pose. Since we found that the rotational error of the PNEC is more stable than the translational error, we chose to minimize only the rotational error\ne rot = ∠ R R (5) L( R, R; Σ 2D ) = e rot(6)\nbetween the ground truth rotation R and the estimated rotation R. We obtain\nR = arg min R E s (R, t; Σ 2D )(7)\nby minimizing Eq. 3. To learn the covariances that minimize the rotational error, we can follow the gradient dL/dΣ 2D . Implicit differentiation allows us to compute the gradient as\ndL dΣ 2D = - ∂ 2 E s ∂Σ 2D ∂R ∂ 2 E s ∂R∂R -1 e rot ∂R .(8)\nFor a detailed derivation of Eq. 8 and other methods, that unroll the optimization, to obtain the gradient we refer the interested reader to [13]. Supervised Learning. The goal of the paper is for a neural network F learn the noise distributions of a keypoint detector. Given an image and a keypoint position, the network should predict the covariance of the noise Σ 2D,i = F (I, p i ). The gradient dL/dΣ 2D allows for the network to learn the covariance matrices in an end-to-end manner by regression on the relative pose error. Given a dataset with know ground truth poses, we can use\nL sup = e rot (9\n)\nas a training loss. This ensures, that learned covariances effectively minimize the rotational error. 
See Fig. 3 for overview of the training process.\nSelf-Supervised Learning. Finding a suitable annotated dataset for a specific task is often non-trivial. For our task, we need accurate ground truth poses that are difficult to ac-quire. But given a stream of images, like in VO, our method can be adapted to train a network in a self-supervised manner without the need for ground truth poses. For this, we follow the approach of [25] to exploit the cycle-consistency between a tuple of images. The cycle-consistency loss for a triplet {I 1 , I 2 , I 3 } of images is given by\nL cycl = ∠ (i,j)∈P R ij ,(10)\nwhere R ij is the estimated rotation between images I i and I j and P = {(1, 2), (2, 3), (3, 1)} defines the cycle. As in [25], we also define an anchor loss\nL anchor = (i,j)∈P ∠R ij R ij,NEC(11)\nwith the NEC rotation estimate, as a regularising term. In contrast to [25], our method does not risk learning degenerate solutions from the cycle-consistency loss, since the rotation is estimated using independently detected keypoints. The final loss is then given by\nL self = L cycl + λL anchor .(12)" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b11", "b51", "b0" ], "table_ref": [], "text": "We evaluate our method in both synthetic and real-world experiments. Over the synthetic data, we investigate the ability of the gradient to learn the underlying noise distribution correctly by overfitting covariance estimates directly. We also investigate if better noise estimation leads to a reduces rotational error.\nOn real-world data, we use the gradient to train a network to predicts the noise distributions from images for different keypoint detectors. We explore fully supervised and self-supervised learning techniques for SuperPoint [12] and Basalt [66] KLT-Tracks to verify that our method is agnostic to the type of feature descriptor used (classical vs learned). We evaluate the performance of the learned covariances in a visual odometry setting on the popular KITTI odometry and the EuRoC dataset. We also evaluate generalization capabilities from the KITTI to the EuRoC dataset.\nFor our experiments we implement Eq. 3 in both Theseus [52] and ceres [1]. We use the Theseus implementation to train our network, since it allows for batched optimization and provides the needed gradient (see Eq. 8). However, we use the ceres implementation for our evaluation. We found the Levenberg-Marquardt optimization of ceres to be faster and more stable than its theseus counterpart." }, { "figure_ref": [ "fig_0", "fig_0", "fig_1", "fig_2" ], "heading": "Simulated Experiments", "publication_ref": [ "b30", "b47", "b10", "b48" ], "table_ref": [], "text": "In the simulated experiments we overfit covariance estimates for a single relative pose estimation problem using the gradient from Eq. 8. For this, We create a random relative pose estimation problem consisting of two cameraframes observing randomly generated points in 3D space.\nThe points are projected into camera frames using a pinhole camera model. Each projected point is assigned a random gaussian noise distribution. From this 128 000 random problems are sampled. We learn the noise distributions by initializing all covariance estimates as scaled identity matrices, solving the relative pose estimation problem using the PNEC and updating the parameters of the distribution using the gradient of Eq. 8 directly. We train for 100 epochs with the ADAM [31] optimizer with (0.9, 0.99) as parameters and a batch size of 12 800 for a stable gradient. Fig. 
4a shows the decrease of the rotation error over the epochs. The learned covariances decrease the error by 8% and 16% compared to unit covariances and the NEC, respectively. This validates the importance of good covariances for the PNEC, shown in [48]. Fig. 4b shows the average error for the normalized variance σ 2 norm , given by\nσ 2 i,norm = N • σ 2 i N j=0 σ 2 j (13)\nover the training epochs, obtained at the ground truth relative pose. We compare the normalized error variance, as the scale of σ 2 is not observable from the gradient. The covariances that minimize the rotational error also approximate Since we compare monocular methods, that cannot estimate the correct scale from a pair of images, we use the scale of the ground truth translations for visualization purposes. Both, our supervised and self-supervised approaches lead to significant improvements in the trajectory. There is little drift even without additional rotation averaging [11] or loop closure [49].\nthe residual uncertainty σ 2 very closely. However, while the residual uncertainty is approximated well, the learned 2D covariances in the image plane do not correspond to the correct covariances (see Fig. 5). This is due to two different reasons. First, due to σ 2 i dependence on both Σ 2D,i and Σ 2D,i , there is not a single unique solution. Secondly, the direction of the gradient is dependent on the translation between the images (see App. D for more details). In this experimental setup, the information flow to the images is limited and we can only learn the true distribution for σ 2 but not for the 2D images covariances.\nTo address the problems with limited information flow of the previous experiment, we propose a second experiment to negate the influence of these aforementioned factors. First, each individual problem has a randomly sampled relative pose, where the first frame stays fixed. This removes the influence of the translation on the gradient direction. The noise is still drawn from the same distributions as earlier. Second, we fix the noise in the first frame to be small, isotropic, and homogeneous in nature. Furthermore, we only learn the covariances in the second frame and provide the optimization with the ground truth noise in the first frame. Fig. 6 and Fig. 7 show, that under these constraints, we are not only able the learn the distribution for σ 2 but also Σ 2D . Together, both experiments show, that we can learn the correct distributions from noisy data by following the gradient that minimizes the rotational error." }, { "figure_ref": [ "fig_3" ], "heading": "Real World Data", "publication_ref": [ "b47", "b54", "b6", "b11", "b56", "b31", "b47", "b50", "b56", "b47", "b50", "b31", "b47", "b10", "b47", "b47" ], "table_ref": [], "text": "We evaluate our method on the KITTI [21] and EuRoC [9] dataset. Since KITTI shows outdoor driving sequences with the mean on the train and test set weighted by the sequence lengths. As for SuperPoint, our methods improve all metrics consistently for unseen data. Our learned covariances are significantly better for relative pose estimation than the approximation used in [48].\nand EuRoC shows indoor scenes captured with a drone, they exhibit different motion models as well as a variety of images. For KITTI we choose sequences 00-07 as the training set for both supervised and self-supervised training. Sequences 08-10 are used as the test set. We use a smaller UNet [55] architecture as our network to predict the covariances for the whole image. 
We chose this network since it gives us a good balance between batch size, training time and performance. The network predicts the parameters for the covariances directly. We choose\nΣ 2D (s, α, β) = sR α β 0 0 1 -β R α(14)\nas a parameterization [7]. To ensure that our network predicts valid covariances the network output is filtered with\nf 1 (x) = (1 + |x|) sign(x) (15) f 2 (x) = x (16\n)\nf 3 (x) = 1 1 + e -x(17)\nfor s, α, β, respectively. Feature points that have subpixel accuracy use the nearest pixel covariance. See App. E for more details on the training setup. Supervised Learning. To show that our method generalizes to different keypoint detectors, we train two networks, one for SuperPoint [12] and one for KLT tracks obtained from [66]. The SuperPoint keypoints are matched using SuperGlue [57]. For training we use a batch size of 8 images pairs for SuperPoint and 16 images pairs for KLT tracks. We trained for 100 epochs for both SuperPoint and KLT tracks. More training details are provided in the supplementary material. To ensure our network does not overfit on specific keypoint locations, we randomly crop the images before finding correspondences during training time. During evaluation we use the uncropped images to obtain features. During training we randomly perturb the ground truth pose as a starting point. To increase robustness, we first use the eigenvalue based optimization of the NEC in a RANSAC scheme [32] to filter outliers. This is followed by a custom least squares implementation of the NEC (NEC-LS), followed by optimizing Eq. 3. As reported in [48] we found, that such a mutli-stage optimization provides the most robust and accurate results. We show examples of how the DNLS-learned covariances change the energy function landscape in the supplementary material.\nSelf-Supervised Learning. We evaluate our selfsupervised training setup on the same data as our supervised training. Due to needing image tuples instead of pairs, we reduce the batch size to 12 for KLT image triplets. This gives us 24 and 36 images pairs per batch, respectively. The training epochs are reduced to 50. More training details for the supervised and self-supervised training can be found in the supplementary material.\nResults. We evaluate the learned covariances in a VO setting. We compare the proposed DNLS approach to the NIST ÉR-5PT [51] NEC [33] NEC-LS WEIGHTED OURS SELF-OURS TAB. [57] finding far fewer matches. As reported in [48] the least squares implementations struggle with bad initialization under these adverse conditions with NEC-LS performing especially poor. From all least squares optimizations, our learned covariances consistently perform the best, even outperforming the NEC most of the time.\npopular 5pt algorithm [51] and the NEC [33] as implemented in [32]. To investigate the benefit of our learned covariances we include the NEC-LS implementation as well as the symmetric PNEC with the covariances from [48] in Tab. 2. For Tab. 1 we additionally include a weighted version of our custom NEC-LS implementation with matching confidence from SuperGlue as weights. All methods are given the same feature matches and use a constant motion model for initializing the optimizations. We evaluate on the rotational versions of the RPE 1 and RPE n and the cosine error e t for the translation as defined in [11,48]. Tab. 1 and Tab. 2 show the average results on the test set over 5 runs for SuperPoint and KLT tracks on KITTI [21], respectively. We show additional results in App. G. 
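For illustration, the covariance parameterization of Eq. (14) together with the output filters of Eqs. (15)-(17) can be sketched as follows; we assume R_alpha denotes the 2D rotation by angle alpha, and the function name is ours.

```python
import numpy as np

def cov2d_from_raw(x_s, x_alpha, x_beta):
    """Map raw network outputs to a valid 2x2 covariance, Eqs. (14)-(17)."""
    s = (1.0 + abs(x_s)) ** np.sign(x_s)           # f1: strictly positive scale
    alpha = x_alpha                                 # f2: unconstrained angle
    beta = 1.0 / (1.0 + np.exp(-x_beta))            # f3: sigmoid, beta in (0, 1)
    R_alpha = np.array([[np.cos(alpha), -np.sin(alpha)],
                        [np.sin(alpha),  np.cos(alpha)]])
    return s * R_alpha @ np.diag([beta, 1.0 - beta]) @ R_alpha.T
```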
Our methods consistently perform the best over all sequences, with the self-supervised being on par with our supervised training. Compared to its non-probabilistic counterpart NEC-LS, our method improves the RPE 1 by 7% and 13% and the RPE n by 37% and 23% for different keypoint detectors on unseen data. It also improves upon weighted methods, like weighted NEC-LS and the non-learned covariances for the PNEC [48]. This demonstrates the importance of correctly modeling the feature correspondence quality. We show an example trajectory in Fig. 8. Tab. 3 shows the results on the EuRoC dataset for Su-perPoint. Pose estimation is significantly more difficult compared to KITTI, often having few correspondences between images. However, our method generalizes to different datasets, with the network trained on KITTI and our self-supervised approach, outperforming the others most of the time. Especially a direct comparison with NEC-LS, the closest non-probabilistic method, shows significant improvements of 7% for RPE 1 and 48% for the RPE n ." }, { "figure_ref": [], "heading": "Discussion and Limitations", "publication_ref": [], "table_ref": [], "text": "Our experiments demonstrate the capability of our framework to to correctly learn positional uncertainty, lead-ing to improved results for relative pose estimation for VO. Our approach generalizes to different feature extractors and to different datasets, providing a unified approach to estimate the noise distribution of keypoint detectors. However, our method requires more computational resources than the original uncertainty estimation for the PNEC.\nWe evaluate our learned covariances in a visual odometry setting, showing that they lead to reduced errors and especially less drift in the trajectory. However, this does not guarantee that the covariances are calibrated. Our framework inherits the ambiguity of the PNEC with regard to the noise scale. The true scale of the noise is not observable from relative pose estimation alone and only the relative scale between covariances can be learned. For the purposes of VO, this scale ambiguity is negligible.\nAs our synthetic experiments show, diverse data is needed to correctly identify the 2D noise distribution. However, obtaining the noise distribution is difficult for keypoint detectors -hence learning it from pose regression. Further limitations are addressed in App. B." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We present a novel DNLS framework for estimating positional uncertainty. Our framework can be combined with any feature extraction algorithm, making it extremely versatile. Regressing the noise distribution from relative pose estimation, ensures that learned covariance matrices are suitable for visual odometry tasks. In synthetic experiments, our framework is capable to learn the correct noise distribution from noisy data. We showed the practical application of our framework on real-world data for different feature extractors. Our learned uncertainty consistently outperforms a variety of non-probabilistic relative pose estimation algorithms as well as other uncertainty estimation methods. For a focal length similar to the one found in the KITTI dataset, the relative error is 0.015%." }, { "figure_ref": [], "heading": "Learning Correspondence Uncertainty via Differentiable Nonlinear Least Squares", "publication_ref": [], "table_ref": [], "text": "Supplementary Material" }, { "figure_ref": [], "heading": "A. 
Overview", "publication_ref": [], "table_ref": [], "text": "This supplementary material presents additional insight into learning positional uncertainty using DNLS. We start by addressing limitations of our framework in App. B. App. C gives a derivation of the residual variance σ 2 s for the symmetric PNEC. We investigate the unobservabilities of the gradient in App. D. The training and evaluation details are given in App. E. We show further quantitative evaluations in App. F and App. G. This includes examples of how the learned covariances move the minimum around the ground truth and the results on the sequences 00-07 of the KITTI [21] dataset. We compare our learned covariances against error estimates from reprojection using ground truth poses." }, { "figure_ref": [ "fig_5", "fig_6", "fig_5", "fig_6" ], "heading": "B. Limitations", "publication_ref": [], "table_ref": [], "text": "In this section, we will address limitations of our method, not mentioned in the main paper due to constrained space. We learn to estimate the noise distribution of keypoint detectors, using regression on the pose error. The gradient we use for learning the distribution is restricted to points in the image that are detected as keypoints. This restrict our method to learn only on regions of the image with a high chance of producing keypoints. While we don't need uncertainty information for regions without keypoints, this sparse information flow might reduce generalization capabilities to different datasets. Sparsity if further enhanced by using RANSAC to filter outliers, removing points that are too far off. However, we choose to include RANSAC for our training to obtain better pose estimates for gradients not dominated by outliers. We tried to mitigate the effect of overfitting on keypoint positions by cropping the images, leading to different keypoint positions. Furthermore, our experiments showed that generalization between KITTI and EuRoC are possible. Fig. 11 and Fig. 12 show examples where our method performs worse and better than the NEC-LS optimization based on the estimated covariances. We investigate the keypoints with the highest and lowest reprojection error. As Fig. 11 shows, our method is not always able to compensate keypoints on dynamic objects leading to a large rotational error. The trajectories in Fig. 12 show the improvements our method is able to achieve compared to NEC-LS." }, { "figure_ref": [], "heading": "C. Approximating σ 2 s", "publication_ref": [], "table_ref": [], "text": "This section show derives the residual variance from the bearing vector covariances in both images. Given both bearing vectors f and f are noisy, we can write them as\nf = µ + η, η ∼ N (0, Σ) ,(18)\nf = µ + η , η ∼ N (0, Σ ) ,(19)\nwith a constant and a noise term. We then get the new normal vector as\nns = (µ + η) × R(µ + η )(20)\n= μRµ + μRη + ηRµ + ηRη , with a constant term µ n = μRµ and a noise term η n = Rη +ηRµ + ηRη . The noise term is zero centered and has a variance of\nΣn = (Rµ i )Σ i (Rµ i ) + μi RΣ i R μi + Σ , (21\n)\nwhere Σ is constructed from the columns of Σ and Σ R = RΣ R as\nΣ =    (Σ 2 × Σ R,3 + Σ 3 × Σ R,2 ) (Σ 3 × Σ R,1 + Σ 1 × Σ R,3 ) (Σ 1 × Σ R,2 + Σ 2 × Σ R,1 )    . (22\n)\nAs stated in the main paper, we use an approximation of the noise distribution. Since Σ is order of magnitudes smaller than the other terms, we can approximate Σn as\nΣn ≈ (Rµ i )Σ i (Rµ i ) + μi RΣ i R μi . (23\n)\nThe final residual variance is given by\nσ 2 s = t Σnt .(24)\nFig. 
9 shows a comparison between our approximation and a the true residual distribution, given noisy image points. Do to the unprojection of the image points to bearing vectors, the trace of the bearing vector covariances is small for a focal length f of ca. 720 pixels on the KITTI dataset, since tr(Σ) ∼ 1/f 2 . Given the small covariances, Σ is several magnitudes smaller than the other terms, making the approximation accurate. Fig. 10 shows the correlation between the variance and the focal length." }, { "figure_ref": [ "fig_7", "fig_8", "fig_8", "fig_7", "fig_9", "fig_1", "fig_9" ], "heading": "D. Gradient", "publication_ref": [], "table_ref": [], "text": "In this section, we show that the gradient ∂L/∂Σ 2D is restricted by the problem geometry. We state the components needed to obtain ∂L/∂Σ 2D and show, how the geometry restricts their direction. Therefore, given a constant geometry the overall gradient direction only moves little throughout the training.\nWe start by rewriting the residual es of symmetric PNEC energy function as\nes = n σs = n d Σ + d Σ ,(25)\nfor easier differentiation, with the components Although their estimated covariances is somewhat lower (especially in (c)) this is not enough to compensate the error. (e) shows an example where points with a higher reprojection error get assigned a covariances on a similar level or slightly better than good correspondences.\nn = t f exp xRf f R exp x f t ,(26)\nd Σ = exp xRf × t Σ exp xRf × t ,(27)\nd Σ = t f exp xRΣ R exp x f t . (28\nSince we are working with rotations in SO(3) we differentiate with regard to x ∈ so(3) around the identity rotation. This gives us the following gradients\n∂n ∂x = 2 (Rf f R exp x f t) × ( f t) ,(29)\n∂d Σ ∂x = 2 (Rf ) × ( tΣ t exp xRf ) ,(30)\n∂d Σ ∂x = 2 (RΣ R exp x f t) × ( f t) ,(31)\nwith regard to the rotation. The direction of each gradient is restricted by the cross product. The gradient for the residual is given by\n∂es ∂x = 1 σs ∂n ∂x - n 2σ 3 s ∂d Σ ∂x + ∂d Σ ∂x .(32)\nThe gradients with regard to the bearing vector covariances are solely dependent on the geometry as they are given by\n∂d Σ ∂Σ = t × (exp xRf ) t × (exp xRf ) ,(33)\n∂d Σ ∂Σ = R exp x f t R exp x f t . (34\n)\nThe gradients of the residual are given by\n∂es ∂Σ = - n 2σ 3 s ∂d Σ ∂Σ ,(35)\n∂es ∂Σ = - n 2σ 3 s ∂d Σ ∂Σ .(36)\nSince all components are restricted by the geometry of the problem, the overall gradient is somewhat restricted as well. We show this empirically in the following.\nFig. 13 and Fig. 14 give the distribution of the gradient for the first experiment on synthetic data, where all individual problems share the same geometric setup. Fig. 14 shows the eigenvectors of ∂L/∂Σ 2D for one covariance in the image plane. After 10 epochs of training, the eigenvectors are mainly located at 4 distinct regions, showing the restriction of the gradient direction. Even after 100 epochs of training certain regions show only few eigenvectors. The angular distribution of the eigenvectors in Fig. 13 show 4 distinct peaks, with almost no eigenvectors in between.\nFig. 15 and Fig. 16 show the distribution of the gradient for the second experiment on synthetic data, with more diverse data. Given the diverse data, there are eigenvectors in all directions, even after 10 epochs. Fig. 15 still shows 4 distinct peaks, however there is no sparsity in the distribution.\nThe sparse distribution of the gradient direction prohibit learning the correct noise distribution for the first experiment. Only the residual variance is correctly estimated. 
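Returning briefly to the approximation of Eqs. (23)-(24), its accuracy can also be probed with a small Monte-Carlo sketch; the geometry, noise levels and names below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def hat(u):
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

ang = 0.1
R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
              [np.sin(ang),  np.cos(ang), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
mu  = np.array([0.10, 0.05, 1.0]); mu  /= np.linalg.norm(mu)
mup = np.array([0.12, 0.04, 1.0]); mup /= np.linalg.norm(mup)
S, Sp = 1e-6 * np.diag([2.0, 1.0, 0.5]), 1e-6 * np.diag([1.0, 3.0, 0.5])

# empirical variance of the residual t^T (f x R f') under bearing-vector noise
N = 200_000
f  = mu  + rng.multivariate_normal(np.zeros(3), S,  size=N)
fp = mup + rng.multivariate_normal(np.zeros(3), Sp, size=N)
var_mc = (np.cross(f, fp @ R.T) @ t).var()

# analytic approximation, Eqs. (23)-(24)
cov_n = hat(R @ mup) @ S @ hat(R @ mup).T + hat(mu) @ R @ Sp @ R.T @ hat(mu).T
var_approx = t @ cov_n @ t
print(var_mc, var_approx)   # the two values should agree to within sampling error
```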
However, the introduction of diverse data with different geometries removes this restriction, leading better covariance estimates." }, { "figure_ref": [], "heading": "E. Hyperparameters", "publication_ref": [ "b47", "b47", "b47" ], "table_ref": [], "text": "This section details the training and evaluation parameters for our DNLS framework for estimating noise distributions of keypoints. All models are trained on two RTX 5000 GPUs with 16GB of memory for around 3 days. We use a UNet architecture with 3 output channels for predicting the uncertainty parameters. The UNet has 4 down convolutions and 4 up convolutions with 32, 64, 128, 256 and 128, 64, 32, 16 channels, respectively. Tab. 5 gives the SuperPoint and SuperGlue hyperparameters for training and evaluation. For our supervised training, we train on consecutive image pairs of the training sequences. For our self-supervised training we create the training tuples from 3 consecutive images. When training with SuperPoint, we crop the images to size (1200, 300), whereas Table 4. Parameters used for training and evaluation.\nfor KLT-Tracks, we crop it to (1200, 320). We found that reducing the height too much for KLT-tracks leads to not enough tracks. For evaluating with KLT-tracks on KITTI we change the following to [48]: instead of tracking keypoints over multiple images, we start with fresh keypoints for each image pair. To account for the symmetric PNEC, we slightly modify the uncertainty extraction. We use [48,suppl.,Eqn. (8)] as the uncertainty measure for the tracks in both frames. We found, that these changes already give better results than the ones stated [48]. Tab. " }, { "figure_ref": [ "fig_3", "fig_2" ], "heading": "F. Moving the Minimum", "publication_ref": [ "b31" ], "table_ref": [], "text": "Fig. 18 and Fig. 17 show examples for energy functions around the ground truth pose on the KITTI dataset. The energy functions are evaluated with keypoints filtered using the reprojection error also used in the RANSAC scheme of [32] to remove outliers. We show the energy functions evaluated for rotations around the ground truth for yaw and pitch. While the overall shape of the energy function stays the same, our methods moves the minimum closer to the ground truth pose by learning the covariances." }, { "figure_ref": [], "heading": "G. Further Results", "publication_ref": [ "b50", "b41", "b31", "b41", "b11", "b41" ], "table_ref": [], "text": "In this section we present additional results on the KITTI dataset, not presented in the main paper due to constrained space. We give the evaluation results for all sequences, training and test set. To present more comparisons with baseline methods, we replace the Nistér-5pt [51] with the 8pt [42] algorithm. Furthermore, we replace the weighted NEC-LS and the KLT-PNEC. Instead, we add another PNEC method, where we approximate the error distribution using a reprojection error. Following [32], we triangulate a 3D point using the feature correspondence p i , p i and the ground truth pose. We reproject the point into the images as pi , p i and approximate the the error distribution as scaled isotropic covariances\nΣ 2D,i = pi -p i 2 I 2 ,(37)\nΣ 2D,i = p i -p i 2 I 2 .(38)\nWe clip the scale of the covariances at 0.01 and 4.0. Tab. 7 shows the results for the training and test set on KITTI with SuperPoint. While the reprojection method achieves the best results for the RPE 1 and et, our methods are often not far behind. 
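The reprojection-based covariance approximation of Eqs. (37)-(38) reduces to a few lines; the sketch below assumes hypothetical `triangulate` and `project` helpers (with the first camera placed at the identity pose) and applies the clipping described above.

```python
import numpy as np

def reprojection_covariances(p1, p2, K, R_gt, t_gt, triangulate, project,
                             lo=0.01, hi=4.0):
    """Scaled isotropic 2D covariances from the reprojection error, Eqs. (37)-(38)."""
    X = triangulate(p1, p2, K, R_gt, t_gt)             # 3D point from the GT pose
    p1_hat = project(X, K, np.eye(3), np.zeros(3))     # reprojection into image 1
    p2_hat = project(X, K, R_gt, t_gt)                 # reprojection into image 2
    s1 = np.clip(np.sum((p1_hat - p1) ** 2), lo, hi)   # squared error, clipped
    s2 = np.clip(np.sum((p2_hat - p2) ** 2), lo, hi)   # squared error, clipped
    return s1 * np.eye(2), s2 * np.eye(2)
```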
This shows, that our network is capable and not too far off, when it comes to pose estimation. Tab. 6 shows the results for KITTI with KLT-tracks. We show trajectories for all sequences of the KITTI dataset in Fig. 20 and Fig. 19. Our method consistently achieves the smallest drift over all sequences. The energy function is evaluated for KLT-tracks for two pose estimation problems on the KITTI dataset, filtered with RANSAC at the ground truth pose. We compare the PNEC energy function using the KLT-covariances with our supervised and self-supervised covariances. While the overall shape of the energy function stays the same, our learned covariances move the minimum closer to the ground truth.\n8PT [42] NEC [ [12] keypoints. We replace the Nistér-5pt [50] with the 8pt [42] algorithm to show more results. We also show, an approximation of the true error distance using reprojected points (this is excluded from being bold or underlined). While the reprojection approximation achieves the best results on almost all sequences, our methods are often not far behind. This emphasises, that our method is able to effectively learn covariances. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements. This work was supported by the ERC Advanced Grant SIMULACRON, by the Munich Center for Machine Learning and by the EPSRC Programme Grant VisualAI EP/T028572/1." } ]
Figure 1. We present a differentiable nonlinear least squares (DNLS) framework for learning feature correspondence quality by computing per-feature positional uncertainty. The uncertainty estimates (left, bottom images) are regressed from a pose estimation error (middle), enabling the framework across a range of (handcrafted, learned) feature extractors. Our learned covariances (right, orange trajectory) improve orientation estimation by up to 11% over state-of-the-art probabilistic pose estimation methods on the KITTI dataset [21].
Learning Correspondence Uncertainty via Differentiable Nonlinear Least Squares
[ { "figure_caption": "Figure 4 .4Figure 4. Rotational error (a) and differences between the true residual variance σ2 and the learned variance σ 2 (b) over the training epochs. Starting from uniform covariances, our method adapts the covariances for each keypoint to minimize the rotational error. Simultaneously, this leads to a better estimate of σ 2 .", "figure_data": "", "figure_id": "fig_0", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Rotational error (a) and differences between the true residual variance σ2 and the learned variance σ 2 (b) over the training epochs. As previously, our method learns to adapt the covariances for each keypoint to minimize rotational error. Minimizing the rotational error leads to a significantly better estimate of σ 2 .", "figure_data": "", "figure_id": "fig_1", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Estimated (red) covariance ellipses in the second frame, learned from 128 000 examples. Ground truth (green) covariances as comparison. Training data with enough variety gives a gradient that allows to correctly learn the covariances even in the image plane, overcoming the unobservabilities of the first experiment.", "figure_data": "", "figure_id": "fig_2", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Qualitative trajectory comparison for KITTI seq. 00. Since we compare monocular methods, that cannot estimate the correct scale from a pair of images, we use the scale of the ground truth translations for visualization purposes. Both, our supervised and self-supervised approaches lead to significant improvements in the trajectory. There is little drift even without additional rotation averaging[11] or loop closure[49].", "figure_data": "", "figure_id": "fig_3", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .Figure 10 .910Figure 9. Approximation of the residual variances. The analytical approximation given in the main paper accurately models the true distribution of the residual given a similar setup to the KITTI dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "910", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Left: estimated keypoints with covariances (color-coded ellipses) for examples where our method performs worse than NEC-LS. Good ( ) and bad correspondences ( ) based on the reprojection error. Right: corresponding sections of the trajectory. (a) and (c) show examples with keypoints on dynamic objects.Although their estimated covariances is somewhat lower (especially in (c)) this is not enough to compensate the error. (e) shows an example where points with a higher reprojection error get assigned a covariances on a similar level or slightly better than good correspondences.", "figure_data": "", "figure_id": "fig_5", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 .12Figure 12. Left: estimated keypoints with covariances (color-coded ellipses) for examples where our method performs better than NEC-LS. Good ( ) and bad correspondences ( ) based on the reprojection error. Right: corresponding sections of the trajectory. Covariances for bad correspondences are estimated to be higher in these examples. They are down-weighted in the optimization leading to better pose estimates. 
Hyperparameter KITTI EuRoC optimizer ADAM ADAM β1 0.9 0.9 β2 0.999 0.999 learning rate 5 • 10 -4 5 • 10 -4 PNEC and theseus regularization 10 -13 10 -13 damping 10 7 10 7 iterations 100 100 RANSAC iterations 5000 5000 threshold 10 -6 8 • 10 -7", "figure_data": "", "figure_id": "fig_6", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. Histogram of eigenvector angles for the gradient ∂L/∂Σ 2D after 10, 50, and 100 epochs. The histogram shows 4 distinct peaks, with only a few points in between. This shows the limited direction that the gradients have, making it difficult to learn the true distribution of the covariances with little diversity in the training data.", "figure_data": "", "figure_id": "fig_7", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. Distribution of eigenvectors of the gradient ∂L/∂Σ 2D after 10, 50, and 100 epochs. Eigenvectors are color coded (green to blue and yellow to red) depending, whether there are the 1st or 2nd eigenvector and their epoch. While after 100 epochs most of the circle is covered, the eigenvectors aggregate at certain positions. Especially after 10 epochs, the eigenvectors are sparsely distributed. This shows a limited range of directions for the gradient.", "figure_data": "", "figure_id": "fig_8", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 .15Figure15. Histogram of eigenvector angles for the gradient ∂L/∂Σ 2D after 10, 50, and 100 epochs. While it shows 4 distinct peaks, event after only 10 epochs many points lie in between. The direction of the gradient is not limited, allowing for a better fit to the ground truth noise distribution.", "figure_data": "", "figure_id": "fig_9", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 .Figure 17 .Figure 18 .161718Figure16. Distribution of eigenvectors of the gradient ∂L/∂Σ 2D after 10, 50, and 100 epochs. Eigenvectors are color coded (green to blue and yellow to red) depending, whether there are the 1st or 2nd eigenvector and their epoch. Even after 10 epochs, the eigenvectors are evenly distributed. This show, that the gradient has no limit for its direction, allowing for a better fit to the noise distribution even in the image plane.", "figure_data": "", "figure_id": "fig_10", "figure_label": "161718", "figure_type": "figure" }, { "figure_caption": "10 Figure 19 . 10 Figure 20 .10191020Figure 19. Trajectory comparison for the KITTI visual odometry sequences for SuperPoint keypoints. Since we compare monocular methods, that cannot estimate the correct scale from a pair of images, we use the scale of the ground truth translations for visualization purposes.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10191020", "figure_type": "figure" }, { "figure_caption": "Quantitative comparison on the KITTI [21] dataset with SuperPoint[12] keypoints. We compare two rotation and one translation metric. The results are shown for each test sequence together with the mean results on the training and test set weighted by the sequence length. Both our training setups outperform the non-probablitic algorithms but also the weighted NEC-LS using SuperGlue confidences consistently across unseen data. The learned uncertainties are able to generalise well and improve the relative pose estimation significantly.", "figure_data": "NIST ÉR-5PT [50]NEC [33]NEC-LSWEIGHTEDOURSOURS SELF-NEC-LSSUPERVISEDSUPERVISEDSeq. 
RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet080.195 17.020 4.24 0.081 8.284 3.66 0.056 7.004 2.50 0.054 6.059 2.50 0.050 4.067 2.46 0.050 4.118 2.46090.142 5.754 1.74 0.053 1.646 1.43 0.052 1.553 0.71 0.051 1.354 0.70 0.049 1.317 0.71 0.049 1.278 0.70100.295 16.678 6.57 0.167 9.264 4.43 0.064 4.787 1.79 0.063 4.389 1.76 0.063 3.513 1.64 0.065 3.821 1.65train 0.249 11.506 4.13 0.141 10.127 2.97 0.082 6.910 1.72 0.081 6.410 1.72 0.077 2.378 1.69 0.077 2.505 1.69test 0.200 14.349 4.07 0.089 6.917 3.28 0.056 5.353 1.96 0.055 4.676 1.95 0.052 3.333 1.91 0.053 3.408 1.91NIST ÉR-5PT [50]NEC [33]NEC-LSKLT-PNEC [48]OURSOURS SELF-SUPERVISEDSUPERVISEDSeq. RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet080.126 6.929 3.44 0.088 3.902 8.91 0.053 2.908 2.49 0.054 2.524 2.42 0.048 2.373 2.36 0.047 1.706 2.36090.090 2.544 1.28 0.054 2.027 6.76 0.052 2.307 0.74 0.046 1.003 0.69 0.043 1.244 0.64 0.042 1.141 0.64100.188 11.554 4.43 0.119 8.302 8.53 0.066 4.576 1.78 0.063 4.480 1.71 0.058 3.789 1.58 0.056 3.623 1.60train 0.204 9.677 3.19 0.173 8.301 8.59 0.103 3.955 1.73 0.104 4.213 1.66 0.094 2.782 1.60 0.096 2.737 1.61test 0.129 6.722 3.11 0.085 4.237 8.34 0.055 3.060 1.96 0.054 2.514 1.90 0.048 2.359 1.82 0.048 1.910 1.83", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative comparison on the KITTI [21] dataset with KLT tracks [66]. As in Tab. 1, we show the results on the test set together", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": ".501 71.87 31.86 0.320 39.50 43.12 0.387 52.92 46.31 0.388 56.52 46.82 0.327 31.12 35.56 0.332 31.81 34.01 V1 02 0.541 32.01 20.36 0.389 28.11 26.95 0.540 70.08 28.94 0.542 68.35 29.81 0.444 30.39 21.98 0.436 29.07 21.29 V1 03 0.660 27.39 25.00 0.492 25.42 31.06 0.552 76.72 31.58 0.555 78.14 32.25 0.510 29.52 24.19 0.520 31.18 24.13 V2 01 0.515 61.45 33.51 0.316 31.95 39.79 0.310 35.84 39.00 0.314 38.62 39.62 0.285 17.61 32.40 0.295 22.41 30.58 V2 02 0.545 43.73 22.24 0.396 25.48 32.21 0.369 26.96 25.36 0.365 25.09 25.81 0.382 25.32 21.16 0.386 21.91 20.34 V2 03 1.123 36.71 28.77 0.976 48.26 37.60 0.939 107.11 36.74 0.941 100.73 36.71 0.942 52.72 31.13 0.991 55.41 30.40 mean 0.631 48.45 27.56 0.463 33.51 36.03 0.494 58.90 35.61 0.496 58.95 36.11 0.461 30.57 28.46 0.472 31.44 27.44 Quantitative comparison on the Vicon sequences of the EuRoC dataset [9] with SuperPoint [12] keypoints. The dataset is more difficult than KITTI (see Tab. 2 and Tab. 1) with SuperPoint and SuperGlue", "figure_data": "1", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "4 gives the training parameter for optimizer, theseus and the PNEC energy function not stated in the main paper. Hyperparameters for SuperPoint and SuperGlue during training and evaluation on the KITTI and EuRoC dataset.", "figure_data": "Hyperparametertraining KITTI EuRoCmax keypoints25620481024keypoint threshold0.005 0.005 0.0005nms radius333weightsoutdoor outdoor indoorsinkhorn iterations202020match threshold0.50.50.01", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Quantitative comparison on the KITTI [21] dataset with KLT tracks[66]. We replace the Nistér-5pt [50] with the 8pt[42] algorithm to show more results. We also show, an approximation of the true error distance using reprojected points (this is excluded from being bold or underlined). 
While the reprojection approximation achieves the best results on almost all sequences, our methods are often not far behind. This emphasises, that our method is able to effectively learn covariances. .244 2.89 0.059 6.901 1.43 0.050 6.634 0.76 0.042 1.178 0.70 0.041 1.242 0.70 0.035 0.964 0.58 07 0.231 7.086 8.86 0.185 4.402 8.67 0.112 2.341 6.69 0.103 2.772 6.54 0.109 3.715 6.63 0.120 3.434 4.82 08 0.183 10.423 4.21 0.081 8.284 3.66 0.056 7.004 2.50 0.050 4.067 2.46 0.050 4.118 2.46 0.048 3.623 2.30 09 0.185 5.485 2.29 0.053 1.646 1.43 0.052 1.553 0.71 0.049 1.317 0.71 0.049 1.278 0.70 0.048 1.160 0.69 10 0.198 8.960 4.09 0.167 9.264 4.43 0.064 4.787 1.79 0.063 3.513 1.64 0.065 3.821 1.65 0.060 2.404 1.21 train 0.203 10.051 3.54 0.141 10.127 2.97 0.082 6.910 1.72 0.077 2.378 1.69 0.077 2.505 1.69 0.076 2.606 1.44 test 0.186 9.023 3.74 0.089 6.917 3.28 0.056 5.353 1.96 0.052 3.333 1.91 0.053 3.408 1.91 0.050 2.839 1.73", "figure_data": "33]NEC-LSOURSOURS SELF-REPROJECTIONSUPERVISEDSUPERVISEDSeq. RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet000.185 7.203 2.61 0.153 5.505 9.32 0.121 2.403 1.42 0.115 2.994 1.31 0.113 3.110 1.30 0.117 3.080 1.29010.253 7.162 2.89 0.659 28.523 5.24 0.270 8.991 2.20 0.294 6.433 2.23 0.349 6.042 2.27 0.363 7.712 2.20020.159 7.451 1.85 0.115 6.891 7.69 0.079 3.751 1.06 0.078 3.411 0.99 0.075 3.342 0.99 0.083 4.410 0.99030.131 4.822 2.47 0.089 1.889 7.45 0.051 1.493 1.17 0.058 0.602 1.01 0.049 0.444 1.00 0.047 0.608 1.00040.126 1.899 1.08 0.037 0.846 6.42 0.037 0.816 0.50 0.030 0.387 0.44 0.030 0.428 0.43 0.028 0.549 0.33050.148 5.563 3.35 0.155 10.630 9.75 0.089 6.352 2.40 0.046 1.285 2.23 0.046 1.235 2.23 0.056 1.644 2.17060.142 3.376 1.55 0.066 1.984 7.30 0.044 1.325 0.63 0.032 1.576 0.50 0.032 1.569 0.50 0.031 1.467 0.45070.170 5.347 6.41 0.258 12.558 12.51 0.120 5.371 5.58 0.094 2.731 4.97 0.098 2.500 5.15 0.073 2.132 4.18080.144 8.508 3.49 0.088 3.902 8.91 0.053 2.908 2.49 0.048 2.373 2.36 0.047 1.706 2.36 0.047 2.454 2.31090.151 4.546 1.71 0.054 2.027 6.76 0.052 2.307 0.74 0.043 1.244 0.64 0.042 1.141 0.64 0.044 1.385 0.64100.148 6.540 2.88 0.119 8.302 8.53 0.066 4.576 1.78 0.058 3.789 1.58 0.056 3.623 1.60 0.057 2.615 1.37train 0.168 6.407 2.69 0.173 8.301 8.59 0.103 3.955 1.73 0.094 2.782 1.60 0.096 2.737 1.61 0.100 3.193 1.52test 0.146 7.246 2.97 0.085 4.237 8.34 0.055 3.060 1.96 0.048 2.359 1.82 0.048 1.910 1.83 0.048 2.234 1.768PT [42]NEC [33]NEC-LSOURSOURS SELF-REPROJECTIONSUPERVISEDSUPERVISEDSeq. RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet RPE1 RPEnet000.216 11.650 3.40 0.132 12.483 3.20 0.116 8.728 1.35 0.114 2.277 1.38 0.114 2.522 1.38 0.113 2.363 1.28010.246 8.080 3.83 0.539 22.857 1.55 0.082 6.378 1.00 0.060 5.811 0.99 0.057 5.770 0.94 0.054 5.997 0.81020.188 12.003 2.06 0.093 7.594 1.76 0.069 4.050 1.01 0.066 2.224 0.99 0.066 2.237 1.00 0.065 2.679 0.95030.167 8.308 3.42 0.090 3.863 3.31 0.055 3.754 1.12 0.059 2.239 1.13 0.057 2.051 1.12 0.054 2.394 1.07040.160 2.682 1.45 0.040 0.486 0.81 0.041 0.434 0.49 0.038 1.041 0.46 0.037 0.808 0.46 0.027 0.526 0.30050.198 9.236 4.56 0.119 11.779 3.65 0.062 12.437 2.50 0.055 1.931 2.37 0.055 1.949 2.40 0.053 2.123 2.02060.193 5", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Full results on the KITTI [21] dataset with SuperPoint", "figure_data": "", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" } ]
Dominik Muhle; Lukas Koestler; Krishna Murthy Jatavallabhula; Daniel Cremers
[ { "authors": "Sameer Agarwal; Keir Mierle; Others", "journal": "", "ref_id": "b0", "title": "Ceres solver", "year": "" }, { "authors": "Relja Arandjelovic; Petr Gronat; Akihiko Torii; Tomas Pajdla; Josef Sivic", "journal": "", "ref_id": "b1", "title": "Netvlad: Cnn architecture for weakly supervised place recognition", "year": "2016" }, { "authors": "Herbert Bay; Tinne Tuytelaars; Luc Van Gool", "journal": "", "ref_id": "b2", "title": "Surf: Speeded up robust features", "year": "2006" }, { "authors": "Christopher M Bishop", "journal": "Springer", "ref_id": "b3", "title": "Pattern recognition and machine learning", "year": "2007" }, { "authors": "Eric Brachmann; Alexander Krull; Sebastian Nowozin; Jamie Shotton; Frank Michel; Stefan Gumhold; Carsten Rother", "journal": "", "ref_id": "b4", "title": "Dsac-differentiable ransac for camera localization", "year": "2017" }, { "authors": "Jesus Briales; Laurent Kneip; Javier Gonzalez; -Jimenez ", "journal": "", "ref_id": "b5", "title": "A certifiably globally optimal solution to the non-minimal relative pose problem", "year": "2018" }, { "authors": "M J Brooks; W Chojnacki; D Gawley; A Van Den; Hengel", "journal": "", "ref_id": "b6", "title": "What value covariance information in estimating vision parameters?", "year": "2001" }, { "authors": "Keenan Burnett; David J Yoon; Angela P Schoellig; Timothy D Barfoot", "journal": "Robotics", "ref_id": "b7", "title": "Radar odometry combining probabilistic estimation and unsupervised feature learning", "year": "2021" }, { "authors": "Michael Burri; Janosch Nikolic; Pascal Gohl; Thomas Schneider; Joern Rehder; Sammy Omari; Markus W Achtelik; Roland Siegwart", "journal": "The International Journal of Robotics Research", "ref_id": "b8", "title": "The euroc micro aerial vehicle datasets", "year": "2016" }, { "authors": "Cesar Cadena; Luca Carlone; Henry Carrillo; Yasir Latif; Davide Scaramuzza; José Neira; Ian Reid; John J Leonard", "journal": "IEEE Transactions on robotics", "ref_id": "b9", "title": "Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age", "year": "2016" }, { "authors": "Chee-Kheng Chng; Álvaro Parra; Tat-Jun Chin; Yasir Latif", "journal": "Digital Image Computing: Techniques and Applications (DICTA)", "ref_id": "b10", "title": "Monocular rotational odometry with incremental rotation averaging and loop closure", "year": "2020" }, { "authors": "Tomasz Daniel Detone; Andrew Malisiewicz; Rabinovich", "journal": "", "ref_id": "b11", "title": "Superpoint: Self-supervised interest point detection and description", "year": "2018" }, { "authors": "Justin Domke", "journal": "PMLR", "ref_id": "b12", "title": "Generic methods for optimization-based modeling", "year": "2012" }, { "authors": "Leyza Baldo; Dorini ; Siome Klein Goldenstein", "journal": "Computer Vision and Image Understanding", "ref_id": "b13", "title": "Unscented feature tracking", "year": "2011" }, { "authors": "J Engel; V Koltun; D Cremers", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b14", "title": "Direct sparse odometry", "year": "2018" }, { "authors": "J Engel; T Schöps; D Cremers", "journal": "", "ref_id": "b15", "title": "LSD-SLAM: Largescale direct monocular SLAM", "year": "2014" }, { "authors": "Pablo Kaveh Fathian; Emily A Ramirez-Paredes; Willard Doucette; Nicholas R Curtis; Gans", "journal": "IEEE Robotics and Automation Letters (RAL)", "ref_id": "b16", "title": "Quest: A quaternionbased approach for camera motion estimation from 
minimal feature points", "year": "2018" }, { "authors": "O D Faugeras; S Maybank", "journal": "", "ref_id": "b17", "title": "Motion from point matches: multiplicity of solutions", "year": "1989" }, { "authors": "A Martin; Robert C Fischler; Bolles", "journal": "Communications of the ACM", "ref_id": "b18", "title": "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography", "year": "1981" }, { "authors": "Wolfgang Förstner; Eberhard Gülch", "journal": "", "ref_id": "b19", "title": "A fast operator for detection and precise location of distinct points, corners and centres of circular features", "year": "1987" }, { "authors": "Andreas Geiger; Philip Lenz; Raquel Urtasun", "journal": "", "ref_id": "b20", "title": "Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite", "year": "2012" }, { "authors": "Hugo Germain; Guillaume Bourmaud; Vincent Lepetit", "journal": "", "ref_id": "b21", "title": "S2dnet: Learning accurate correspondences for sparse-todense feature matching", "year": "2020" }, { "authors": "I Richard; Hartley", "journal": "IEEE Transactions on pattern analysis and machine intelligence", "ref_id": "b22", "title": "In defense of the eight-point algorithm", "year": "1997" }, { "authors": "R I Hartley; A Zisserman", "journal": "Cambridge University Press", "ref_id": "b23", "title": "Multiple View Geometry in Computer Vision", "year": "2004" }, { "authors": "Ganesh Iyer; Krishna Murthy Jatavallabhula; Gunshi Gupta; Madhava Krishna; K ; Liam Paull", "journal": "", "ref_id": "b24", "title": "Geometric consistency for self-supervised end-to-end visual odometry", "year": "2018" }, { "authors": "Krishna Murthy; Jatavallabhula ; Ganesh Iyer; Liam Paull", "journal": "", "ref_id": "b25", "title": "∇ slam: Dense slam meets automatic differentiation", "year": "2020" }, { "authors": "Kenichi Kanatani", "journal": "Systems and Computers in Japan", "ref_id": "b26", "title": "For geometric inference from images, what kind of statistical model is necessary", "year": "2004" }, { "authors": "Kenichi Kanatani", "journal": "IJCV", "ref_id": "b27", "title": "Statistical optimization for geometric fitting: Theoretical accuracy bound and high order error analysis", "year": "2008" }, { "authors": "Y Kanazawa; K Kanatani", "journal": "", "ref_id": "b28", "title": "Do we really have to consider covariance matrices for image features?", "year": "2001" }, { "authors": "Alex Kendall; Matthew Grimes; Roberto Cipolla", "journal": "", "ref_id": "b29", "title": "Posenet: A convolutional network for real-time 6-dof camera relocalization", "year": "2015" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "CoRR", "ref_id": "b30", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Laurent Kneip; Paul Furgale", "journal": "", "ref_id": "b31", "title": "Opengv: A unified and generalized approach to real-time calibrated geometric vision", "year": "2014" }, { "authors": "Laurent Kneip; Simon Lynen", "journal": "", "ref_id": "b32", "title": "Direct optimization of frame-to-frame rotation", "year": "2013" }, { "authors": "Laurent Kneip; Roland Siegwart; Marc Pollefeys", "journal": "", "ref_id": "b33", "title": "Finding the exact rotation between two images independently of the translation", "year": "2012" }, { "authors": "Erwin Kruppa", "journal": "Hölder", "ref_id": "b34", "title": "Zur Ermittlung eines Objektes aus zwei Perspektiven mit innerer Orientierung", "year": "1913" }, { "authors": "Zuzana Kukelova; 
Martin Bujnak; Tomas Pajdla", "journal": "", "ref_id": "b35", "title": "Polynomial eigenvalue solutions to the 5-pt and 6-pt relative pose problems", "year": "2008" }, { "authors": "Kenneth Levenberg", "journal": "Quarterly of applied mathematics", "ref_id": "b36", "title": "A method for the solution of certain non-linear problems in least squares", "year": "1944" }, { "authors": "Hongdong Li; Richard Hartley", "journal": "", "ref_id": "b37", "title": "Five-point motion estimation made easy", "year": "2006" }, { "authors": "John Lim; Nick Barnes; Hongdong Li", "journal": "IEEE TPAMI", "ref_id": "b38", "title": "Estimating relative camera motion from the antipodal-epipolar constraint", "year": "2010" }, { "authors": "Philipp Lindenberger; Paul-Edouard Sarlin; Viktor Larsson; Marc Pollefeys", "journal": "", "ref_id": "b39", "title": "Pixel-Perfect Structure-from-Motion with Featuremetric Refinement", "year": "2021" }, { "authors": "Jonathan Long; Evan Shelhamer; Trevor Darrell", "journal": "", "ref_id": "b40", "title": "Fully convolutional networks for semantic segmentation", "year": "2015" }, { "authors": " Hc Longuet-Higgins", "journal": "", "ref_id": "b41", "title": "Readings in computer vision: issues, problems, principles, and paradigms. A computer algorithm for reconstructing a scene from two projections", "year": "1987" }, { "authors": " David G Lowe", "journal": "IJCV", "ref_id": "b42", "title": "Distinctive image features from scaleinvariant keypoints", "year": "2004" }, { "authors": "Bruce D Lucas; Takeo Kanade", "journal": "", "ref_id": "b43", "title": "An iterative image registration technique with an application to stereo vision", "year": "1981" }, { "authors": "Donald W Marquardt", "journal": "Journal of the society for Industrial and Applied Mathematics", "ref_id": "b44", "title": "An algorithm for least-squares estimation of nonlinear parameters", "year": "1963" }, { "authors": "Jochen Meidow; Christian Beder; Wolfgang Förstner", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b45", "title": "Reasoning with uncertain points, straight lines, and straight line segments in 2d", "year": "2009" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b46", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": " Muhle; N Koestler; F Demmel; Bernard; Cremers", "journal": "", "ref_id": "b47", "title": "The probabilistic normal epipolar constraint for frameto-frame rotation optimization under uncertain feature positions", "year": "2022" }, { "authors": "R Mur-Artal; J D Tardós", "journal": "IEEE Transactions on Robotics", "ref_id": "b48", "title": "Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras", "year": "2017" }, { "authors": "D Nister", "journal": "", "ref_id": "b49", "title": "An efficient solution to the five-point relative pose problem", "year": "2003" }, { "authors": "D Nistr; O Naroditsky; J Bergen", "journal": "CVPR", "ref_id": "b50", "title": "Visual odometry", "year": "2004" }, { "authors": "Luis Pineda; Taosha Fan; Maurizio Monge; Shobha Venkataraman; Paloma Sodhi; Ricky Tq Chen; Joseph Ortiz; Daniel Detone; Austin Wang; Stuart Anderson; Jing Dong; Brandon Amos; Mustafa Mukadam", "journal": "NeurIPS", "ref_id": "b51", "title": "Theseus: A Library for Differentiable Nonlinear Optimization", "year": "2022" }, { "authors": "René Ranftl; Vladlen 
Koltun", "journal": "", "ref_id": "b52", "title": "Deep fundamental matrix estimation", "year": "2018" }, { "authors": "Joseph Redmon; Santosh Divvala; Ross Girshick; Ali Farhadi", "journal": "", "ref_id": "b53", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "Springer", "ref_id": "b54", "title": "Unet: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "E Rublee; V Rabaud; K Konolige; G Bradski", "journal": "", "ref_id": "b55", "title": "Orb: An efficient alternative to sift or surf", "year": "2011" }, { "authors": "Paul-Edouard Sarlin; Daniel Detone; Tomasz Malisiewicz; Andrew Rabinovich", "journal": "", "ref_id": "b56", "title": "Superglue: Learning feature matching with graph neural networks", "year": "2020" }, { "authors": "Paul-Edouard Sarlin; Ajaykumar Unagar; Mans Larsson; Hugo Germain; Carl Toft; Viktor Larsson; Marc Pollefeys; Vincent Lepetit; Lars Hammarstrand; Fredrik Kahl", "journal": "", "ref_id": "b57", "title": "Back to the feature: Learning robust camera localization from pixels to pose", "year": "2021" }, { "authors": "Sameer Sheorey; Shalini Keshavamurthy; Huili Yu; Hieu Nguyen; Clark N Taylor", "journal": "", "ref_id": "b58", "title": "Uncertainty estimation for klt tracking", "year": "2014" }, { "authors": "R M Steele; C Jaynes", "journal": "", "ref_id": "b59", "title": "Feature uncertainty arising from covariant image noise", "year": "2005" }, { "authors": "Henrik Stewenius; Christopher Engels; David Nistér", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b60", "title": "Recent developments on direct relative orientation", "year": "2006" }, { "authors": "Richard Szeliski", "journal": "Springer Science & Business Media", "ref_id": "b61", "title": "Computer vision: algorithms and applications", "year": "2010" }, { "authors": "Carlo Tomasi; Takeo Kanade", "journal": "IJCV", "ref_id": "b62", "title": "Detection and tracking of point features", "year": "1991" }, { "authors": "Akihiko Torii; Relja Arandjelovic; Josef Sivic; Masatoshi Okutomi; Tomas Pajdla", "journal": "", "ref_id": "b63", "title": "24/7 place recognition by view synthesis", "year": "2015" }, { "authors": "Bill Triggs; Richard I Philip F Mclauchlan; Andrew W Hartley; Fitzgibbon", "journal": "", "ref_id": "b64", "title": "Bundle adjustment-a modern synthesis", "year": "1999" }, { "authors": "Vladyslav Usenko; Nikolaus Demmel; David Schubert; Jörg Stückler; Daniel Cremers", "journal": "IEEE Robotics and Automation Letters (RAL)", "ref_id": "b65", "title": "Visual-inertial mapping with non-linear factor recovery", "year": "2020" }, { "authors": "Von Lukas; Patrick Stumberg; Qadeer Wenzel; Daniel Khan; Cremers", "journal": "IEEE Robotics and Automation Letters (RAL)", "ref_id": "b66", "title": "Gn-net: The gauss-newton loss for multiweather relocalization", "year": "2020" }, { "authors": "N Yang; L Stumberg; R Wang; D Cremers", "journal": "", "ref_id": "b67", "title": "D3vo: Deep depth, deep pose and deep uncertainty for monocular visual odometry", "year": "2020" }, { "authors": "N Yang; R Wang; J Stueckler; D Cremers", "journal": "", "ref_id": "b68", "title": "Deep virtual stereo odometry: Leveraging deep depth prediction for monocular direct sparse odometry", "year": "2018" }, { "authors": "Kwang Moo; Yi ; Eduard Trulls; Vincent Lepetit; Pascal Fua", "journal": "", "ref_id": "b69", "title": "Lift: Learned invariant feature 
transform", "year": "2016" }, { "authors": "Bernhard Zeisl; Pierre Georgel; Florian Schweiger; Eckehard Steinbach; Nassir Navab", "journal": "", "ref_id": "b70", "title": "Estimation of location uncertainty for scale invariant feature points", "year": "2009" }, { "authors": "Hongmou Zhang; Denis Grießbach; Jürgen Wohlfeil; Anko Börner", "journal": "Springer", "ref_id": "b71", "title": "Uncertainty model for template feature matching", "year": "2017" } ]
[ { "formula_coordinates": [ 4, 73.72, 506.99, 212.64, 28.23 ], "formula_id": "formula_0", "formula_text": "E(R, t) = i e 2 i σ 2 i = i |t (f i × Rf i )| 2 t f i RΣ i R f i t ,(1)" }, { "formula_coordinates": [ 4, 61.5, 700.16, 220.99, 14.66 ], "formula_id": "formula_1", "formula_text": "σ 2 s,i = t ( (Rf i )Σ i (Rf i ) + f i RΣ i R f i )t . (2" }, { "formula_coordinates": [ 4, 282.49, 704.51, 3.87, 8.64 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 4, 385.23, 298.36, 78.81, 28.23 ], "formula_id": "formula_3", "formula_text": "E s (R, t) = i e 2 i σ2" }, { "formula_coordinates": [ 4, 458.11, 306.99, 87, 17.45 ], "formula_id": "formula_4", "formula_text": "s,i(3)" }, { "formula_coordinates": [ 4, 388.84, 400.77, 156.27, 14.63 ], "formula_id": "formula_5", "formula_text": "Σ 2D = arg min Σ2D L ,(4)" }, { "formula_coordinates": [ 4, 375.29, 483.05, 169.82, 29.12 ], "formula_id": "formula_6", "formula_text": "e rot = ∠ R R (5) L( R, R; Σ 2D ) = e rot(6)" }, { "formula_coordinates": [ 4, 369.58, 560.77, 175.53, 14.63 ], "formula_id": "formula_7", "formula_text": "R = arg min R E s (R, t; Σ 2D )(7)" }, { "formula_coordinates": [ 4, 337.16, 642.32, 207.95, 26.83 ], "formula_id": "formula_8", "formula_text": "dL dΣ 2D = - ∂ 2 E s ∂Σ 2D ∂R ∂ 2 E s ∂R∂R -1 e rot ∂R .(8)" }, { "formula_coordinates": [ 5, 146.63, 622.23, 135.87, 9.81 ], "formula_id": "formula_9", "formula_text": "L sup = e rot (9" }, { "formula_coordinates": [ 5, 282.49, 622.55, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 5, 382.2, 159.54, 162.91, 20.56 ], "formula_id": "formula_11", "formula_text": "L cycl = ∠ (i,j)∈P R ij ,(10)" }, { "formula_coordinates": [ 5, 365.7, 240.79, 179.41, 20.56 ], "formula_id": "formula_12", "formula_text": "L anchor = (i,j)∈P ∠R ij R ij,NEC(11)" }, { "formula_coordinates": [ 5, 377.8, 344.88, 167.31, 9.81 ], "formula_id": "formula_13", "formula_text": "L self = L cycl + λL anchor .(12)" }, { "formula_coordinates": [ 6, 126.3, 629.49, 160.06, 28.14 ], "formula_id": "formula_14", "formula_text": "σ 2 i,norm = N • σ 2 i N j=0 σ 2 j (13)" }, { "formula_coordinates": [ 7, 89.8, 502.49, 196.56, 20.69 ], "formula_id": "formula_15", "formula_text": "Σ 2D (s, α, β) = sR α β 0 0 1 -β R α(14)" }, { "formula_coordinates": [ 7, 122.01, 567.82, 164.36, 27.34 ], "formula_id": "formula_16", "formula_text": "f 1 (x) = (1 + |x|) sign(x) (15) f 2 (x) = x (16" }, { "formula_coordinates": [ 7, 282.21, 585.83, 4.15, 8.64 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 7, 122.01, 599.5, 164.36, 22.31 ], "formula_id": "formula_18", "formula_text": "f 3 (x) = 1 1 + e -x(17)" }, { "formula_coordinates": [ 9, 370.93, 252.45, 174.18, 13.82 ], "formula_id": "formula_19", "formula_text": "f = µ + η, η ∼ N (0, Σ) ,(18)" }, { "formula_coordinates": [ 9, 370.93, 265.91, 174.18, 13.82 ], "formula_id": "formula_20", "formula_text": "f = µ + η , η ∼ N (0, Σ ) ,(19)" }, { "formula_coordinates": [ 9, 355.97, 292.09, 189.14, 13.82 ], "formula_id": "formula_21", "formula_text": "ns = (µ + η) × R(µ + η )(20)" }, { "formula_coordinates": [ 9, 340.31, 340.03, 201.48, 12.76 ], "formula_id": "formula_22", "formula_text": "Σn = (Rµ i )Σ i (Rµ i ) + μi RΣ i R μi + Σ , (21" }, { "formula_coordinates": [ 9, 541.79, 343.46, 3.32, 6.91 ], "formula_id": "formula_23", "formula_text": ")" }, { "formula_coordinates": [ 9, 358.84, 370.43, 182.95, 36.63 ], "formula_id": "formula_24", "formula_text": "Σ =    (Σ 2 × Σ R,3 + Σ 3 × Σ R,2 ) (Σ 3 × 
Σ R,1 + Σ 1 × Σ R,3 ) (Σ 1 × Σ R,2 + Σ 2 × Σ R,1 )    . (22" }, { "formula_coordinates": [ 9, 541.79, 383.98, 3.32, 6.91 ], "formula_id": "formula_25", "formula_text": ")" }, { "formula_coordinates": [ 9, 349.01, 438.8, 192.78, 16.22 ], "formula_id": "formula_26", "formula_text": "Σn ≈ (Rµ i )Σ i (Rµ i ) + μi RΣ i R μi . (23" }, { "formula_coordinates": [ 9, 541.79, 442.22, 3.32, 6.91 ], "formula_id": "formula_27", "formula_text": ")" }, { "formula_coordinates": [ 9, 402.05, 465.8, 143.06, 11.28 ], "formula_id": "formula_28", "formula_text": "σ 2 s = t Σnt .(24)" }, { "formula_coordinates": [ 9, 381.5, 637.36, 163.62, 21.13 ], "formula_id": "formula_29", "formula_text": "es = n σs = n d Σ + d Σ ,(25)" }, { "formula_coordinates": [ 9, 347.18, 671.33, 197.93, 10.07 ], "formula_id": "formula_30", "formula_text": "n = t f exp xRf f R exp x f t ,(26)" }, { "formula_coordinates": [ 9, 341.53, 687.49, 203.59, 13.82 ], "formula_id": "formula_31", "formula_text": "d Σ = exp xRf × t Σ exp xRf × t ,(27)" }, { "formula_coordinates": [ 9, 338.83, 700.08, 202.96, 10.98 ], "formula_id": "formula_32", "formula_text": "d Σ = t f exp xRΣ R exp x f t . (28" }, { "formula_coordinates": [ 10, 87.74, 448.5, 198.62, 19.49 ], "formula_id": "formula_33", "formula_text": "∂n ∂x = 2 (Rf f R exp x f t) × ( f t) ,(29)" }, { "formula_coordinates": [ 10, 82.52, 469.54, 203.84, 19.49 ], "formula_id": "formula_34", "formula_text": "∂d Σ ∂x = 2 (Rf ) × ( tΣ t exp xRf ) ,(30)" }, { "formula_coordinates": [ 10, 79.82, 490.58, 206.54, 19.49 ], "formula_id": "formula_35", "formula_text": "∂d Σ ∂x = 2 (RΣ R exp x f t) × ( f t) ,(31)" }, { "formula_coordinates": [ 10, 96.82, 535.72, 189.55, 20.98 ], "formula_id": "formula_36", "formula_text": "∂es ∂x = 1 σs ∂n ∂x - n 2σ 3 s ∂d Σ ∂x + ∂d Σ ∂x .(32)" }, { "formula_coordinates": [ 10, 82.81, 582.38, 203.55, 19.49 ], "formula_id": "formula_37", "formula_text": "∂d Σ ∂Σ = t × (exp xRf ) t × (exp xRf ) ,(33)" }, { "formula_coordinates": [ 10, 80.12, 603.42, 202.92, 19.49 ], "formula_id": "formula_38", "formula_text": "∂d Σ ∂Σ = R exp x f t R exp x f t . (34" }, { "formula_coordinates": [ 10, 283.04, 609.76, 3.32, 6.91 ], "formula_id": "formula_39", "formula_text": ")" }, { "formula_coordinates": [ 10, 133.54, 638.62, 152.82, 20.98 ], "formula_id": "formula_40", "formula_text": "∂es ∂Σ = - n 2σ 3 s ∂d Σ ∂Σ ,(35)" }, { "formula_coordinates": [ 10, 131.73, 661.65, 154.64, 20.98 ], "formula_id": "formula_41", "formula_text": "∂es ∂Σ = - n 2σ 3 s ∂d Σ ∂Σ .(36)" }, { "formula_coordinates": [ 12, 126.21, 99.07, 160.15, 15.41 ], "formula_id": "formula_42", "formula_text": "Σ 2D,i = pi -p i 2 I 2 ,(37)" }, { "formula_coordinates": [ 12, 126.21, 113.06, 160.15, 15.41 ], "formula_id": "formula_43", "formula_text": "Σ 2D,i = p i -p i 2 I 2 .(38)" } ]
10.1145/3581783.3611744
2023-08-13
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b13", "b15", "b26", "b27", "b65", "b73", "b2", "b4", "b9", "b10", "b31", "b43", "b44", "b52", "b58", "b59", "b38", "b32", "b34", "b61", "b61", "b32", "b60", "b62", "b64", "b53", "b25", "b29", "b30", "b33", "b53", "b54", "b62", "b64", "b62", "b62", "b54", "b33", "b33", "b15", "b50", "b39", "b47", "b17" ], "table_ref": [], "text": "Under real-world nighttime haze imaging conditions, the illumination is dominated by various artificial light sources such as neon lights and they have different locations and colors with limited luminance range. Therefore, apart from the haze, the acquired degraded images also will be affected by multiple scattering, uneven illumination, glow, blur, hidden noise, etc. Compared to daytime image dehazing, how to recover clear images from nighttime hazy scenarios becomes a new challenging task.\nTo the best of our knowledge, significant progress has been made for daytime image dehazing. The current daytime haze removal approaches, including prior-based methods [1,2,14,16,27,28,66,74] and learning-based algorithms [3,5,10,11,32,44,45,53,59,60], have limited effectiveness in restoring nighttime hazy images due to two reasons. First, the widely used haze imaging model [39] is unable to fully describe the complex formation of a nighttime hazy image. Second, there exist notable degradation discrepancies between daytime and nighttime hazy scenarios, further hindering the recovery of nighttime hazy images.\nTo address the diverse types of degradations that occur in nighttime hazy environments, some new imaging models [33,35,62] are proposed to illustrate the degradation characteristics such as nonuniform illumination and glow. Subsequently, several model-based nighttime haze removal algorithms [33, 35-38, 49, 50, 61-63] have been developed to restore the degraded images. Although the above approaches have achieved decent dehazing results to some extent, they cannot simultaneously overcome all types of degradations due to their focus on only partial corruption factors. [62], GS [33], MRP [61], OSFD [63], CFEN-ViT [65], RIDCP [54] and our NightHazeFormer. (i) The t-SNE map of various synthetic nighttime hazy image datasets. The feature distribution of degradations in our UNREAL-NH dataset is more closer to real-world nighttime hazy scenes (REAL-NH) than existing synthetic datasets. (j) Histogram of FID values for various synthetic datasets. Obviously, our UNREAL-NH exhibits the smallest FID value among all synthetic datasets, which quantitatively proves that our UNREAL-NH dataset is more realistic.\nCompared to model-based methods, deep learning based haze removal networks [26,30,31,34,54,55,63,65] for nighttime hazy images are still limited, primarily due to the lack of realistic synthetic datasets. Existing large-scale synthetic datasets, such as NHC [63], NHM [63], GTA5 [55], NightHaze [34] and YellowHaze [34], are unable to comprehensively simulate the complex degradations presented in real-world nighttime hazy images, especially for light effects and spatially variant illumination. Consequently, the dehazing networks trained on these synthetic datasets usually suffer from poor generalization to real-world nighttime hazy images, leading to unsatisfactory restoration results. 
Additionally, most nighttime haze removal networks solely rely on synthetic datasets for training, making it challenging to acquire the domain knowledge from real data due to the domain shift problem.\nTo address the above issues, we develop NightHazeFormer, a transformer-based network for nighttime haze removal, which consists of a supervised pre-training phase and a semi-supervised fine-tuning phase. For pre-training, we introduce two powerful physical priors, dark channel prior (DCP) [16] and bright channel prior (BCP) [51], into the transformer decoder to generate the nonlearnable prior queries. These queries are served as the explicit degradations guidance for the self-attention transformer block and help our model to learn rich priors from input nighttime hazy images, thereby further improving the model's robustness and understanding for nighttime hazy scenes. For fine-tuning, we employ the pre-trained model from the synthetic domain to yield coarse haze-free images in an unsupervised fashion. Then, an efficient haze removal method called BCCR [40] is adopted to dehaze them for improving the visibility. Finally, the obtained pseudo ground truths are combined with real-world nighttime hazy images and fine-tuned in the synthetic domain to reduce the discrepancy between the synthetic and real domain. As shown in Fig. 1(a)-(h), our NightHazeFormer produces a better dehazed result for a realworld nighttime hazy image. In addition, to bridge this gap between synthetic and real data, we have created a large-scale synthetic nighttime hazy image called UNREAL-NH. Fig. 1(i) depicts t-SNE map [48] of various synthetic datasets and real-world nighttime hazy image dataset (REAL-NH), which indicates that the simulated degradations of our UNREAL-NH are more realistic. Also, in Fig. 1(j), the \"Fréchet Inception Distance\" (FID) metric [18] that measures the distance between synthetic and real data at feature level quantitatively proves the superiority of our UNREAL-NH.\nThe main contributions are summarized as follows:\n• We propose an end-to-end transformer-based network, called NightHazeFormer, for nighttime haze removal. By incorporating two powerful priors into the transformer decoder, our NightHazeFormer generates non-learnable prior queries that effectively guide our network to learn abundant prior features from input nighttime hazy images. • A semi-supervised fine-tuning training paradigm is developed to improve the generalization ability. We combine the real-world nighttime hazy images with the generated pseudo ground truth labels, which are then fed into the synthetic domain to fine-tune the pre-trained model and enable it to learn the domain knowledge of real data. " }, { "figure_ref": [], "heading": "RELATED WORK 2.1 Daytime Dehazing Methods", "publication_ref": [ "b38", "b15", "b73", "b39", "b13", "b0", "b26", "b67", "b68", "b69", "b70", "b71", "b74", "b7", "b8", "b56", "b55", "b63", "b72", "b43", "b52", "b10", "b58", "b4", "b59" ], "table_ref": [], "text": "For daytime hazy scenarios, the imaging light sources are mainly dominated by the global airlight and the classic atmospheric scattering model [39] is widely used to elucidate the degradation process of hazy images. To restore the haze-free image, earlier dehazing approaches usually make use of the priors or constraints (e.g. DCP [16], CAP [74], BCCR [40], color-lines [14], haze-lines [1], RLP [27], etc) to estimate the transmission and inverse the physical model to obtain the clear image. 
However, the presented priors may be invalid for diverse real-world scenes. Recently, with the rapid development of deep learning, numerous networks have been proposed to address computer vision tasks, such as image enhancement [19-25, 58, 64, 67], pan-sharpening [68-72], shadow removal [75], image desnowing [8,9,57] and general image restoration [56,64,73].\nTo improve the haze removal performance, several efficient and effective dehazing networks have been developed to achieve the end-to-end mapping from a hazy image to a haze-free image, such as FFA-Net [44], AECR-Net [53], PSD-Net [11], PMNet [59], PDD-GAN [5], SFUDA [60], etc. Although these aforementioned dehazing approaches perform well for daytime hazy scenarios, they are not effective in improving the quality of nighttime hazy images. This is due to the fact that nighttime haze conditions usually contain multiple adverse effects that existing daytime dehazing methods cannot address." }, { "figure_ref": [], "heading": "Nighttime Dehazing Methods", "publication_ref": [ "b61", "b32", "b60", "b62", "b35", "b36", "b33", "b62", "b62", "b54", "b54", "b62", "b25", "b64", "b53" ], "table_ref": [], "text": "To address the degradations presented in nighttime hazy scenes, Zhang et al. [62] construct a new imaging model to conduct nighttime dehazing. Considering the glow around artificial light sources, Li et al. [33] introduce a glow term into the atmospheric scattering model and perform glow separation (GS). The maximum reflectance prior (MRP) [61] suited for nighttime hazy scenes has been developed to achieve fast restoration. Afterwards, Zhang et al. [63] devise an optimal-scale fusion-based dehazing (OSFD) method for nighttime hazy scenes. Liu et al. propose several variational decomposition models [36,37] to achieve structure dehazing and detail enhancement. Unfortunately, these model-based algorithms only focus on partial degradations, which may result in unsatisfactory restoration results. On the other hand, deep learning based methods have also been applied to nighttime image dehazing. Owing to the need for paired nighttime hazy and clean images during training, Liao et al. [34] design two synthetic datasets (i.e. NightHaze and YellowHaze) by adding haze to collected nighttime images. Subsequently, several large-scale benchmarks are provided for nighttime dehazing, such as NHC [63], NHM [63] and GTA5 [55].\nUsing these synthetic datasets, some networks, such as the high-low frequency decomposition network [55], ND-Net [63] and GAPSF [26], are proposed to achieve nighttime dehazing. 
Recently, the universal dehazing networks, such as CFEN-ViT [65] and RIDCP [54], are designed for both daytime and nighttime hazy scenes.\nWhile learning-based nighttime image haze removal approaches have shown promising results for synthetic data, they usually struggle to generalize well to real-world nighttime hazy images. The main reasons for this are two-fold. First, there are significant inherent differences between previous generated synthetic dataset and real-world nighttime hazy images. Second, existing learning-based methods resort to training on synthetic datasets, which lacks the domain knowledge of the real data." }, { "figure_ref": [ "fig_1" ], "heading": "METHODOLOGY 3.1 Framework Overview", "publication_ref": [ "b5", "b15", "b50", "b39" ], "table_ref": [], "text": "In Fig. 2, our framework consists of two stages: a supervised pretraining phase using prior query transformer network and a semisupervised fine-tuning training phase using pseudo-labels.\nSupervised Pre-training. For pre-training, we initially adopt the effective encoder-decoder transformer architecture with NAF-Blocks [6] as our backbone to learn the domain knowledge of nighttime hazy images. Due to the complex and multiple degradations presented in nighttime hazy images, we incorporate two powerful priors (i.e. DCP [16] and BCP [51]) into the transformer decoder to generate the non-learnable prior queries. Guided by prior information, the provided queries can effectively instruct the model to learn specific degradations from input nighttime hazy images, thereby enhancing the robustness of the model and understanding for nighttime hazy scenes. In this stage, the labeled synthetic data is solely used for supervised training to acquire a pre-trained model in the synthetic domain.\nSemi-supervised Fine-tuning. To improve the generalization ability of the pre-trained model, we propose a semi-supervised finetuning training paradigm based on the generated pseudo-labels. First, we perform training on unlabeled real data in an unsupervised manner to obtain the coarse haze-free images. Then, these results are further refined using an efficient daytime dehazing method named BCCR [40] to generate pseudo ground truths (GT). Finally, we combine the pseudo-GT labels with the corresponding realworld nighttime hazy images to form image pairs that are fed into the synthetic domain for fine-tuning the previous pre-trained model. This semi-supervised fine-tuning approach facilitates the acquisition of domain knowledge from real data, effectively improving the generalization performance to real-world nighttime hazy images." }, { "figure_ref": [], "heading": "Prior Query Transformer Network", "publication_ref": [ "b5", "b3", "b46", "b15", "b50", "b6", "b45" ], "table_ref": [], "text": "Transformer Encoder. Given an input nighttime hazy image 𝐼 with dimensions 𝐻 ×𝑊 × 3, we encode it into patches and feed forward them into the transformer encoder. For the design of the transformer encoder, we adopt NAFBlocks [6] with excellent learning capability and high computational efficiency for feature extraction. The resolution of the feature maps is gradually reduced through down-sampling operations to extract multi-level features, thereby enabling the model to learn the hierarchical feature representation of the input image across scale. 
Notably, we observe that the feature maps with lower resolution are more effective in capturing global information, as each pixel represents a larger spatial region with richer information, such as the shapes, textures, and colors of the input image. Therefore, at the scale with the lowest resolution, we further employ eight NAFBlocks to learn the latent features and feed them to the transformer decoder.\nTransformer Decoder. Since nighttime hazy images usually contain complex and diverse degradations, it is necessary to introduce physical knowledge into the transformer decoder block to guide the model training. Previous studies [4,47] have utilized learnable queries for detection and restoration tasks. Inspired by these methods, we incorporate two physical priors, namely DCP [16] and BCP [51], into the transformer decoder to generate the non-learnable queries. These non-learnable queries explicitly help the model to understand the degradations of nighttime hazy images and learn rich priors from the input images, and they are calculated as follows:\nQ_{prior} = \mathrm{MLP}(\mathrm{DCP}(I) + \mathrm{BCP}(I)) \quad (1)\nwhere MLP stands for a multi-layer perceptron. The usage of the DCP enables our network to focus on the hazy regions of the degraded image, thus improving the dehazing ability. However, existing methods have demonstrated that the DCP may lead to darker results. To compensate for this deficiency of the DCP, we further incorporate the BCP to assist the model in learning priors related to brightness features, thereby enhancing the contrast of the restored image. The combination of these two priors can improve the model's robustness and understanding of nighttime hazy images.\nIn this way, we utilize the non-learnable embedding of the prior knowledge as the queries (Q) of the multi-head attention, while the latent features are used as the keys (K) and values (V). The multi-head self-attention is calculated as follows:\n\mathrm{Attn}(Q, K, V) = \mathrm{Softmax}\left( \frac{Q K^{T}}{\sqrt{d}} \right) V \quad (2)\nwhere d represents the feature dimension. The decoded features integrate the degradation features guided by the physical priors, which provides sufficient guidance for stripping the degradations in the subsequent process. These features are then passed through several up-sampling operations and fused with the corresponding features extracted from each stage of the transformer encoder to obtain the haze-free restoration results with dimensions H × W × 3. Similarly, we also adopt NAFBlocks in the transformer decoder to learn the high-level features of the input images.\nLoss Functions. Our NightHazeFormer is optimized using two supervised loss functions. We first use the PSNR loss [7] as our basic restoration loss:\n\mathcal{L}_{psnr} = -\mathrm{PSNR}(\mathrm{NightHazeFormer}(I), J) \quad (3)\nwhere J is the corresponding ground truth of the input nighttime hazy image I. Furthermore, we adopt the perceptual loss L_per to improve the visual quality of the restored results, which is calculated as follows:\n\mathcal{L}_{per} = \sum_{j=1}^{2} \frac{1}{C_j H_j W_j} \left\| \phi_j(\mathrm{NightHazeFormer}(I)) - \phi_j(J) \right\|_1 \quad (4)\nwhere C_j, H_j and W_j respectively stand for the channel number, height and width of the feature map, and φ_j represents the specified layer of VGG-19 [46]. Overall, the loss for supervised training can be expressed as:\n\mathcal{L}_{sl} = \mathcal{L}_{psnr} + \lambda_{per} \mathcal{L}_{per} \quad (5)\nwhere λ_per is a trade-off weight."
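The prior-query mechanism of Eqs. (1)-(2) can be illustrated with a minimal PyTorch-style sketch. The dark/bright channel operators, the pooling of the prior maps to the latent grid, the single attention head, and all module and dimension names below are illustrative assumptions for exposition, not the authors' released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def dark_channel(img: torch.Tensor, patch: int = 15) -> torch.Tensor:
    """DCP-style dark channel: per-pixel minimum over RGB followed by a local min filter."""
    per_pixel_min = img.min(dim=1, keepdim=True).values            # (B, 1, H, W)
    return -F.max_pool2d(-per_pixel_min, patch, stride=1, padding=patch // 2)


def bright_channel(img: torch.Tensor, patch: int = 15) -> torch.Tensor:
    """BCP-style bright channel: per-pixel maximum over RGB followed by a local max filter."""
    per_pixel_max = img.max(dim=1, keepdim=True).values            # (B, 1, H, W)
    return F.max_pool2d(per_pixel_max, patch, stride=1, padding=patch // 2)


class PriorQueryAttention(nn.Module):
    """Non-learnable prior queries (Eq. 1) attending to the latent features (Eq. 2)."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.query_mlp = nn.Sequential(nn.Linear(2, dim), nn.GELU(), nn.Linear(dim, dim))
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, hazy: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        # hazy: (B, 3, H, W) input image; latent: (B, dim, h, w) bottleneck features.
        b, c, h, w = latent.shape
        priors = torch.cat([dark_channel(hazy), bright_channel(hazy)], dim=1)   # (B, 2, H, W)
        priors = F.adaptive_avg_pool2d(priors, (h, w))                          # align with latent grid
        q = self.query_mlp(priors.flatten(2).transpose(1, 2))                   # Eq. (1): (B, hw, dim)
        kv = latent.flatten(2).transpose(1, 2)                                  # (B, hw, dim), needs c == dim
        k, v = self.to_k(kv), self.to_v(kv)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)      # Eq. (2)
        return (attn @ v).transpose(1, 2).reshape(b, c, h, w)                   # decoded features
```

In the full model this block would sit inside the NAFBlock-based decoder and use multi-head attention; a single head is kept here only for brevity.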
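Likewise, the supervised objective of Eqs. (3)-(5) amounts to a negative-PSNR term plus an L1 term on VGG-19 features. The sketch below assumes images in [0, 1], two early VGG-19 stages for φ_j, and λ_per = 0.2 as stated in the implementation details; the exact layer indices and the omitted ImageNet input normalization are assumptions:

```python
import torch
import torch.nn as nn
import torchvision


class SupervisedLoss(nn.Module):
    """Eq. (5): L_sl = L_psnr + lambda_per * L_per, with a frozen VGG-19 feature extractor."""

    def __init__(self, lambda_per: float = 0.2):
        super().__init__()
        vgg = torchvision.models.vgg19(weights=torchvision.models.VGG19_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.stages = nn.ModuleList([vgg[:4], vgg[4:9]])     # two feature stages phi_1, phi_2 (assumed)
        self.lambda_per = lambda_per

    def forward(self, pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
        mse = torch.mean((pred - gt) ** 2) + 1e-8
        psnr_loss = 10.0 * torch.log10(mse)                  # Eq. (3): equals -PSNR for images in [0, 1]
        per_loss, x, y = 0.0, pred, gt
        for stage in self.stages:                            # Eq. (4): L1 distance on VGG-19 features
            x, y = stage(x), stage(y)
            per_loss = per_loss + torch.mean(torch.abs(x - y))
        return psnr_loss + self.lambda_per * per_loss        # Eq. (5)
```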
}, { "figure_ref": [], "heading": "Semi-supervised Fine-tuning Training", "publication_ref": [ "b62", "b62", "b54", "b33", "b33", "b39", "b10", "b15", "b50", "b14" ], "table_ref": [], "text": "Owing to the inherent domain gap between synthetic and real data, existing nighttime haze removal networks solely trained on synthetic data suffer from limited generalization ability, resulting in unsatisfactory restoration results for real-world nighttime hazy images. To tackle this issue, we propose a semi-supervised finetuning training paradigm to help the pre-trained model narrow the discrepancy between synthetic and real domain. It consists of two phases: unsupervised learning using unlabeled real data and followed by supervised learning using pseudo-labels. Specifically, the unlabeled real-world nighttime hazy images from our REAL-NH are first employed to train the model in an unsupervised manner. Through unsupervised learning, the model is able to better understand the degradations distribution and feature representation of the real data. However, due to the insufficient dehazing ability of the model trained on the synthetic domain, we further perform the supervised learning to fine-tune the pretrained model based on the pseudo-labels. In order to generate [63], NHM [63], GTA5 [55], NightHaze [34], YellowHaze [34], and UNREAL-NH (Ours), respectively. (g) stands for real-world nighttime hazy patches extracted from REAL-NH.\npseudo ground truths (pseudo-GT), we make use of an efficient dehazing method called BCCR [40] to improve the quality of the dehazed results obtained from unsupervised training. By combining these generated pseudo-GT labels with real-world nighttime hazy images as paired data, we feed them into the synthetic domain to fine-tune the pre-trained model. This fine-tuning strategy enables the network to learn domain knowledge from real data, significantly enhancing the model's generalization performance.\nFor the design of unsupervised losses, we follow the method proposed in [11] and incorporate the classic DCP [16] as the prior loss using an energy optimization function:\nL 𝑑𝑐𝑝 = 𝑡 𝑇 𝐿𝑡 + 𝜆(𝑡 -t) 𝑇 (𝑡 -t)(6)\nwhere 𝑡 and t respectively represent the transmission map estimated by the DCP and our model. 𝐿 is a Laplacian-like matrix. 𝜆 is a hyperparameter that controls the balance between the fidelity term and the penalty term. The DCP loss facilitates our network in acquiring the haze-related features from real-world nighttime hazy images. However, L 𝑑𝑐𝑝 may lead to darker restoration results than expected. To overcome the drawbacks of L 𝑑𝑐𝑝 , we further employ the effective BCP [51] as an additional prior loss to improve the brightness and contrast of the restoration result. The BCP loss is calculated as follows:\nL 𝑏𝑐𝑝 = ∥𝑡 -t ∥ 1(7)\nwhere 𝑡 and t respectively denote the transmission map estimated by the BCP and our model. In addition, following the method [15], three types of losses, namely spatial consistency loss L 𝑠𝑝𝑎 , exposure control loss L 𝑒𝑥𝑝 and color constancy loss L 𝑐𝑜𝑙 , are incorporated into the unsupervised loss functions to enhance the haze removal performance.\nThe unsupervised loss function L 𝑢𝑙 can be expressed as follows:\nL 𝑢𝑙 = 𝜆 𝑑𝑐𝑝 L 𝑑𝑐𝑝 + 𝜆 𝑏𝑐𝑝 L 𝑏𝑐𝑝 +𝜆 𝑠𝑝𝑎 L 𝑠𝑝𝑎 + 𝜆 𝑒𝑥𝑝 L 𝑒𝑥𝑝 + 𝜆 𝑐𝑜𝑙 L 𝑐𝑜𝑙(8)\nwhere 𝜆 𝑑𝑐𝑝 , 𝜆 𝑏𝑐𝑝 , 𝜆 𝑠𝑝𝑎 , 𝜆 𝑒𝑥𝑝 and 𝜆 𝑐𝑜𝑙 are trade-off weights." 
}, { "figure_ref": [ "fig_2", "fig_3", "fig_0" ], "heading": "EXPERIMENTS 4.1 Dataset Generation", "publication_ref": [ "b62", "b62", "b54", "b33", "b33", "b12", "b17", "b62" ], "table_ref": [], "text": "Existing synthetic datasets for nighttime image dehazing, including NHC [63], NHM [63], GTA5 [55], NightHaze [34] and Yellow-Haze [34], only incorporate a limited range of degradations and fail to adequately simulate various point sources, the surrounding glow, hidden noise, blurring effects, etc. As a result, these datasets show significant disparities from real-world nighttime hazy scenes, as shown in Fig. 3. To overcome the limitations of the above datasets, we utilized UNREAL Engine 4.27 [13] to construct a large-scale paired synthetic dataset of Nighttime Hazy images, called UNREAL-NH. Fig. 4 illustrates some examples of synthetic nighttime hazy images from our UNREAL-NH dataset, accompanied by multiple critical degradation effects we have considered. In addition, we employ FID metric [18] to objectively measure the difference between the constructed synthetic dataset and real-world dataset, as illustrated in Fig. 1 To be specific, we first adopt various fog effects and add multiple point light sources with different colors in the Unreal Engine to create 1260 pairs of synthetic nighttime hazy images and the corresponding clean images with a resolution of 2008 × 1129. Then, several post-processing techniques, such as motion blur, lens flare and bloom, are applied to the generated synthetic nighttime hazy images to make them more realistic. Finally, to facilitate training, the random overlap cropping strategies are adopted to generate 10080 pairs with a resolution of 480 × 480.\nMoreover, we also have collected 250 REAL-world Nighttime Hazy images called REAL-NH, in which 150 images are from NHRW [63] and other 100 images are collected from the Internet." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b62", "b62", "b51", "b40", "b28", "b41", "b42" ], "table_ref": [], "text": "Datasets. For pre-training, our UNREAL-NH dataset is used for supervised learning, which is split into a training set with 8064 image pairs, a validation set with 1008 image pairs and a test set with 1008 image pairs. For fine-tuning, we select 200 real-world nighttime hazy images from our REAL-NH dataset for semi-supervised learning, while the remaining 50 images are used for testing. In addition, we also train our NightHazeFormer on 8073 image pairs from NHR dataset following [63]. Subsequently, we conduct the quantitative evaluation on a test set of 897 images from NHR dataset and the complete NHM datesets (including 350 images) [63] to demonstrate the superiority of our proposed NightHazeFormer.\nEvaluation Metrics. We utilize Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) [52] to assess the dehazing results on the synthetic datasets (UNREAL-NH, NHR and NHM). Moreover, two non-reference image quality assessment metrics, namely NIQE [41] and MUSIQ [29], are adopted to evaluate the dehazed results on REAL-NH test dataset for quantitative comparisons. We choose MUSIQ model trained on one aesthetics quality dataset (AVA [42]). The code of non-reference metrics is available on the github 1 .\nImplementation Details. Our framework is implemented using PyTorch [43] and trained on an NVIDIA RTX 3090 GPU (24GB) with a batch size of 16. 
To augment the training data in UNREAL-NH, we randomly crop the images into patches with a size of 256 × 256 and apply random rotations of 90, 180 and 270 degrees, as well as horizontal flips. For supervised training, we adopt the Adam optimizer with an initial learning rate of 2 × 10⁻⁴, β₁ = 0.9 and β₂ = 0.999. For unsupervised training, the initial learning rate is set to 5 × 10⁻⁵. During training, we also employ the Cyclic Learning Rate (CyclicLR) schedule with a maximum learning rate of 1.2 times the initial learning rate. The trade-off weights λ_per, λ_dcp, λ_bcp, λ_spa, λ_exp and λ_col are set to 0.2, 10⁻⁴, 10⁻⁴, 5, 10⁻³ and 0.2, respectively. A condensed, hedged pseudo-code sketch of this two-stage training recipe is provided after the section listing below." }, { "figure_ref": [ "fig_5", "fig_6", "fig_7" ], "heading": "Comparisons with State-of-the-art Methods", "publication_ref": [ "b61", "b32", "b60", "b60", "b62", "b36", "b64", "b25", "b53" ], "table_ref": [ "tab_1", "tab_1" ], "text": "To verify the effectiveness and generalization ability of our NightHazeFormer, we conduct comparisons with several state-of-the-art nighttime-specific dehazing methods, including model-based methods (i.e. NDIM [62], GS [33], MRP [61], FAST-MRP [61], OSFD [63] and VD [37]) and learning-based methods (i.e. CFEN-ViT [65], GAPSF [26] and RIDCP [54]) on synthetic and real-world datasets. The dehazed results of GAPSF on the NHM, NHR and REAL-NH test datasets are provided by the authors. For CFEN-ViT and RIDCP, we retrain their released code on the training sets of the synthetic datasets (UNREAL-NH and NHR) and then employ the retrained models on the corresponding test sets to ensure fair comparisons. Since CFEN-ViT and RIDCP have domain adaptation capability for nighttime dehazing, their released pre-trained models are adopted on the real-world dataset (REAL-NH) for comparisons.\nVisual Comparisons on Synthetic and Real-world Images. The visual comparisons on synthetic nighttime hazy images from the UNREAL-NH, NHR and NHM datasets are illustrated in Fig. 5 and Fig. 6. It is clearly observed that the results of NDIM, GS, MRP, FAST-MRP, OSFD and VD not only fail to overcome multiple degradations but also suffer from halo artifacts. GAPSF tends to yield dehazed results that appear darker and contain fewer details. Although CFEN-ViT and RIDCP effectively remove nighttime haze, they still encounter challenges in restoring fine details. In contrast, our NightHazeFormer shows promising performance in both nighttime haze removal and detail restoration. Furthermore, our results achieve the highest PSNR and SSIM values for the test images.\nIn addition, we also evaluate the visual effects on a real-world nighttime hazy image from the REAL-NH dataset in Fig. 7. From the visual comparisons, we find that NDIM, GS, MRP, FAST-MRP, OSFD and VD fail to restore the details and remove the glow around the artificial light sources. The learning-based methods, such as CFEN-ViT and RIDCP, also struggle with handling glow effects due to their insufficient generalization performance. GAPSF is effective in mitigating the glow around lights, but it tends to produce results with color shifts and darkness. Compared to the aforementioned methods, our NightHazeFormer simultaneously achieves haze removal, glow suppression, detail restoration and color correction, which demonstrates its superior generalization ability. Quantitative Comparisons on Synthetic and Real-World Datasets. 
Table 1 reports the quantitative comparisons on the testing set of three synthetic datasets (UNREAL-NH, NHR and NHM ) and a real-world dataset (REAL-NH). For synthetic datasets (UNREAL-NH, NHR and NHM), the clear images instead of lowlight images are considered as the reference images to calculate PSNR and SSIM. In addition, the results obtained by different methods have variations in resolution, which may lead to discrepancies when calculating non-reference image quality metrics. To ensure fair comparisons, the dehazed results of all methods on the REAL-NH test dataset are resized to 600 × 400 for objective evaluation. As depicted in Table 1, our NightHazeFormer outperforms all the compared methods in terms of PSNR and SSIM values on the UNREAL-NH, NHR, NHM datasets. Moreover, it achieves the best MUSIQ-AVA and NIQE scores on the REAL-NH test dataset, verifying its outstanding generalization performance." }, { "figure_ref": [ "fig_8", "fig_9", "fig_11", "fig_11" ], "heading": "Ablation Studies", "publication_ref": [ "b16", "b11" ], "table_ref": [ "tab_2", "tab_3" ], "text": "In order to demonstrate the effectiveness of the design in our proposed NightHazeFormer, a series of ablation studies are performed.\nEffectiveness of NAFBlock Module. The usage of NAFBlock contributes to features extraction with high computational efficiency. To prove the contribution of the NAFBlock module, two well-known modules, ResBlock [17] and ViTBlock [12], are used to replace the NAFBlock in the transformer decoder. In addition, we also employ eight NAFBlocks in the transformer encoder to extract the latent features from the image with the lowest resolution. The latent features modeling helps our model capture richer global information. To demonstrate this point, we conduct ablation study by removing eight NAFBlocks from the transformer encoder. The quantitative comparisons presented in Table 2 and visual results depicted in Fig. 8 demonstrate the significance of latent features modeling and the effectiveness of the NAFBlock module.\nEffectiveness of Prior Queries. The non-learnable prior queries 𝑄 𝑃𝑟𝑖𝑜𝑟 in the transformer decoder generated by two powerful priors guide the model to extract specific degradations. To verify the effectiveness of two priors, we consider the basic encoder-decoder architecture without any priors as our baseline and then introduce the DCP and BCP separately into the baseline to generate the prior queries 𝑄 𝐷𝐶𝑃 and 𝑄 𝐵𝐶𝑃 , respectively. Table 3 illustrates the quantitative comparisons of various settings on the UNREAL-NH test dataset, which indicates that the combination of two priors can effectively improve the haze removal performance. Furthermore, the visual results in Fig. 9 also demonstrate the effectiveness of the generated non-learnable prior queries.\nEffectiveness of Semi-supervised Fine-tuning. Our proposed framework mainly consists of two stages: supervised learning and semi-supervised fine-tuning. To verify the contribution of the semisupervised fine-tuning stage, an ablation study is performed involving three settings: 1) Setting i: conducting only supervised learning, 2) Setting ii: conducting both supervised and unsupervised learning, 3) Setting iii: conducting supervised and unsupervised learning, followed by fine-tuning based on the generated pseudo-labels. In Fig. 10(b)-(c), the dehazing model trained with both supervised and unsupervised learning demonstrates superior generalization ability for real-world nighttime hazy scenes. Compared Fig. 
10(c) with Fig. 10(d), our full model with the fine-tuning stage contributes to a further improvement in dehazing performance.\nEffectiveness of Unsupervised Losses. To demonstrate the contribution of our unsupervised loss committee, we conduct the ablation experiments as follows: (a) without the spatial consistency loss L_spa, (b) without the exposure control loss L_exp, (c) without the color constancy loss L_col, (d) without the DCP loss L_dcp, (e) without the BCP loss L_bcp, and (f) with all losses. As viewed in Fig. 11(a), the loss committee without L_spa results in poor contrast and detail restoration. In Fig. 11(b) and Fig. 11(e), without L_exp or L_bcp, the dehazed result appears darker than expected. Fig. 11(c) shows that the absence of L_col leads to color shifts. From Fig. 11(d), the lack of L_dcp makes the results blurred with residual haze. We can observe that our method with all unsupervised losses generates a visually satisfactory haze removal result." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we have proposed NightHazeFormer, a two-stage transformer-based network for nighttime haze removal. Previous model-based and learning-based algorithms are limited in addressing the multiple degradations presented in real-world nighttime haze scenarios. To circumvent this problem, we integrate two well-known priors into the transformer decoder to provide prior queries for extracting specific degradations. Then, we develop a semi-supervised fine-tuning training paradigm to improve the generalization performance. Specifically, the generated pseudo ground truths and the real-world nighttime hazy images are paired together and then fed into the synthetic domain to fine-tune the pre-trained model. This procedure significantly contributes to acquiring domain knowledge from real data. Besides, to address the issue of unrealistic degradation simulation in existing synthetic datasets for nighttime haze removal, we construct a large-scale nighttime hazy image dataset called UNREAL-NH as training data. Extensive experiments demonstrate that our proposed NightHazeFormer achieves superior haze removal performance and generalization ability over all other state-of-the-art dehazing methods in terms of both qualitative and quantitative comparisons." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the Open Research Fund Program of Data Recovery Key Laboratory of Sichuan Province (Grant No. DRN2306) and the Natural Science Foundation of Fujian Province (Grant No. 2021J01867)." } ]
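The two-stage recipe described in Sections 3.1 and 3.3, together with the optimizer settings from the implementation details (Adam, 2 × 10⁻⁴ for supervised pre-training, 5 × 10⁻⁵ for fine-tuning, 256 × 256 crops), can be summarized in the hedged pseudo-code below. The dataloaders, the BCCR refinement call, the loss helpers and the epoch counts are assumed interfaces and placeholders, and the fine-tuning loop folds the unsupervised and pseudo-supervised objectives together for brevity rather than reproducing the authors' exact schedule:

```python
import torch


def pretrain(model, synthetic_loader, supervised_loss, epochs, device="cuda"):
    # Supervised pre-training on paired UNREAL-NH crops (the paper also mentions a
    # CyclicLR schedule up to 1.2x the base learning rate; omitted here).
    opt = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
    for _ in range(epochs):
        for hazy, clean in synthetic_loader:
            hazy, clean = hazy.to(device), clean.to(device)
            loss = supervised_loss(model(hazy), clean)        # Eq. (5)
            opt.zero_grad(); loss.backward(); opt.step()
    return model


def build_pseudo_pairs(model, real_images, bccr_refine, device="cuda"):
    # Coarse dehazing of unlabeled REAL-NH images with the pre-trained model,
    # then BCCR refinement of the coarse results to obtain pseudo ground truths.
    pairs = []
    model.eval()
    with torch.no_grad():
        for hazy in real_images:
            coarse = model(hazy.to(device)).clamp(0, 1).cpu()
            pairs.append((hazy, bccr_refine(coarse)))          # external BCCR step (assumed helper)
    return pairs


def finetune(model, pseudo_loader, supervised_loss, unsup_objective, epochs, device="cuda"):
    # Fine-tuning on (real hazy image, pseudo-GT) pairs to transfer domain knowledge;
    # unsup_objective wraps the loss committee of Eq. (8).
    opt = torch.optim.Adam(model.parameters(), lr=5e-5)
    for _ in range(epochs):
        for hazy, pseudo_gt in pseudo_loader:
            hazy, pseudo_gt = hazy.to(device), pseudo_gt.to(device)
            pred = model(hazy)
            loss = supervised_loss(pred, pseudo_gt) + unsup_objective(pred, hazy)
            opt.zero_grad(); loss.backward(); opt.step()
    return model
```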
Nighttime image dehazing is a challenging task due to the presence of multiple types of adverse degrading effects including glow, haze, blur, noise, color distortion, and so on. However, most previous studies mainly focus on daytime image dehazing or partial degradations presented in nighttime hazy scenes, which may lead to unsatisfactory restoration results. In this paper, we propose an end-to-end transformer-based framework for nighttime haze removal, called NightHazeFormer. Our proposed approach consists of two stages: supervised pre-training and semi-supervised fine-tuning. During the pre-training stage, we introduce two powerful priors into the transformer decoder to generate the non-learnable prior queries, which guide the model to extract specific degradations. For the fine-tuning, we combine the generated pseudo ground truths with input real-world nighttime hazy images as paired images and feed them into the synthetic domain to fine-tune the pre-trained model. This semi-supervised fine-tuning paradigm helps improve the generalization to the real domain. In addition, we also propose a large-scale synthetic dataset, called UNREAL-NH, to comprehensively simulate real-world nighttime haze scenarios. Extensive experiments on several synthetic and real-world datasets demonstrate the superiority of our NightHazeFormer over state-of-the-art nighttime haze removal methods, both visually and quantitatively. The dataset will be available at https://github.com/Owen718/NightHazeFormer.
NightHazeFormer: Single Nighttime Haze Removal Using Prior Query Transformer
[ { "figure_caption": "Figure 1 :1Figure 1: (a) Nighttime hazy image. (b)-(h) are the dehazed results with NDIM[62], GS[33], MRP[61], OSFD[63], CFEN-ViT[65], RIDCP[54] and our NightHazeFormer. (i) The t-SNE map of various synthetic nighttime hazy image datasets. The feature distribution of degradations in our UNREAL-NH dataset is more closer to real-world nighttime hazy scenes (REAL-NH) than existing synthetic datasets. (j) Histogram of FID values for various synthetic datasets. Obviously, our UNREAL-NH exhibits the smallest FID value among all synthetic datasets, which quantitatively proves that our UNREAL-NH dataset is more realistic.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: The architecture of our NightHazeFormer. Our approach comprises two stages: supervised pre-training and semisupervised fine-tuning. For pre-training, we train our transformer network using paired synthetic images. The priors are incorporated into the transformer decoder to generate the prior queries, which guide the model to learn rich priors from input images. For fine-tuning, we devise a semi-supervised progressive refinement paradigm to improve the generalization ability. First, the unsupervised learning strategy allows our pre-trained model to fully leverage priors from real data and generate appropriate pseudo ground truths. Second, the generated pseudo-labels further enhance the model's generalization performance on the real domain in a cyclic supervised manner.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Visual comparison of various synthetic datasets. (a)-(f) represent the synthetic nighttime hazy patches and the corresponding clean patches extracted from NHC[63], NHM[63], GTA5[55], NightHaze[34], YellowHaze[34], and UNREAL-NH (Ours), respectively. (g) stands for real-world nighttime hazy patches extracted from REAL-NH.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: (a) Examples of our UNREAL-NH. (b) Examples of multiple degradations extracted from (a).", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(j). A smaller FID value indicates a closer resemblance to the real domain. Compared to previous datasets, our proposed UNREAL-NH contains most of the common degradation factors and significantly simulates real-world nighttime hazy scenes.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visual comparisons on synthetic nighttime hazy images images from our UNREAL-NH.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Visual comparisons on synthetic nighttime hazy images from NHR (first row) and NHM (last row).", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Visual comparisons on real-world nighttime hazy images from REAL-NH dataset.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Visual results of ablation study on NAFBlock module. (a) Input. (b) ResBlock[17]. (c) ViTBlock [12]. (d) without latent features extraction. (e) NAFBlock (Ours). 
(f) GT.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Visual results of ablation study on prior queries. (a) Input. (b) without priors. (c) with 𝑄 𝐷𝐶𝑃 . (d) with 𝑄 𝐵𝐶𝑃 . (e) with 𝑄 𝑃𝑟𝑖𝑜𝑟 (Ours). (f) GT.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": ") with Fig.10(d), our full model with fine-tuning stage contributes to dehazing performance improvement.Effectiveness of Unsupervised Losses. To demonstrate the contribution of our unsupervised loss committee, we conduct the ablation experiments as follows: (a) without spatial consistency loss L 𝑠𝑝𝑎 , (b) without exposure control loss L 𝑒𝑥𝑝 , (c) without color constancy loss L 𝑐𝑜𝑙 , (d) without DCP loss L 𝑑𝑐𝑝 , (e) without BCP loss L 𝑏𝑐𝑝 , and (f) with all losses. As viewed in Fig.11(a), the loss", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Visual results of ablation study on the semisupervised fine-tuning.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Visual results of ablation study on the unsupervised losses. (a) without L 𝑠𝑝𝑎 . (b) without L 𝑒𝑥𝑝 . (c) without L 𝑐𝑜𝑙 . (d) without L 𝑑𝑐𝑝 . (e) without L 𝑏𝑐𝑝 . (f) Our method.committee without L 𝑠𝑝𝑎 results in poor contrast and details restoration. In Fig.11(b) and Fig.11(e), without L 𝑒𝑥𝑝 or L 𝑏𝑐𝑝 , the dehazed result appears to be darker than expected. Fig.11(c) shows that the absence of L 𝑐𝑜𝑙 leads to the color shifts. From Fig.11(d), the lack of L 𝑑𝑐𝑝 makes the results blurred with residue hazes. We can observe that our method with all unsupervised losses generates visually satisfactory haze removal result.", "figure_data": "", "figure_id": "fig_12", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Quantitative comparisons of state-of-the-art methods on the UNREAL-NH, REAL-NH, NHR and NHM datasets. 
SSIM ↑ NIQE ↓ MUSIQ-AVA ↑ PSNR ↑ SSIM ↑ PSNR ↑ SSIM ↑", "figure_data": "TypeUNREAL-NH PSNR ↑ NDIM [62] Method Venue ICIP'2014 9.3953 0.4143 4.0133 REAL-NH 4.9710NHR [63] 11.5749 0.5971 12.6878 0.6171 NHM [63]GS [33]ICCV'20159.17220.3926 3.84454.959913.1634 0.6266 11.8528 0.6148Model-based methodsMRP [61]CVPR'20179.91460.4391 4.06575.046112.0909 0.6955 13.1088 0.6545FAST-MRP [61]CVPR'201710.7856 0.4488 4.08314.930913.5419 0.6837 13.3081 0.6491OSFD [63]MM'20209.08160.4227 4.08535.096111.8953 0.6923 13.2819 0.6667VD [37]CVPRW'2022 11.2984 0.5171 3.92315.122313.2572 0.7267 13.7675 0.6865CFEN-ViT [65]Arxiv'202119.1704 0.6254 3.84664.875521.5380 0.8150 16.9481 0.7224Learning-based methodsGAPSF [26]Arxiv'2023--4.12084.630013.1660 0.6945 13.4808 0.6653RIDCP [54]CVPR'202317.6887 0.6456 3.78565.259021.8923 0.8777 14.8597 0.7516Learning-based methodNightHazeFormer (Ours)MM'202327.9277 0.8669 3.68115.263523.6649 0.9031 18.5400 0.7842", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation study on NAFblock module.", "figure_data": "SettingUNREAL-NH PSNR↑ SSIM↑ResBlock23.9723 0.7786ViTBlock23.5148 0.7668w/o latent features 24.8271 0.8038NAFBlock (Ours) 27.9277 0.8669", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study on prior queries.", "figure_data": "SettingUNREAL-NH PSNR↑ SSIM↑Baseline24.8434 0.7993Baseline + 𝑄 𝐷𝐶𝑃25.2372 0.8139Baseline + 𝑄 𝐵𝐶𝑃25.2761 0.8149Baseline + 𝑄 𝑃𝑟𝑖𝑜𝑟 (Ours) 27.9277 0.8669", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
Yun Liu; Zhongsheng Yan; Sixiang Chen; Tian Ye; Wenqi Ren; Erkang Chen
[ { "authors": "Dana Berman; Tali Treibitz; Shai Avidan", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b0", "title": "Single image dehazing using haze-lines", "year": "2020" }, { "authors": "Minh Trung; Wonha Bui; Kim", "journal": "IEEE Transactions on Image Processing", "ref_id": "b1", "title": "Single image dehazing using color ellipsoid prior", "year": "2018" }, { "authors": "Bolun Cai; Xiangmin Xu; Kui Jia; Chunmei Qing; Dacheng Tao", "journal": "IEEE Transactions on Image Processing", "ref_id": "b2", "title": "De-hazeNet: An end-to-end system for single image haze removal", "year": "2016" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b3", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "Xiaoxuan Chai; Junchi Zhou; Hang Zhou; Juihsin Lai", "journal": "", "ref_id": "b4", "title": "PDD-GAN: Priorbased GAN Network with Decoupling Ability for Single Image Dehazing", "year": "2022" }, { "authors": "Liangyu Chen; Xiaojie Chu; Xiangyu Zhang; Jian Sun", "journal": "", "ref_id": "b5", "title": "Simple baselines for image restoration", "year": "2022" }, { "authors": "Liangyu Chen; Xin Lu; Jie Zhang; Xiaojie Chu; Chengpeng Chen", "journal": "", "ref_id": "b6", "title": "HINet: Half instance normalization network for image restoration", "year": "2021" }, { "authors": "Sixiang Chen; Tian Ye; Yun Liu; Erkang Chen; Jun Shi; Jingchun Zhou", "journal": "", "ref_id": "b7", "title": "SnowFormer: scale-aware transformer via context interaction for single image desnowing", "year": "2022" }, { "authors": "Sixiang Chen; Tian Ye; Yun Liu; Taodong Liao; Jingxia Jiang; Erkang Chen; Peng Chen", "journal": "", "ref_id": "b8", "title": "MSP-former: Multi-scale projection transformer for single image desnowing", "year": "2023" }, { "authors": "Sixiang Chen; Tian Ye; Jun Shi; Yun Liu; Jingxia Jiang; Erkang Chen; Peng Chen", "journal": "", "ref_id": "b9", "title": "DEHRFormer: Real-time transformer for depth estimation and haze removal from varicolored haze scenes", "year": "2023" }, { "authors": "Zeyuan Chen; Yangchao Wang; Yang Yang; Dong Liu", "journal": "", "ref_id": "b10", "title": "PSD: Principled synthetic-to-real dehazing guided by physical priors", "year": "2021" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b11", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "", "journal": "Epic Games, Inc", "ref_id": "b12", "title": "Unreal Engine", "year": "2021" }, { "authors": "Raanan Fattal", "journal": "ACM Transactions on Graphics", "ref_id": "b13", "title": "Dehazing using color-lines", "year": "2014" }, { "authors": "Chunle Guo; Chongyi Li; Jichang Guo; Chen Change Loy; Junhui Hou; Sam Kwong; Runmin Cong", "journal": "", "ref_id": "b14", "title": "Zero-reference deep curve estimation for lowlight image enhancement", "year": "2020" }, { "authors": "Kaiming He; Jian Sun; Xiaoou Tang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b15", "title": "Single image haze removal using dark channel prior", "year": "2011" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b16", "title": "Deep residual learning 
for image recognition", "year": "2016" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "", "ref_id": "b17", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "Jie Huang; Yajing Liu; Xueyang Fu; Man Zhou; Yang Wang; Feng Zhao; Zhiwei Xiong", "journal": "", "ref_id": "b18", "title": "Exposure normalization and compensation for multiple-exposure correction", "year": "2022" }, { "authors": "Jie Huang; Yajing Liu; Feng Zhao; Keyu Yan; Jinghao Zhang; Yukun Huang; Man Zhou; Zhiwei Xiong", "journal": "", "ref_id": "b19", "title": "Deep fourier-based exposure correction network with spatial-frequency interaction", "year": "2022" }, { "authors": "Jie Huang; Zhiwei Xiong; Xueyang Fu; Dong Liu; Zheng-Jun Zha", "journal": "", "ref_id": "b20", "title": "Hybrid image enhancement with progressive laplacian enhancing unit", "year": "2019" }, { "authors": "Jie Huang; Feng Zhao; Man Zhou; Jie Xiao; Naishan Zheng; Kaiwen Zheng; Zhiwei Xiong", "journal": "", "ref_id": "b21", "title": "Learning sample relationship for exposure correction", "year": "2023" }, { "authors": "Jie Huang; Man Zhou; Yajing Liu; Mingde Yao; Feng Zhao; Zhiwei Xiong", "journal": "", "ref_id": "b22", "title": "Exposure-consistency representation learning for exposure correction", "year": "2022" }, { "authors": "Jie Huang; Pengfei Zhu; Mingrui Geng; Jiewen Ran; Xingguang Zhou; Chen Xing; Pengfei Wan; Xiangyang Ji", "journal": "", "ref_id": "b23", "title": "Range scaling global u-net for perceptual image enhancement on mobile devices", "year": "2018" }, { "authors": "Jingxia Jiang; Jinbin Bai; Yun Liu; Junjie Yin; Sixiang Chen; Tian Ye; Erkang Chen", "journal": "", "ref_id": "b24", "title": "RSFDM-Net: Real-time spatial and frequency domains modulation network for underwater image enhancement", "year": "2023" }, { "authors": "Yeying Jin; Beibei Lin; Wending Yan; Wei Ye; Yuan Yuan; Robby T Tan", "journal": "", "ref_id": "b25", "title": "Enhancing visibility in nighttime haze images using guided APSF and gradient adaptive convolution", "year": "2023" }, { "authors": " Mingye; Can Ju; Charles A Ding; Wenqi Guo; Dacheng Ren; Tao", "journal": "IEEE Transactions on Image Processing", "ref_id": "b26", "title": "IDRLP: Image dehazing using region line prior", "year": "2021" }, { "authors": "Mingye Ju; Can Ding; Y Jay Guo; Dengyin Zhang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b27", "title": "IDGCP: Image dehazing based on gamma correction prior", "year": "2019" }, { "authors": "Junjie Ke; Qifei Wang; Yilin Wang; Peyman Milanfar; Feng Yang", "journal": "", "ref_id": "b28", "title": "MUSIQ: Multi-scale image quality transformer", "year": "2021" }, { "authors": "Beomhyuk Koo; Gyeonghwan Kim", "journal": "", "ref_id": "b29", "title": "Nighttime haze removal with glow decomposition using GAN", "year": "2020" }, { "authors": "Shiba Kuanar; Dwarikanath Mahapatra; Monalisa Bilas; K R Rao", "journal": "The Visual Computer", "ref_id": "b30", "title": "Multi-path dilated convolution network for haze and glow removal in nighttime images", "year": "2022" }, { "authors": "Boyi Li; Xiulian Peng; Zhangyang Wang; Jizheng Xu; Dan Feng", "journal": "", "ref_id": "b31", "title": "AOD-Net: All-in-one dehazing network", "year": "2017" }, { "authors": "Yu Li; Robby T Tan; Michael S Brown", "journal": "", "ref_id": "b32", "title": "Nighttime haze removal with glow and multiple light colors", "year": "2015" }, 
{ "authors": "Yinghong Liao; Zhuo Su; Xiangguo Liang; Bin Qiu", "journal": "", "ref_id": "b33", "title": "Hdp-net: Haze density prediction network for nighttime dehazing", "year": "2018" }, { "authors": "Yun Liu; Anzhi Wang; Hao Zhou; Pengfei Jia", "journal": "Signal Processing", "ref_id": "b34", "title": "Single nighttime image dehazing based on image decomposition", "year": "2021" }, { "authors": "Yun Liu; Zhongsheng Yan; Jinge Tan; Yuche Li", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b35", "title": "Multi-purpose oriented single nighttime image haze removal based on unified variational retinex model", "year": "2023" }, { "authors": "Yun Liu; Zhongsheng Yan; Aimin Wu; Tian Ye; Yuche Li", "journal": "", "ref_id": "b36", "title": "Nighttime image dehazing based on variational decomposition model", "year": "2022" }, { "authors": "Yun Liu; Zhongsheng Yan; Tian Ye; Aimin Wu; Yuche Li", "journal": "Engineering Applications of Artificial Intelligence", "ref_id": "b37", "title": "Single nighttime image dehazing based on unified variational decomposition model and multi-scale contrast enhancement", "year": "2022" }, { "authors": "E J Mccartney", "journal": "", "ref_id": "b38", "title": "Optics of the atmosphere: scattering by molecules and particles", "year": "1976" }, { "authors": "Gaofeng Meng; Ying Wang; Jiangyong Duan; Shiming Xiang; Chunhong Pan", "journal": "", "ref_id": "b39", "title": "Efficient image dehazing with boundary constraint and contextual regularization", "year": "2013" }, { "authors": "Anish Mittal; Rajiv Soundararajan; Alan C Bovik", "journal": "IEEE Signal Processing Letters", "ref_id": "b40", "title": "Making a \"completely blind\" image quality analyzer", "year": "2013" }, { "authors": "Naila Murray; Luca Marchesotti; Florent Perronnin", "journal": "", "ref_id": "b41", "title": "AVA: A largescale database for aesthetic visual analysis", "year": "2012" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga", "journal": "", "ref_id": "b42", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Zhilin Xu Qin; Yuanchao Wang; Xiaodong Bai; Huizhu Xie; Jia", "journal": "", "ref_id": "b43", "title": "FFA-Net: Feature fusion attention network for single image dehazing", "year": "2020" }, { "authors": "Si Wenqi Ren; Hua Liu; Jinshan Zhang; Xiaochun Pan; Ming-Hsuan Cao; Yang", "journal": "", "ref_id": "b44", "title": "Single image dehazing via multi-scale convolutional neural networks", "year": "2016" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "", "ref_id": "b45", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "Jeya Maria; Jose Valanarasu; Rajeev Yasarla; Vishal M Patel", "journal": "", "ref_id": "b46", "title": "Transweather: Transformer-based restoration of images degraded by adverse weather conditions", "year": "2022" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of Machine Learning Research", "ref_id": "b47", "title": "Visualizing data using t-SNE", "year": "2008" }, { "authors": "Wenhui Wang; Anna Wang; Chen Liu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b48", "title": "Variational single nighttime image haze removal with a gray haze-line prior", "year": "2022" }, { "authors": "Wenhui Wang; Anna Wang; Xingyu Wang; Haijing Sun; Qing 
Ai", "journal": "Signal Processing", "ref_id": "b49", "title": "Rapid nighttime haze removal with color-gray layer decomposition", "year": "2022" }, { "authors": "Yinting Wang; Shaojie Zhuo; Dapeng Tao; Jiajun Bu; Na Li", "journal": "Signal Processing", "ref_id": "b50", "title": "Automatic local exposure correction using bright channel prior for under-exposed images", "year": "2013" }, { "authors": "Zhou Wang; Alan C Bovik; Hamid R Sheikh; Eero P Simoncelli", "journal": "IEEE Transactions on Image Processing", "ref_id": "b51", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": "Haiyan Wu; Yanyun Qu; Shaohui Lin; Jian Zhou; Ruizhi Qiao; Zhizhong Zhang; Yuan Xie; Lizhuang Ma", "journal": "", "ref_id": "b52", "title": "Contrastive learning for compact single image dehazing", "year": "2021" }, { "authors": "Rui-Qi Wu; Zheng-Peng Duan; Chun-Le Guo; Zhi Chai; Chongyi Li", "journal": "", "ref_id": "b53", "title": "RIDCP: Revitalizing real image dehazing via high-quality codebook priors", "year": "2023" }, { "authors": "Wending Yan; Robby T Tan; Dengxin Dai", "journal": "Springer", "ref_id": "b54", "title": "Nighttime defogging using high-low frequency decomposition and grayscale-color networks", "year": "2020" }, { "authors": "Zizheng Yang; Jie Huang; Jiahao Chang; Man Zhou; Hu Yu; Jinghao Zhang; Feng Zhao", "journal": "", "ref_id": "b55", "title": "Visual recognition-driven image restoration for multiple degradation with intrinsic semantics recovery", "year": "2023" }, { "authors": "Tian Ye; Sixiang Chen; Yun Liu; Yi Ye; Jinbin Bai; Erkang Chen", "journal": "", "ref_id": "b56", "title": "Towards real-time high-definition image snow removal: Efficient pyramid network with asymmetrical encoder-decoder architecture", "year": "2022" }, { "authors": "Tian Ye; Sixiang Chen; Yun Liu; Yi Ye; Erkang Chen; Yuche Li", "journal": "", "ref_id": "b57", "title": "Underwater light field retention: Neural rendering for underwater imaging", "year": "2022" }, { "authors": "Tian Ye; Yunchen Zhang; Mingchao Jiang; Liang Chen; Yun Liu; Sixiang Chen; Erkang Chen", "journal": "Springer", "ref_id": "b58", "title": "Perceiving and Modeling Density for Image Dehazing", "year": "2022" }, { "authors": "Hu Yu; Jie Huang; Yajing Liu; Qi Zhu; Man Zhou; Feng Zhao", "journal": "", "ref_id": "b59", "title": "Sourcefree domain adaptation for real-world image dehazing", "year": "2022" }, { "authors": "Jing Zhang; Yang Cao; Shuai Fang; Yu Kang; Chang Wen; Chen ", "journal": "", "ref_id": "b60", "title": "Fast haze removal for nighttime image using maximum reflectance prior", "year": "2017" }, { "authors": "Jing Zhang; Yang Cao; Zengfu Wang", "journal": "IEEE", "ref_id": "b61", "title": "Nighttime haze removal based on a new imaging model", "year": "2014" }, { "authors": "Jing Zhang; Yang Cao; Zheng-Jun Zha; Dacheng Tao", "journal": "", "ref_id": "b62", "title": "Nighttime dehazing with a synthetic benchmark", "year": "2020" }, { "authors": "Jinghao Zhang; Jie Huang; Mingde Yao; Zizheng Yang; Hu Yu; Man Zhou; Feng Zhao", "journal": "", "ref_id": "b63", "title": "Ingredient-oriented multi-degradation learning for image restoration", "year": "2023" }, { "authors": "Dong Zhao; Jia Li; Hongyu Li; Long Xu", "journal": "", "ref_id": "b64", "title": "Complementary feature enhanced network with vision transformer for image dehazing", "year": "2021" }, { "authors": "Xuan Zhao", "journal": "", "ref_id": "b65", "title": "Single image dehazing using bounded channel difference 
prior", "year": "2021" }, { "authors": "Naishan Zheng; Jie Huang; Feng Zhao; Xueyang Fu; Feng Wu", "journal": "IEEE Transactions on Multimedia", "ref_id": "b66", "title": "Unsupervised underexposed image enhancement via self-illuminated and perceptual guidance", "year": "2022" }, { "authors": "Man Zhou; Jie Huang; Xueyang Fu; Feng Zhao; Danfeng Hong", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b67", "title": "Effective pan-sharpening by multiscale invertible neural network and heterogeneous task distilling", "year": "2022" }, { "authors": "Man Zhou; Jie Huang; Chongyi Li; Hu Yu; Keyu Yan; Naishan Zheng; Feng Zhao", "journal": "", "ref_id": "b68", "title": "Adaptively learning low-high frequency information integration for pan-sharpening", "year": "2022" }, { "authors": "Man Zhou; Jie Huang; Keyu Yan; Gang Yang; Aiping Liu; Chongyi Li; Feng Zhao", "journal": "", "ref_id": "b69", "title": "Normalization-based feature selection and restitution for pansharpening", "year": "2022" }, { "authors": "Man Zhou; Jie Huang; Keyu Yan; Hu Yu; Xueyang Fu; Aiping Liu; Xian Wei; Feng Zhao", "journal": "", "ref_id": "b70", "title": "Spatial-frequency domain information integration for pan-sharpening", "year": "2022" }, { "authors": "Man Zhou; Keyu Yan; Jie Huang; Zihe Yang; Xueyang Fu; Feng Zhao", "journal": "", "ref_id": "b71", "title": "Mutual information-driven pan-sharpening", "year": "2022" }, { "authors": "Man Zhou; Hu Yu; Jie Huang; Feng Zhao; Jinwei Gu; Chen Change Loy; Deyu Meng; Chongyi Li", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b72", "title": "Deep fourier up-sampling", "year": "2022" }, { "authors": "Qingsong Zhu; Jiaming Mai; Ling Shao", "journal": "IEEE Transactions on Image Processing", "ref_id": "b73", "title": "A fast single image haze removal algorithm using color attenuation prior", "year": "2015" }, { "authors": "Yurui Zhu; Jie Huang; Xueyang Fu; Feng Zhao; Qibin Sun; Zheng-Jun Zha", "journal": "", "ref_id": "b74", "title": "Bijective mapping network for shadow removal", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 112.1, 630.9, 182.49, 8.43 ], "formula_id": "formula_0", "formula_text": "𝑄 𝑝𝑟𝑖𝑜𝑟 = 𝑀𝐿𝑃 (𝐷𝐶𝑃 (𝐼 ) + 𝐵𝐶𝑃 (𝐼 ))(1)" }, { "formula_coordinates": [ 4, 374.4, 166.65, 184.34, 20.72 ], "formula_id": "formula_1", "formula_text": "𝐴𝑡𝑡𝑛(𝑄, 𝐾, 𝑉 ) = 𝑆𝑜 𝑓 𝑡𝑚𝑎𝑥 ( 𝑄𝐾 𝑇 √ 𝑑 )𝑉(2)" }, { "formula_coordinates": [ 4, 360.68, 329.62, 198.06, 8.43 ], "formula_id": "formula_2", "formula_text": "L 𝑝𝑠𝑛𝑟 = -𝑃𝑆𝑁 𝑅(𝑁𝑖𝑔ℎ𝑡𝐻𝑎𝑧𝑒𝐹𝑜𝑟𝑚𝑒𝑟 (𝐼 ), 𝐽 )(3)" }, { "formula_coordinates": [ 4, 327.27, 392.61, 231.47, 26.34 ], "formula_id": "formula_3", "formula_text": "L 𝑝𝑒𝑟 = 2 ∑︁ 𝑗=1 1 𝐶 𝑗 𝐻 𝑗 𝑊 𝑗 ∥𝜙 𝑗 (𝑁𝑖𝑔ℎ𝑡𝐻𝑎𝑧𝑒𝐹𝑜𝑟𝑚𝑒𝑟 (𝐼 )) -𝜙 𝑗 (𝐽 ) ∥ 1 (4)" }, { "formula_coordinates": [ 4, 393.44, 474.93, 165.3, 8.43 ], "formula_id": "formula_4", "formula_text": "L 𝑠𝑙 = L 𝑝𝑠𝑛𝑟 + 𝜆 𝑝𝑒𝑟 L 𝑝𝑒𝑟(5)" }, { "formula_coordinates": [ 5, 118.09, 366.07, 176.49, 9.08 ], "formula_id": "formula_5", "formula_text": "L 𝑑𝑐𝑝 = 𝑡 𝑇 𝐿𝑡 + 𝜆(𝑡 -t) 𝑇 (𝑡 -t)(6)" }, { "formula_coordinates": [ 5, 144.9, 499.15, 149.69, 10.03 ], "formula_id": "formula_6", "formula_text": "L 𝑏𝑐𝑝 = ∥𝑡 -t ∥ 1(7)" }, { "formula_coordinates": [ 5, 79.82, 599.8, 214.76, 19.72 ], "formula_id": "formula_7", "formula_text": "L 𝑢𝑙 = 𝜆 𝑑𝑐𝑝 L 𝑑𝑐𝑝 + 𝜆 𝑏𝑐𝑝 L 𝑏𝑐𝑝 +𝜆 𝑠𝑝𝑎 L 𝑠𝑝𝑎 + 𝜆 𝑒𝑥𝑝 L 𝑒𝑥𝑝 + 𝜆 𝑐𝑜𝑙 L 𝑐𝑜𝑙(8)" } ]
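The formulas above define the supervised objective (a PSNR term plus a weighted perceptual term, Eq. 4) and the unsupervised loss committee (a weighted sum of DCP, BCP, spatial-consistency, exposure and color-constancy terms, Eq. 8). The sketch below shows one way these combinations could be wired up in PyTorch; it is not the authors' implementation, the individual perceptual and prior terms are passed in as callables, and the lambda weights are placeholders rather than the paper's values.

```python
# Minimal sketch of the loss combinations in Eq. (4) and Eq. (8).
# Perceptual and prior terms are supplied as callables; weights are placeholders.
import torch
import torch.nn.functional as F

def psnr_loss(pred, target, max_val=1.0):
    # L_psnr = -PSNR(pred, target), derived from the mean squared error.
    mse = F.mse_loss(pred, target).clamp_min(1e-8)
    return -10.0 * torch.log10((max_val ** 2) / mse)

def supervised_loss(pred, target, perceptual_fn, lam_per=0.1):
    # L_sl = L_psnr + lambda_per * L_per   (lam_per is a placeholder value)
    return psnr_loss(pred, target) + lam_per * perceptual_fn(pred, target)

def unsupervised_loss(pred, prior_terms, weights):
    # L_ul = sum_k lambda_k * L_k,  k in {dcp, bcp, spa, exp, col}
    return sum(weights[k] * prior_terms[k](pred) for k in weights)
```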
10.3115/980845.980860
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "A quick glance at the available NLP tools and corpora quickly reveals that there are much more resources available for syntactic than for semantic analysis. Therefore, if a particular application requires a deep understanding of its input, the requirement of semantics is typically sidestepped. Instead, meaning and understanding are either approximated through less complicated mechanisms, or left up to the obscure inner workings of neural approaches (often paying the price of needing a larger labelled corpus or needing extensive computing resources such as GPU clusters).\nWe argue that one reason for the lack of resources for deep semantic analysis is due to the lack of a common, uniform representation scheme that is able to represent all aspects of semantics. That is not to say that there is not already a lot of extensive research performed in different areas of semantics: it is rather that these efforts need to develop their own specific-problem-related notations because of this lack.\nIt is our belief that in order to arrive at a common representation scheme, we also need to develop a common modelling scheme that allows to model semantics. By \"modelling scheme\" we mean formalisms similar to the Unified Modeling Language (UML) for software Engineering. UML provides a mechanism for developers to model software systems by letting them identify components and relations between these components. The same mechanism can be used to model a system at different detailing levels thus allowing for a hierarchical model.\nThis paper aims to contribute to the goal of a common semantic representation scheme by presenting Meta SRL++, a uniform modelling scheme for all types of semantic information using Semantic Graphs. The paper is structured as follows: Section 2 introduces Semantic Graphs, which are diagrams that model some semantic content. Section 3 then demonstrates the usage of Meta SRL++ on semantic data from the European Pathfinder project MUHAI. We then discuss related work (Section 4) and conclude the paper with a potential extension of our modelling scheme called SRL++, a semantic representation scheme based on Meta SRL++." }, { "figure_ref": [], "heading": "Semantic Graphs", "publication_ref": [ "b6", "b5", "b8", "b1", "b7", "b7" ], "table_ref": [], "text": "As its name implies, Meta SRL++ subscribes to the longstanding history in cognitive science to operationalize semantic information as frames (Minsky, 1975;Fillmore, 1976) or schemas (Rumelhart, 1980) that capture the recurrent aspects of experience, which is also pursued in various Semantic Role Labelling (SRL) approaches such as FrameNet (Baker et al., 1998) and PropBank (Palmer et al., 2005). That is, it describes the semantic elements of a document as well as the roles that these semantic elements play with respect to other elements.\nIn contrast to existing SRL approaches (e.g. PropBank; Palmer et al., 2005), all elements in our semantic modelling scheme are semantic units (and not simply parts of a textual sentence), and predicates are not only (or, at least, mainly) derived from verbs. Instead, predicates can represent all sorts of semantic information. 
Therefore, they could have been derived from all sort of combination of information in texts or even other modalities.\nAs we decided to model only semantic information, there is, in principle, no association of the semantic information to the document it was created from (out of practical reasons this association can be created, though, see Section 3.2).\nOur modelling scheme is realized by Semantic Graphs. Semantic Graphs consists of different elements that fall into two categories: nodes and labelled edges." }, { "figure_ref": [], "heading": "Nodes", "publication_ref": [], "table_ref": [], "text": "There are three kinds of nodes in our Semantic Graphs: concepts, entities and ommitted nodes.\nConcept nodes (representing predicates) are the main building block of Semantic Graphs. They each represent a single semantic aspect. They are depicted as a box with the concept name in it. Meta SRL++ does not dictate the set of concepts to choose from (in the same way that UML does not dictate the content of e.g., a component). It is the responsibility of the author of a Semantic Graph to specify concepts in a way that the readers can understand a graph.\nConcepts are however not enough to describe all the semantics we need to model: we also need entities. Entities are individual instances of one or more classes, i.e. in order to understand them it is not only important to know their distinguishing feature (e.g. a name or a value), but also the classes they are an instance of. The most notable examples are Named Entities, but also other objects are typically given entity status.\nFortunately, entity semantics is the one field of semantics where there is already a commonly used notation and where there are sufficient tools to handle them, so Meta SRL++ does not aim to reinvent the wheel: it adopts the standard definitions of con-Figure 1: Example of a Semantic Graph for the sentence \"at the bottom of the well is a brightly-lit room, which appears to be an office of some sort\" cepts as generalizations of entities of the same type, and treats entity classes as concepts.\nIn contrast to concepts, entities are always leaves in the Semantic Graph; they do not have edges leading away from them. We depict them as circles with a value label. Optionally, there can be one or more concept names on top of the value label. These concepts are some of the classes this entity belongs to.\nFinally, we also include leaf nodes that are called \"ommitted nodes\", depicted as a grey circle. These nodes are used for capturing semantic elements that are implied by a sentence, but which is missing in the surface text (Null Instantiations as FrameNet calls them; also known as Implicit Arguments)." }, { "figure_ref": [ "fig_1" ], "heading": "Labelled Edges", "publication_ref": [], "table_ref": [], "text": "Concepts are connected to other nodes via directed Labeled Edges (although the arrows are often omitted if the direction is obvious). These edges represent role relations of the connected nodes to the concept. Therefore, the edge labels are always one of the role names from the concept from which they are originating. They are depicted as directed lines that originate at the bottom of the concept they relate to, and lead to the top of nodes that take the roles regarding this concept.\nThere is a special role that can be used: the indexed role. This role is indexed by a positive Integer starting from 1. It models the case that sometimes there is a multitude of elements in the same role, e.g. 
child roles of a \"parent\" concept. In order to prevent the necessity to define a separate role for any (finite) number of such children in a parent concept, the use of an indexed role reduces definition overhead. An example of an indexed role can be found in Fig. 3." }, { "figure_ref": [], "heading": "An example", "publication_ref": [], "table_ref": [], "text": "Let us now look at an example in Figure 1, which shows a sentence (\"at the bottom of the well is a brightly-lit-room which appears to be an office of some sort\") and its manually-constructed semantic graph. In this sentence, there are two parts which can be divided into two subgraphs.\nThe subgraph on the right represents a room, located at the bottom of a well, that is lit to a rather high degree. The top-most node is the Bottom concept. If we look up our concept catalogue, we will find that this concept has two roles, a \"Container\", and a \"Contained\". In the description of the Bottomconcept we might find that the \"Container\" role represents the outlining element, and the \"Contained\" the element that sits at the bottom of the \"Container\". Therefore, the \"Container\" role is given to the Well concept and the \"Contained\" to the lighted room. Equally, the Lighting concept might have two roles, the lit \"Object\" and the \"Degree\" to which the \"Object\" is lit. The Object is obviously the Room. The \"Degree\" is filled by an entity node. This entity node has the value 4 and is of a concept 5-level degree. This construction results from mapping a textual expression to a semantic representation of this expression. This has the advantage of maybe covering multiple textual expressions to the same representation (e.g. \"very bright\"), to be independent of the semantics of the English expression (\"brightly-lit\") which might not have an equivalence in other languages, and to be much more understandable digitally. Importantly, the Lighting concept might very well have more possible roles (e.g. a light source), but not all of them are filled in this sentence.\nIn the same manner the left side of the Semantic Graph can be interpreted. In fact, in the subgraph on the left, the IsA concept has two roles with the meaning \"A is equal to B\". We modeled the expressions \"appears\" with a probability entity in the role \"Degree\" of the IsA concept.\nThe two sides of the Graph are connected by sharing the Room concept. This could have been also the results of two sentences (\"At the bottom of the well is a brightly-lit room. This room appears to be an office of some sort\"). It is also conceivable to take the right subgraph and put it completely (i.e. starting at Bottom) in the role A position of IsA, but this would represent more the sentence \"At the bottom of the well a brightly-lit room appears to be an office of some sort\"). This Semantic Graph is a representation of the semantics of the example sentence that abstracts away from actual linguistic expressions. 1 This means that the same model can also be used for other languages such as Chinese, where the sentence would appear as \" 在井底有一间光亮的小室,可能 是一间办公室\". Moreover, the representation is modality-independent, and could be used for modelling e.g. a movie scene. The scene might start with the brightly-lit room with fuzzy focus, and then gradually clear the focus to reveal the details of an office.\nAs with any modelling scheme, some choices remain up to the discretion of the modeller. 
It may, for instance, not always be clear whether to model a concept as an entity or vice versa. As a rule, entities cannot have outgoing edges (i.e. roles). If edges are needed, you need to use the concept form. Only leaf nodes can be modeled as either concepts or entities.\nOne of the consequences of semantic parsing is that the same word can be parsed into completely different concepts. For example, the word \"it\" might be modelled as a concept that refers to a reference of a single, 3rd person (in semantic, not syntactic terms) entity, an entity of the Movie concept; or it can be part of concepts that model multi-word expressions (like in \"Hold it!\").\n3 Usage of MetaSRL++ in MUHAI MUHAI (Meaning and Understanding in Humancentric AI) is a European Pathfinder project that studies how to develop meaningul AI. 'Meaningful' here means AI systems that complement the reactive behavior of current-generation AI systems with rich models of problem situations in domains for more deliberate reasoning. The MUHAI project includes a diverse set of case studies, ranging from everyday activities such as cooking to social media observatories and artwork interpretation (Steels, 2022).\nThese different subprojects produce different types of semantic data in different natural languages. In order to export this data into a suited, uniform format and to offer applications of this data a uniform format, MUHAI selected Meta SRL++ as its overall modelling scheme. To that end, a Python library was created that allows to read and write the Meta SRL++ XML format. " }, { "figure_ref": [ "fig_0", "fig_0", "fig_1", "fig_1" ], "heading": "Historic Events from Knowledge Graphs", "publication_ref": [ "b4" ], "table_ref": [], "text": "As a first example, please find in Fig. 2 a piece of raw semantic data, in this case some parts of a knowledge graph containing information on historic events of the French Revolution (Blin, 2022) in form of parts of a Turtle (.ttl) file. The upper part of Fig. 2 contains the file content, the lower part a graphical representation of the same content.\nUsing this example, we modelled this data in Meta SRL++ (see Fig. 3). Knowledge graphs represent information using semantic triples (subject, predicate, object). Since entities cannot have outgoing edges in Meta SRL++, all predicates that link two entities in the knowledge graph were elevated to the status of concept nodes, while the entities simply remain entities. In this case it is not really possible to have human-friendly labels for the entities, so we simply used the same label for the edges. Edges to entities were labelled with \"id\" if they lead to entity labels of the knowledge graph and with \"value\" in all other cases. This choice of edge labelling implies that the corresponding concepts contain theses labels as possible roles.\nThere are two differences to this rather mechanic way of translating these Knowledge Base triples. First, we decided that the top-level concept of this type of data is an event of the type \"sem:Event\" (as the entry states anyway), and that the main entity (in this example wd:Q1073320) should be added as an \"id\" role of this concept. Also, the \"rdfs:label\" relation was modelled as the corresponding role of that concept. The second difference is the handling of the \"sem:subEventOf\" relation. In order to model the sub events of an event more explicitly (as events are the main content of this data), also sub events were modelled as roles of an encompassing event. 
In this case this role was modelled as an indexed role (the \"12\" in Fig. 3 serves only illustratory purposes to show that the top event could have a whole number of sub events).\nFor the above method, obviously, the semantics of the concepts are defined by the definition of the relations. As the latter are quite well defined, also the first can be understood. Roles, obviously, play only a small role in this example." }, { "figure_ref": [], "heading": "Causation in Italian Sentences", "publication_ref": [ "b3" ], "table_ref": [], "text": "In a second example we have Italian sentences in a CoNLL-style format with additional annotations about cause and effect relations (see Figure 4), inspired by earlier work on causal semantic frames in English (Beuls et al., 2021). A semantic frame (such as the Causation frame) can be straightforwardly modeled as a concept; and its frame elements (e.g. cause, effect, and so on) as its roles.\nInteresting to note is that the original data only provided the causation labels as semantic data, while all other fields contain standard syntaxrelated information such as lemmas, POS tags, and (dependency) parsing labels, as well as the tokens of the original sentence. We only modelled the sequences of the original sentence that belong to the corresponding causation elements (which were, incidentally or not) also subtrees of the parse tree).\nFrom our point of view, such sequences have normally no business in a pure Semantic Graph as they are language-dependent and contain semantic information only indirectly as natural language. However, for some use cases it may be more efficient to keep the connection between semantics and surface texts; or sometimes we need to be able to model the relation between nonsensical phrases and otherwise purely semantic content (as in \"and then she said 'Hnnngom' or something which I did not understand.\"). To illustrate such use cases, we explicitly modelled surface sequences as entities of the concept UnanalysedSubtree, which are elements of a LanguageDoc which also has a language role. Finally, we also wanted to keep the information that the text portion that was analysed was a sentence. We are of the opinion that sentences are primarily non-semantic entities (because the question of how to portion semantic content into sentences is more a cultural aspect that can be answered differently for text generation depending on e.g. the expected literary abilities of the target audience). Therefore, we put a Sentence concept at the top." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "Our scheme aims to be a modelling scheme, i.e. a kind of meta representation for semantics (that also claims to be able to model a large amount of semantic aspects). To our knowledge, no other meta representation scheme exists as all of the related work are approaches that are targeted towards representing semantics concretely. Therefore, we decided to relate our scheme to other work by roughly outlining how these semantic representation schemes can be modelled using our approach. Out of space restrictions, we have to restrict ourselves to three semantic representation schemes: AMR, UMR, and UCCA." }, { "figure_ref": [ "fig_3", "fig_4", "fig_5" ], "heading": "Abstract Meaning Representation (AMR)", "publication_ref": [ "b2", "b12", "b11", "b9" ], "table_ref": [], "text": "AMR is a notation based on PENMAN. 
From a structural point of view, AMR consists of nodes which are labelled each with a variable name and a concept label and labelled edges (which represent relations) (Banarescu et al., 2013). The semantic concepts are the nodes of the graph and the edges represent the relations that bound the different nodes. Every semantic concept and every node in the graph is assigned to a variable and it is labeled with English words (ex: boy ?b), Prop-Bank notation (ex: say-01 ?s) or, in certain cases, by special keywords (ex: amr-unknown ?a). The possible relations between the edges can be represented by Frame Arguments (ex: :arg0), general semantic relations (ex: :polarity), relations for quantities (ex: :quant), for time (ex: :time) and for lists (ex: :op1). More in detail, every AMR graph has a unique root, displayed as the top node in the tree, variables (e, x, y, etc.), events, concepts (ex: boy) and roles (ex : ARG0, ARG1, etc.). A property of AMR graphs is their ability to invert roles (the relations are semantically equivalent, but are structured differently). It must be underlined that Abstract Meaning Representation is geared towards English and the vocabulary of English (Xue et al., 2014), even if some efforts had been made to apply it to other languages (parser in Chinese, French, German, Spanish, Japanese) (Vanderwende et al., 2015).\nThis structure can be converted to Meta SRL++ by replacing: and by moving all outgoing edges of nodes to the bottom and all ingoing ones to the top and by removing all variables.\n4.1.1 Example: from AMR to Meta SRL++ Let's take the sentence \"We need to borrow 55% of the hammer price until we can get planning permission for restoration which will allow us to get a mortgage.\" (taken from (Schneider et al., 2015)). In the textual form of AMR, this be parsed into Fig. 6. The graphical form of this structure can be found in Fig. 7.\nGiven the method mentioned above, a corresponding Meta SRL++ Semantic Graph looks quite similar (see Fig. 8)." }, { "figure_ref": [], "heading": "Uniform Meaning Representation", "publication_ref": [], "table_ref": [], "text": "Uniform Meaning Representation (UMR) is based on AMR for the (intra) sentence structures and adds semantic document structures like temporal and modal dependencies, and co-reference relations. These add two issues to the way we transformed AMR structures into Semantic Graphs. First, AMR Constants cannot longer be replaced by Entities so easily as dependencies require also Constants to have outgoing edges (which would be forbidden by modelling them as Meta SRL++ Entities). This is not a big problem, we can also model them as concepts. The second issue is that the additional UMR relations cannot be modelled as Roles any longer. Consider the UMR structure in Fig. 9 (cited after (umr, 2022)) and their AMR-like conversion into Semantic Graphs in Fig. 10. The s1t2 reference is the value of the temporal role of the sentence concept. The s1t2 elements has also a contained role to s1t. If we now replace s1t2 by an edge from sentence to s1t2, the relation between the incoming temporal and the outgoing contained role is broken. As a solution to this problem, we incorporate the sentence concept into their immediate roles, model them as concepts and add the transitive edges to these concepts (see Fig. 11)." 
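Before moving on to the UCCA conversion, here is a rough sketch of the AMR-to-Meta SRL++ mapping described earlier in this section: variables are dropped, concept labels become concept nodes, constants become entity leaves, and relations become role edges. The input is a toy, already-parsed nested structure rather than real PENMAN notation, and the frame names in the example fragment are only illustrative.

```python
# Toy AMR-like input: (concept, {role: child}) with variables already dropped.
def add_node(graph, kind, label):
    nid = len(graph["nodes"])
    graph["nodes"][nid] = {"kind": kind, "label": label}
    return nid

def amr_to_sg(node, graph):
    """Add an AMR-like node as a Concept; constants become Entities; relations become role edges."""
    concept, roles = node
    nid = add_node(graph, "concept", concept)
    for role, child in roles.items():
        child_id = (amr_to_sg(child, graph) if isinstance(child, tuple)
                    else add_node(graph, "entity", child))
        graph["edges"].append((nid, role.lstrip(":"), child_id))
    return nid

graph = {"nodes": {}, "edges": []}
# Fragment loosely based on the example sentence above; frame names are illustrative.
amr_to_sg(("need-01", {":ARG0": ("we", {}),
                       ":ARG1": ("borrow-01", {":ARG1": ("percentage-entity", {":value": 55})})}),
          graph)
```

The same traversal carries over to the UMR case, with the difference noted above that constants reached by document-level relations are kept as concepts so that they can carry outgoing edges.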
}, { "figure_ref": [], "heading": "Universal Conceptual Cognitive Annotation", "publication_ref": [ "b0", "b0" ], "table_ref": [], "text": "The main goal of the Universal Conceptual Cognitive Annotation (UCCA) is to graph-visualise and annotate natural languages using just semantic categories. Only semantic categories are actively annotated, while distributional regularities are learned implicitly by a statistical parser. The graph's representation of semantic differentiation is its primary concern rather than distributional regularities. The collection of relations and their arguments makes up the UCCA semantic representation. The relationships that each layer represents are specified. Each layer specifies the relations which he represents. The foundational layer is designed to cover the entire text so that each word is in at least one node. The nodes of the graphs are called \"units\". A unit may be either:\n1. A terminal or 2. Several elements that are jointly viewed as a single entity.\nA Non-terminal unit will be composed of a single relation and its arguments or it may contain secondary relations as well. The UCCA graph follows three main rules: (i) Each unit is a node, (ii) Descendants of non-terminal units are the sub-units, (iii) Non-terminal nodes \"only represent the fact that their descendants form a unit so they do not bear any features\" (Abend and Rappoport, 2013).\nIn UCCA, the foundational layer views the text as a collection of \"Scenes\", which describes \"some movement or action, or a temporally persistent state\" and \"one main relation, which is the anchor of the Scene\" (Abend and Rappoport, 2013)." }, { "figure_ref": [], "heading": "From UCCA to Meta SRL++", "publication_ref": [], "table_ref": [], "text": "From a structural point of view, UCCA graphs consist of unlabelled non-terminal nodes, of labelled edges and of terminal nodes that consist of smaller text units (e.g. words). These graphs can be converted to Meta SRL++ by replacing:\n• unlabelled non-terminal nodes by a single concept (e.g. UCCA.Unit) that is always the same " }, { "figure_ref": [], "heading": "Conclusion & Further Work", "publication_ref": [], "table_ref": [], "text": "We argued that one reason for the lack of resources for deep semantic analysis is the lack of a common, uniform representation scheme for deeper seman-tics that is able to represent all aspects of semantics. We further discussed the idea that the reason for this lack of such a representation scheme is the lack of a common modelling scheme that allows to model semantics. We presented Meta SRL++, our proposal for a uniform modeling scheme for all types of semantic information. We demonstrated how our modelling scheme can be used to convert two heterogeneous semantic data examples into a common format that can be used to export and import semantic data. We explained related work and what the novelty of our approach compared to these approaches is.\nIn the future, we plan to extend Meta SRL++ to SRL++, a semantic representation scheme based on Meta SRL++. To that end we foresee a way to define concepts and to establish an infrastructure to browse and edit existing concept definitions and contribute new ones. We think that a separation between basic and composed concepts (the latter consisting of basic and other composed concepts) will allow for an efficient usage of SRL++-encoded semantics by applications. 
Finally, we want to examine approaches to create a minimal set of basic concepts that offers a viable basis for covering a large semantic space. We hope that this will serve as a step towards a larger number of semantic resources and tools, as well as a step towards better neural semantic representations." } ]
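To make the modelling scheme concrete in code, one possible minimal encoding of the Semantic Graph elements from Section 2 (concept nodes with named roles, entity leaves with a value and class list) is sketched below, applied to the right-hand subgraph of the example in Section 2.3. The class layout is purely illustrative; in particular, it is not the XML read/write library mentioned in Section 3.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Union

@dataclass
class Entity:                    # leaf node: a value plus optional class concepts
    value: str
    classes: List[str] = field(default_factory=list)

@dataclass
class Concept:                   # inner node: concept name plus role -> node edges
    name: str
    roles: Dict[str, "Node"] = field(default_factory=dict)
    # Indexed roles can simply be stored under string keys "1", "2", ...

Node = Union[Concept, Entity]

# Right-hand subgraph of the example in Section 2.3: a Room at the Bottom of a Well,
# lit to degree 4 on a 5-level scale. Bottom and Lighting share the same Room node.
room = Concept("Room")
bottom = Concept("Bottom", roles={"Container": Concept("Well"), "Contained": room})
lighting = Concept("Lighting", roles={"Object": room,
                                      "Degree": Entity("4", ["5-level degree"])})
semantic_graph = [bottom, lighting]
```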
Despite enormous progress in Natural Language Processing (NLP), our field is still lacking a common deep semantic representation scheme. As a result, the problem of meaning and understanding is typically sidestepped through simpler, approximative methods. This paper argues that in order to arrive at such a scheme, we also need a common modelling scheme. It therefore introduces Meta SRL++, a uniform, language- and modality-independent modelling scheme based on Semantic Graphs, as a step towards a common representation scheme, as well as a method for defining the concepts and entities that are used in these graphs. Our contribution is twofold. First, we illustrate Meta SRL++ through concrete examples. Secondly, we discuss how it relates to existing work in the field.
Meta SRL++: A Uniform Scheme for Modelling Deeper Semantics
[ { "figure_caption": "Figure 2 :2Figure 2: Example Raw Data from a Knowledge Graph.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Semantic Graph in Meta SRL++", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Semantic Graph", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Example Sentence as AMR", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Example Sentence as graph (AMR)", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Example Sentence as Meta SRL++", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "•labelled edges by labelled edges • terminal nodes by Entities of a suited class with the text unit as a value Using this recipe, e.g. the first sentence of Fig. 12 can be converted to the Semantic Graph in Fig. 13.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FigureFigure 12 :12Figure 10: Snipet Conversion", "figure_data": "", "figure_id": "fig_7", "figure_label": "12", "figure_type": "figure" } ]
Fritz Hohl; Nianheng Wu; Martina Galetti; Remi Van Trijp
[ { "authors": "Omri Abend; Ari Rappoport", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "UCCA: A semantics-based grammatical annotation scheme", "year": "2013" }, { "authors": "Collin F Baker; Charles J Fillmore; John B Lowe", "journal": "", "ref_id": "b1", "title": "The Berkeley FrameNet project", "year": "1998" }, { "authors": "Laura Banarescu; Claire Bonial; Shu Cai; Madalina Georgescu; Kira Griffitt; Ulf Hermjakob; Kevin Knight; Philipp Koehn; Martha Palmer; Nathan Schneider", "journal": "", "ref_id": "b2", "title": "Abstract meaning representation for sembanking", "year": "2013" }, { "authors": "Katrien Beuls; Paul Van Eecke; Sophie Vanja; Cangalovic", "journal": "Linguistics Vanguard", "ref_id": "b3", "title": "A computational construction grammar approach to semantic frame extraction", "year": "2021" }, { "authors": "Inès Blin", "journal": "", "ref_id": "b4", "title": "Building a French revolution narrative from wikidata", "year": "2022" }, { "authors": "Charles J Fillmore", "journal": "Annals of the New York Academy of Sciences", "ref_id": "b5", "title": "Frame Semantics and the Nature of Language", "year": "1976" }, { "authors": "Marvin Minsky", "journal": "", "ref_id": "b6", "title": "A Framework for Representing Knowledge", "year": "1975" }, { "authors": "Martha Palmer; Dan Gildea; Paul Kingsbury", "journal": "Computational Linguistics", "ref_id": "b7", "title": "The Proposition Bank: A corpus annotated with semantic roles", "year": "2005" }, { "authors": "David Rumelhart", "journal": "", "ref_id": "b8", "title": "Schemata: The Building Blocks of Cognition", "year": "1980" }, { "authors": "Nathan Schneider; Tim O 'gorman; Jeffrey Flanigan", "journal": "", "ref_id": "b9", "title": "Amr tutorial", "year": "2015" }, { "authors": "", "journal": "", "ref_id": "b10", "title": "Foundations for Meaning and Understanding in Human-centric AI", "year": "2022" }, { "authors": "Lucy Vanderwende; Arul Menezes; Chris Quirk", "journal": "", "ref_id": "b11", "title": "An amr parser for english, french, german, spanish and japanese and a new amr-annotated corpus", "year": "2015" }, { "authors": "Nianwen Xue; Ondrej Bojar; Jan Hajic; Martha Palmer; Zdenka Uresova; Xiuhong Zhang", "journal": "", "ref_id": "b12", "title": "Not an interlingua, but close: Comparison of english amrs to chinese and czech", "year": "2014" } ]
[]
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b23", "b31", "b19", "b21", "b9", "b22", "b18", "b10", "b0", "b14", "b28", "b25" ], "table_ref": [], "text": "Video understanding tasks such as action recognition have shown tremendous progress in the recent years towards * Work done as a NEC Labs Intern [12,24,32,20,22,10,23,19,11]. These methods are often expensive due to the large amount of computation involved in processing videos. However, many applications especially those in AR/VR often function in resource-constrained settings and often limit to using keypoint based human pose information that can be captured efficiently using hardware sensors. A major draw back of keypoint based methods is that they miss contextual information reducing the overall accuracy.\nTo address this problem, we develop a method that uses object and human keypoints to understand videos. By integrating object keypoints in our action recognition pipeline, we can recover the scene context information that is lost by using human keypoints. We propose capturing object keypoint information using the Pavlidis [1] algorithm over an existing real-time segmen-tation method [15]. This information can also be alternatively obtained from hardware sensors such as WiFi or radar based sensors [29,26]. This generates significant keypoint data over multiple frames that can be difficult to learn. Hence, we structure the joint and keypoint information in intermediate space using a transformer based architecture with joint and positional embeddings that allow KeyNet to recover contextual information and train for the action recognition and localization tasks.\nIn this setting, our method is not only is capable of preserving the advantage of the computation efficiency of keypoints-based methods but is also able to compensate for the loss of context information with our proposed context-aware structure representation. The primary contributions of our work can be summarized as three aspects: 1) We propose a context-aware structure representation using human and object keypoints in videos. To the best of our knowledge, it is the first work that utilizes the sub-sampled keypoints to provide context features of objects. 2) We propose KeyNet, a transformerbased network that successfully model the higher-order interaction of various actors and object in videos. 3) On various datasets, we demonstrate that our KeyNet architecture achieves superior performance as compared to the prior convolution-based methods and is an efficient video understanding method for real-world applications." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b16", "b21", "b9", "b22", "b3", "b1", "b7", "b27", "b30", "b3", "b18", "b10", "b8", "b12", "b19" ], "table_ref": [], "text": "We discuss and compare against other video understanding methods.\nRGB and multi-modal video understanding Recent work on action recognition often use 2D/3D convolutions, optical flow and transformer based methods to learn relationships over spatio-temporal elements. For example, a large body of existing work uses the output from convolution blocks and aggregates the intermediate features. This representation is then pooled, along with LSTM or other building blocks to learn the temporal information. In contrast, 3D convolution methods, learn the temporal information with the spatial information. 
For example, some proposed methods, [17,22,10,23] use a short video snippet as an input and use a series of deep convolution networks to capture the spatial-temporal features. Other methods such as I3D networks [4], and their generated features have been used for variety of video understanding tasks. For example, SlowFast networks combines the knowledge between fast frame rate and slow frame rate video to obtain high accuracy [2]. Multi-stream-based methods [8,28,31,4,19,11] combine information from video frames and other modality, such as optical flow, human pose, and audio. They use multiple streams of deep convolution networks to model the knowledge from different modalities and leverage fusion techniques [9] to integrate the knowledge for action recognition. There are several methods that capture human-object interactions in the RGB space, often explicitly using the object information in the scene by using an object detector or convolutional feature maps to capture extract objects in the scene [13,20]." }, { "figure_ref": [], "heading": "Keypoint-based Methods Existing work over", "publication_ref": [ "b31", "b17", "b20", "b6", "b5", "b24", "b29" ], "table_ref": [], "text": "Keypoint-based action recognition uses the skeletonbased action recognition. Existing work follows the classification by detection approach or a top-down approach. Here, the first step is to estimate the keypoints and then use this information to create \"video tracklets\" of human skeletons, learning classification or localization tasks over this intermediate representation. For example, ST-GCN [32] uses graph convolution networks to jointly learn the relation of each human body part across each actor. Other work [18,21] extend this work with addition edges to reasonably aggregate the spatial temporal information in videos. Early work in this area follows the RGB methods, extracting the pose features and then using RNN/LSTMs to learn the temporal information [7,6,25]. These methods do not capture any object information, and often limit to basic human pose-based action classes such as \"walking\", \"dancing\" etc. Another work, captures the object interactions but uses a separate RGB stream to learn objects and fuses it using a relational network [30].\nOur Work. Our work intends to design a contextaware structure representation for videos that is aware of not only actors but also the interactive objects. Distinct from Non-keypoints-based action recognition, our method only uses sparse information in videos as input and models the knowledge with a lightweight model, therefore, make it more computationally efficient. Different from the skeleton-based methods, we build our structure representation by using the human and objects key-points, early in our video representations. This allows the network to learn human-object interaction in the keypoint space, but introduces additional complexity in the intermediate space, where a large amount of keypoint information is introduced. In the next section, we introduce, how we structure this intermediate space to allow transformer networks learn from this information." }, { "figure_ref": [ "fig_1" ], "heading": "KeyNet", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the overall design of our proposed Keynet as shown in Figure 2. 
Our primary goal is to validate the hypothesis that using sparse keypoints can generate a representation that is sufficient to learn the interactions between each actor and the background context information.\nThe model consists of three stages and establishes a tubelet based action recognition pipeline. First, we estimate a set of human and object keypoints for T frames video clip. Second, the Keypoints Embedding Network projects the keypoints to more representative features by introducing positional embeddings that introduces position, segment and temporal information to the keypoints. Finally, an Action Tagger Network learns the higher-order interactive features and assigns action tags for each actor or predict the action label for the video, depending on the dataset. We introduce the proposed Action Representation in Section 3.1, the Keypoints Embedding Network in Section 3.2, and the Action Tagger Network is described in Section 3.3. To obtain a scene sequence D for action representation, we proposed a keypoints sampling method to extract N human tracklets as H i for actor features and M objects keypoints as O j for contextual features." }, { "figure_ref": [ "fig_2" ], "heading": "Action Representation", "publication_ref": [ "b26", "b1", "b26", "b14", "b0" ], "table_ref": [], "text": "The object keypoints are introduced to compensate the context information loss in the scene information, often observed in keypoints-based methods.\nHuman Tracklet. To get N human tracklets, we combine a person detector with simple IOU-based tracker, to build a person tubelets over T frames. Then, we use the HR-Net keypoints estimator is used to extract P human joints information for each detected person over T frames [27]. More precisely, for our person detector, we follow previous works [2] to apply Faster R-CNN with ResNeXt-101-FPN backbone. This detector is pretrained on COCO and fine-tune on AVA with mAP 93.9AP@50 on the AVA validation set. Regarding keypoints Estimator, we use HRNet [27] pretrained on PoseTrack with 81.6% AP on PoseTrack18 validation set. By selecting the top N person based on the detection confidence score, we can form a human tracklet S sequence with N * P * T keypoints.\nObject Keypoint. We extract object keypoints is to provide contextual features in scenes to enhance the performance for those object interactive actions. We proposed that human-object interactive action can be modeled by a set of class-agnostic keypoints with only the shape and spatial information about the object. Therefore, we extract the object keypoints by performing a subsampling along the contour of the mask detected by Mask R-CNN [15]. The flowchart for extracting keypoints is shown in Figure 3 More specifically, for each video clip, we apply Mask R-CNN on its keyframe to collect the class-agnostic masks and for each object mask. For contour tracing, we leverage the Theo Pavlidis' Algorithm [1] to obtain a set of keypoints around each detected object. Finally, by applying an equal distance sampling on the contour, we extract the keypoints that have the same interval along the contour of the detected mask. Hence, by selecting the top M object with the highest confidence scores, we can obtain B with K * P keypoints for each T frames video clips." 
}, { "figure_ref": [ "fig_4" ], "heading": "Keypoints Embedding Network", "publication_ref": [ "b23", "b23", "b23", "b23" ], "table_ref": [], "text": "In this section, we describe how to build an intermediate structured representation using no RGB data, only person and object keypoints, to perform action classification. To effectively learn actions from a keypoint representation, we need information about the spatial correlation between joints as well as about how these joints evolve through time. Therefore, we embed this information into the scene sequence by first converting each keypoint in a scene sequence into a sequence of tokens and linearly projecting each token into an embedding E, a learnable lookup table that models the relationship of each keypoint.
Tokenization: The goal of tokenization is to attach extra spatial-temporal information to each keypoint and convert it into a more representative form for learning the interactions between keypoints. To achieve this goal, we extend the prior tokenization techniques [24] by adding an additional instance token to the embedding representation for our experiments. For the Position Token, Type Token, and Segment Token, we follow previous work [24] to provide each keypoint with representations of its spatial location, temporal location index, and unique body-type information (e.g., head, shoulder, and wrist), respectively. Our additions of extending the Segment Token to T time frames and introducing an Instance Token that indicates the id of the tracklet a keypoint belongs to in the current scene allow the network to learn localization information in the scene. We generalize previous tokenization methods from pair-wise matching to jointly providing information about the spatial-temporal correlation of multiple instances at the same time. In the equations below, we describe how to convert a scene sequence into the four types of tokens.
Position Token [24]: The Position Token is the down-sampled spatial location in the original image and gives a unique representation of each pixel coordinate. For a keypoint P, we write its Position Token as ρ, whose range lies in [1, W] x [1, H]. A general expression for the Position Token sequence is
{ρ_{p^t_{1,1}}, ρ_{p^t_{2,1}}, ..., ρ_{p^t_{1,2}}, ρ_{p^t_{2,2}}, ..., ρ_{p^{t-T}_{K,N-1}}, ..., ρ_{p^{t-T}_{K,N}}}    (1)
Type Token [24]: The Type Token represents the characteristic part of the human body (e.g., head, right shoulder, and left wrist). It ranges over [1, K], where K is the number of keypoints. It provides knowledge of how each part of the human body evolves in the keypoint sequence, which is essential to achieving high accuracy at low resolutions. We assign the Type Token of a keypoint P_{p^t_{k,n}} as k, and the Type Token for the n-th person at timestamp t can be written as k_{p^t_n}. A general expression for the Type Token is shown below
{1_{p^t_1}, 2_{p^t_1}, ..., 1_{p^t_2}, 2_{p^t_2}, ..., (K-1)_{p^{t-T}_N}, ..., K_{p^{t-T}_N}}    (2)
Segment Token: The Segment Token provides the difference between the timestamp of a keypoint p^t and the timestamp of the keyframe. In our modelling of the video scene sequence, the Segment Token ranges over [1, T], where T is the total number of frames in a video clip. We assign the Segment Token of a keypoint P_{p^t_{k,n}} as t, and the Segment Token for the k-th keypoint of the n-th person can be written as t_{p^k_n}.
The general expression of the Segment Token is shown in Equation 3
{1_{p^t_{1,1}}, 1_{p^t_{2,1}}, ..., 1_{p^t_{1,2}}, 1_{p^t_{2,2}}, ..., T_{p^{t-T}_{K,N-1}}, ..., T_{p^{t-T}_{K,N}}}    (3)
Instance Token: The Instance Token provides the instance correlation for a keypoint P^t within a frame. It serves a similar role to the Segment Token, providing spatial instead of temporal information. We assign the Instance Token of a keypoint P_{p^t_{k,n}} as n, and the Instance Token for the k-th keypoint at timestamp t can be written as n_{p^t_k}. The general expression of the Instance Token is shown in Equation 4
{1_{p^t_1}, 1_{p^t_2}, ..., 2_{p^t_1}, 2_{p^t_2}, ..., (N-1)_{p^{t-T}_K}, ..., N_{p^{t-T}_K}}    (4)
Here we define P_{p^t_{k,n}} as the k-th keypoint of the n-th person at timestamp t. The visualization of our proposed tokenization methods is demonstrated in Figure 4. After tokenizing the scene sequence into the four types of tokens mentioned above, we linearly project each token through four types of embedding matrices, and the output is obtained by summing the information of each type of token. That is, E = E_Position + E_Type + E_Segment + E_Instance. Finally, the Action Tagger Network takes the embedding E as input to perform actor-level action recognition for each token."
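As a concrete illustration of the token-embedding step just described, the minimal sketch below sums four learnable embedding tables, mirroring E = E_Position + E_Type + E_Segment + E_Instance. The class name, embedding dimension, and vocabulary sizes are assumptions chosen for illustration and are not taken from the authors' implementation.

```python
import torch
import torch.nn as nn

class KeypointTokenEmbedding(nn.Module):
    """Sum of position/type/segment/instance embeddings for a scene sequence."""

    def __init__(self, w: int = 32, h: int = 24, num_types: int = 17,
                 num_frames: int = 10, num_instances: int = 8, dim: int = 128):
        super().__init__()
        # One learnable lookup table per token type (sizes are illustrative).
        self.pos = nn.Embedding(w * h, dim)              # down-sampled pixel location
        self.typ = nn.Embedding(num_types + 1, dim)      # body part / object point id
        self.seg = nn.Embedding(num_frames + 1, dim)     # frame index within the clip
        self.ins = nn.Embedding(num_instances + 1, dim)  # tracklet / instance id

    def forward(self, pos_tok, typ_tok, seg_tok, ins_tok):
        # Each *_tok is a LongTensor of shape (batch, sequence_length).
        return (self.pos(pos_tok) + self.typ(typ_tok)
                + self.seg(seg_tok) + self.ins(ins_tok))

if __name__ == "__main__":
    emb = KeypointTokenEmbedding()
    B, L = 2, 17 * 10 * 3  # e.g. 17 joints x 10 frames x 3 instances
    toks = [torch.randint(0, n, (B, L)) for n in (32 * 24, 17, 10, 8)]
    print(emb(*toks).shape)  # torch.Size([2, 510, 128])
```

Summing rather than concatenating the four embeddings keeps the sequence length unchanged, so the downstream transformer still sees one token per keypoint.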
}, { "figure_ref": [], "heading": "Action Tagger Network", "publication_ref": [ "b4" ], "table_ref": [], "text": "The goal of the Action Tagger Network is to learn the spatial-temporal correlation of each keypoint P^t in the scene sequence D and to make predictions for the given downstream tasks (e.g., action recognition and action localization). To achieve this, similar to the sentence-level and token-level classification sub-tasks in BERT, we feed the embedding vector E to a series of self-attention blocks to model the higher-order interactions among all keypoint embedding vectors. Then, we feed this representation to a fully connected layer, a learnable linear projection, to make either sentence-level or token-level predictions.
Transformer. In our implementation, the Transformer creates three vectors from each of the input vectors (in our case, the embedding of each keypoint). Hence, for each keypoint embedding, we create a projection for a Query vector (Q), a Key vector (K), and a Value vector (V). Next, we score every keypoint of the scene sequence S against the other keypoints by taking the dot product of its query vector (Q) with the key vectors (K) of the respective keypoints. Finally, the score is normalized by √D followed by a softmax operation. By multiplying each value vector (V) by the softmax score, the result is obtained by summing up the weighted value vectors. The self-attention equation is as follows:
Attention(Q, K, V) = softmax(QK^T / √D) V
Hierarchical Transformer Encoder. In our experiments, we find that as the length of the input sequence increases, the computational complexity slows down the learning efficiency of the transformer due to its quadratic processing time, i.e., O(n^2) for a sequence with n elements. Hence, to address this quadratic inefficiency for long sequences, instead of learning the self-attention weights of all keypoints in a single Transformer, we replace it with our proposed Hierarchical Transformer Encoder, which learns the action representation in a hierarchical manner. Given the keypoint embedding features E_{ρ^t_n}, a Keypoints Encode Transformer first encodes them into a list of action-level representations. We follow [5] and take the first representation h_{ρ^t_n} as the feature for an actor; the actor-level action classification is finally performed by linearly projecting d_{ρ_n} to the number of total action classes in the given dataset.
E_{ρ^t_n} = (e_{ρ^t_n,1}, e_{ρ^t_n,2}, ..., e_{ρ^t_n,K})
R_{ρ^t_n} = (r_{ρ^t_n,1}, r_{ρ^t_n,2}, ..., r_{ρ^t_n,K}),  r_{ρ^t_n} = h_{ρ^t_n} + P_{p^t_{k,n}} + T_{p^k_n},  d_{ρ_n} = Transformer(R_{ρ_n})
where P_{p^t_{k,n}} is the Instance Token and T_{p^k_n} is the Segment Token." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We evaluate the effectiveness of our approach on two tasks: action recognition and action detection. For action recognition, we report the performance on the JHMDB and Kinetics datasets in terms of the Top-1 accuracy score. For action localization, we report the performance on the AVA dataset, evaluating the mean average precision (mAP). The content of this section is organized as follows: First, we introduce the subsets of the datasets used in our experiments in Section 4.1. Then, we describe the implementation details in Section 4.2. Finally, we report the performance of action recognition and action detection in Sections 4.3 and 4.4, respectively." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b15", "b31" ], "table_ref": [], "text": "JHMDB Dataset [16]. The JHMDB dataset is a pose-action recognition dataset that consists of 659 training videos and 267 testing videos. It provides rich annotations, including 15 joint positions, puppet masks, and puppet flow, which makes it a good fit for evaluating KeyNet, which utilizes the evolution of human joints as the major information for recognizing human actions. In our experiments, we use this dataset as a starting point to validate whether using only keypoints as the input modality is feasible for transformer-based architectures to recognize simple person-movement actions. For evaluation, we report the performance of action recognition in terms of accuracy on the first split of the JHMDB dataset." }, { "figure_ref": [], "heading": "Kinetics-skeleton Dataset", "publication_ref": [ "b3", "b31", "b2", "b13", "b13" ], "table_ref": [], "text": "The Kinetics-skeleton dataset is collected by providing extra annotations of human skeleton keypoints on the Kinetics [4] dataset. Originally, the Kinetics dataset only provides coarse-grained action labels over the entire sequence. Yan et al. [32] use the publicly available human pose estimator OpenPose [3] to extract 18 keypoints for the top two persons in every scene, ranked by the summation of joint confidence scores. In our experiments, we use this dataset to validate whether the proposed KeyNet can recognize actions from keypoint annotations of different human body parts. For evaluation, we manually select 16 action categories and report the performance in terms of accuracy.
AVA Dataset [14]. The Atomic Visual Actions (AVA) v2.1 dataset consists of 211K, 57K, and 117K video clips for the training, validation, and test sets. The center frame, or keyframe, is taken at 1 FPS from 430 15-minute movie clips with dense actor-level annotations of all the bounding boxes and one or more of the 80 action classes. For evaluation, our goal is to focus on validating the effectiveness and feasibility of the keypoint-based approach on multiple actors. We sub-sample this dataset for two reasons. First, this dataset is heavily imbalanced, and even though RGB data can be augmented to handle class imbalance, improving class imbalance for pose information is rather difficult.
Second, we identify the classes where scene information provides the largest utility and test our methods specifically on those classes. Hence, to ease the highly imbalanced nature of the AVA dataset, we manually select the 20 action classes that have more than 2000 samples, including 8 classes of person movement actions (P), 4 classes of person-person interactive actions (PP), and 8 classes of person-object manipulation actions (PO). For evaluation, we follow the official protocol of frame-level mean average precision (frame-AP) at an IOU threshold of 0.5, as described in [14]." }, { "figure_ref": [], "heading": "Experiment Details", "publication_ref": [], "table_ref": [], "text": "In this subsection, we provide our experiment details, including our hyperparameter settings and the data preprocessing procedure used in evaluating KeyNet. We use Adam as the optimizer and design a learning rate schedule with a linear warmup. The learning rate warms up to the initial learning rate η over the first 0.01 fraction of the total training iterations and then decays linearly to 0 by the end of training. For the action localization task on the AVA dataset, we choose N = 5 human tracklets and M = 3 object masks to form the scene sequence and optimize our KeyNet model with a batch size of 32. For the action recognition task, we choose N = 5 human tracklets and M = 1 object mask to form the scene sequence and optimize our KeyNet model with a batch size of 64.
Data Augmentation. In our experiments, we found that data augmentation is a critical component for optimizing the performance of KeyNet. Without the augmentation techniques, KeyNet tends to easily overfit on the majority classes (e.g., the stand, sit, talk to, and watch actions in the AVA dataset)."
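The warmup-then-linear-decay schedule described in the experiment details can be written with a standard PyTorch LambdaLR, as sketched below. The placeholder model, the iteration count, and the helper name are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

def warmup_linear_decay(total_iters: int, warmup_frac: float = 0.01):
    """Scale factor: 0 -> 1 over the warmup fraction, then linearly back down to 0."""
    warmup_iters = max(1, int(total_iters * warmup_frac))

    def fn(step: int) -> float:
        if step < warmup_iters:
            return step / warmup_iters
        remaining = max(1, total_iters - warmup_iters)
        return max(0.0, (total_iters - step) / remaining)

    return fn

if __name__ == "__main__":
    model = torch.nn.Linear(128, 20)               # placeholder model
    optimizer = Adam(model.parameters(), lr=1e-4)  # lr here plays the role of eta
    total_iters = 10_000
    scheduler = LambdaLR(optimizer, lr_lambda=warmup_linear_decay(total_iters))
    for step in range(total_iters):
        # ... forward pass, backward pass, and optimizer.step() would go here ...
        scheduler.step()
```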
}, { "figure_ref": [], "heading": "Performance on Action Recognition", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Since recognizing action categories requires awareness of both the spatial and temporal domains, we first conduct experiments on the small-scale JHMDB dataset to determine the best spatial-temporal configuration for our proposed KeyNet. Then we generalize the task to the Kinetics dataset with more complex actions and validate the effectiveness of using object keypoints to provide context features in videos.
Spatial Resolution. Spatial resolution is a key factor for recognizing human actions at small scales. Decreasing the spatial resolution causes the network to lose fine-grained information but also reduces the computation cost. To determine the trade-off between recognition performance and computation cost, we vary the resolution of the Position Token and report the performance and the computation cost of KeyNet. According to the results in Table 2, the optimal resolution for the Position Token is 32x24.
Temporal Sequence Length. The temporal sequence length indicates the number of tokens along the temporal dimension, which maps to the total number of frames that the network processes from the input. Especially for actions with slow motion (e.g., tai chi), it is necessary to increase the temporal sequence length to let our model fully capture the features of the entire action; however, this increases the computation. In Table 3, we compare different configurations of the temporal sequence length for our proposed KeyNet and find that the one with a longer temporal sequence tends to have worse performance, indicating the difficulty of transformers in modelling longer sequences. The longer sequence length prevents the self-attention layers in the Transformer unit from learning representative attention vectors for each type of action. Therefore, in our following experiments, we fix the number of input frames at 10, which gives a lower sequence length and the best performance." }, { "figure_ref": [], "heading": "Effectiveness of Object Keypoints", "publication_ref": [], "table_ref": [], "text": "To demonstrate the effectiveness of our strategy of using object keypoints to compensate for the missing context information, we conduct experiments on the JHMDB and Kinetics-16 datasets, shown in Table 4. We use the Kinetics-16 dataset to evaluate object-based action recognition, while JHMDB is collected to evaluate human body-part motion or single-person actions. Our results show that the proposed method improves performance on the Kinetics-16 dataset (+4.5%) but also hurts performance on the JHMDB dataset by a small margin (-0.46%). This occurs because the JHMDB dataset has been designed for single-person actions, often with little to no correlation with objects in the scene. As a result, for the majority of the classes, this additional information adds complexity to the input space and makes learning difficult." }, { "figure_ref": [], "heading": "Performance on Action Detection", "publication_ref": [], "table_ref": [], "text": "In this subsection, we describe how to generalize our proposed method to the action localization scenario, predicting an action for each actor in the scene sequence D. This is analogous to the relationship between sentiment analysis (sentence-level predictions) and part-of-speech tagging (token-level predictions) in the natural language processing field. The implementation can easily be done by replacing the last fully connected layer with a multi-label prediction layer. However, we discover that this poses two challenges: first, how to provide sufficient information to learn the complex interactions across each tracklet; second, how to boost the learning efficiency over the long sequence of keypoints extracted from multi-person and multi-object annotations.
1 Frame Per Second. For the first challenge, the most intuitive way to provide additional information is to increase the temporal footprint. As a result, we must address the issue of learning efficiency mentioned in Section 4.3. Therefore, instead of collecting more frames in a scene sequence, we decrease the sampling rate in videos. Our proposed workflow is as follows: First, we detect and estimate keypoints for human instances in all video frames. Then, we run a tracking algorithm for each of the detected bounding boxes, starting from the keyframes. Finally, we acquire tracklets with different temporal footprints by sub-sampling the frames with specific intervals. We report and analyze the performance of KeyNet with different temporal footprint settings to demonstrate the effectiveness of our proposed method. As shown in Table 5, decreasing the FPS from 5 to 1 leads to +3.24% for person movement actions (P) and +1.8% for person movement and person-person interaction actions (PP).
Hierarchical Self-Attention Layer. In Table 6, we show that learning the person-level and actor-level representations in a hierarchical manner improves performance over a single flat Transformer. We also provide the performance of different configurations of the transformer architecture. According to Table 7, the best configuration uses 6 heads, 4 hidden layers, and 128 hidden units. We follow this optimal setting for the following experiments.
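A schematic sketch of the hierarchical (keypoint-level, then actor-level) transformer encoding evaluated above is shown below. It keeps the 4 layers and 128 hidden units of the best configuration in Table 7 but uses 4 attention heads so that the head count divides the embedding dimension; the per-actor feature is taken from the first token, following the description in Section 3.3, while the class structure and the prediction head are simplified assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class HierarchicalKeypointEncoder(nn.Module):
    """Encode each actor's keypoint tokens, then model interactions across actors."""

    def __init__(self, dim: int = 128, heads: int = 4, layers: int = 4,
                 num_classes: int = 20):
        super().__init__()
        # Stage 1: self-attention over the keypoint tokens of a single actor.
        self.keypoint_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                       dim_feedforward=dim, batch_first=True),
            num_layers=layers)
        # Stage 2: self-attention over the per-actor summary vectors.
        self.actor_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                       dim_feedforward=dim, batch_first=True),
            num_layers=layers)
        self.head = nn.Linear(dim, num_classes)  # per-actor action tags

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, actors, keypoint_tokens, dim)
        b, n, k, d = x.shape
        h = self.keypoint_encoder(x.reshape(b * n, k, d))
        actor_feats = h[:, 0].reshape(b, n, d)   # first token as the actor feature
        ctx = self.actor_encoder(actor_feats)    # context-sensitive actor features
        return self.head(ctx)                    # (batch, actors, num_classes)

if __name__ == "__main__":
    model = HierarchicalKeypointEncoder()
    dummy = torch.randn(2, 5, 17 * 10, 128)  # 5 actors, 17 joints x 10 frames
    print(model(dummy).shape)  # torch.Size([2, 5, 20])
```

Splitting the attention into two shorter sequences is what avoids the quadratic cost of attending over all actors' keypoints at once.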
Object Keypoints. We evaluate the effectiveness of object keypoints on all of the selected actions in the AVA dataset, including the person movement (P), person-person interaction (PP), and person-object manipulation (PO) action categories. Based on the results in Table 8, the model that uses object keypoints to compensate for the loss of context information has superior performance to the one without object keypoints." }, { "figure_ref": [], "heading": "Context Information Recovery", "publication_ref": [ "b26", "b14" ], "table_ref": [ "tab_8" ], "text": "To demonstrate the effectiveness of using the object keypoints to recover context information in videos, we design a transformer-based RGB baseline and compare its recognition performance with the keypoint-based method. For the RGB baseline, we directly take the image-level features from HRNet [27] and Mask R-CNN [15] as the actor and context features. Then we feed the actor and context features to the same Action Tagger Network as our KeyNet. Table 9 clearly shows that KeyNet, using only keypoints, can achieve better performance than the RGB baseline. Based on our results, we believe that using human and object keypoints as a structured representation has the potential to fully recover the essential context information for action recognition." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we demonstrate that using object-based keypoint information can compensate for the accuracy loss due to the missing context information in keypoint-based methods. We also show a method to extract object keypoints from segmentation information and build a structured representation together with human keypoints from videos." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Our proposed KeyNet has superior performance to the RGB baseline, a method based on image-level information, and shows the potential of using only keypoints to recover essential context information for action recognition." } ]
Action recognition is an important problem that requires identifying actions in video by learning complex interactions across scene actors and objects. However, modern deep-learning based networks often require significant computation, and may capture scene context using various modalities that further increases compute costs. Efficient methods such as those used for AR/VR often only use human-keypoint information but suffer from a loss of scene context that hurts accuracy. In this paper, we describe an action-localization method, KeyNet, that uses only the keypoint data for tracking and action recognition. Specifically, KeyNet introduces the use of object based keypoint information to capture context in the scene. Our method illustrates how to build a structured intermediate representation that allows modeling higher-order interactions in the scene from object and human keypoints without using any RGB information. We find that KeyNet is able to track and classify human actions at just 5 FPS. More importantly, we demonstrate that object keypoints can be modeled to recover any loss in context from using keypoint information over AVA action and Kinetics datasets.
Learning Higher-order Object Interactions for Keypoint-based Video Understanding
[ { "figure_caption": "Figure 1 :1Figure 1: Left: RGB based action recognition Right: Proposed Human and object keypoint based action recognition", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Flowchart for our proposed KeyNet.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: We extract object keypoints over masks using Pavidilis algorithm", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "[1, W ],[1, H ]. It reduce the computation cost while preserving the spatial correlation of each keypoints in image. The general expression of Position Token is below, where ρ p t k n indicates the Position Token of the k th keypoint for the n th person in timestamp t.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The visualization for our proposed Position Token, Type Token, Segment Token and Instance Token. The x, y and z axis represents the value of Position Token, Segment Token and Type token respectively and the color denotes the value of Instance Token.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Scene Sequence. We designed the keypoints-based action representation in KeyNet as a scene sequence D where H i denotes the set of k h keypoints in the i th human tracklets and O j denotes the set of k o keypoints from the j th ob-", "figure_data": "jects.D = (H 1 , H 2 ...H N , O 1 , O 2 ..., O K )H i = (P 1 , P 2 , ..., P k h )O j = (P 1 , P 2 , ...P ko )", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "ρ1 , d ρ2 , ...d ρ N ) Finally, the actor level action classification is performed by linearly project d ρ n", "figure_data": "ρ t n 2 , ...e ρ t n K )eρ t n K = ρp t k n + k p t nh ρ t n = T ransf ormer(E ρ t n )where ρ p t k n is the Position Token and k p t n is the Typetoken.Then, an Actor Encode Transformer will encode the actor-level representation (h ρ 1 n , h ρ 2 n ...h ρ t-T N ) toobtain context sensitive actor-level representations.(d", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation Study of techniques for the data imbalance in AVA dataset. in AVA dataset). To solve this problem, we augment the training data with random flips, crops, and expand and further address the problem of the data imbalance with the W eightedRandomSampler provided by Pytorch to equally sampled action categories in each training iteration before the estimation step. As shown in Table 1, adding data augmentation and re-sampling techniques can lead to a +17.16% performance gain in terms of mean average precision.", "figure_data": "Action Type Data Aug. 
Weighted Sampler mAPP14.25P20.28P16.42P31.41", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experiments of temporal footprints on the JH-MDB dataset", "figure_data": "Token Resolution Accuracy32 * 2455.8164 * 4853.5596 * 7250.41128 * 9637.99Table 2: Experiments for token resolution on the JHMDBdatasetN Frames Sequence Length Token Size Accuracy1015032 * 2455.811522532 * 2450.54", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Experiments of using object keypoints to provide context features in action recognition tasks", "figure_data": "DatasetObject Keypoints AccuracyJHMDB55.81JHMDB55.35Kinetics-1645.40Kinetics-1649.90", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Effectiveness of our proposed hierarchical selfattention layer. Noted P denotes the person-movement actions in the AVA dataset.", "figure_data": "Input Modality FPS Temporal FootprintsPP + PPKeypoints52s28.1714.94Keypoints15s31.4116.73Table 5: Comparison of temporal footprint and inputmodality. P denotes the Person Movement actions andP P denotes Person-Person interactive actions in terms ofmean average precision (mAP)KeypointsAction Type mAPTransformerP26.85Hierarchical Self-attentionP31.41", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Experiments for the architecture searching for the proposed transformer-based architecture.", "figure_data": "N Heads N Hidden Hidden Size Int.SizeParam.mAP241281280.91 M 29.47441281280.91 M 29.45641281281.78 M30.5644642561.99 M 24.09441281280.91 M 29.45461281285.97 M 30.41Object Keypoints Action Type mAPP + PP + PO 11.23P + PP + PO 11.45", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Effectiveness of addressing object keypoints to provide contextual features.", "figure_data": "", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "According to our experimental results, we have validated The demonstration of context information recovery by comparing the performance of using full images and keypoints as input modality. P denotes the Person Movement actions and P P denotes Person-Person interactive actions in terms of mean average precision (mAP)", "figure_data": "Input Modality Action Type mAPRGBP18.12KeypointsP31.41RGBP + PP15.85KeypointsP + PP16.73RGBP + PP + PO 9.28KeypointsP + PP + PO 11.45", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" } ]
Yi Huang; Asim Kadav; Farley Lai; Deep Patel; Hans Peter Graf
[ { "authors": "Theo Pavlidis", "journal": "", "ref_id": "b0", "title": "Algorithms for Graphics and Image Processing in", "year": "1982" }, { "authors": "", "journal": "", "ref_id": "b1", "title": "Slowfast networks for video recognition", "year": "2019-10" }, { "authors": "Zhe Cao; Gines Hidalgo; Tomas Simon; Shih En Wei; Yaser Sheikh", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b2", "title": "OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields", "year": "2021" }, { "authors": "Joao Carreira; Andrew Zisserman", "journal": "", "ref_id": "b3", "title": "Quo vadis, action recognition", "year": "2017" }, { "authors": "Jacob Devlin; Ming Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Wenbin Du; Yali Wang; Yu Qiao", "journal": "", "ref_id": "b5", "title": "Rpan: An end-toend recurrent pose-attention network for action recognition in videos", "year": "2017" }, { "authors": "Yong Du; Wei Wang; Liang Wang", "journal": "", "ref_id": "b6", "title": "Hierarchical recurrent neural network for skeleton based action recognition", "year": "2015" }, { "authors": "Christoph Feichtenhofer; Axel Pinz; Andrew Zisserman", "journal": "", "ref_id": "b7", "title": "Convolutional Two-Stream Network Fusion for Video Action Recognition", "year": "2016-12" }, { "authors": "Christoph Feichtenhofer; Axel Pinz; Andrew Zisserman", "journal": "", "ref_id": "b8", "title": "Convolutional Two-Stream Network Fusion for Video Action Recognition", "year": "2016-12" }, { "authors": "Yutong Feng; Jianwen Jiang; Ziyuan Huang; Zhiwu Qing; Xiang Wang; Shiwei Zhang; Mingqian Tang; Yue Gao", "journal": "", "ref_id": "b9", "title": "Relation Modeling in Spatio-Temporal Action Localization", "year": "2021" }, { "authors": "Kirill Gavrilyuk; Ryan Sanford; Mehrsan Javan; G M Cees; Snoek", "journal": "", "ref_id": "b10", "title": "Actor-transformers for group activity recognition", "year": "2020" }, { "authors": "Rohit Girdhar; Joao Joao Carreira; Carl Doersch; Andrew Zisserman", "journal": "", "ref_id": "b11", "title": "Video action transformer network", "year": "2019-06" }, { "authors": "Georgia Gkioxari; Ross Girshick; Piotr Dollár; Kaiming He", "journal": "", "ref_id": "b12", "title": "Detecting and recognizing human-object interactions", "year": "2018" }, { "authors": "Chunhui Gu; Chen Sun; David A Ross; George Toderici; Caroline Pantofaru; Susanna Ricco", "journal": "", "ref_id": "b13", "title": "AVA A Video Dataset of Atomic Visual Actions", "year": "2018" }, { "authors": "Kaiming He; Georgia Gkioxari", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b14", "title": "Piotr Dollár, and Ross Girshick", "year": "2020" }, { "authors": "Hueihan Jhuang; Juergen Gall; Silvia Zuffi; Cordelia Schmid; Michael J Black", "journal": "", "ref_id": "b15", "title": "Towards understanding action recognition", "year": "2013" }, { "authors": "Ji Lin; Chuang Gan; Song Han", "journal": "", "ref_id": "b16", "title": "TSM: Temporal shift module for efficient video understanding", "year": "2019-10" }, { "authors": "Ziyu Liu; Hongwen Zhang; Zhenghao Chen; Zhiyong Wang; Wanli Ouyang", "journal": "", "ref_id": "b17", "title": "Disentangling and unifying graph convolutions for skeleton-based action recognition", "year": "2020" }, { "authors": "Chih-Yao Ma; Min-Hung Chen; Zsolt Kira; Ghassan Alregib; C 
V Mar", "journal": "", "ref_id": "b18", "title": "Exploiting Spatiotemporal Dynamics for Activity Recognition", "year": "" }, { "authors": "Chih-Yao Ma; Asim Kadav; Iain Melvin; Zsolt Kira; Ghassan Alregib; Hans Peter Graf", "journal": "", "ref_id": "b19", "title": "Attend and Interact: Higher-Order Object Interactions for Video Understanding", "year": "" }, { "authors": "Yuya Obinata; Takuma Yamamoto", "journal": "", "ref_id": "b20", "title": "Temporal Extension Module for Skeleton-Based Action Recognition", "year": "2021" }, { "authors": "Junting Pan; Siyu Chen; Mike Zheng Shou; Yu Liu; Jing Shao; Hongsheng Li", "journal": "", "ref_id": "b21", "title": "Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization", "year": "2020" }, { "authors": "Hao Shao; Shengju Qian; Yu Liu", "journal": "", "ref_id": "b22", "title": "Temporal interlacing network", "year": "2020" }, { "authors": "Michael Snower; Asim Kadav; Farley Lai; Hans Peter Graf", "journal": "", "ref_id": "b23", "title": "15 Keypoints Is All You Need", "year": "2019" }, { "authors": "Sijie Song; Cuiling Lan; Junliang Xing; Wenjun Zeng; Jiaying Liu", "journal": "", "ref_id": "b24", "title": "An end-to-end spatio-temporal attention model for human action recognition from skeleton data", "year": "2017" }, { "authors": "Fei Wang; Stanislav Panev; Ziyi Dai; Jinsong Han; Dong Huang", "journal": "", "ref_id": "b25", "title": "Can wifi estimate person pose?", "year": "2019" }, { "authors": "Jingdong Wang; Ke Sun; Tianheng Cheng; Borui Jiang; Chaorui Deng; Yang Zhao; Dong Liu; Yadong Mu; Mingkui Tan; Xinggang Wang; Wenyu Liu; Bin Xiao", "journal": "", "ref_id": "b26", "title": "Deep high-resolution representation learning for visual recognition", "year": "2019-03" }, { "authors": "Limin Wang; Yuanjun Xiong; Zhe Wang; Yu Qiao; Dahua Lin; Xiaoou Tang; Luc Van Gool", "journal": "", "ref_id": "b27", "title": "Temporal segment networks: Towards good practices for deep action recognition", "year": "2016" }, { "authors": "Saiwen Wang; Jie Song; Jaime Lien; Ivan Poupyrev; Otmar Hilliges", "journal": "", "ref_id": "b28", "title": "Interacting with soli: Exploring fine-grained dynamic gesture recognition in the radiofrequency spectrum", "year": "2016" }, { "authors": "Wei Wang; Jinjin Zhang; Chenyang Si; Liang Wang", "journal": "", "ref_id": "b29", "title": "Pose-based two-stream relational networks for action recognition in videos", "year": "2018" }, { "authors": "Zuxuan Wu; Yu Gang Jiang; Xi Wang; Hao Ye; Xiangyang Xue", "journal": "", "ref_id": "b30", "title": "Multi-stream multi-class fusion of deep networks for video classification", "year": "2016" }, { "authors": "Sijie Yan; Yuanjun Xiong; Dahua Lin", "journal": "", "ref_id": "b31", "title": "Spatial temporal graph convolutional networks for skeleton-based action recognition", "year": "2018" } ]
[ { "formula_coordinates": [ 5, 107.06, 302.96, 193.58, 16.7 ], "formula_id": "formula_0", "formula_text": "{ρ p t 1 1 , ρ p t 2 1 • • • ρ p t 1 2 , ρ p t 2 2 ...ρ p t-T K N -1 • • • ρ p t-T K N }(1)" }, { "formula_coordinates": [ 5, 82.96, 473.02, 217.68, 15.48 ], "formula_id": "formula_1", "formula_text": "{1 p t 1 , 2 p t 1 • • • 1 p t 2 , 2 p t 2 • • • (K -1) p t-T N • • • K p t-T N }(2)" }, { "formula_coordinates": [ 5, 102.02, 624.16, 198.62, 16.7 ], "formula_id": "formula_2", "formula_text": "{1 p t 1 1 , 1 p t 2 1 • • • 1 p t 1 2 , 1 p t 2 2 • • • T p t-T K N -1 • • • T p t-T K N }(3)" }, { "formula_coordinates": [ 5, 323.8, 221.44, 215.45, 12.9 ], "formula_id": "formula_3", "formula_text": "{1 p t 1 , 1 p t 2 • • • 2 p t 1 , 2 p t 2 ...(N -1) p t-T K • • • N p t-T K }(4)" }, { "formula_coordinates": [ 6, 101.44, 184.53, 169.75, 25.19 ], "formula_id": "formula_4", "formula_text": "Attention(Q, K, V ) = sof tmax( QK T √ D )" }, { "formula_coordinates": [ 6, 137.27, 401.63, 46.18, 13.77 ], "formula_id": "formula_5", "formula_text": "E ρ t n = (e ρ t" }, { "formula_coordinates": [ 6, 129.07, 580.77, 114.49, 59.95 ], "formula_id": "formula_6", "formula_text": "R ρ t n = (r ρ t n 1 , r ρ t n 2 , ...r ρ t n K ) r ρ t n = h ρ t n + P p t k n + T p k n d ρn = T ransf ormer(R ρn )" } ]
10.3322/caac.21763
[ { "figure_ref": [ "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b3", "b4", "b11", "b12", "b19", "b12", "b20", "b23", "b24", "b23", "b25", "b29" ], "table_ref": [], "text": "An estimated 186,680 new cases of invasive melanoma (including 89,070 in-situ melanomas) are expected to be diagnosed in 2023 in the United States [1]. Dermoscopy is an imaging adjunct technique for early skin cancer detection, improving diagnostic accuracy compared to visual inspection by a domain expert [2]- [4].
Computer vision techniques have improved appreciably in recent years [5]- [12] and have been successfully applied to many medical imaging problems [13]- [20]. In the skin cancer domain, deep learning techniques combined with dermoscopy have higher diagnostic accuracy than experienced dermatologists [13], [21]- [24]. Pathan et al. published a recent review detailing both handcrafted and deep learning (DL) techniques for computer-aided diagnosis of skin lesions [25]. Recent studies show that the fusion of deep learning and handcrafted features can improve accuracy in skin cancer diagnosis [24], [26]- [30].
Although convolutional neural network (CNN) methods have achieved higher diagnostic accuracy in skin lesion classification, the heatmap visualizations of CNNs have shown that they do not always learn features from the lesion region of the image, but rather from artifacts present in the image, such as ruler marks, ink marks, stickers, and skin backgrounds. These non-lesional features may serve as information leaks and might potentially cause poor generalization when applied to new test data that are different from the training data. Thus, in this study, we propose a novel deep learning method that forces a CNN, in particular an EfficientNet-B5 model, to learn features from the important lesion region of the image during training. The class activation map (CAM) visualizations in Figure 1 show that the proposed method prevents the CNN model from focusing on the artifacts. Furthermore, the test results show that the proposed method improves the melanoma classification performance and predicts the classification score with a higher diagnostic confidence." }, { "figure_ref": [], "heading": "II. MATERIALS AND METHODS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "A. Image Datasets", "publication_ref": [], "table_ref": [], "text": "In this study, we used the publicly available ISIC2020 [31] melanoma classification dataset. The dataset has 33,126 dermoscopic skin lesion images of two categories: benign and melanoma. Some of the images have duplicates; we created a curated set of 32,701 images after removing the duplicates. The dataset is highly imbalanced, with only 581 (1.78%) of the images belonging to the melanoma category. The images have varying resolutions, from 480×640 to 4000×6000. Some examples are shown in Figure 2. The non-square images were zero-padded and resized to 512x512 using bilinear interpolation." }, { "figure_ref": [ "fig_3" ], "heading": "C. Proposed Method", "publication_ref": [ "b11", "b32", "b8", "b33" ], "table_ref": [], "text": "The overall flow diagram of the proposed method is shown in Figure 3. It uses a pretrained EfficientNet [12] model as the convolutional neural network (CNN) architecture to classify the skin lesions. It incorporates a novel attention mechanism that forces the model to focus more on the lesion region of an image.
The proposed attention mechanism first computes the class activation map (CAM) [33] to identify the image regions most relevant to the specific class (melanoma in our case) and then uses it together with an elliptical lesion mask to compute the attention loss, L_att. The attention loss, L_att, is combined with the classification loss, L_cls, to create the composite loss L_total. Finally, the convolutional neural network is trained using this composite loss so that the network emphasizes the lesion region in the image rather than the background. For a given image, let f_k(x, y) represent the activation of a unit k in the last convolution layer at a spatial location (x, y). The CAM for class c is given in Equation 1.
M_c(x, y) = Σ_k w_k^c f_k(x, y)    (1)
where w_k^c is the weight corresponding to class c for the unit k.
To generate an elliptical lesion mask, M_e, we use an extended bounding box that encloses the skin lesion. The bounding boxes of the lesion images are auto-generated using a separate lesion detection model, which is a ResNet-50 [9] model trained on the ISIC 2018 [34] lesion segmentation dataset that predicts the bounding box coordinates (x_1, y_1, x_2, y_2) of a lesion. The bounding box is extended by 20% in area to allow some background around the lesion. The elliptical mask M_e is resized to the same size as M_c to compute the attention loss. Here, M_e is a binary mask with pixel values of 0 or 1. Thus, the data range of M_c is also rescaled between 0 and 1 by dividing by its maximum value. For N training images, the attention loss using the Jaccard method is given in Equation 2.
L_att = 1 - (Σ M_c M_e + 1) / (Σ (M_c + M_e) + 1)    (2)
Equation 3 shows the classification loss L_cls, computed using a cross-entropy method between the sigmoid output of a fully connected (FC) layer, S, and the given ground truth label Y.
L_cls = BCE(S, Y)    (3)
Equation 4 shows the total composite loss that was used to train the network, where λ is the loss weight factor such that 0 < λ < 1, which was optimized empirically to 0.66.
L_total = (1 - λ) L_att + λ L_cls    (4)
During inference, the trained CNN model with the fully connected (FC) layer outputs a sigmoid score with a value between 0 and 1, where a score closer to 0 indicates a benign lesion and a score closer to 1 indicates melanoma." }, { "figure_ref": [], "heading": "D. Training Details", "publication_ref": [], "table_ref": [], "text": "All models were built using the PyTorch framework in Python 3 and trained on a single 32GB Nvidia V100 graphics card. The network was trained for 30 epochs using a batch size of 6, a constant learning rate of 0.0001, and the stochastic gradient descent (SGD) optimization algorithm. The loss functions were weighted binary cross entropy for the classification loss and Jaccard loss for the attention loss. To reduce overfitting of the deep neural network model, we used data augmentation (see details in Section II.B), a dropout layer, and an early stopping technique. A dropout probability of 0.5 was selected for the dropout layer, which was placed before the FC layer. For the early stopping criterion, we used a patience of 5 epochs to stop the model from overtraining."
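Before turning to the results, the sketch below illustrates how a composite loss of the kind described in Section II.C could be assembled: a CAM built from the last convolutional features and the FC weights, rescaled to [0, 1], compared with the resized elliptical mask through a Jaccard-style loss, and combined with the BCE classification loss. The tensor and function names, the clamping of negative CAM values, and the exact apportioning of λ between the two terms are assumptions made for illustration and are not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def composite_loss(feats, fc_weight, logits, labels, ellipse_mask, lam=0.66):
    """feats: (B, C, h, w) last conv features; fc_weight: (1, C) melanoma FC weights;
    logits: (B, 1); labels: (B, 1) in {0, 1}; ellipse_mask: (B, H, W) binary."""
    b, c, h, w = feats.shape

    # Class activation map: weighted sum of feature maps, rescaled to [0, 1].
    cam = torch.einsum("oc,bchw->bohw", fc_weight, feats).squeeze(1)  # (B, h, w)
    cam = cam.clamp(min=0)
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-6)

    # Resize the elliptical lesion mask to the CAM resolution.
    mask = F.interpolate(ellipse_mask.unsqueeze(1).float(), size=(h, w),
                         mode="nearest").squeeze(1)

    # Jaccard-style attention loss, smoothed with +1 as in Eq. 2.
    inter = (cam * mask).sum(dim=(1, 2))
    union = (cam + mask).sum(dim=(1, 2))
    l_att = (1.0 - (inter + 1.0) / (union + 1.0)).mean()

    # Binary cross-entropy classification loss on the sigmoid output.
    l_cls = F.binary_cross_entropy_with_logits(logits, labels.float())

    # Weighted combination of the two losses.
    return (1.0 - lam) * l_att + lam * l_cls

if __name__ == "__main__":
    B, C, h, w = 2, 16, 12, 12
    loss = composite_loss(torch.rand(B, C, h, w), torch.rand(1, C),
                          torch.randn(B, 1), torch.randint(0, 2, (B, 1)),
                          torch.randint(0, 2, (B, 128, 128)))
    print(float(loss))
```

In a training loop, this scalar would simply take the place of a plain BCE loss when calling backward().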
}, { "figure_ref": [ "fig_6", "fig_5" ], "heading": "III. EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "To evaluate the performance of the proposed method, we trained an EfficientNet-B5 model with our proposed attention mechanism (AM) using 5-fold cross validation. The 32,701 images from the curated ISIC dataset were randomly split into 5 folds with class label-based stratification. We used the area under the receiver operating characteristic curve (AUC) to measure the classification performance of the proposed model. Table I shows the performance comparison of the proposed method against the baseline model. The baseline model is the EfficientNet-B5 model without the attention mechanism. The proposed method improved the mean cross-validated AUC from 0.9 to 0.922. In Figure 4, we show the class activation maps (CAM) of the proposed method on the test melanoma images. Although the baseline model has a prediction score greater than 0.5 in all three cases, the CAM shows that the model focuses on the outer regions (for example, ruler marks) rather than the lesion region. In contrast, the proposed method focuses mostly inside the lesion bounding box. Also, the prediction scores of 0.818 vs. 0.745, 0.739 vs. 0.703, and 0.90 vs. 0.701 from the proposed model against the baseline model show that the proposed model is more confident in classifying the melanoma lesions as melanoma.
Similarly, Figure 5 shows the overlays of the CAM on benign images from both the proposed and baseline models. The proposed model focuses within the lesion region to extract the important information needed to classify the sample correctly. Conversely, the baseline model focuses on the image corners or the background regions even though it correctly predicts the lesions as benign. Also, the prediction scores are reduced from 0.448 to 0.099, 0.279 to 0.092, and 0.037 to 0.007, showing the improved confidence in its classification scores." }, { "figure_ref": [ "fig_6", "fig_5" ], "heading": "IV. DISCUSSION", "publication_ref": [ "b8", "b34", "b35", "b36", "b11", "b37", "b38", "b39", "b32", "b40", "b41", "b42", "b43" ], "table_ref": [], "text": "In this study, we demonstrated that our proposed lesion-focused deep learning method not only improves the melanoma classification performance, but also increases melanoma diagnostic confidence in dermoscopic skin lesion images. As accuracy is not a very useful metric for a binary classification problem in a highly imbalanced dataset scenario, we used the area under the ROC curve (AUC) to evaluate the classification performance of the proposed method.
In recent ISIC skin lesion classification challenges, DL methods using CNN architectures such as ResNet [9], ResNeXt [35], SEResNeXt [36], DenseNet [37], and EfficientNet [12] have dominated the submission leaderboards [38], [39]. Although CNN methods have outperformed traditional handcrafted-feature methods in visual recognition tasks, little is known about why they perform so well. Various visualization techniques, including saliency maps [40], CAM [33], Grad-CAM [41], Grad-CAM++ [42], and Score-CAM [43], have been devised to observe the specific regions in an image that played a significant role in the classification of a particular class by convolutional neural networks.
Cassidy et al. [44] recently analyzed the ISIC image datasets using various convolutional neural networks. In their study, Grad-CAM visualizations showed that the CNN models primarily focus on non-lesion regions in an image, such as ruler marks, ink marks, stickers, and skin background. Furthermore, we noticed similar behavior of the CNN model in this investigation. Despite CNNs focusing on non-lesion regions, they still manage to make the correct predictions, as shown in Figures 4 and 5.
This situation is undesirable since the presence of such artifacts in the training data can lead to information leakage, potentially causing the trained model to perform poorly when applied to new test images from distributions different from the training data. Thus, CNN methods that focus on the lesion region are warranted in order to develop a model that generalizes better.
Our experimental results showed that the proposed attention mechanism forces the CNN model to learn from the important lesion regions in an image. The CNN model trained with the proposed attention mechanism achieves improved classification performance compared to the plain (no attention component) CNN model. Also, the model with attention predicts the classification scores with higher confidence than the baseline model. As the proposed CNN model mainly relies on the lesion region in the image to make the final classification prediction, such models can perform well even when clinical artifacts are not present in the test images. However, the generalization capability of the proposed method is not investigated in the current study, as such an investigation can be performed only when new test data with a different distribution than the training data are available in the future.
The class activation map visualizations showed that the CNN model with the proposed method makes an accurate melanoma prediction by learning features within a lesion rather than from non-lesion regions in the skin background." } ]
Deep learning implemented with convolutional network architectures can exceed specialists' diagnostic accuracy. However, whole-image deep learning trained on a given dataset may not generalize to other datasets. The problem arises because extra-lesional features-ruler marks, ink marks, and other melanoma correlates-may serve as information leaks. These extra-lesional features, discoverable by heat maps, degrade melanoma diagnostic performance and cause techniques learned on one data set to fail to generalize. We propose a novel technique to improve melanoma recognition by an EfficientNet model. The model trains the network to detect the lesion and learn features from the detected lesion. A generalizable elliptical segmentation model for lesions was developed, with an ellipse enclosing a lesion and the ellipse enclosed by an extended rectangle (bounding box). The minimal bounding box was extended by 20% to allow some background around the lesion. The publicly available International Skin Imaging Collaboration (ISIC) 2020 skin lesion image dataset was used to evaluate the effectiveness of the proposed method. Our test results show that the proposed method improved diagnostic accuracy by increasing the mean area under receiver operating characteristic curve (mean AUC) score from 0.9 to 0.922. Additionally, correctly diagnosed scores are also improved, providing better separation of scores, thereby increasing melanoma diagnostic confidence. The proposed lesionfocused convolutional technique warrants further study.
[ { "figure_caption": "Fig. 1 .1Fig. 1. CAM heatmap visualizations of an EfficientNet-B5 model and the proposed method for melanoma classification.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "B. Data Augmentation In this study, we applied the data augmentation during the training of convolutional neural network. It increases the variation in training images by randomly applying various image transformations, which eventually helps the model to generalize better. The image transformations used in this study are as follows:  Transpose  Horizontal or Vertical Flip  Height or width shift with a range of (-0.15, +0.15)  Rotation with range between +90° to -90°  Zoom with a range of (0.85, 1.15)  Brightness with a range of (0.85, 1.15)  Contrast with a range of (0.85, 1.15)  Hue with a range of (0.85, 1.15)  Saturation with a range of (0.85, 1.15)  CLAHE histogram equalization  Gaussian Noise  Motion Blur  Median Blur  Gaussian Blur Furthermore, the image pixel values were rescaled between 0 and 1 and normalized using the ImageNet [32] parameters.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Skin lesion dermoscopy images with ground truth classification labels in the ISIC 2020 skin lesion dataset. The top row shows benign lesions, and the bottom row shows malignant ones.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The overall flow diagram of a proposed melanoma classification method. During training, the attention mechanism computes the class activation map (CAM) using the feature map after the last convolutional layer, which is further used to compute the attention loss 𝐿 . The classification loss 𝐿 is computed using an output from FC layer and combined to create a composite (total) loss 𝐿 .", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "V. CONCLUSIONIn this study, we propose a novel deep learning technique to force a convolutional neural network (CNN) to learn from an important lesion region in dermoscopic skin lesion images. The proposed method employs a new attention mechanism that uses a class activation map and an elliptical lesion mask to compute an attention loss. The attention loss is combined with the classification loss to train the convolutional neural network. The CNN model trained with the combined loss improved the melanoma classification performance. The class activation map", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Overlays of the class activation map (CAM) on the test benign lesion images. The bounding box shows lesion location. The CAM shows the proposed method focuses within the lesion region.The scores and GT in RED show the proposed method is more confident of classifying the benign lesions as benign.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Overlays of class activation map (CAM) on the test melanoma lesion images. The bounding box shows lesion location. The CAM shows the proposed method focuses within the lesion region. 
The scores and GT in RED show the proposed method is more confident of classifying the melanoma lesions as a melanoma.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "PEFORMANCE COMPARISON OF THE PROPOSEDMETHOD AGAINST THE BASELINE MODEL ON ISIC2020DATASETAUCMedianMeanStandard DeviationEfficientNet (baseline)0.8970.90.0106EfficientNet + AM0.9310.9220.0167", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" } ]
[ { "authors": "R L Siegel; K D Miller; N S Wagle; A ", "journal": "CA Cancer J Clin", "ref_id": "b0", "title": "Cancer statistics, 2023", "year": "2023" }, { "authors": "H Pehamberger; M Binder; A Steiner; K Wolff", "journal": "Journal of Investigative Dermatology", "ref_id": "b1", "title": "In vivo epiluminescence microscopy: Improvement of early diagnosis of melanoma", "year": "1993" }, { "authors": "H P Soyer; G Argenziano; R Talamini; S Chimenti", "journal": "Arch Dermatol", "ref_id": "b2", "title": "Is Dermoscopy Useful for the Diagnosis of Melanoma?", "year": "2001-10" }, { "authors": "R P Braun; H S Rabinovitz; M Oliviero; A W Kopf; J H Saurat", "journal": "Clin Dermatol", "ref_id": "b3", "title": "Pattern analysis: a two-step procedure for the dermoscopic diagnosis of melanoma", "year": "2002-05" }, { "authors": "A Krizhevsky; I Sutskever; G Hinton", "journal": "", "ref_id": "b4", "title": "ImageNet Classification with Deep Convolutional Neural Networks", "year": "2012" }, { "authors": "C Szegedy; V Vanhoucke; S Ioffe; J Shlens; Z Wojna", "journal": "", "ref_id": "b5", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "K Simonyan; A Zisserman", "journal": "", "ref_id": "b6", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "I Goodfellow", "journal": "Commun ACM", "ref_id": "b7", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b8", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "A Dosovitskiy", "journal": "", "ref_id": "b9", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "O Ronneberger; P Fischer; T Brox", "journal": "", "ref_id": "b10", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation", "year": "" }, { "authors": "M Tan; Q Le", "journal": "", "ref_id": "b11", "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "year": "2019" }, { "authors": "A Esteva", "journal": "Nature", "ref_id": "b12", "title": "Dermatologist-level classification of skin cancer with deep neural networks", "year": "2017" }, { "authors": "V Gulshan", "journal": "JAMA", "ref_id": "b13", "title": "Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs", "year": "2016" }, { "authors": "S Sornapudi", "journal": "J Pathol Inform", "ref_id": "b14", "title": "Deep learning nuclei detection in digitized histology images by superpixels", "year": "2018" }, { "authors": "G Litjens", "journal": "Med Image Anal", "ref_id": "b15", "title": "A survey on deep learning in medical image analysis", "year": "2017" }, { "authors": "A K Nambisan", "journal": "Intelligent Systems with Applications", "ref_id": "b16", "title": "Deep learning-based dot and globule segmentation with pixel and blob-based metrics for evaluation", "year": "2022" }, { "authors": "N Lama; J Hagerty; A Nambisan; R J Stanley; W Van Stoecker", "journal": "J Digit Imaging", "ref_id": "b17", "title": "Skin Lesion Segmentation in Dermoscopic Images with Noisy Data", "year": "2023" }, { "authors": "A Maurya", "journal": "Skin Research and Technology", "ref_id": "b18", "title": "A deep learning approach to detect blood vessels in basal cell carcinoma", "year": "2022" }, { "authors": "N Lama", "journal": "J Digit Imaging", "ref_id": "b19", 
"title": "ChimeraNet: U-Net for Hair Detection in Dermoscopic Skin Lesion Images", "year": "2022" }, { "authors": "L K Ferris", "journal": "J Am Acad Dermatol", "ref_id": "b20", "title": "Computer-aided classification of melanocytic lesions using dermoscopic images", "year": "2015-11" }, { "authors": "M A Marchetti", "journal": "J Am Acad Dermatol", "ref_id": "b21", "title": "Results of the 2016 International Skin Imaging Collaboration International Symposium on Biomedical Imaging challenge: Comparison of the accuracy of computer algorithms to dermatologists for the diagnosis of melanoma from dermoscopic images", "year": "2018-02" }, { "authors": "H A Haenssle", "journal": "Annals of Oncology", "ref_id": "b22", "title": "Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists", "year": "2018" }, { "authors": "N C F Codella", "journal": "IBM J. Res. Dev", "ref_id": "b23", "title": "Deep Learning Ensembles for Melanoma Recognition in Dermoscopy Images", "year": "2017-07" }, { "authors": "S Pathan; K G Prabhu; P C Siddalingaswamy", "journal": "Biomed Signal Process Control", "ref_id": "b24", "title": "Techniques and algorithms for computer aided diagnosis of pigmented skin lesions-A review", "year": "2018-01" }, { "authors": "T Majtner; S Yildirim-Yayilgan; J Y Hardeberg", "journal": "", "ref_id": "b25", "title": "Combining deep learning and handcrafted features for skin lesion classification", "year": "2016" }, { "authors": "N Codella; J Cai; M Abedini; R Garnavi; A Halpern; J R Smith", "journal": "Springer International Publishing", "ref_id": "b26", "title": "Deep Learning, Sparse Coding, and SVM for Melanoma Recognition in Dermoscopy Images BT -Machine Learning in Medical Imaging", "year": "2015" }, { "authors": "I González-Díaz", "journal": "IEEE J Biomed Health Inform", "ref_id": "b27", "title": "DermaKNet: Incorporating the Knowledge of Dermatologists to Convolutional Neural Networks for Skin Lesion Diagnosis", "year": "2019" }, { "authors": "J R Hagerty", "journal": "IEEE J Biomed Health Inform", "ref_id": "b28", "title": "Deep Learning and Handcrafted Method Fusion: Higher Diagnostic Accuracy for Melanoma Dermoscopy Images", "year": "2019" }, { "authors": "A K Nambisan", "journal": "Cancers", "ref_id": "b29", "title": "Improving Automatic Melanoma Diagnosis Using Deep Learning-Based Segmentation of Irregular Networks", "year": "2023" }, { "authors": "V Rotemberg", "journal": "Sci Data", "ref_id": "b30", "title": "A patient-centric dataset of images and for identifying melanomas using clinical context", "year": "2021" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "", "ref_id": "b31", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "B Zhou; A Khosla; A Lapedriza; A Oliva; A Torralba", "journal": "", "ref_id": "b32", "title": "Learning deep features for discriminative localization", "year": "2016" }, { "authors": "N Codella", "journal": "", "ref_id": "b33", "title": "Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC)", "year": "2019" }, { "authors": "S Xie; R Girshick; P Dollár; Z Tu; K He", "journal": "", "ref_id": "b34", "title": "Aggregated residual transformations for deep neural networks", "year": "2017" }, { "authors": "J Hu; L Shen; G Sun", "journal": "", "ref_id": "b35", "title": "Squeeze-and-excitation networks", 
"year": "2018" }, { "authors": "G Huang; Z Liu; L Van Der Maaten; K Q Weinberger", "journal": "", "ref_id": "b36", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": " ", "journal": "", "ref_id": "b37", "title": "ISIC 2019 Leaderboards", "year": "2019-05-14" }, { "authors": " ", "journal": "ISIC", "ref_id": "b38", "title": "Society for Imaging Informatics in Medicine (SIIM) and International Skin Imaging", "year": "2020-05-14" }, { "authors": "K Simonyan; A Vedaldi; A Zisserman", "journal": "", "ref_id": "b39", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "year": "2013" }, { "authors": "R R Selvaraju; M Cogswell; A Das; R Vedantam; D Parikh; D Batra", "journal": "", "ref_id": "b40", "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "year": "2017" }, { "authors": "A Chattopadhay; A Sarkar; P Howlader; V N Balasubramanian", "journal": "", "ref_id": "b41", "title": "Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks", "year": "2018" }, { "authors": "H Wang", "journal": "", "ref_id": "b42", "title": "Score-CAM: Score-weighted visual explanations for convolutional neural networks", "year": "2020" }, { "authors": "B Cassidy; C Kendrick; A Brodzicki; J Jaworek-Korjakowska; M H Yap", "journal": "Med Image Anal", "ref_id": "b43", "title": "Analysis of the ISIC image datasets: Usage, benchmarks and recommendations", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 353.64, 247.78, 206.18, 11.89 ], "formula_id": "formula_0", "formula_text": "𝑀 (𝑥, 𝑦) = 𝑤 𝑓 (𝑥, 𝑦)(1)" }, { "formula_coordinates": [ 2, 366.12, 468.7, 193.77, 23.59 ], "formula_id": "formula_1", "formula_text": "𝐿 = 1 - ∑ 𝑀 𝑀 + 1 ∑ 𝑀 + 𝑀 + 1(2)" }, { "formula_coordinates": [ 2, 378.36, 608.02, 181.82, 9.13 ], "formula_id": "formula_3", "formula_text": "𝐿 = (1 -𝜆)𝐿 + 𝜆𝐿(4)" } ]
10.1145/3359190
2023-10-11
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b22", "b24", "b35", "b29", "b9", "b25", "b4", "b7", "b16", "b19", "b27", "b34", "b40", "b0", "b44" ], "table_ref": [], "text": "Social stereotypes are collectively shared cognitive associations used to draw inferences about people (Hewstone et al., 2002). Social stereotypes are important because they inform how we present ourselves and others, and in turn, our behavior in social settings (Stryker, 1980). Measures of stereotypes held or expressed by a group of people can therefore help us to understand and/or predict their behaviors (Heise, 1987). Social psychologists have developed a host of survey-based methods to measure the stereotypes elicited by particular social identities, the words and phrases we use to label ourselves and others (Hilton and von Hippel, 1996). However, survey-based methods do not scale to the myriad ways people identify themselves (MacKinnon and Heise, 2010), nor can they easily capture differences across subgroups or contexts (Smith-Lovin and Douglas, 1992). This is even more true on social media, where stereotypes emerge in unique and time-evolving language across many different communities (Joseph et al., 2016).\nComputational tools have been developed to address these challenges. Most, though not all (e.g. CH-Wang and Jurgens, 2021;Hoyle et al., 2019;Bamman and Smith, 2014;Field et al., 2019) of these methods function by projecting embeddings from pre-trained distributional semantic models (DSMs; e.g. GloVe or BERT) onto important social dimensions of meaning, such as gender (Caliskan et al., 2022) and race (Field et al., 2021). One can then, for example, measure stereotypes that associate occupational identities to particular races and genders, and study how these stereotypes correlate with discriminatory behavior (Garg et al., 2018).\nThere are, however, reasons to believe that we can improve on existing strategies. Empirically, approaches using static pre-trained embeddings accurately measure stereotypes only for a limited set of identities on a limited set of dimensions (Joseph and Morgan, 2020). Additionally, while part of the promise of these NLP methods is that they might allow us to study new subgroups in new domains, domain adaptation of pre-trained models for stereotype measurement is difficult (Field and Tsvetkov, 2019), requiring innovative approaches to dimension projection and/or potentially prohibitive levels of fine tuning (Lucy et al., 2022).\nTheoretically, DSMs are based on the assumption that contextual similarity-similarity in where phrases appear in text-is a strong proxy for semantic similarity (roughly, synonomy). The idea behind this assumption is that words with high levels of semantic similarity should have similar cognitive associations to other words, and thus high contextual similarity as well (Miller and Charles, 1991). Two words with high levels of semantic similarity could therefore be said to share \"linguistic stereotypes.\" The core distinction between these linguistic stereotypes and the social stereotypes we are interested in measuring here is that linguistic stereotypes represent words with similar associations to similar other words and social stereotypes represent similar associations to similar kinds of people. 
One ramification of this distinction is that pre-trained DSMs require that we project onto dimensions of social meaning to recover social stereotypes, because DSMs otherwise place identities nearby in embedding space that have similar linguistic stereotypes but distinct social ones, like \"Democrat\" and \"Republican\" (An et al., 2018).\nIt stands to reason, then, that to better measure social stereotypes we should look to data that tell us about social, rather than linguistic, stereotypes, i.e. to data that tell us which identities tend to be applied to the same people. 1 The present work aims to develop such data, and to build models that allow us to measure stereotypes using them. 2 More specifically, we construct two English-language datasets of sets of identities applied to the same person: one derived from over 4 million Twitter profile biographies, and the other from over 400,000 Wikipedia biographies. We then develop and validate three new models for measuring stereotypes built on these data. In sum, the present work provides three contributions to the literature:\n• We introduce two new English-language entity-centric (Field and Tsvetkov, 2019) datasets containing sets of identities that refer to the same person in Twitter and Wikipedia biographies. These data can be used to study social stereotypes and how they inform presentation of the self and other. • We develop new, straightforward methods to measure stereotypes using these data. Specifically, we propose a model that learns stereotypes exclusively from entity-centric data, and two models that use these data as a fine-tuning step for pre-trained DSMs. • We perform an extensive validation, finding that a Sentence-BERT (Reimers and Gurevych, 2019) model fine-tuned on entitycentric data consistently outperforms our two other models, as well as baseline DSMs, on two in-domain evaluation tasks. We also provide a brief case study and error analysis." }, { "figure_ref": [], "heading": "Background and Motivation", "publication_ref": [ "b43", "b31", "b34", "b21", "b1", "b38", "b33", "b4", "b25", "b18", "b5", "b36", "b26", "b12", "b30", "b35", "b27", "b37" ], "table_ref": [], "text": "We study stereotypes elicited by the words and phrases people use to describe themselves and others in natural language. These words and phrases range from unigrams like \"dancer\" to richer, more complex expressions with similar meanings, like \"Avid fan of dancing\". It is therefore more accurate to say that natural language expresses both identities and phrases that signal identity, what Pathak et al. (2021) call personal identifiers. Here, we retain the phrase identity as a familiar and concise shorthand, but note that many phrases we study signal identity in the form of expressed traits, behaviors, and/or interests. A significant literature devoted to measuring stereotypes exists in NLP. The present work is most aligned with efforts that use contextualized embeddings (e.g. Kurita et al., 2019;Lucy et al., 2022;Field et al., 2019;Guo and Caliskan, 2021) to do so. We extend this work by proposing new models informed by entity-centric data; we also provide a new evaluation dataset of social media-relevant identities (see Section 3). In this vein, our work relates to the literature on entity-centric analysis (Field and Tsvetkov, 2019). Entity-centric work predominantly focuses on using phrases that carry known (e.g. 
from survey data) stereotypes to understand how individuals are portrayed (Antoniak et al., 2019;Mendelsohn et al., 2020;Ziems and Yang, 2021;Lucy et al., 2020). The present work considers the complementary question: what can entity-centric data tell us about the stereotypes of a given social group or in a given social context when no measurements exist?\nAt least three other articles address this question. Bamman and Smith (2014) develop a method to learn character tropes via a latent-variable model of phrases applied to the same character. And Hoyle et al. (2019) and Field and Tsvetkov (2020) learn gender-stereotypical words using statistical models informed by entity-centric data. The present work complements these efforts by using entity-centric data to 1) produce embeddings that 2) represent stereotypes along multiple dimensions of social meaning.\nTo this end, it is important to motivate why such an embedding model is useful. It is well established that age, race, and gender (Berger et al., 1972;Mark et al., 2009), and partisanship (Iyengar and Westwood, 2015;DellaPosta, 2020), are critical dimensions along which stereotypes form. However, scholars believe that many other unnamed and more context-specific dimensions likely exist (Kozlowski et al., 2019), that even known stereotypes can vary in importance across social contexts (MacKinnon and Heise, 2010), and that existing methods for measuring stereotypes fail even for important status dimensions like race (Joseph and Morgan, 2020). In turn, many social processes, especially homophily (McPherson et al., 2001), are based on how people use stereotypes to infer similarity with others across all of these sometimes unknown, dynamic, and hard-to-measure dimensions. Analyses of processes like homophily are therefore enhanced by access to embeddings where proximity in the embedding space relates directly to similarity in social stereotypes, without necessarily needing to know all dimensions of meaning or be able to accurately measure each one. Moreover, as we show in our case study, qualitative analysis of these nearest neighbor results can help us draw insights about important dimensions of meaning that may be useful in understanding stereotypes and how they are used in particular contexts." }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b43", "b43", "b20" ], "table_ref": [], "text": "We collect and use two novel entity-centric datasets of identities, one from Twitter user profile biographies (hereafter Twitter bios) and the other from Wikipedia biographies (hereafter Wikipedia bios). We introduce limited notation here to describe the data. First, let X denote the full dataset of extracted identities from a given source (Wikipedia or Twitter), where X i = {x i 1 , x i 2 , ..., x i k } represents a set of k identities extracted from a single bio. Second, let V be a vocabulary of all unique identities in the training portion of X.\nTwitter Biographies. Twitter bios are rich sources of information that succinctly represent how people label themselves (Pathak et al., 2021). We use the method from Pathak et al. (2021) to extract identities from 3,534,903 Twitter bios that contain at least two identities, with |V |=22,516. The extraction method is straightforward and consists of two steps, one where Twitter bios are split into chunks using a manually crafted regular expression, and a second cleaning step.
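To make the two-step extraction concrete, the sketch below splits a bio on common separators and then applies a light cleaning pass. The delimiter pattern and cleaning rules are illustrative assumptions only; the exact regular expression used by Pathak et al. (2021) is not reproduced here.

```python
import re

# Step 1 splitter: a stand-in for the manually crafted regular expression
# described above (the real pattern is not reproduced here).
DELIMITERS = re.compile(r"[|•;,\n]|(?:\s+[-–—]\s+)")

def extract_identities(bio: str) -> list[str]:
    # Step 1: split the bio into candidate chunks on common separators.
    chunks = [c.strip() for c in DELIMITERS.split(bio) if c and c.strip()]
    # Step 2: light cleaning -- drop URLs and leading first-person boilerplate.
    cleaned = []
    for chunk in chunks:
        chunk = re.sub(r"https?://\S+", "", chunk)
        chunk = re.sub(r"^(?:i am|i'm)\s+(?:an?\s+)?", "", chunk, flags=re.I).strip()
        if chunk and any(ch.isalpha() for ch in chunk):
            cleaned.append(chunk)
    return cleaned

print(extract_identities("Mom | Nurse | I'm a coffee addict"))
# ['Mom', 'Nurse', 'coffee addict']
```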
For example, from the Twitter bio \"Progressive Christian, wife, I am a proud Canadian,\" their method extracts Progressive Christian, wife and proud Canadian. Further details about the data are in Appendix A.1.\nWikipedia Biographies. Wikipedia bios are a widely studied source of data on (perceptions of) people and their identities (Graells-Garrido et al., 2015;Sun and Peng, 2021;Wagner et al., 2015). To extract identities, we focus on the first sentence of each bio, and extract all compound nouns linked to the biography subject through the verb \"be\" using spacy's dependency parser. For example, the sentence \"Stephen Davis is an American music journalist and historian.\" would result in X i = {music journalist, historian}. We extract identities from 436,132 Wikipedia bios, with |V |=11,074; see Appendix A.2.\nEvaluation Survey Data. Stereotype measurement models in NLP are typically evaluated by projecting embeddings onto particular dimensions (e.g. gender) and then comparing those projections with survey data. However, existing survey data is restricted to identities that are less prevalent in online settings, limiting their applicability to our work. Further, no existing survey data captures stereotypes on partisanship. As such, we opt here to develop a new (and public) survey dataset that captures stereotypes 1) for common phrases in Twitter bios and 2) on the dimension of partisanship, and use this in our evaluation.\nOur survey study was ruled exempt by the IRB at [REMOVED]. We asked 140 respondents on Prolific to rate 250 common identities in Twitter bios on four dimensions: gender, race, age, and partisanship. Each respondent rated between four and seven identities, and each identity was given to at least 3 respondents. We use the mean and standard deviation of ratings for each identity on ech dimension in our analysis. To select identities, we ranked identities in the Twitter dataset by frequency, and then manually selected the first 250 phrases that clearly signaled identity. For gender, age, and race, we followed the approach outlined by Joseph and Morgan (2020) exactly, using the same slider-based Likert scale. For partisanship, we used the same slider-based Likert scale approach, and the ends of the scale were \"Always Democrat\" vs. \"Always Republican.\" See Appendix A.3 for full details." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "In this section we introduce 1) three new methods to leverage entity-centric data to measure stereotypes, and 2) how we evaluate their performance." }, { "figure_ref": [], "heading": "Models", "publication_ref": [], "table_ref": [], "text": "Our first model uses entity-centric data only, helping to assess how well we can measure stereotypes by exploiting only the intuition that entity-centric data capture social stereotypes. The latter two, \"hybrid\" models balance between stereotype information contained in entity-centric data and the semantic information in large, pre-trained DSMs." }, { "figure_ref": [ "fig_0" ], "heading": "Entity Only Model", "publication_ref": [ "b39", "b44" ], "table_ref": [], "text": "The Entity Only (short for Entity-centric Only) model is constructed by applying word2vec (Mikolov et al., 2013) to (the training portion of) X. In the commonly used terminology for word2vec, we treat identities applied to the same person in a given bio as a context. 
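A minimal sketch of this setup with gensim is below. The toy corpus and several settings (min_count, the choice of skip-gram) are illustrative assumptions; the embedding size, window, and epoch count mirror the values reported in this section and in Appendix B.2.

```python
from gensim.models import Word2Vec

# Each "sentence" is the list of identities extracted from one bio, so
# co-occurrence within a bio plays the role that co-occurrence within a
# sentence plays for ordinary word2vec.
bios = [
    ["assistant professor", "bernie supporter", "#blacklivesmatter"],
    ["wife", "mother of two", "restaurant owner", "hockey coach"],
    # ... millions more extracted bios in the real training set
]

model = Word2Vec(
    sentences=bios,
    vector_size=768,  # matches the transformer models used below
    window=8,         # very few bios contain more than 8 identities
    min_count=1,      # illustrative; the real vocabulary is frequency-filtered
    sg=1,             # skip-gram (Appendix B.2 reports skip-gram for Twitter)
    epochs=300,
)

# Neighbors in this space reflect shared social contexts (the same people),
# rather than shared linguistic contexts.
print(model.wv.most_similar("assistant professor", topn=3))
```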
Our intuition is that if the original word2vec model can leverage contextual similarity on the \"word-to-linguistic context\" matrix to identify words with shared \"linguistic stereotypes\", it may also be useful to leverage the \"identity-to-person context\" matrix to identify words with shared social stereotypes. We use word2vec models with an embedding size of 768 to match the other models used below, and train for 300 epochs with a window size of 8 (only .01% of bios contain more than 8 identities). See Appendix B.2 for additional details.\nHybrid BERT Our first hybrid model, Hybrid BERT, fine-tunes a BERT model on entity-centric data. To fine-tune, we use a masked language modeling (MLM) approach, randomly masking one of the identities for each biography. This approach was based on initial findings that forcing the model to predict full identities generated better embeddings in terms of ad hoc nearest neighbor queries than the standard approach to MLM. To prepare our dataset for training, we take each of the instances X i and concatenate the phrases in it to form a full sentence. We then mask one of the identities and fine-tune a BERT-base model for 5 epochs while monitoring 10% of the training set as validation data. We used a learning rate of 2e-5 with a batch\nHybrid S-BERT Sentence-BERT produces better representations than BERT for similarity-based queries (Reimers and Gurevych, 2019). Given our interest in these kinds of similarity queries, we therefore construct a second hybrid model, Hybrid S-BERT. Sentence-BERT uses contrastive learning, where the learning setup must be carefully constructed (Schroff et al., 2015). We develop an intuitive but effective approach here.\nIn a contrastive learning framework, each data point is a triplet consisting of an anchor, a positive, and a negative sample. Our goal is to reshape the embedding space through fine-tuning such that for each triplet, the distance between anchor and positive samples is minimized while the distance between anchor and negative samples is maximized. For example, assume assistant professor and Bernie supporter are frequently used together in Twitter bios, but neither assistant professor nor Bernie supporter frequently co-occurs with proud patriot. We would therefore desire these three phrases' embeddings to form a shape similar to Figure 1 in a two-dimensional embedding space, where assistant professor and Bernie supporter are close to each other but far from proud patriot.\nWe can frame this contrastive learning problem as a regression task: given a triplet of anchor (X a ), positive (X p ) and negative (X n ) samples and a similarity measure (here, cosine similarity), our objective is for cs(X a , X p ) = 1.0 and cs(X a , X n ) = 0.0 for all training points, where cs stands for cosine similarity. We can then optimize this objective function using mean squared error. The challenge is to construct an effective set of triplets to train on. To do so, we first take an instance X i from the training set, and then randomly select an identity from X i to be the positive sample. We name the remaining identities in X i the anchor sample. Finally, we randomly select an identity that never co-occurs with the positive sample in the training set as the negative sample. As an example, from the bio [assistant professor, Bernie supporter, #blacklivesmatter] we set assistant professor, #blacklivesmatter as the anchor sample, Bernie supporter as the positive sample, and randomly select a negative sample that never co-occurred with Bernie supporter.
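A sketch of this triplet construction and the cosine-similarity regression objective, using the sentence-transformers training API, is shown below. Each triplet is decomposed into an (anchor, positive, 1.0) and an (anchor, negative, 0.0) pair. The checkpoint name, batch size, and co-occurrence bookkeeping are assumptions for illustration, not the exact training code.

```python
import random
from collections import defaultdict
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

def build_pairs(bios, vocab):
    """Decompose each bio into an anchor-positive and an anchor-negative pair."""
    cooccur = defaultdict(set)
    for bio in bios:
        for ident in bio:
            cooccur[ident].update(i for i in bio if i != ident)
    examples = []
    for bio in bios:
        if len(bio) < 2:
            continue
        positive = random.choice(bio)
        anchor = ", ".join(i for i in bio if i != positive)
        candidates = [v for v in vocab if v != positive and v not in cooccur[positive]]
        negative = random.choice(candidates)
        examples.append(InputExample(texts=[anchor, positive], label=1.0))
        examples.append(InputExample(texts=[anchor, negative], label=0.0))
    return examples

bios = [["assistant professor", "bernie supporter", "#blacklivesmatter"],
        ["proud patriot", "gun rights", "father of 6"]]
vocab = sorted({i for bio in bios for i in bio})

model = SentenceTransformer("all-mpnet-base-v2")  # an mpnet-base S-BERT checkpoint
loader = DataLoader(build_pairs(bios, vocab), shuffle=True, batch_size=16)
# CosineSimilarityLoss regresses cos(anchor, other) onto the 1.0/0.0 labels
# with mean squared error, matching the objective described above.
model.fit(train_objectives=[(loader, losses.CosineSimilarityLoss(model))], epochs=5)
```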
We construct a triplet for each X i ∈ X using this method, and use these to fine-tune an mpnet-base Sentence-BERT model. Models were trained for 5 epochs, which took approximately one day using a single A100 GPU. Additional details can be found in Appendix B.3." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b13", "b32", "b44" ], "table_ref": [], "text": "We conduct two kinds of evaluation, an in-domain prediction task where the goal is to predict a held-out identity for an individual given all of their other identities, and a dimension-based task focused on how effectively each model captures stereotypes in survey data. For the prediction task, we study models trained and evaluated separately on Twitter and Wikipedia data; the dimension evaluation is focused on the Twitter models. In all cases, we compare our models to three baseline DSMs used frequently in prior work: BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and Sentence-BERT (Reimers and Gurevych, 2019). These baselines were selected after experiments with other reasonable approaches, see Appendix B.4." }, { "figure_ref": [], "heading": "Predictive Evaluation", "publication_ref": [ "b28" ], "table_ref": [], "text": "Before training, we randomly hold out 20% of X in each dataset as a test set. For each observation in the test set, we ensure that at least one of the identities is in V . For the ith sample, we then take one identity, X i t , as the hold-out target and call the rest of the bio X i r . We ensure X i t is in V , i.e. in all cases the target is observed at least once in the training data.\nTo generate predictions, we first generate an embedding for X i r , L i r = embedding(X i r ); details on how embeddings are generated for each model are in Appendix B.5. We then measure the similarity of L i r with the embedding of all identities v ∈ V , Similarity(L i r , L i v ), leaving us with |V | similarity scores to L i r . 3 We evaluate similarity scores returned by each model using three evaluation metrics: average rank, log softmax score and top 1% accuracy. Average rank is computed by, for each test point, finding the ranking of X i t in the scores produced by each model, and taking the average over all test points. The log softmax score draws on prior work (Joseph and Morgan, 2021) and transforms similarity scores into a probability distribution using the softmax, and then takes the log of the result. Finally, the top 1% accuracy metric measures the ability of each model to rank the target identity in the top 1% of |V | and is used as a more tangible metric for predictive power.\nFinally, for evaluation, we split our test data into two sets, a main evaluation set (XX of the test data), where X i r also contains at least one identity observed in the training data, and a generalizability set, in which no identities in X i r are seen in the training data. This is necessary to fairly compare our Entity Only model, which has a restricted vocabulary, to the other models, each of which is capable of handling out-of-domain text,4 but is also a useful test of the in-domain generalizability of the other models. We evaluate results separately for these two test datasets."
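For concreteness, the snippet below sketches the three metrics for a single test point, given the vector of similarity scores over the vocabulary; tie-handling and aggregation details are assumptions.

```python
import numpy as np

def rank_of_target(scores: np.ndarray, target_idx: int) -> int:
    """1-based rank of the held-out identity among all |V| similarity scores."""
    order = np.argsort(-scores)  # indices sorted by descending similarity
    return int(np.where(order == target_idx)[0][0]) + 1

def log_softmax_score(scores: np.ndarray, target_idx: int) -> float:
    """Log of the softmax-normalized similarity assigned to the target."""
    z = scores - scores.max()  # subtract max for numerical stability
    return float(z[target_idx] - np.log(np.exp(z).sum()))

def top_1pct_accuracy(ranks: list, vocab_size: int) -> float:
    """Share of test points whose target ranks within the top 1% of |V|."""
    cutoff = max(1, int(0.01 * vocab_size))
    return float(np.mean([r <= cutoff for r in ranks]))

# Toy example with |V| = 5 candidate identities and target index 1.
scores = np.array([0.10, 0.80, 0.30, 0.05, 0.40])
print(rank_of_target(scores, 1))            # 1
print(log_softmax_score(scores, 1))         # least negative of the five
print(top_1pct_accuracy([1, 40, 300], 22_516))  # cutoff is 225, so 2/3
```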
}, { "figure_ref": [], "heading": "Dimension-based Evaluation", "publication_ref": [ "b27", "b27", "b14", "b6" ], "table_ref": [], "text": "We evaluate the Twitter-based models on their ability to identity stereotypes associating 250 Twitter-relevant identities with four dimensions of social meaning-age, gender, race, and partisanship-using the survey data introduced in Section 3. Our evaluation setup follows the ranking task outlined by Joseph and Morgan (2020), we review the approach briefly here. First, for each model we produce embeddings of all 250 identities included in the survey. Second, we projected these embeddings onto each of the four dimensions in the survey data; we use the dimension endpoints from Joseph and Morgan (2020) and the projection method from Ethayarajh et al. (2019). Finally, taking the example of the identity \"grad student\" on gender, we calculate the percentage of other identities in V that are stereotyped as more likely to be a woman (man) than \"grad student\" in the survey data, which are also estimated to be more likely to be a woman (man) based on the projected model embedding. Like Joseph and Morgan (2020), we exclude comparisons where mean survey ratings are within a standard deviation of each other. We take the average over all identities to compute a result for each dimension for each model. For race, we compute the average over stereotypical associations to all racial categories in our data (White, Black, Middle Eastern, Hispanic, and Asian).\nFinally, to evaluate cross-domain performance, we compare projections from our Wikipedia trained models to these survey data, and compare projections from Twitter bio models on gender stereotypes for a list of 343 occupational identities from Bolukbasi et al. (2016)." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Predictive Evaluation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_1", "fig_2" ], "heading": "Results", "publication_ref": [ "b21" ], "table_ref": [], "text": "The Hybrid S-BERT model consistently outperforms all other models on all three evaluation metrics on the main test dataset. This can be seen in Figure 2, which shows performance on our three metrics separately for Twitter and Wikipedia bios.\nFigure 2 also reveals that the next best model, in all cases, was the Entity Only model, and that the Hybrid BERT model does not show the same jump in performance relative to the baseline BERT model that the Hybrid S-BERT model does. Finally, we see that the baseline S-BERT model outperforms the baseline BERT model. These findings collectively suggest that performance gains cannot only be attributed to fine-tuning on in-domain language, but instead that our contrastive learning setup was effective and that Sentence-BERT is indeed the more effective initial model for fine-tuning.\nFigure 3 shows that the Hybrid S-BERT model also outperforms other models on generalizability test set, but only for Twitter bios. For Twitter, even when the Hybrid model is not exposed to any of the identities in X i r , it improves by nearly 100% over the standard S-BERT model in terms of average rank, and Top 1% accuracy increases by 6 percent in absolute terms. We do not see the same performance gains for the Wikipedia model. 
We believe this is related to the fact that Sentence-BERT was itself trained in part on Wikipedia, and thus that without additional fine-tuning on these identities, the Hybrid models tended to fall back on pre-trained embeddings which already contained relevant semantic knowledge.\nError Analysis Our understanding of the proposed models is improved by studying where errors occur. Here, we briefly present both quantitative and qualitative reflections on the major sources of error for the Entity Only and Hybrid S-BERT models for Twitter, although claims here generalize to the Wikipedia models as well. Quantitatively, Figure 4 shows that both models performed best, and roughly equally well, on the most frequent identities, but that differences appeared in how the models fared elsewhere.\nThe Entity Only model's ranking distribution (the marginal density plot on the right-hand side of Figure 4) was bimodal, with a large number of high (poor performance) and low (strong performance) ranks for test points. Perhaps unsurprisingly, we find qualitatively that the poor performance of the Entity Only model relative to the Hybrid S-BERT model largely came from an inability 1) to learn from compositional identities or 2) to leverage relevant external knowledge. These issues seemed to impact the model most for moderately frequent target identities, those appearing between 300-10,000 times in the training data. With respect to 1), for example, when provided the Twitter bio \"mother of two, restaurant owner, partly retired, hockey coach\"5 , the Entity Only model ranks the correct held-out identity, \"wife,\" among the least likely. In contrast, the Hybrid S-BERT model correctly ranks \"wife\" in the Top 1%. The core difference is that the Hybrid S-BERT model, but not the Entity Only model, leverages the gender stereotype implied by the \"mother\" portion of the phrase \"mother of two.\" With respect to 2), there were several cases where external knowledge from the pre-trained model benefited the Hybrid models. For example, the Hybrid models, but not the Entity Only models, were able to recognize the similarity between the identities \"follower of ISKSON\" (a Hindu religious organization) and \"proud Hindu.\" Both of these were relatively infrequently used. In contrast, relative to the Entity Only model, Hybrid models struggled with the most infrequent identities, in particular the roughly 18% of identifiers in the test set that occurred fewer than 300 times in the training data. In these cases, as in prior work entity-centric domain adaptive work (Field and Tsvetkov, 2019), the Hybrid models seemed to rely too heavily on knowledge from the pre-trained model and not enough to domain-relevant context. In contrast, the Entity-centric model seemed to benefit on the prediction task from overfitting to stereotypical knowledge for these rarer phrases. This is surprising, but is similar in certain ways to the findings of Wolfe and Caliskan (2021). The Hybrid models also struggled when presented with identities, such as Twitter-specific acronyms, that were likely rare in the DSM data, but more frequent on Twitter. Here, pre-training seemed to induce noise, leading the Hybrid models to predict somewhat randomly." }, { "figure_ref": [], "heading": "Dimension-based Evaluation", "publication_ref": [ "b27", "b3", "b6" ], "table_ref": [], "text": "The Hybrid S-BERT model outperforms all other models on average across all dimensions. 
Across all identities on all dimensions, the Hybrid S-BERT model had the correct ranking in 67.3% [66.4,68.2] of comparisons, versus 64.5%, 63.8%, and 60.7% for the Entity Only, Hybrid BERT, and S-BERT models, respectively. The other two baselines performed poorly, at nearly chance levels.\nWith respect to specific dimensions, as expected given prior work (Joseph and Morgan, 2020), nearly all of the models performed best on gender. The one exception was the Hybrid BERT model, which performed best on partisanship. Indeed, all three models trained on Twitter bios significantly outperformed the baselines on the partisanship dimension. Given the salience of politics on social media (Bail et al., 2018), this difference reflects the importance of ensuring measures of stereotypes are calibrated to the domain of interest.\nThis point is further reinforced by the fact that performance improvements on domain-relevant identities do not extend to out-of-domain identities. On the more traditional identities in the survey data from Bolukbasi et al. (2016), the vanilla S-BERT model correctly predicts 78.3% [76.3, 80.1] of rankings, compared to approximately 75% for both the Wikipedia and Twitter bio-trained Hybrid S-BERT models. Similarly, our models trained on Wikipedia bios perform on par with or worse than the baseline models when evaluated using the survey data we collect here." }, { "figure_ref": [], "heading": "Model", "publication_ref": [ "b34", "b27" ], "table_ref": [], "text": "Ten Nearest Neighbors to \"father of 6\":\nS-BERT: father of 5, father of 4, father of 3, father of five, father of 2, father of four, mother of 6, father of 1, father of three, father of one\nEntity Only (Twitter): father of 3, father, on gab, father of 4, president trump, father of 5, wawaw, cannabis advocate, husband to one, the9\nHybrid S-BERT (Twitter): blessed husband, father of five, father of four, husband of 1, granddad, husband of one, dad of 4, father of 5, buckeye fan, proud papa\nTable 1: Top ten nearest neighbors in V to the identity father of 6 for three models.\nFinally, with the exception of the race dimension, the Entity Only model actually exceeds the performance of the Hybrid S-BERT model. Deeper inspection reveals that the difference comes down to the relative infrequency of the seed words for certain racial categories, in particular for the Hispanic category. As suggested elsewhere (Lucy et al., 2022;Joseph and Morgan, 2020;Field and Tsvetkov, 2019), then, it would appear that the utility of semantic knowledge from pre-trained models is limited for identifying stereotypes of salient identities on salient dimensions of meaning for domain-relevant identities." }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [ "b41", "b2" ], "table_ref": [], "text": "As noted in Section 2, better measures of stereotypes can help us 1) in studying similarity-based social processes (e.g. homophily) and 2) to better understand presentation of the self and other in particular social contexts. Here, we present a brief case study on how comparisons across several models of the nearest neighbors for a single identity can help with the latter.\nTable 1 presents the ten nearest neighbors (in terms of cosine similarity) for the identity \"father of 6\" for the S-BERT, Entity Only, and Hybrid S-BERT models. Results for S-BERT are centered largely on linguistic overlap, and include two identities of particular interest: \"mother of 6\" and \"father of one\". 
In the former case, mothers of six are distinct from fathers of six on implied gender, arguably the most salient dimension of social stereotyping. In the latter, while \"father of one\" and \"father of 6\" have gendered and age-based similarities, those with one (or two, or three) children likely harbor a particular form of incredulity reserved for individuals who can handle six. Qualitatively, then, S-BERT produces nearest neighbors that are linguistically similar and socially relevant, but also some that imply distinct stereotypes on both salient (gender) and less salient (six kids are harder than one kid) dimensions.\nThe Entity Only model, in contrast, has a number of linguistically distinct identities. Instead, it leans towards more frequent phrases on Twitter, and seems to associate \"father of 6\" with right-leaning political identities like \"on gab\" and \"president trump.\"\nFinally, the Hybrid S-BERT model results appear, as would be expected, to fall somewhere in between. The phrases are similar to S-BERT in their linguistic similarities to \"father of six,\" but references to \"mother\" and fathers of fewer than four children are gone. Instead, results are tinged with religious (\"blessed husband\") and much more implicit but still gendered and partisan stereotypes; \"Buckeye fan\" refers to the Ohio State Buckeyes, a university with some of the most popular sports teams in the right-leaning state of Ohio. This aligns with empirical realities: Christian religious identities in the U.S. that would align with having more children (Mosher et al., 1992) also tend to align with right-leaning politics (Armaly et al., 2022).\nThis brief, qualitative case study surfaces the importance in Twitter bios of a rarely studied social dimension, religious affiliation, that requires further study, as well as reminding us of the myriad ways in which partisanship can be implicitly signaled and inferred (DellaPosta, 2020). Critically, our methods also then allow us, with higher fidelity than prior work, to dig further into these dimensions. For example, we could turn to investigating empirically whether or not \"Buckeye fans\" are indeed likely to be perceived as Republicans (using our models, at least, they are), and whether or not these implicit partisan stereotypes drive social processes like homophily on Twitter." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "The present work makes contributions in both data and method to our ability to measure stereotypes implied by the many ways in which people label themselves and others online. Via a brief case study, we show that these contributions lead to insights about subtle and rarely studied dimensions of stereotyping that drive self-presentation on Twitter.\nThese contributions come, however, with the caveat that the models we develop improve over baseline performance only within domain; models trained on Wikipedia data are not effective for Twitter data, and vice versa. Moreover, Wikipedia models seem to be limited in their improvements to identities observed in the training data. Given the context-dependent nature of identity and stereotyping, this is not necessarily surprising, but nonetheless requires care and additional study in future work. " }, { "figure_ref": [], "heading": "A Datasets", "publication_ref": [], "table_ref": [ "tab_3", "tab_4" ], "text": "A.1 Twitter Biographies\nThe center column of Table 2 provides summary statistics for the Twitter profile biography data we construct. 
We begin with a sample of 15,459,872 distinct Twitter bios from users who posted a tweet in 2020 that was found in the Decahose, and who are specified as English-language users by the Twitter API. In order to maintain a focus on culturally shared stereotypes, we limit the size of the vocabulary to identities used in at least 100 unique Twitter bios in the training set. Further, because we are interested in stereotypical associations between identities, we further remove Twitter bios that contain fewer than 2 identities. After these cleaning steps, our training data consists of 3,534,903 distinct bios with 22,516 unique identities in the vocabulary.\nThe initial 20% cut of the test dataset includes 3,091,975 bios. After cleaning the test dataset, which again includes removing bios with fewer than two identities and keeping only bios which contain at least one identity in V , we have 1,546,001 test bios. We then follow the approach outlined in the main text to produce the main test dataset and the generalizability test set. Note that the size of each of these splits can be larger than the size of the cleaned test dataset, because we can generate multiple instances from a given bio by randomly selecting different targets; i.e. we can generate multiple test instances out of each profile description by selecting multiple pairs of X i r and X i t . Finally, to provide further insight into the data, Table 3 showcases the top 7 identities in terms of overall frequency in the training data and 7 of the least frequent identities to show that the tail still contains meaningful phrases." }, { "figure_ref": [], "heading": "A.2 Wikipedia Profile Descriptions", "publication_ref": [ "b11" ], "table_ref": [ "tab_3", "tab_6" ], "text": "To collect Wikipedia biographies, we followed the approach specified in prior work (Costa-jussà et al., 2019). Namely, in October of 2021, we crawled all biographies listed on the English version of Wikipedia under the \"Living People\" category. We then use the method described in the main text to extract identities for each bio. We note here also that we attempted to use the entire page, rather than only the first sentence, but found a significant amount of noise in the document-length coreference resolution methods we evaluated for our particular research questions, and thus restricted ourselves to a higher precision, lower recall approach.\nAs with Twitter bios, we filter out identities unlikely to harbor culturally shared stereotypes. Because our extraction method is more straightforward in Wikipedia data, because there are fewer Wikipedia biographies, and because there are fewer concerns over user privacy for Wikipedia data, we find setting a lower threshold is appropriate; we remove identities that occur in fewer than three biographies, and include all others. The approach for building test datasets is the same as for the Twitter data, and Table 2 similarly summarizes the data statistics. Table 4 showcases the top 7 identities in terms of overall frequency and 7 identities from the tail of the distribution." }, { "figure_ref": [], "heading": "A.3 Survey Data", "publication_ref": [ "b10" ], "table_ref": [], "text": "We here provide three additional details on our survey data. First, it is of note that in contrast to prior work, we focus explicitly on stereotypes of social media users, asking, for example, \" from Joseph and Morgan (2020).\nSecond, we note that the dataset is a convenience sample of Americans obtained from Prolific. 
Recent work has suggested that the cost efficiency of convenience samples does not necessarily impact data quality (Coppock et al., 2018). The median age of our sample is 32. Of the 140 respondents, 88 reported their sex as female, 49 as male, and 4 noted other/did not provide. Finally, our sample, like Twitter, was overwhelmingly White; 105 (75%) of the sample reported as White." }, { "figure_ref": [], "heading": "B Modeling", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.1 Distributional Semantic Models (Baselines)", "publication_ref": [], "table_ref": [], "text": "For all DSM baseline models except Sentence-BERT, including BERT-base, RoBERTa-base and BERTweet-base, we experimented using open-source implementations from the Hugging Face transformers library.6 For the Sentence-BERT baseline, we used the mpnet-base pre-trained model and the implementation given by the open-source Sentence Transformers library.7" }, { "figure_ref": [], "heading": "B.2 Entity Only Model", "publication_ref": [], "table_ref": [], "text": "To select hyperparameters, we use 10% of the training data as a validation dataset for the task described in Section 4.2. The primary hyperparameter we tuned was whether to use a Skip-Gram or C-BOW model. We ultimately chose a Skip-Gram model for Twitter and a C-BOW model for Wikipedia, with the other hyperparameters as specified in the main text. Model training took under an hour on a personal laptop. We used the open-source implementation of word2vec in gensim8 for our experiments." }, { "figure_ref": [ "fig_4" ], "heading": "B.3 Hybrid S-BERT", "publication_ref": [], "table_ref": [], "text": "We here briefly provide additional intuition for our Hybrid S-BERT model. Given a dataset of pairs of identity phrases with a label denoting the anchor-positive pair or anchor-negative pairs, we input the pair into the pipeline shown in Figure 6 and extract the latent embeddings of each of the identities. Then we calculate the cosine similarity of the embeddings and backpropagate the mean squared error. " }, { "figure_ref": [ "fig_5" ], "heading": "B.4 Other Baseline Models", "publication_ref": [ "b42", "b42", "b27" ], "table_ref": [], "text": "In addition to the three baseline models discussed in the text, we also experimented with a pair of other sensible options. First, we expected that a DSM pretrained on Twitter would be a strong baseline to compare to, and thus experimented with additional models pretrained specifically on Twitter data (Nguyen et al., 2020). We use the BERT model fine-tuned on Twitter data proposed by Nguyen et al. (2020). They propose a BERT-base model fine-tuned using a corpus of 850M English Tweets. However, as shown in Figure 7, model performance was no better than the other, more widely used baseline DSMs we used in the main experiments.\nSecond, it seemed reasonable that by first restricting a baseline DSM to known dimensions of social meaning, we could improve their performance. Consequently, we considered baselines where we first projected down all baseline models into the core dimensions of meaning noted by Joseph and Morgan (2020) before the evaluation tasks. In both cases, however, our intuitions did not match empirical reality. These models failed to outperform the baselines used in the main text, and thus we restrict our analysis to the baselines discussed in the main text."
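To illustrate what this kind of dimension projection looks like in practice (both for these projected baselines and for the dimension-based evaluation in the main text), the sketch below builds a gender axis from two averaged pole embeddings and projects identity phrases onto it. The seed words and checkpoint are hypothetical, and the main text follows the projection method of Ethayarajh et al. (2019) rather than this simplified version.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")  # illustrative checkpoint

# Hypothetical seed words for the two poles of a gender axis; the actual
# dimension endpoints come from prior work and are not reproduced here.
pole_a = ["woman", "she", "her", "mother"]
pole_b = ["man", "he", "him", "father"]

def build_axis(a_words, b_words):
    a = model.encode(a_words).mean(axis=0)
    b = model.encode(b_words).mean(axis=0)
    diff = a - b
    return diff / np.linalg.norm(diff)

def project(phrases, axis_vec):
    emb = model.encode(phrases)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return emb @ axis_vec  # larger values sit closer to pole A

gender_axis = build_axis(pole_a, pole_b)
phrases = ["grad student", "father of 6", "wife"]
for phrase, score in zip(phrases, project(phrases, gender_axis)):
    print(f"{phrase:>12s}  {score:+.3f}")
```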
}, { "figure_ref": [], "heading": "B.5 Generating Embeddings for the Predictive Experiment", "publication_ref": [], "table_ref": [], "text": "Since X^i_r is a list of personal identifiers, the procedure for calculating its latent embedding L^i_r depends on the model. For the Entity Only model, we simply measure the average latent vector of all phrases in X^i_r according to (1). For the hybrid models, as well as the baseline contextualized language models discussed below, we stitch the phrases in X^i_r together with commas to create a sentence S^i_r. We then measure L^i_r according to Equation (2). Equivalently, this means that for the BERT-based models we take the embedding of the [CLS] token for pooling, and for the Sentence-BERT-based models we follow the original work and take the average of all token embeddings.\nL^i_r = \sum_{v \in X^i_r} emb(v) / |X^i_r|  (1)\nL^i_r = Pooling(LM(S^i_r))  (2)" } ]
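A minimal sketch of these two procedures is given below. The word2vec lookup table is replaced by a random placeholder and the checkpoint name is illustrative, so the snippet only demonstrates the pooling logic of Equations (1) and (2), not the trained models.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

identities = ["assistant professor", "bernie supporter", "#blacklivesmatter"]

# Equation (1): Entity Only model -- average the phrase vectors.
# `entity_vectors` stands in for the learned word2vec lookup table.
entity_vectors = {p: np.random.randn(768) for p in identities}  # placeholder
L_r_entity = np.mean([entity_vectors[p] for p in identities], axis=0)

# Equation (2): transformer models -- encode the comma-joined bio and pool.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased")
sentence = ", ".join(identities)
with torch.no_grad():
    out = lm(**tok(sentence, return_tensors="pt"))
L_r_cls = out.last_hidden_state[0, 0]            # [CLS] pooling (BERT-style)
L_r_mean = out.last_hidden_state[0].mean(dim=0)  # mean pooling (S-BERT-style)
```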
Social media users on sites like Twitter, Instagram, and TikTok use the profile description, or bio, field of user profiles to present themselves to the world. In contrast to the "offline" world, where social context often encourages us to adopt a single identity, the profile description is a free-text field in which users are encouraged to present the self using multiple, sometimes conflicting, social identities. While sociologists, social psychologists, sociolinguists, and, increasingly, computational social scientists have developed a large and growing array of methods to estimate the meaning of individual social identities, little work has attended to the ways in which social meanings emerge from the collections of social identities present in social media bios. The present work proposes and evaluates three novel, identity-based methods to measure the social dimensions of meaning expressed in Twitter bios. We show that these models outperform reasonable baselines with respect to 1) predicting which sets of identities are more likely to co-occur within a single biography and 2) quantifying perceptions of entire social media biographies along salient dimensions of social meaning on Twitter, in particular partisanship. We demonstrate the utility of our method in a computational social science setting by using model outputs to better understand how self-presentation along dimensions of partisanship, religion, age, and gender is related to the sharing of URLs on Twitter from low- versus high-quality news sites.
Measuring Social Dimensions of Self-Presentation in Social Media Biographies with an Identity-based Approach
[ { "figure_caption": "Figure 1 :1Figure 1: A visualization of the desired embedding space.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: Performance of each model (x-axis) on each of our three outcome metrics (separate plot rows) for models trained on both Twitter and Wikipedia biographies (separate columns) for the main test set. Note that for rankings, lower is better.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance of each model (x-axis) on each of our three outcome metrics (separate plot rows) for models trained on both Twitter and Wikipedia (separate columns) for the generalizability test set.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: On average, across all identities, the percent of other identities that were correctly ranked above or below it (y-axis) on the given dimension of stereotype (separate colored lines) for each model (x-axis). Error bars are 95% bootstrapped confidence intervals.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Training procedure for contrastive learning with regression objective function.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Average rank, log softmax score and top 1 percent accuracy of the target PI given by Bertweet model and all of the projection models derived from main proposed models", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Summary statistics for the entity-centric datasets we develop", "figure_data": "identityNumberoftimesappearedshe352,655her308,829he144,845him144,845they353,4903writer67,824blm63,388mixer streamer freak100published photographer 100sophomore100micah 6:8100public health specialist 100britishindependence100vikings fan100", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Examples of the most and (some of the) least frequent identities in the Twitter dataset", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Examples of the most and (some of the) least frequent identities in the Wikipedia dataset", "figure_data": "", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" } ]
Navid Madani; Rabiraj Bandyopadhyay; Briony Swire-Thompson; Michael Miller Yoder; Kenneth Joseph
[ { "authors": "Jisun An; Haewoon Kwak; Yong-Yeol Ahn", "journal": "", "ref_id": "b0", "title": "SemAxis: A Lightweight Framework to Characterize Domain-Specific Word Semantics Beyond Sentiment", "year": "2018" }, { "authors": "Maria Antoniak; David Mimno; Karen Levy", "journal": "", "ref_id": "b1", "title": "Narrative Paths and Negotiation of Power in Birth Stories", "year": "2019" }, { "authors": "T Miles; David T Armaly; Adam M Buckley; Enders", "journal": "Political Behavior", "ref_id": "b2", "title": "Christian Nationalism and Political Violence: Victimhood, Racial Identity, Conspiracy, and Support for the Capitol Attacks", "year": "2022" }, { "authors": "Christopher A Bail; Lisa P Argyle; Taylor W Brown; John P Bumpus; M B Haohan Chen; Jaemin Fallin Hunzaker; Marcus Lee; Friedolin Mann; Alexander Merhout; Volfovsky", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b3", "title": "Exposure to opposing views on social media can increase political polarization", "year": "2018" }, { "authors": "David Bamman; Noah A Smith", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b4", "title": "Unsupervised discovery of biographical structure from text", "year": "2014" }, { "authors": "Joseph Berger; Bernard P Cohen; Morris Zelditch", "journal": "American Sociological Review", "ref_id": "b5", "title": "Status characteristics and social interaction", "year": "1972" }, { "authors": "Tolga Bolukbasi; Kai-Wei Chang; James Y Zou; Venkatesh Saligrama; Adam T Kalai", "journal": "", "ref_id": "b6", "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", "year": "2016" }, { "authors": "Aylin Caliskan; Pimparkar Parth Ajay; Tessa Charlesworth; Robert Wolfe; Mahzarin R Banaji", "journal": "", "ref_id": "b7", "title": "Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics", "year": "2022" }, { "authors": "Aylin Caliskan; Molly Lewis", "journal": "", "ref_id": "b8", "title": "Social biases in word embeddings and their relation to human cognition", "year": "2020" }, { "authors": "Ch-Wang Sky; David Jurgens", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Using sociolinguistic variables to reveal changing attitudes towards sexuality and gender", "year": "2021" }, { "authors": "Alexander Coppock; Thomas J Leeper; Kevin J Mullinix", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b10", "title": "Generalizability of heterogeneous treatment effect estimates across samples", "year": "2018" }, { "authors": "Marta R Costa-Jussà; Pau Li Lin; Cristina España-Bonet ", "journal": "", "ref_id": "b11", "title": "GeBioToolkit: Automatic Extraction of Gender-Balanced Multilingual Corpus of Wikipedia Biographies", "year": "2019" }, { "authors": "Daniel Dellaposta", "journal": "American Sociological Review", "ref_id": "b12", "title": "Pluralistic Collapse: The \"Oil Spill\" Model of Mass Opinion Polarization", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b13", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Kawin Ethayarajh; David Duvenaud; Graeme Hirst", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Understanding Undesirable Word Embedding Associations", "year": "2019" }, { "authors": "Anjalie Field; Gayatri Bhat; Yulia Tsvetkov", "journal": 
"Proceedings of the International AAAI Conference on Web and Social Media", "ref_id": "b15", "title": "Contextual Affective Analysis: A Case Study of People Portrayals in Online #MeToo Stories", "year": "2019" }, { "authors": "Anjalie Field; Su Lin Blodgett; Zeerak Waseem; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "A Survey of Race, Racism, and Anti-Racism in NLP", "year": "2021" }, { "authors": "Anjalie Field; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Entity-Centric Contextual Affective Analysis", "year": "2019" }, { "authors": "Anjalie Field; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Unsupervised Discovery of Implicit Gender Bias", "year": "2020" }, { "authors": "Nikhil Garg; Londa Schiebinger; Dan Jurafsky; James Zou", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b19", "title": "Word embeddings quantify 100 years of gender and ethnic stereotypes", "year": "2018" }, { "authors": "Eduardo Graells-Garrido; Mounia Lalmas; Filippo Menczer", "journal": "ACM Press", "ref_id": "b20", "title": "First Women, Second Sex: Gender Bias in Wikipedia", "year": "2015" }, { "authors": "Wei Guo; Aylin Caliskan", "journal": "Association for Computing Machinery", "ref_id": "b21", "title": "Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases", "year": "2021" }, { "authors": "David R Heise", "journal": "The Journal of Mathematical Sociology", "ref_id": "b22", "title": "Affect control theory: Concepts and model", "year": "1987" }, { "authors": "Miles Hewstone; Mark Rubin; Hazel Willis", "journal": "Annual review of psychology", "ref_id": "b23", "title": "Intergroup bias", "year": "2002" }, { "authors": "James L Hilton; William Von Hippel", "journal": "Annual Review of Psychology", "ref_id": "b24", "title": "Stereotypes", "year": "1996" }, { "authors": "Alexander Miserlis Hoyle; Lawrence Wolf-Sonkin; Hanna Wallach; Isabelle Augenstein; Ryan Cotterell", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Unsupervised Discovery of Gendered Language through Latent-Variable Modeling", "year": "2019" }, { "authors": "Shanto Iyengar; Sean J Westwood", "journal": "American Journal of Political Science", "ref_id": "b26", "title": "Fear and loathing across party lines: New evidence on group polarization", "year": "2015" }, { "authors": "Kenneth Joseph; Jonathan H Morgan", "journal": "", "ref_id": "b27", "title": "When do word embeddings accurately reflect surveys on our beliefs about people", "year": "2020" }, { "authors": "Kenneth Joseph; Jonathan Howard; Morgan ", "journal": "The Journal of Mathematical Sociology", "ref_id": "b28", "title": "Friend or foe: A review and synthesis of computational models of the identity labeling problem", "year": "2021" }, { "authors": "Kenneth Joseph; Wei Wei; Kathleen M Carley", "journal": "", "ref_id": "b29", "title": "Exploring patterns of identity usage in tweets: A new problem, solution and case study", "year": "2016" }, { "authors": "Austin C Kozlowski; Matt Taddy; James A Evans", "journal": "American Sociological Review", "ref_id": "b30", "title": "The Geometry of Culture: Analyzing the Meanings of Class through Word Embeddings", "year": "2019" }, { "authors": "Keita Kurita; Nidhi Vyas; Ayush Pareek; Alan W Black; Yulia Tsvetkov", "journal": "", "ref_id": "b31", "title": "Measuring bias 
in contextualized word representations", "year": "2019" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b32", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Li Lucy; Dorottya Demszky; Patricia Bromley; Dan Jurafsky", "journal": "AERA Open", "ref_id": "b33", "title": "Content Analysis of Textbooks via Natural Language Processing: Findings on Gender, Race, and Ethnicity in Texas U.S. History Textbooks", "year": "2020" }, { "authors": "Li Lucy; Divya Tadimeti; David Bamman", "journal": "", "ref_id": "b34", "title": "Discovering Differences in the Representation of People using Contextualized Semantic Axes", "year": "2022" }, { "authors": "Neil J Mackinnon; David R Heise", "journal": "Palgrave Macmillan", "ref_id": "b35", "title": "Self, Identity, and Social Institutions", "year": "2010" }, { "authors": "Noah P Mark; Lynn Smith-Lovin; Cecilia L Ridgeway", "journal": "American Journal of Sociology", "ref_id": "b36", "title": "Why do nominal characteristics acquire status value? A minimal explanation for status construction", "year": "2009" }, { "authors": "M Mcpherson; Lynn Smith-Lovin; J Cook", "journal": "Annual Review of Sociology", "ref_id": "b37", "title": "Birds of a Feather: Homophily in Social Networks", "year": "2001" }, { "authors": "Julia Mendelsohn; Yulia Tsvetkov; Dan Jurafsky", "journal": "Frontiers in artificial intelligence", "ref_id": "b38", "title": "A framework for the computational linguistic analysis of dehumanization", "year": "2020" }, { "authors": "Tomas Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b39", "title": "Efficient estimation of word representations in vector space", "year": "2013" }, { "authors": "George A Miller; Walter G Charles", "journal": "Language and cognitive processes", "ref_id": "b40", "title": "Contextual correlates of semantic similarity", "year": "1991" }, { "authors": "William D Mosher; Linda B Williams; David P Johnson", "journal": "Demography", "ref_id": "b41", "title": "Religion and fertility in the United States: New patterns", "year": "1992" }, { "authors": "Thanh Dat Quoc Nguyen; Anh Tuan Vu; Nguyen", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "BERTweet: A pre-trained language model for English tweets", "year": "2020" }, { "authors": "Arjunil Pathak; Navid Madani; Kenneth Joseph", "journal": "", "ref_id": "b43", "title": "A method to analyze multiple social identities in twitter bios", "year": "2021" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b44", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" } ]
[ { "formula_coordinates": [ 14, 136.51, 330.65, 153.35, 27.15 ], "formula_id": "formula_0", "formula_text": "L i r = v∈X i r (v)/|X i r |(1)" } ]
10.48550/arXiv.1810.04805
[ { "figure_ref": [], "heading": "I. Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b3", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16" ], "table_ref": [], "text": "In recent years, deep learning has revolutionized the field of Natural Language Processing (NLP) with the development of powerful sentence representation techniques like sentence embeddings. These techniques enable NLP models to capture contextual information about words and their relationships within sentences, making them useful for various artificial intelligence (AI) applications such as semantic search, semantic textual similarity (STS), sentiment analysis, and machine translation.\nTwo popular approaches for learning sentence embeddings are supervised and unsupervised learning. Supervised learning methods exploit labels for sentence pairs which provide the information about the relation between the sentences, while unsupervised methods rely on large amounts of unannotated data to learn sentence representations without explicit guidance.\nSupervised methods include the well-known Sentence Bidirectional Encoder Representations from Transformers (SBERT) [1], which uses Siamese [2] and triplet network structures to derive semantically meaningful sentence embeddings. High-quality sentence embeddings can be derived via supervised training; however, the labeling cost is a major concern in practice, especially for specialized domains. In contrast, unsupervised methods do not need data labels, and have been dominant in sentence embedding learning. There are several types of unsupervised methods, including flow-based, contrastive learning, denoise autoencoder, and prompt-based methods. Flow-based methods include BERT-flow [3] and BERT-whitening [4]. BERT-flow transforms the BERT [5] sentence embedding distribution into a smooth and isotropic Gaussian distribution through normalizing flow [6]. BERT-whitening [4] uses a whitening post-processing method to transform the BERT-based sentence to a standard orthogonal basis while reducing its size.\nContrastive learning methods are popular in sentence embedding learning. The Contrastive Framework for Self-Supervised SEntence Representation Transfer (ConSERT) adopts contrastive learning to fine-tune BERT in an unsupervised way. ConSERT solves the collapse issue [7] of BERT-derived sentence representations to make them more applicable for downstream tasks. Contrastive Tension (CT) [8] treats identical and different sentences as positive and negative pairs and constructs the training objective as a noise-contrastive task between the final layer representations of two independent models, in turn forcing the final layer representations suitable for feature extraction. The Simple Contrastive Learning of Sentence Embeddings (SimCSE) [9] uses contrastive learning to learn sentence embedding from either unlabeled or labeled datasets. SimCSE uses dropout to create identical sentence pairs. Enhanced SimCSE (ESimCSE) [10] further improves the unsupervised learning capability of SimCSE by carefully crafting positive and negative pairs. Difference-based Contrastive Learning for Sentence Embeddings (DiffCSE) [11] learns sentence embeddings from the difference between an original and edited sentence, where the edited sentence is created by stochastically masking out the original sentence and then sampling from a masked language model. 
Information-aggregated Contrastive learning of Sentence Embeddings (InfoCSE) [12] also derives the sentence embeddings with an additional masked language model task and a well-designed network. Contrastive learning for unsupervised Sentence Embedding with Soft Negative samples (SNCSE) [13] takes the negation of original sentences as soft negative samples and adds Bidirectional Margin Loss (BML) into the traditional contrastive learning framework. The Entity-Aware Contrastive Learning of Sentence Embedding (EASE) [14] learns sentence embeddings via contrastive learning between sentences and their related entities. Contrastive learning with Prompt-derived Virtual semantic Prototypes (ConPVP) [15] constructs virtual semantic prototypes for each instance, and derives negative prototypes by using the negative form of the prompts. ConPVP uses a prototypical contrastive loss to drive the anchor sentence embedding closer to its corresponding semantic prototypes, and further away from the negative prototypes and the prototypes of other sentences.\nDenoising autoencoders and prompts are also used for unsupervised sentence representation learning. For example, Transformers and Sequential Denoising AutoEncoder (TSDAE) [16] was designed to encode corrupted sentences into fixed-sized embedding vectors and then let the decoder reconstruct the original sentences from this sentence embedding in an unsupervised way. PromptBERT [17] uses prompts to improve BERT sentence embeddings.\nNote that the models above were trained on general corpora without considering specific domains, which results in poor performance when they are applied directly to domains like aviation. This work seeks to resolve this issue by tailoring pretrained sentence transformers for the aviation domain. Aviation text data are characterized by numerous intricacies like technical jargon, unconventional grammar, and inconsistent abbreviations. In addition, aviation text data have no labels. With those limitations in mind, we designed a two-stage approach comprising pre-training and fine-tuning. We leverage TSDAE during pre-training to enhance the base model's capabilities before refining it further via fine-tuning on the Natural Language Inference (NLI) dataset. By doing so, we achieve better performance than general-purpose pre-trained sentence transformers while minimizing overfitting concerns. Our experiments demonstrate the efficacy of our technique, paving the way for more sophisticated NLP solutions for the aviation sector. We hope that our findings foster further investigation in this promising direction.\nThe remainder of this paper is organized as follows: Section II gives a short introduction to the input data sources used in this research. Section III provides details of our adaptation modeling process. The results are shown in Section IV. Finally, we conclude in Section V." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "II. Data Sources and Pre-processing", "publication_ref": [ "b17", "b18" ], "table_ref": [], "text": "In aviation, various types of text data are accumulated to support safety and daily operations, as depicted in Fig. 1. For example, the Federal Aviation Administration (FAA) has a Comprehensive Electronic Data Analysis and Reporting (CEDAR) database [18], which provides access to several principal aviation safety data and information sources. 
The Electronic Occurrence Report (EOR) [19] provides an alert identified by an automated system such as Traffic Analysis and Review Program (TARP) or Operational Error Detection Patch (OEDP) that automatically uploads into the CEDAR tool. The Mandatory Occurrence Report (MOR) [19] reports an occurrence involving air traffic services for which collecting associated safety-related data and conditions is mandatory. Notices to Air Men (NOTAM) [20] are electronic communications to alert aircraft pilots of potential hazards along a flight route or at a location that could affect the safety of the flight. METeorological Aerodrome Report (METAR) [21] reports hourly airport surface weather observations. These datasets can generally be classified into two main categories based on their linguistic characteristics: domain-specific and everyday language (see Fig. 1). The first group consists of texts written in specialized language often containing technical terms, abbreviations, and acronyms commonly used within the aviation industry, as shown in Table 1. In contrast, the second category encompasses texts that adhere to standard writing conventions without excessive use of jargon or unusual abbreviations. " }, { "figure_ref": [], "heading": "Table 1 Abbreviated aviation text data example", "publication_ref": [], "table_ref": [], "text": "Our study focuses on analyzing domain specific texts. Given this focus, we chose the Digital Automatic Terminal Information Service (DATIS) as our primary training data source because it consists exclusively of abbreviated texts from the aviation domain. Since DATIS lacks labels, making supervised fine-tuning impossible, we decided to supplement it with a Natural Language Inference (NLI) dataset. The NLI dataset serves as input during the fine-tuning process, helping us overcome potential overfitting issues. In the subsequent sections, we will describe both datasets in more detail. " }, { "figure_ref": [], "heading": "A. Digital Automatic Terminal Information Service (DATIS) Dataset", "publication_ref": [ "b19", "b20" ], "table_ref": [ "tab_0" ], "text": "DATIS systems are widely utilized in busy airports to disseminate information quickly and efficiently [22]. Supported by ARINC [23], DATIS digitally transmits essential Air Traffic Information System (ATIS) notifications, presenting them in an easily comprehensible, written form to flight crews. By doing so, DATIS supports safe and efficient aircraft operation in challenging aeronautical environments.\nDATIS communications primarily relay important airport circumstances, such as available landing and departing runways, current meteorological updates, runway closures, taxiway closures, malfunctioning equipment, surface conditions like ice, and other relevant alerts about birds, construction cranes, drones, lasers, etc. This information is combined into a centralized dataset with associated metadata, including timestamps, originating sources, and event dates. This integrated view enables researchers to explore patterns in DATIS usage and assess its effectiveness for various purposes.\nData residing within the MITRE DATIS archive come directly from the Federal Aviation Administration (FAA) via ARINC. Hourly updates take place around the clock, with a one-hour time lag relative to live events. MITRE's database maintains files containing 300 to 400 entries per hour. Table 2 shows examples extracted directly from the logs. 
This information is a crucial resource for subsequent analysis and investigations related to the use of DATIS information within complex aeronautic contexts. To gain an in-depth understanding of the DATIS dataset, we performed exploratory data analysis (EDA) for the year 2022. This allowed us to assess the characteristics of the data, identify patterns and trends, and determine any potential issues that might affect our analysis and interpretation. Through this process, we were able to obtain valuable insights into the properties of the data and develop informed hypotheses about its structure. Our findings from this EDA will serve as a foundation for further analysis and modeling efforts. As shown in Fig. 2, the EDA analysis entailed examining 208 airports featured in the 2022 DATIS dataset. Notably, we observed variations in reporting frequency among the airports, with some updating every 20-30 minutes and others updating their messages irregularly. For example, Hong Kong International Airport did not generate any additional datasets after February 2022. Additionally, there are three primary categories of DATIS messages: combined, arrival, and departure. Smaller airports frequently integrate both arrival and departure details into a single consolidated message, while larger airports, like the Hartsfield-Jackson Atlanta International Airport (ATL), generate separate messages for arrival and departure information." }, { "figure_ref": [ "fig_2" ], "heading": "Fig. 2 DATIS airports in 2022.", "publication_ref": [ "b21", "b22" ], "table_ref": [ "tab_0" ], "text": "As raw DATIS messages are manually entered by air traffic controllers, they can often contain transcription mistakes. Such errors may result from misspellings, inconsistent abbreviation (e.g., interchangeable use of RY, RWY, or RUNWAY), formatting irregularities (e.g., RWY32L, 18 L, or NOSIG=), improper grammar, extraneous spaces, or omissions. To ensure successful model training using these messages as input, one must thoroughly scrub and cleanse the data prior to analysis.\nWe developed a set of error correction rules summarized in the green section of Fig. 3. These rules use Python's re module [24] to locate specific patterns and make corrections where appropriate. As shown in Table 3, the preprocessing steps lead to cleaner and better organized data, resulting in a significant improvement over the raw messages presented in Table 2. The enhanced quality of the data allows for more accurate and efficient processing and analysis, ultimately leading to better outcomes. These improvements highlight the importance of effective preprocessing techniques when working with text data. After that, we employed the spaCy library [25] to segment DATIS messages into individual sentences, allowing us to gather a corpus consisting of roughly 2,624,012 distinct sentences drawn from the 2022 data files. These sentences constitute our training dataset for future machine learning initiatives. " }, { "figure_ref": [], "heading": "B. Natural Language Inference (NLI) Dataset", "publication_ref": [ "b23", "b24" ], "table_ref": [], "text": "Natural Language Inference (NLI) involves assessing the truth value of hypotheses based on provided premises. Specifically, NLI categorizes each hypothesis as true (entailment), false (contradiction), or neutral (undetermined). For this study, we obtained the NLI dataset from https://sbert.net/datasets/AllNLI.tsv.gz. 
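Before turning to the NLI data, the DATIS cleaning and segmentation described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the regular-expression rules shown are a small, hypothetical subset of the rule set summarized in Fig. 3, and the blank spaCy pipeline with a rule-based sentencizer is an assumption rather than the exact configuration used.

import re
import spacy

# Hypothetical examples of the kind of correction rules described above;
# the real rule set is larger and tuned to observed transcription errors.
CORRECTIONS = [
    (re.compile(r"\b(?:RY|RWY)\s*(\d{1,2}\s?[LRC]?)\b"), r"RWY \1"),  # unify runway abbreviations
    (re.compile(r"=\s*"), " "),                                        # drop stray '=' (e.g., 'NOSIG=')
    (re.compile(r"\.{2,}"), ". "),                                     # collapse '...' into '.'
    (re.compile(r"\s{2,}"), " "),                                      # collapse repeated spaces
]

def clean_datis(message: str) -> str:
    text = message.strip()
    for pattern, repl in CORRECTIONS:
        text = pattern.sub(repl, text)
    return text.strip()

# Sentence segmentation with a lightweight rule-based sentencizer.
nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")

def split_sentences(message: str):
    return [s.text.strip() for s in nlp(clean_datis(message)).sents if s.text.strip()]

corpus = split_sentences("RY 7R, 7L APPROACHES IN USE..  TRL 60. NOSIG=")

Writing one cleaned sentence per line to a text file produces the kind of corpus used as input for the pre-training stage described in Section III.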
This collection contains the union of the Stanford Natural Language Inference (SNLI) [26] and MultiNLI [27] datasets, resulting in a comprehensive resource with 961,725 records. Having readied the necessary datasets, we proceeded to the next step of model training, detailed in the next section." }, { "figure_ref": [ "fig_3" ], "heading": "III. Modeling Method", "publication_ref": [ "b15" ], "table_ref": [], "text": "DATIS text data have no labels. With that limitation in mind, we followed the standard training paradigm of pre-training followed by fine-tuning (see Fig. 4). During pre-training, we used Transformers and Sequential Denoising AutoEncoder (TSDAE) to enhance the base model's capabilities on our aviation dataset. We chose TSDAE because of its relatively better performance reported in [16]. For fine-tuning, we used SBERT to tune the sentence transformers with the NLI dataset. This ensures that we achieve better performance than general-purpose pre-trained sentence transformers while minimizing overfitting problems. " }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "A. TSDAE", "publication_ref": [ "b25", "b26" ], "table_ref": [], "text": "TSDAE is an unsupervised sentence embedding method; it uses a denoising autoencoder [28] as its architecture (see Stage 1 of Fig. 4). During training, TSDAE adds noise to the original sentence and then feeds it to an encoder, which transforms the corrupted sentence into a fixed-sized sentence embedding vector (indicated by yellow in Stage 1 of Fig. 4). Then, the decoder reconstructs the original sentence from this sentence embedding. A good reconstruction indicates that the sentence embedding from the encoder captures the sentence's semantics well. During inference, only the encoder is used for creating sentence embeddings.\nTSDAE modifies the conventional encoder-decoder transformer [29]: the key and value of the cross-attention are both confined to the sentence embedding. Formally, the modified cross-attention is\n$H^{(k)} = \mathrm{Attention}(H^{(k-1)}, [S^T], [S^T])$\n$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d}}\right)V$\nwhere $H^{(k)} \in \mathbb{R}^{t \times d}$ represents the decoder hidden states at time step $t$ at the $k$-th layer, $d$ is the dimension of the sentence embedding vector, $[S^T] \in \mathbb{R}^{1 \times d}$ is the sentence embedding vector, and $Q$, $K$, $V$ are the query, key, and value, respectively. TSDAE determined an effective approach for training based on three components: (1) using deletion with a deletion ratio of 0.6 as the input noise; (2) employing the output from the [CLS] token as a fixed-size sentence representation; and (3) tying encoder and decoder weights during training. This combination has proven to be highly successful in promoting learning." }, { "figure_ref": [ "fig_4" ], "heading": "B. Sentence-BERT (SBERT)", "publication_ref": [ "b0", "b27", "b28" ], "table_ref": [], "text": "The Sentence-BERT (SBERT) [1] model was developed by modifying the pre-trained BERT network [30]. SBERT involves training the model on a labeled dataset like NLI to generate sentence embeddings that are more accurate and efficient than those produced by standard BERT or RoBERTa [31] models. Specifically, SBERT uses a combination of Siamese and triplet network architectures to create semantically meaningful sentence representations, as shown in Fig. 5. Using SBERT can significantly decrease inference time, from approximately 65 hours with BERT or RoBERTa to just 5 seconds, without sacrificing accuracy. 
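To make the modified cross-attention above concrete, here is a minimal PyTorch sketch. It deliberately omits the learned projection matrices and multi-head structure of a full transformer layer; it is only meant to show that both the key and the value are confined to the single sentence embedding, which is an assumption-free restatement of the equation rather than the library's actual implementation.

import torch
import torch.nn.functional as F

def tsdae_cross_attention(h_prev: torch.Tensor, s_T: torch.Tensor) -> torch.Tensor:
    # h_prev: (t, d) decoder hidden states H^(k-1) over t decoding steps
    # s_T:    (1, d) fixed-size sentence embedding produced by the encoder ([CLS] output)
    d = s_T.shape[-1]
    scores = h_prev @ s_T.transpose(0, 1) / d ** 0.5   # (t, 1): one score per decoding step
    weights = F.softmax(scores, dim=-1)                # trivially 1.0, since there is a single key
    return weights @ s_T                               # (t, d): every step attends only to s_T

h = torch.randn(12, 384)   # e.g. 12 decoding steps, 384-dim embedding (MiniLM-sized, as an example)
s = torch.randn(1, 384)
out = tsdae_cross_attention(h, s)   # (12, 384)

With a single key/value pair the attention weights are degenerate (all ones), which is exactly the bottleneck TSDAE exploits: the decoder can reconstruct the sentence only from the pooled embedding, forcing that embedding to carry the sentence's semantics.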
We fine-tuned the sentence transformers with the labeled NLI dataset to overcome potential overfitting problems resulting from stage 1 of pre-training. " }, { "figure_ref": [], "heading": "IV. Results", "publication_ref": [], "table_ref": [], "text": "In this section, we present the results of our experiments in applying the aviation sentence transformer to several tasks including STS, clustering, semantic search, and paraphrase mining." }, { "figure_ref": [], "heading": "A. Pretrained Sentence Transformers STS Evaluation", "publication_ref": [], "table_ref": [ "tab_1", "tab_1", "tab_1" ], "text": "We tested the suitability of pre-trained general-purpose sentence transformer models from the Hugging Face website (https://huggingface.co./sentence-transformers) for use on our selected aviation domain text data. We sought to find the best performing model based on its ability to discern differences between sets of similar or dissimilar sentences. For evaluation purposes, we constructed four test cases in the aviation domain, and computed the cosine similarity score for each sentence pair.\nWe compiled the resulting scores into Table 4. The bert-base-cased model did not effectively differentiate between sentences in the aviation corpus. As such, it did not meet our requirements, so we excluded it from further consideration. The bert-base-nli-mean-tokens model also fell short of expectations due to its tendency to treat disparate sentences (the Index 2 row in Table 4) with a high cosine similarity score. Conversely, the Index 3 row in Table 4 had highly comparable phrasing, thus providing an ideal test case to measure the capability of the remaining models to generate analogous output. All-MiniLM-L6-v2, all-distilroberta-v1, and all-mpnet-base-v2 also underperformed in this case and were therefore eliminated. Therefore, all-MiniLM-L12-v2 is the final candidate for aviation domain adaptation. The following sections contain additional details about the adaptation experiments and their corresponding results. " }, { "figure_ref": [], "heading": "B. Experiment Settings", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the training environment used for our model. Table 5 lists our hardware equipment setup. We cloned the entire sentence transformers development package from https://github.com/UKPLab/sentencetransformers. These resources enabled us to effectively train our model and achieve the desired results." }, { "figure_ref": [], "heading": "Table 5 Experimental hardware environment", "publication_ref": [ "b29" ], "table_ref": [], "text": "Prior to beginning the training process, we prepared the DATIS training data by formatting each sentence onto a separate line, as needed by the software package being used. We used <sentencetransformers/examples/unsupervised_learning/TSDAE/train_tsdae_from_file.py> as our training script, and we adjusted the training parameters according to those presented in the second column of Table 6. With this configuration, we began the stage 1 training phase.\nAfter completing stage 1, we proceeded to stage 2 of fine-tuning using NLI dataset, using the script <sentencetransformers/examples/training/nli/training_nli_v2.py> and the parameters listed in the third column of Table 6. The script uses the Multiple Negative Ranking Loss strategy [32] where entailment pairs are considered positive while contradictions are treated as hard negatives. 
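A condensed sketch of the two-stage recipe described above, using the sentence-transformers library, is shown below. The hyper-parameter values are placeholders rather than the exact settings of Table 6, and "datis_sentences.txt" and the NLI triplet list are assumed inputs prepared as described in Sections II and III; the actual training used the referenced example scripts.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, datasets

# Stage 1: TSDAE pre-training on unlabeled DATIS sentences (one sentence per line).
# Note: the default deletion-noise function relies on NLTK's punkt tokenizer.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")
with open("datis_sentences.txt", encoding="utf-8") as f:          # assumed file of cleaned sentences
    sentences = [line.strip() for line in f if line.strip()]
tsdae_data = datasets.DenoisingAutoEncoderDataset(sentences)       # adds deletion noise internally
tsdae_loader = DataLoader(tsdae_data, batch_size=8, shuffle=True, drop_last=True)
tsdae_loss = losses.DenoisingAutoEncoderLoss(
    model, decoder_name_or_path="sentence-transformers/all-MiniLM-L12-v2", tie_encoder_decoder=True)
model.fit(train_objectives=[(tsdae_loader, tsdae_loss)], epochs=1,
          scheduler="constantlr", optimizer_params={"lr": 3e-5}, weight_decay=0)

# Stage 2: fine-tuning on NLI with Multiple Negatives Ranking Loss, where each example is an
# (anchor, entailment, contradiction) triplet used as (query, positive, hard negative).
nli_examples = [
    InputExample(texts=["A plane is taxiing.", "An aircraft moves on the ground.", "The runway is empty."]),
    # ... in practice built from the AllNLI file
]
nli_loader = datasets.NoDuplicatesDataLoader(nli_examples, batch_size=8)
nli_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(nli_loader, nli_loss)], epochs=1, warmup_steps=100)

model.save("aviation-all-MiniLM-L12-v2")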
Every 10% of the training process, we evaluated the performance of the model on the STS benchmark dataset. When stage 2 was complete, the model was ready to be applied to practical tasks." }, { "figure_ref": [], "heading": "Table 6 Training parameter settings C. Adapted Sentence Transformer STS Evaluation", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "After completing the two-part training process, we applied the aviation variant of the sentence transformer, named aviation-all-MiniLM-L12-v2, to the same set of text data used in Table 4. The results, listed in Table 7, demonstrate that the adapted aviation-all-MiniLM-L12-v2 model outperforms the general-purpose all-MiniLM-L12-v2. This shows that the adaptation process effectively tailored the model for the domain-specific language patterns prevalent in aviation text." }, { "figure_ref": [], "heading": "Table 7 Adapted model performance comparison D. Clustering Results", "publication_ref": [ "b23" ], "table_ref": [ "tab_3" ], "text": "We next used the aviation-all-MiniLM-L12-v2 model to perform clustering on the DATIS sentences about NOTAM reports from January 1, 2022 to January 9, 2022. The resulting clusters are detailed in Table 8 and visualized using a t-Distributed Stochastic Neighbor Embedding (t-SNE) [26] plot in Fig. 6, which demonstrates that our adapted sentence transformer was able to identify meaningful patterns in the data. For instance, cluster 0 focuses on runway surface conditions (RSC), while cluster 2 highlights bird activities. Cluster 3 deals with equipment being out of service (OTS), cluster 4 pertains to tower operations, cluster 5 discusses closed taxiways, cluster 6 centers around runway closures, cluster 7 concerns hazardous weather situations, cluster 8 alerts pilots that the tower must call for release from other facilities before allowing them to depart, cluster 9 warns about possible threats from lasers shining into aircraft windows, and cluster 10 provides information on snow. Cluster 1 is a miscellaneous category, containing a broad range of uncommon messages. " }, { "figure_ref": [], "heading": "E. Semantic Search", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In addition to clustering, we used our newly adapted aviation-all-MiniLM-L12-v2 model to perform semantic searches. By providing a query sentence such as \"BIRD ACTIVITY IN THE VICINITY OF THE AIRPORT,\" the model rapidly identified the ten most similar sentences within the dataset based on their cosine similarity scores; the count column in Table 9 represents how many of the same sentences are included in the searched dataset. Notably, the use of our adapted model allowed for more precise and accurate retrieval of relevant sentences, reflecting its enhanced comprehension of domain-specific language patterns. Furthermore, it underscores the variety of language expressions in the aviation domain. " }, { "figure_ref": [], "heading": "F. Paraphrase Mining", "publication_ref": [ "b30" ], "table_ref": [], "text": "To perform paraphrase mining of DATIS messages, we again turned to our aviation-all-MiniLM-L12-v2 model. Unlike previous methods involving brute-force comparison, our approach uses the sentence transformer package to quickly and accurately identify duplicate content across larger datasets. Our implementation is guided by the principles introduced in [33]. Table 10 demonstrates the efficacy of this approach, where the scores represent cosine similarity values. 
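The semantic search and paraphrase mining experiments described here map directly onto utility functions of the sentence-transformers package. The sketch below is illustrative: the model path is the locally saved adapted model assumed above, and the three-sentence corpus is a placeholder for the DATIS sentence collection.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("aviation-all-MiniLM-L12-v2")    # assumed local path of the adapted model
corpus = ["BIRD ACTIVITY IN VCY OF ARPT", "RWY 26 LEFT CLSD", "TDWR OTS"]   # placeholder DATIS sentences

# Semantic search: top-10 most similar corpus sentences for a query, ranked by cosine similarity.
corpus_emb = model.encode(corpus, convert_to_tensor=True, normalize_embeddings=True)
query_emb = model.encode("BIRD ACTIVITY IN THE VICINITY OF THE AIRPORT",
                         convert_to_tensor=True, normalize_embeddings=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=10)[0]
for hit in hits:
    print(round(hit["score"], 4), corpus[hit["corpus_id"]])

# Paraphrase mining: finds near-duplicate pairs without brute-force pairwise re-encoding.
pairs = util.paraphrase_mining(model, corpus, top_k=1)
for score, i, j in pairs[:10]:
    print(round(score, 4), corpus[i], "<->", corpus[j])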
When score equals to 1, it means that two messages are identical. This refinement process enables us to streamline the detection of repetitive information while accounting for industry-specific jargon and nuances." }, { "figure_ref": [], "heading": "Table 10 DATIS message paraphrase mining examples", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "V. Summary", "publication_ref": [], "table_ref": [], "text": "This study describes our novel two-stage training approach utilizing TSDAE and SBERT models to adapt sentence transformers for use on aviation domain text datasets. Experimental evaluation demonstrates significant improvements in various NLP tasks such as STS, clustering, semantic search, and paraphrase mining from methods using generalpurpose sentence transformers. Specifically, the adapted model effectively parses DATIS messages, enabling updates regarding weather conditions and other critical landing and departure information to be processed more efficiently. Our experiment results confirm that the adapted model performs well in extracting comprehensible information from text that is dense with abbreviations and domain-specific jargon. Our ongoing research is focused on using the adapted model to support applications that can continuously check for spatial and temporal patterns in reported events to enhance situational awareness and enable proactive mitigation strategies for potential threats to aviation safety. Our proposed adaptation methodology could also be applied to other areas that use a lot of domain-specific language. The contents of this document reflect the views of the authors and do not necessarily reflect the views of the Federal Aviation Administration (FAA) or the Department of Transportation (DOT). Neither the FAA nor the DOT makes any warranty or guarantee, expressed or implied, concerning the content or accuracy of these views. " }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "The authors thank Dr. Jonathan Hoffman, Dennis Sawyer, Dr. Craig Wanke, Dave Hamrick, Dr. Tom Becher, Mike Robinson, Dr. Lixia Song, Erik Vargo, Matt Yankey, Mahesh Balakrishna, Huang Tang, Shuo Chen, Tao Yu, Michele Ricciardi, and Anahita Imanian of the MITRE Corporation for their support, valuable discussions, and insights." } ]
Learning effective sentence representations is crucial for many Natural Language Processing (NLP) tasks, including semantic search, semantic textual similarity (STS), and clustering. While multiple transformer models have been developed for sentence embedding learning, these models may not perform optimally when dealing with specialized domains like aviation, which has unique characteristics such as technical jargon, abbreviations, and unconventional grammar. Furthermore, the absence of labeled datasets makes it difficult to train models specifically for the aviation domain. To address these challenges, we propose a novel approach for adapting sentence transformers for the aviation domain. Our method is a two-stage process consisting of pre-training followed by fine-tuning. During pre-training, we use Transformers and Sequential Denoising AutoEncoder (TSDAE) with aviation text data as input to improve the initial model performance. Subsequently, we fine-tune our models using a Natural Language Inference (NLI) dataset in the Sentence Bidirectional Encoder Representations from Transformers (SBERT) architecture to mitigate overfitting issues. Experimental results on several downstream tasks show that our adapted sentence transformers significantly outperform general-purpose transformers, demonstrating the effectiveness of our approach in capturing the nuances of the aviation domain. Overall, our work highlights the importance of domain-specific adaptation in developing high-quality NLP solutions for specialized industries like aviation.
Adapting Sentence Transformers for the Aviation Domain
[ { "figure_caption": "Fig. 11Fig. 1 Aviation domain text data sources.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "W SPECI 162359 -EXPECT VECTORS FOR INDEPENDENT PARALLEL ILS APPROACH -RWY 26R 26L -NEW ATC SYSTEM IN OPERATION, EXPECT POSSIBLE DELAY -TRL 60 -RWY 26 LEFT CLSD FM 2100 TILL 0400 UTC ,,", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 33Fig. 3 DATIS dataset cleaning preprocessing. Table 3 Cleaned message data examples Cleaned Message QU ANPDAXA, .ANCATXA 010000, TIS, AD ANC OS CM2353, ANC ATIS INFO M 2353Z. 35006KT 2SM SNOW BKN017 OVC032 M06/M08 A2929 (TWO NINER TWO NINER) RMK SFC VIS 4. 7R, 7L APPROACH IN USE ARRIVING RWY 7R, 7L, DEPARTING RWY 7L, 33. NOTAMS. AD WIP SNOW REMOVAL ALL RWYS ALTERNATELY CLOSED. HAZD WX INFO FOR ANC AREA AVBL FM FSS. RWY 7R 5 5 5 2111Z, RWY 7L 5 5 5 2230Z, RWY 33 5 5 5 2112Z. ADVS YOU HAVE INFO M. QU ANPDAXA, .PYCONN2 010000, TIS, AD TNCA OS CT0000, TNCA ARR ATIS T 0000Z. TNCA 010000Z WIND RWY 11 TDZ 110/14KT END 100/14KT VIS 10KM CLD BKN 1800FT T27 DP22 QNH 1012HPA TREND NOSIG. EXP ILS/DME OR VISUAL APPROACH, RWY 11 IN USE. RWY CONDITION REPORT NOT AVBL. TRANSITION LEVEL FL40. NOTAM, ABA VOR/DME FREQ 112.5 MHZ OUT OF SER UFN. ADZ ON INITIAL CTC YOU HAVE INFO T.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 44Fig. 4 Aviation sentence transformer training pipeline.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 55Fig. 5 SBERT architecture with classification objective function [30].", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 t6Fig. 6 t-SNE plot of sentence embedding.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "ã2023 The MITRE Corporation. All Rights Reserved. Approved for Public Release, Distribution Unlimited. PRS Case 23-1705.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ".ANCATXA 010000\\r\\nTIS\\r\\nAD ANC /OS CM2353\\r\\n-ANC ATIS INFO M 2353Z. 35006KT 2SM -SN BKN017 OVC032 M06/M08 A2929 (TWO NINER TWO NINER) RMK SFC VIS 4. 7R, 7L APPROACHES IN USE.. LANDING RWY 7R, 7L, DEPARTING RWY 7L, 33. NOTAMS... AD WIP SNOW REMOVAL ALL RWYS ALTERNATELY CLSD.. HAZD WX INFO FOR ANC AREA AVBL FM FSS. RWY 7R 5 5 5 2111Z, RWY 7L 5 5 5 2230Z, RWY 33 5 5 5 2112Z. ...ADVS YOU HAVE INFO M.\\r\\n\\n ANPDAXA\\r\\n.PYCONN2 010000\\r\\nTIS\\r\\nAD TNCA/OS CT0000\\r\\n-\\r\\nTNCA ARR ATIS T\\r\\n0000Z\\r\\nEXP ILS/DME OR VISUAL APPROACH, RWY 11 IN USE.\\r\\nRUNWAY CONDITION REPORT NOT AVBL.\\r\\nTRANSITION LEVEL FL40.\\r\\nNOTAM, ABA VOR/DME FREQ 112.5 MHZ OUT OF SER UFN. 
\\r\\nTNCA 010000Z WIND RWY 11 TDZ 110/14KT END 100/14KT VIS 10KM CLD BKN 1800FT T27 DP22 QNH 1012HPA TREND NOSIG=\\r\\nADZ ON INITIAL CTC YOU HAVE INFO T.\\r\\n\\r\\n\\n", "figure_data": "", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Index Sentence1Sentence2all-bert-base-bert-all-all-all-MiniLM-nli-mean-base-MiniLM-distilroberta-mpnet-L6-v2tokenscasedL12-v2v1base-v20NOTAMS.NOTICE TO AIR0.1180.3060.7470.2070.3640.297MISSIONS.1TDWR OTS.RWY 2R GS OTS.0.5430.7880.8820.5800.6600.5722HAZDUS WXCLEARANCE0.2010.5740.9140.3110.2990.355INFO FOR PHXFREQUENCY ISAREA AVBL ON121.9.FSS FREQS.3BIRD ACTIVITYWARNING, BIRD0.7300.8900.9470.7560.6440.710INVOF ARPT.ACTIVITY IN VCYOF ARPT", "figure_id": "tab_1", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Sentence", "figure_id": "tab_3", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "", "figure_data": "QuerySentenceScoreCountBIRD ACTIVITY IN VCY OF ARPT BIRD ACTIVITY IN THE VCNTY OF THE ARPT0.9744 2BIRD ACTIVITY IN VCY OF ARPT BIRD ACTIVITY RPTD IN THE VC OF THE ARPT 0.9661 1BIRD ACTIVITY IN VCY OF ARPT BIRD ACTIVITY VC OF ARPT0.9604 2BIRD ACTIVITY IN VCY OF ARPT BIRD ACTIVITY VCNTY ARPT0.9596 2BIRD ACTIVITY IN VCY OF ARPT BIRD ACTIVITY VICINITY ARPT0.9067 2BIRD ACTIVITY IN VCY OF ARPT BIRD ACTIVITY VICINITY OF ARPT0.8973 1BIRD ACTIVITY IN VCY OF ARPT BIRD ACTIVITY INVOF ARPT0.8806 1BIRD ACTIVITY IN VCY OF ARPT BIRD ACTIVITY0.8518 2BIRD ACTIVITY IN VCY OF ARPT BIRD ACTIVITY VICINITY DAL ARPT0.8353 1BIRD ACTIVITY IN VCY OF ARPT BIRD ACTIVITY VICINITY ALB ARPT0.8196 2", "figure_id": "tab_4", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "This work was sponsored by MITRE's Independent Research and Development Program.", "figure_data": "NOTICEIdx1Idx2Message1Message2Score35971 35972 QU ANPDAXA, .CHIXCXA 050345, FFQU ANPDAXA, .CHIXCXA 050345, FF KANPXAAD,1.0000KANPXAAD, 050345 YMENATIS, ATIS YMEN050345 YMENATIS, ATIS YMEN K 050345. WIND:K 050345. WIND: 090/15 MAX XW 15 KTS090/15 MAX XW 15 KTS MAX TW 3 KTS VIS: GTMAX TW 3 KTS VIS: GT 10KM CLD: FEW03010KM CLD: FEW030 SCT042 TMP: 27 QNH: 1007.SCT042 TMP: 27 QNH: 1007. RWY: 17.RWY: 17.54515 52842 QU ANPDAXA, .YQMATXA 070527, TIS, ADQU ANPDAXA, .YQMATXA 070106, TIS, AD CYQM0.9635CYQM OS CZ0512, CYQM ATIS INFO Z 0500Z.OS CV0106, CYQM ATIS INFO V 0100Z. 33007KT07006KT 15SM SHSN BKN025 BKN04515SM BKN025 BKN040 M00/M04 A2982.M02/M05 A2991. APPROACH RNAV ZULUAPPROACH RNAV ZULU RWY 29. INFORMRWY 29. INFORM MONCTON CENTER ONMONCTON CENTER ON FREQUENCY 124.4 OFFREQUENCY 124.4 OF REQUESTEDREQUESTED APPROACH ON INITIAL CONTACT.APPROACH ON INITIAL CONTACT.ARRIVING AND DEPARTING RWY 29. RSC RWYARRIVING AND DEPARTING RWY 29. RSC06, RSC 6 6 6 10% ICE, 100% DRY, 100% ICE, VALIDRWY 06, RSC 6 6 6 10% ICE, 100% DRY, 100%AT 2329Z. RSC RWY 29, RSC 6 6 6 100% DRY, 100%ICE, VALID AT 2329Z. RSC RWY 29, RSC 6 6 6DRY, 10% ICE, VALID AT 2335Z. INFORM CYQM100% DRY, 100% DRY, 10% ICE, VALID ATATC ATIS V.2335Z. INFORM CYQM ATC ATIS Z.6592QU ANPDAXA, .BKKATXA 010010, TIS, ADQU ANPDAXA, .LASATXA 010018, TIS, AD LAS OS0.3117VTSS OS CA0000, VTSS ARR ATIS A 0012Z.CY2356, LAS ATIS INFO Y 2356Z. 24009KT 10SM0000Z WIND 100/4KT VIS 8000M FBL RA CLDFEW060 13/00 A2951 (TWO NINER FIVE ONE). ILSFEW CB 1800FT SCT 2000FT BKN 2500FT T23APPROACH RWY 26L, VISUAL APPROACH INDP23 QNH 1012HPA TREND NOSIG. RNP 08USE. ARRIVING RWYS 26L AND 19R. DEPARTING12312335 08 5 5 5 100/100/100 NR/NR/NRRWYS 26R, 19R AND 19L. 
SIMUL APPROACH TOWET/WET/WET. ADZ CONTROLLER WHENCROSSING AND PARALLEL RWYS IN USE,INITIAL CONTACT YOU HAVE INFO A.CONVERGING RWY OPERATIONS IN EFFECT.NOTAMS. TWY DELTA BETWEEN SIERRA ANDMIKE IS RESTRICTED TO MAX WINGSPAN 1 3 5FEET. HAZD WX INFO AVAILABLE ON HIWAS,FSS FREQ. GC COMBINED ON 121.1, HELICOPTORCONTROL OPEN ON 118.75. ADVS YOU HAVEINFO Y.", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" } ]
Liya Wang; Jason Chou; David Rouck; Alex Tien; Diane M Baumgartner
[ { "authors": "N Reimers; I Gurevych", "journal": "", "ref_id": "b0", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", "year": "2019-08-27" }, { "authors": "", "journal": "Wikipedia", "ref_id": "b1", "title": "Siamese neural network", "year": "2023-02-28" }, { "authors": "B Li; H Zhou; J He; M Wang; Y Yang; L Li", "journal": "", "ref_id": "b2", "title": "On the Sentence Embeddings from Pre-trained Language Models", "year": "2020-11-02" }, { "authors": "J Su; J Cao; W Liu; Y Ou", "journal": "", "ref_id": "b3", "title": "Whitening Sentence Representations for Better Semantics and Faster Retrieval", "year": "2021-03-28" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b4", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019-05-24" }, { "authors": "L Dinh; D Krueger; Y Bengio", "journal": "", "ref_id": "b5", "title": "NICE: Non-linear Independent Components Estimation", "year": "2015-04-10" }, { "authors": "V Kothapalli", "journal": "", "ref_id": "b6", "title": "Neural Collapse: A Review on Modelling Principles and Generalization", "year": "2023-04-11" }, { "authors": "F Carlsson; A C Gyllensten; E Gogoulou; E Y Hellqvist; M Sahlgren", "journal": "", "ref_id": "b7", "title": "Semantic Re-tuning with Contrastive Tension", "year": "2023-01" }, { "authors": "T Gao; X Yao; D Chen", "journal": "", "ref_id": "b8", "title": "SimCSE: Simple Contrastive Learning of Sentence Embeddings", "year": "2022-05-18" }, { "authors": "X Wu; C Gao; L Zang; J Han; Z Wang; S Hu", "journal": "", "ref_id": "b9", "title": "ESimCSE: Enhanced Sample Building Method for Contrastive Learning of Unsupervised Sentence Embedding", "year": "2022-09-11" }, { "authors": "Y.-S Chuang", "journal": "", "ref_id": "b10", "title": "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings", "year": "2022-04-21" }, { "authors": "X Wu; C Gao; Z Lin; J Han; Z Wang; S Hu", "journal": "", "ref_id": "b11", "title": "InfoCSE: Information-aggregated Contrastive Learning of Sentence Embeddings", "year": "2022-10-13" }, { "authors": "H Wang; Y Li; Z Huang; Y Dou; L Kong; J Shao", "journal": "", "ref_id": "b12", "title": "SNCSE: Contrastive Learning for Unsupervised Sentence Embedding with Soft Negative Samples", "year": "2022-02" }, { "authors": "S Nishikawa; R Ri; I Yamada; Y Tsuruoka; I Echizen", "journal": "", "ref_id": "b13", "title": "EASE: Entity-Aware Contrastive Learning of Sentence Embedding", "year": "2022-05-09" }, { "authors": "J Zeng; Y Yin; Y Jiang; S Wu; Y Cao", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Contrastive Learning with Prompt-derived Virtual Semantic Prototypes for Unsupervised Sentence Embedding", "year": "2022-12" }, { "authors": "K Wang; N Reimers; I Gurevych", "journal": "", "ref_id": "b15", "title": "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning", "year": "2021-09-10" }, { "authors": "T Jiang", "journal": "", "ref_id": "b16", "title": "PromptBERT: Improving BERT Sentence Embeddings with Prompts", "year": "2022-10-13" }, { "authors": "", "journal": "Federal Aviation Administration: NOTAM Search", "ref_id": "b17", "title": "", "year": "2023-05-06" }, { "authors": "N N W S ; -A W C Homepage", "journal": "", "ref_id": "b18", "title": "AWC -Aviation Weather Center", "year": "2023-05-06" }, { "authors": "", "journal": "", "ref_id": "b19", "title": "DATIS Dataset -mitrepedia", "year": 
"2023-04-30" }, { "authors": "", "journal": "", "ref_id": "b20", "title": "ARINCDirect", "year": "2023-04-30" }, { "authors": "", "journal": "", "ref_id": "b21", "title": "re -Regular expression operations", "year": "2023-04-30" }, { "authors": "", "journal": "", "ref_id": "b22", "title": "spaCy • Industrial-strength Natural Language Processing in Python", "year": "2023-05-10" }, { "authors": "", "journal": "", "ref_id": "b23", "title": "The Stanford Natural Language Processing Group", "year": "2023-04-30" }, { "authors": "", "journal": "", "ref_id": "b24", "title": "MultiNLI", "year": "2023-04-30" }, { "authors": "P Vincent; H Larochelle; I Lajoie; Y Bengio; P.-A Manzagol", "journal": "J. Mach. Learn. Res", "ref_id": "b25", "title": "Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion", "year": "2010-12" }, { "authors": "A Vaswani", "journal": "", "ref_id": "b26", "title": "Attention Is All You Need", "year": "2017-05" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b27", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2018-10-11" }, { "authors": "Y Liu", "journal": "", "ref_id": "b28", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "year": "2019-07-26" }, { "authors": "M Henderson", "journal": "", "ref_id": "b29", "title": "Efficient Natural Language Response Suggestion for Smart Reply", "year": "2017-05-01" }, { "authors": "", "journal": "", "ref_id": "b30", "title": "Paraphrase Mining -Sentence-Transformers documentation", "year": "2023-05-08" } ]
[ { "formula_coordinates": [ 6, 222.86, 429.52, 166, 50.8 ], "formula_id": "formula_0", "formula_text": "H^{(k)} = Attention(H^{(k-1)}, [S^T], [S^T]), Attention(Q, K, V) = softmax(QK^T / sqrt(d)) V" } ]
2023-08-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b48", "b35", "b40", "b8", "b4", "b28", "b29", "b30" ], "table_ref": [], "text": "Autonomous agents rely typically on explicit representations of the environment for localization and navigation, such as point clouds [5,49], or voxels [3,36]. However, such approaches lack topological or semantic information, struggle to generalize to changes to novel viewpoints, and do not scale properly to tasks that require reasoning about 3D geometry and affordances. Autonomous agents require semantic, meaningful, and informative representations to properly understand and interact with their environment and perform complex tasks [4,41].\nImplicit representations are better suited to reasoning and are then relevant, as they capture in a continuous space the main high-level features of the scene. Many approaches focus on 3D geometry without topological restrictions using learned occupancy or signed distance functions [9,15,29,30,37,38]. Nevertheless, the recent success of neural fields to encode the tridimensional geometry and lighting of a scene has revolutionized the field [50]. Although Neural Radiance Fields (NeRFs) focused initially on learning colour and occupancy models in a 3D space [31]," }, { "figure_ref": [ "fig_0" ], "heading": "Pixel querying", "publication_ref": [ "b0", "b46", "b1", "b20", "b34", "b45", "b24", "b38", "b54", "b50", "b53", "b25", "b31", "b56", "b25", "b50", "b53", "b42", "b55", "b58", "b11" ], "table_ref": [], "text": "Ray-Patch querying (ours) they have demonstrated promise in a wide array of tasks such as scene segmentation [6, 22, 60], depth estimation [18], SLAM [1,47,61], scene editing [2,17,21,24,35,46], and many more [50].\nThe main limitations of neural rendering are 1) the exhaustive querying of the model that is required to recover each pixel of a specific viewpoint, and 2) the need to fit the NeRF model for each scene. Several approaches reduce the 3D querying cost using depth [13,25,39,55], geometry [8,51,54,58], or changing the discretization [26,32,57], and avoid per-scene optimization using latent vectors [8,18,24,26,51,54,58]. Among them, the extensions of Light Field Networks (LFNs) [45] with transformers (Light Field Transformers or LFTs) [18,42,43] have shown potential to solve both limitations, although they are constrained by the quadratic scaling of attention. Despite recent attempts to reduce it, these either modify the attention algorithm for a less expensive but less effective version [48,56,59]; or are based on extensive optimization of last generation hardware and software [11,12]. Therefore, despite significant advances in both qualitative performance and efficiency, all these approaches are still far from being scalable to real scenarios with real-time performance.\nIn this work we propose Ray-Patch, a novel decoding method that reduces the computation and memory load of LFTs up to one and two orders of magnitude respectively, while keeping the quality of the output. We developed Ray-Patch as a generic decoder that can be implemented on any LFT architecture in the literature. Instead of the typical per-pixel querying, we group all pixels in a square patch, as shown in Fig. 1, and compute a set of feature vectors, which are then grouped and decoded into the target viewpoint. Specifically, it combines a transformer decoder with convolutional neural networks to reduce the cost of the decoder processing. 
This results in a drastic reduction in the number of queries, which impacts quadratically in the cost, allowing to decode high-resolution images while keeping and sometimes even improving the training convergence." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b30", "b25", "b31", "b50", "b53", "b24", "b42", "b51", "b19", "b54", "b38", "b24", "b0" ], "table_ref": [], "text": "A NeRF [31] is an implicit representation of a scene that is learnt from a sparse set of multiple views of such scene, annotated with their corresponding camera poses. NeRFs encode a continuous volumetric model of a scene that can be used to render photorealistic novel views from arbitrary viewpoints. The rendering process involves projecting pixels into rays, sampling 3D positions along the rays, and querying a Multilayer Perceptron (MLP) network that predicts the colour and occupancy of the sampled 3D points. Despite its versatility and impressive results in various applications [50], NeRFs suffer from two major limitations: exhaustive 3D sampling is required to decode each pixel, and a new model must be trained for each new scene.\nMulti-scene implicit representations. One of the most promising approaches to enable the generalization of neural fields across multiple scenes is conditioning the output of the MLP to a latent vector that is optimized for each scene at test time. NSVF [26] discretizes the 3D space into a sparse voxel octree associating each voxel with a feature vector that guides the sampling of 3D points. Control-NeRF [24] also utilizes voxel features, but employs a multiresolution incremental training of the full feature volume. Nice-SLAM [61] leverages a multi-resolution feature grid to encode the scene while simultaneously performing camera tracking. InstantNGP [32] implements multi-resolution voxelization as a hash encoding, where the MLP is responsible for avoiding hash collisions, resulting in remarkable improvements in reconstruction quality and convergence.\nOther approaches involve using an encoder architecture to compute latent vectors, and use these to condition the NeRF decoder. GRF [51] projects sampled 3D points into feature maps of the input views computed with a CNN encoder-decoder. These are first processed by shared MLP to condition on the sampled 3D point, and then aggregated using an attention module. The final feature vector is fed to a final MLP to estimate the 3D point colour and density. PixelNeRF [58] extends this approach by adding the feature vector as a residual in each layer of the MLP and using a simple average pooling instead of an attention module. IBRNet [54] also projects 3D points into nearby views to estimate the conditioning feature vectors, although it relies on a differentiable rendering algorithm rather than a neural representation to estimate the final colour and depth. MVS-NeRF [8] proposes the use of a CNN and homography reprojections to build a cost volume and compute the features. ENeRF [25] builds on MVS approaches to build the cost volume, guide the sampling, and condition the reconstruction. However, these methods that rely on homographies are limited to a small range of views in front of the reference camera, require accurate camera poses, and are not robust to occlusions.\nSRT [43] introduced a Transformer [52] to encode and decode the scene, performing self-attention between the features of different points of view. It generates a latent representation of the scene, which is decoded using a lightfield cross-attention module. 
OSRT [42] improves SRT by disentangling the object of the scene and improving its control. DeFiNe [18] replaces the basic Transformer for a Per-ceiverIO [20] reducing the cost of self-attention and scaling to bigger resolutions. Despite the low rendering time achieved for new scenes and novel points of view, these methods computation cost scales poorly due to the use of attention. Consequently, they may not be suitable for large scenes with high-resolution images.\nRendering from implicit neural representations. To render a pixel, NeRF evaluates 3D coordinates sampled along a ray combining both uniform and stratified distribution. This random sampling results in a great number of evaluation wasted on empty space. DONeRF [34] trains an oracle network supervised on dense depth map on synthetic data. Although it achieves outstanding results, the setup is hard to generalize to real environments. Nerfin-gMVS [55] trains a monocular depth estimation network, supervised with sparse depth from Structure from Motion (SfM), to guide the sampling and reduce empty queries both at training and test. Roessle et al. [39] instead uses a depth completion network to estimate depth and uncertainty from the sparse SfM prior at training time, improving performance while still requiring multiple sampling at test time. In contrast, ENeRF [25] uses an estimated cost volume to predict the depth and guide the sampling without any explicit depth supervision nor structure from motion. Other alternative like Neural RGB-D [1] and Mip-NeRF RGB-D [14], directly use RGB-D sensor as prior for the depth sampling. Despite reducing the number of samples, most of these approaches still perform multiple samples per pixel due to the error range of depth data and estimation. Instead, Light Field Network [45] directly evaluate the unprojected pixels, parameterized as a ray, reducing the model query to the number of pixels. Although, this approach is able to perform real-time rendering of novel views without requiring heavy optimizations, it does not generalize yet to high resolution scenes." }, { "figure_ref": [], "heading": "Preliminaries: Light Field Transformers", "publication_ref": [ "b42" ], "table_ref": [], "text": "While NeRFs learn a scene representation associated to a continuous space of 3D points, Light Field Networks (LFNs) [45] rely on 3D rays parametrized with Plücker coordinates to learn similar representations. This subtle difference reduces significantly the cost to decode a view of the scene, from several samples to a single sample per pixel. Despite this, LFNs are limited to simple settings, do not enforce geometric consistency and their ray parameterization is not robust to occlusions. Light Field Transformers (LFTs) are an extension of LFNs which use a transformer architecture, a ray parametrization robust to occlusions, enforce geometric consistency through the training procedure, and encode and decode points of view of a scene without per-scene optimization [18,42,43]." }, { "figure_ref": [], "heading": "Transformers", "publication_ref": [ "b51" ], "table_ref": [], "text": "Transformers [52] are deep encoder-decoder neural models that incorporate attention mechanisms in their architecture. The encoder first performs self-attention on a set of tokens to extract common features. The decoder then uses crossattention between the extracted features and a set of queries to generate an output per query. 
The attention block consists of a Multi-Head Attention (MHA) layer, followed by a Feed-Forward (FF) layer, with a skip-connection and layer normalization after each of them. In each head $h$, MHA computes in parallel a Scaled Dot-Product attention\n$\mathrm{Attention}_h(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d_k}}\right)V \quad (1)$\nover the three inputs: keys ($K$), values ($V$), and queries ($Q$). Each head linearly projects the inputs to reduced dimensions, $d_k$ for $Q$ and $K$ and $d_v$ for $V$, performs the attention operation, and then projects the output back to its original dimension. For self-attention, $Q = K = V$ are the tokens to encode. For cross-attention, $K = V$ are the extracted features, while $Q$ contains the queries to decode.\nComputational complexity. Linear projections have a complexity of $O(n d_0 d_p)$, with $n$ the length of the sequence and $d_0$ and $d_p$ the dimensions before and after the projection. The scaled dot-product instead has $O(n_q n_{kv} d_k)$ complexity, with $n_q$ and $n_{kv}$ the number of queries and keys/values respectively. For self-attention, $n_q = n_{kv}$ and the complexity is therefore $O(n_q^2 d_k)$. " }, { "figure_ref": [], "heading": "Scene Representation Transformer", "publication_ref": [ "b42", "b26" ], "table_ref": [], "text": "The Scene Representation Transformer (SRT) [43] is an encoder-decoder LFT, which parametrizes rays with their 3D coordinates and origin position. Given a set of $N$ input views $\{I_n\}$ (we abuse notation here for simplicity: $\{\cdot_n\} \equiv \{\cdot_1, \dots, \cdot_N\}$), their relative camera poses $\{P_n\}$, and camera intrinsic parameters $\{K_n\}$, the encoder $E$ generates a set-latent scene representation (SLSR)\n$Z = E(\{I_n, P_n\}). \quad (2)$\nTo decode a view of the scene, the light-field based decoder is queried. Each query refers to the ray direction and camera center for a given pixel, and recovers its RGB values. To decode a full view, as many queries as pixels are needed. The encoder is made of two parts. First, a convolutional network extracts features from the scene images. Then a set of self-attention blocks computes common features between the multiple views of the scene to generate an SLSR. The decoder is a two-block cross-attention module. It performs attention between the ray queries and the SLSR to generate the RGB pixel values. SRT has been extended by OSRT [42] to disentangle its latent representation by integrating it with Slot-Attention [27] and designing the Slot Mixer Decoder. Using the Slot Mixer attention weights, OSRT is able to generate unsupervised segmentation masks.\nAttention cost. With a convolutional encoder which halves the resolution (divides by four the number of queries) three times, $n_q = n_{kv} = \frac{Nhw}{64}$ for the encoder self-attention block. Therefore the complexity is\n$O\!\left(\left(\frac{Nhw}{64}\right)^2 d_k\right). \quad (3)$\nInstead, for the decoder cross-attention block to decode an image, $n_q = h \times w$ and $n_{kv} = \frac{Nhw}{64}$, therefore the complexity is\n$O\!\left(\frac{N(hw)^2}{64} d_k\right). \quad (4)$\nDoubling the resolution will increase by a factor of 4 the total number of pixels $h \times w$, and by 16 the computational complexity. As a consequence, both SRT and OSRT are limited due to the quartic scaling of the attention cost with respect to the resolution of the images, and to the quadratic cost with respect to the number of input images $N$." }, { "figure_ref": [], "heading": "Depth Field Network", "publication_ref": [ "b19" ], "table_ref": [], "text": "The Depth Field Network (DeFiNe) [18] can be considered as an extension of SRT. 
As its main novelties, the convolutional encoder is a pretrained ResNet-18, the cross-attention decoder is reduced from two blocks to one, and a set of geometric data augmentations is proposed for stereo and video depth training. The main contribution is the use of a PerceiverIO [20] instead of the self-attention encoder, so that the SLSR has a fixed size $n_l$.\nAttention cost. With $n_{kv} = n_l$ in both the encoder and decoder, the quadratic scaling with respect to the resolution of the images is reduced to\n$O\!\left(\frac{Nhw}{64} n_l d_k\right) \quad (5)$\nfor the encoder attention process, and to\n$O(hw\, n_l\, d_k) \quad (6)$\nfor decoding an image. These improvements reduce the cost considerably for $hw \gg n_l$, although there is still a quadratic dependence on the number of pixels (quartic with resolution) that limits the model's use." }, { "figure_ref": [ "fig_2" ], "heading": "Our Method: The Ray-Patch Decoding", "publication_ref": [ "b30", "b34" ], "table_ref": [], "text": "We propose the Ray-Patch querying to attenuate the quartic complexity of Light Field Transformers with respect to image resolution. Instead of using a ray to query the cross-attention decoder and generate a pixel value, we use a ray to compute a feature vector of a square patch of pixels. Then a transposed convolutional decoder unifies the different patches' feature vectors and recovers the full image.\nOur approach reduces the number of queries to $\frac{hw}{k^2}$ and the cross-attention cost by the same factor.\nParametrization. To decode a target view $I_t \in \mathbb{R}^{h \times w \times c}$ of the scene, the view is split into $\frac{hw}{k^2}$ square patches of size $[k, k]$, the split image now being defined as $\{I_{tp} \in \mathbb{R}^{\frac{h}{k} \times \frac{w}{k} \times 3}\}$. Each patch $p$ is parametrized by the location of the camera $o_t$, and the ray $r_{tp}$ that passes both through the camera position and the center of the patch. Given the camera intrinsics $K_t$ and extrinsic parameters $^{W}T_{C_t} = [R_t | o_t] \in SE(3)$, the ray $r_{tp}$ is computed as the unprojection of the center of patch $p$ in the 2D camera plane. Each patch center in homogeneous coordinates $x_{tp} = (u_{tp}, v_{tp}, 1)^T$ is first unprojected in the camera reference frame $C_t$,\n$r^{C_t}_{tp} = K_t^{-1} \cdot x_{tp} = [x_{tp}/z_{tp},\; y_{tp}/z_{tp},\; 1,\; 1]^T, \quad (7)$\nand after that it is translated to the world reference $W$,\n$r^{W}_{tp} = {}^{W}T_{C_t} \cdot r^{C_t}_{tp}. \quad (8)$\nUsing Fourier positional encoding [31], the parametrization of each patch is mapped to a higher frequency, to generate a set of queries for the decoder,\n$\{Q_{tp}\} = \{\gamma(o_t) \oplus \gamma(r_{tp})\}. \quad (9)$\nDecoder. The decoder $D$ is a composition\n$D = (D_{CNN} \circ D_A) \quad (10)$\nof an attention decoder $D_A$, followed by a convolutional decoder block $D_{CNN}$. The attention decoder performs cross-attention between the queries $\{Q_{tp}\}$ and the SLSR $Z$, to compute a set of feature vectors\n$\{Z_{tp}\} = D_A(\{Q_{tp}\}, Z) \quad (11)$\nwith dimension $f$. These vectors assemble a feature map $Z_t \in \mathbb{R}^{\frac{h}{k} \times \frac{w}{k} \times f}$, which is decoded by the convolutional decoder into the target image\n$\hat{I}_t = D_{CNN}(Z_t), \quad (12)$\nas shown in Fig. 3. We use a vanilla convolutional decoder $D_{CNN}$ based on GIRAFFE's decoder [35]. It is a combination of upsampling blocks with convolutions and preliminary outputs. The number of channels of the output, $c$, will vary depending on the desired task, e.g. $c = 3$ for an RGB colour image, or $c = 1$ for depth estimation. 
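A minimal PyTorch sketch of the Ray-Patch query construction (Eqs. 7-9) and of the two-stage decoder (Eqs. 10-12) is given below. The layer sizes, the number of Fourier frequencies, and the simple upsample-plus-convolution head are illustrative assumptions, not the exact architecture of the paper.

import math
import torch
import torch.nn as nn

def fourier_encode(x, n_freq=6):
    # gamma(.) in Eq. 9: concatenate the raw value with sin/cos at increasing frequencies.
    freqs = (2.0 ** torch.arange(n_freq)) * math.pi
    xf = x.unsqueeze(-1) * freqs                                               # (..., 3, n_freq)
    return torch.cat([x, xf.sin().flatten(-2), xf.cos().flatten(-2)], dim=-1)

def ray_patch_queries(K, T_wc, h, w, k):
    # K: (3,3) intrinsics; T_wc: (4,4) camera-to-world pose; (h,w) target resolution; k patch size.
    v, u = torch.meshgrid(torch.arange(k // 2, h, k), torch.arange(k // 2, w, k), indexing="ij")
    x = torch.stack([u, v, torch.ones_like(u)], dim=-1).reshape(-1, 3).float()  # patch centres x_tp
    r_cam = (torch.linalg.inv(K) @ x.T).T                                       # Eq. 7 (up to scale)
    r_cam = torch.cat([r_cam, torch.ones(r_cam.shape[0], 1)], dim=-1)           # homogeneous coords
    r_world = (T_wc @ r_cam.T).T[:, :3]                                         # Eq. 8
    o = T_wc[:3, 3].expand_as(r_world)                                          # camera centre o_t
    return torch.cat([fourier_encode(o), fourier_encode(r_world)], dim=-1)      # Eq. 9 queries

class RayPatchDecoder(nn.Module):
    # D = D_CNN o D_A (Eq. 10): one cross-attention feature per patch, then convolutional upsampling.
    def __init__(self, q_dim=78, d_model=256, f=128, k=4, out_ch=3, n_heads=4):
        super().__init__()
        self.k, self.f = k, f
        self.q_proj = nn.Linear(q_dim, d_model)                 # project raw ray queries to d_model
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)   # D_A (Eq. 11)
        self.to_feat = nn.Linear(d_model, f)
        layers, ch = [], f
        for _ in range(int(math.log2(k))):                      # D_CNN (Eq. 12): upsample by k overall
            layers += [nn.Upsample(scale_factor=2, mode="nearest"),
                       nn.Conv2d(ch, max(ch // 2, out_ch), 3, padding=1), nn.ReLU()]
            ch = max(ch // 2, out_ch)
        layers.append(nn.Conv2d(ch, out_ch, 3, padding=1))
        self.cnn = nn.Sequential(*layers)

    def forward(self, queries, slsr, h, w):
        # queries: (B, hw/k^2, q_dim); slsr: (B, n_kv, d_model) set-latent scene representation Z.
        z, _ = self.attn(self.q_proj(queries), slsr, slsr)      # {Z_tp}: one feature vector per patch
        z = self.to_feat(z).transpose(1, 2).reshape(-1, self.f, h // self.k, w // self.k)
        return self.cnn(z)                                       # (B, out_ch, h, w) reconstructed view

K = torch.tensor([[100., 0., 80.], [0., 100., 60.], [0., 0., 1.]])
q = ray_patch_queries(K, torch.eye(4), h=120, w=160, k=4).unsqueeze(0)    # (1, 1200, 78)
img = RayPatchDecoder(k=4)(q, torch.randn(1, 512, 256), h=120, w=160)     # (1, 3, 120, 160)

The sketch makes the key design choice visible: the cross-attention runs over hw/k^2 queries instead of hw, and the cheap convolutional head is the only part that operates at full resolution.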
The model's parameters θ are optimized on a collection of images from different scenes, minimizing the Mean Squared Error (MSE) of the generated novel views for RGB images,\n$\mathcal{L}_{rgb} = \frac{1}{hw}\sum_{ij} \left\| \hat{I}_t - I_t \right\|^2, \quad (14)$\nand minimizing the absolute log difference for depth maps,\n$\mathcal{L}_{d} = \frac{1}{n_{tp}}\sum_{tp} \left| \log \hat{D}_{tp} - \log D_{tp} \right|. \quad (15)$\nDepth optimization is performed only over the subset $\{tp\} \subset \{ij\}$ of target pixels with depth info, giving the model freedom to generalize to unseen parts.\nAttention cost. The proposed Ray-Patch querying reduces the complexity of the decoders to\n$\mathcal{O}\left(N\frac{(hw)^2}{64 k^2} d_k\right) \quad (16)$\nfor models with the basic Transformer, like SRT and OSRT; and to\n$\mathcal{O}\left(\frac{hw}{k^2} n_l d_k\right) \quad (17)$\nfor PerceiverIO-based models, like DeFiNe. Although there is still a quadratic dependency on the resolution, the attenuation introduced by the Ray-Patch querying can reduce the number of queries by up to two orders of magnitude for high resolutions." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [ "b42", "b42" ], "table_ref": [], "text": "We evaluate Ray-Patch using two setups of different complexity. Firstly, we integrate Ray-Patch into both SRT and OSRT for novel view synthesis on the MultiShapeNet-Easy (MSN-Easy) dataset. Given input images, the model encodes a representation of the scene, and its goal is decoding the other two viewpoints. On this dataset we assess the impact of different patch sizes at different resolutions on the SRT implementation, and its integration into OSRT. After that, we evaluate its ability to generalize to more challenging scenes and textures in a stereo depth task. Secondly, we also implemented Ray-Patch into DeFiNe and evaluated it on ScanNet. Given two images, the model encodes a representation, and the goal is recovering RGB and depth from the same point of view. Following Sajjadi et al. [42,43], rendered views are benchmarked with PSNR, SSIM, and LPIPS; and segmentation masks with FG-ARI. Following Guizilini et al. [18], depths are benchmarked with Absolute Relative Error (Abs.Rel), Square Relative Error (Sq.Rel) and Root Mean Square Error (RMSE). Computational aspects are evaluated measuring peak RAM usage, image rendering speed as in Sajjadi et al. [43], training time, and the Floating Point Operations (FLOPs) needed to encode and render an image. We assume the use of float-32 data, and report time metrics from an NVIDIA Tesla V100 GPU. Further design and implementation details are provided in the supplementary material." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b45" ], "table_ref": [], "text": "MultiShapeNet-Easy [46] has 70K training scenes and 10K test scenes with resolution 240 × 320. Due to the high cost of training both SRT and OSRT, we work at 60 × 80 and 120 × 160. In each scene there are between 2 and 4 objects of 3 different classes: chair, table, or cabinet. The object shapes are sampled from the ShapeNetV2 dataset [7]. Each scene has 3 views sampled at 120° steps on a circle around the center of the scene, with extrinsic and intrinsic camera annotations. For each training step, one image is used as input and the other two are used as targets to be reconstructed.\nScanNet [10] is a collection of real indoor scenes with RGB-D and camera pose information. It has 1.2K different scenes with a total of 90K views. We follow DeFiNe's [18] stereo setup: RGB input images are downscaled to a resolution of 128 × 192, and a custom stereo split is used [23], resulting in 94212 training and 7517 test samples."
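For reference, the two training objectives of Eqs. (14)-(15) amount to only a few lines. The sketch below is illustrative; in particular, the convention that pixels without depth supervision are stored as zeros is an assumption, not something fixed by the paper:

```python
import torch

def rgb_loss(pred, target):
    # Eq. (14): mean squared error over all pixels of the rendered view.
    return ((pred - target) ** 2).mean()

def depth_loss(pred_depth, gt_depth, eps=1e-6):
    # Eq. (15): absolute log-difference, restricted to the subset of target
    # pixels carrying depth supervision (assumed here to be gt_depth > 0).
    valid = gt_depth > 0
    return (torch.log(pred_depth[valid] + eps) - torch.log(gt_depth[valid] + eps)).abs().mean()

# Example with random stand-ins for a decoded 480x640 view and its targets.
pred_rgb, gt_rgb = torch.rand(3, 480, 640), torch.rand(3, 480, 640)
pred_d, gt_d = torch.rand(1, 480, 640) + 0.1, torch.rand(1, 480, 640)
total = rgb_loss(pred_rgb, gt_rgb) + depth_loss(pred_d, gt_d)
```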
}, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Computational performance", "publication_ref": [], "table_ref": [], "text": "While our Ray-Patch querying still has quadratic scaling with n q , the reduction we achieve in the number of queries results in a notable boost in rendering speed, as can be seen in Tab. 1 and Fig. 2b. Furthermore, when increasing the resolution the patch can also be increased, keeping an appropriate rendering speed at higher resolutions. Comparing rendering speeds for different patches and resolutions in Fig. 2b, it can be observed how the improvement tends to saturate for big patch sizes. As a consequence of reducing the number of queries, its impact on the scaled-dot product complexity will be out-weighted by n kv . For n q << n kv , n kv will set a minimum cost and increasing the patch size over this limit will not be reflected on the rendering speed.\nIt is also worth of attention that the biggest patch does not have the lower rendering time. When n q << n kv , increasing the patch size also adds more convolutions and interpolations to the convolutional decoder, hence increasing the deconvolutional overhead without reducing the cost of the attention decoder. Finally, the decrease in n q implies a smaller memory peak in the softmax of the decoder attention, see Fig. 2a. This matrix is n q × n kv . As an illustrative example, for DeFiNe, decoding a single 960 × 1280 image, with n kv = 2048, requires 75 GBytes of GPU memory, almost two full A100 GPUs. Instead, for the Ray-Patch querying with k = 16, it is reduced to only 0.3 GBytes. This notable reduction allows to increase parallelization, improving even more the rendering speed for scene reconstruction tasks." }, { "figure_ref": [ "fig_4", "fig_3" ], "heading": "Novel view synthesis", "publication_ref": [ "b5", "b39" ], "table_ref": [], "text": "On MSN-Easy, for SRT we evaluate two different patch sizes for each resolution: k = {2, 4} for 60 × 80; and k = {4, 8} for 120 × 160. Instead for OSRT we only evaluate at 120 × 160 with a patch size k = 8. As reported in Tab. 1 and Fig. 5, the experiment metrics for RP-SRT shows that the size of the patch impacts on the model, with smaller patches having better rendering quality on both resolutions. For smaller patches, the first decoder focus attention on less pixels than for a bigger patch, each feature vector is up-sampled less, and more information is recovered from the same amount of data. Therefore, excessively increasing the patch reduces the quality of reconstructed views, as shown by RP-SRT with k = 8 for 120 × 160, which slightly underperforms the baselines's PSNR (32.3 vs 32.8). Nevertheless, Ray-Patch querying is still able to match rendering quality of both SRT and OSRT at 120 × 160, with k = 4 and k = 8 respectively; and outperform at 60 × 80, with k = 2. Furthermore, for similar performance our approach improves rendering speed ×10 for the highest resolution (275 vs 30 fps, and 278 vs 21 fps), and reduces training time almost ×4 (see Fig. 4 and Tab. 1). This is thanks to scaling the attenuation factor k together with resolution, compensating for the increasing number of queries. Regarding RP-OSRT's unsupervised segmentation, we up-sample the 120 8 × 160 8 attention weights of the Slot Mixer Decoder to generate a 120 × 160 segmentation map, achieving only slightly worse metrics than OSRT (0.914 vs 0.958 FG-ARI). Finally, note in Tab. 
1 that even though increasing the resolution improves rendering quality (higher PSNR and SSIM) for all models, the perceptual similarity metric gets worse (higher LPIPS). This implies that when working at low resolution, LPIPS is not able to appropriately evaluate the model representation, as perceptual inconsistencies from the 3D representation are hard to distinguish due to the poor quality. The usefulness of the Ray-Patch querying therefore increases. Reducing the computational cost of LFTs not only speeds up training and inference, it also opens the possibility of working with more expensive loss functions than simple L1 or L2 losses, e.g. perceptual losses or adversarial discriminators, following the current state of the art in image generation [16,40]." }, { "figure_ref": [], "heading": "Stereo depth", "publication_ref": [], "table_ref": [], "text": "Based on the results of the previous section, the integration with DeFiNe to render at 480 × 640 has been done with k = 16. This value is chosen to have $n_q$ close to $n_{kv}$, improving the computational efficiency without compressing the information too much. Our results show that Ray-Patch improves the convergence of the model. This configuration not only reduces the computational cost, but also improves view reconstruction and stereo depth estimation in all the metrics reported in Tab. 2. Despite taking 128 × 192 images as input, our reconstructions are closer to the 480 × 640 target, recovering a similar quality, while DeFiNe's look diffuse and blurred (see Fig. 6). It can also be observed how the estimated depth is smoother, with less abrupt changes, while still preserving clear depth discontinuities. Regarding the computation, the evaluated configuration reduces FLOPs by ×10 and increases the rendering speed of novel depth maps from 7 frames per second to 208." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b11", "b55", "b58" ], "table_ref": [], "text": "Our proposed decoder reduces the complexity problem of decoding images with Transformers. Despite that, we cannot decode single pixels, and performance may depend on choosing an appropriate patch size. As a simple heuristic to choose the patch, we propose to keep $n_q \sim n_{kv}$, as it has been shown that 1) rendering speed saturates for bigger patches, and 2) too much compression reduces decoding performance. Nevertheless, hyper-parameter tuning may be needed to find the best patch size for each model. Regarding unsupervised segmentation, we observed that RP-OSRT has fallen into a tessellation failure mode, already observed by Sajjadi et al. [42]. This failure is dependent on architectural choices, and further experimentation would be required to address it. Also note that we have only evaluated square patches. Nevertheless, our method could also be used with rectangular patches to obtain an intermediate number of queries. Finally, notice that Ray-Patch does not attempt to solve base attention's quadratic cost scaling. Rather, its focus on reducing the number of queries makes it compatible with other, less expensive alternatives to vanilla attention [11,12,48,56,59]." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b32", "b52" ], "table_ref": [], "text": "In this paper we propose Ray-Patch querying, which significantly reduces the cost associated with the Light Field Transformer's decoder. 
Our Ray-Patch not only significantly reduces the training time and improves convergence, but can also be generalized to different tasks that use transformers to decode scenes. We experimentally validate our approach and its benefits by integrating it into three recent LFT models for two different tasks and on two different datasets. The models with our Ray-Patch querying match or even outperform the baseline models in photometric and depth metrics, while at the same time reducing the computation and memory load by one and two orders of magnitude, respectively. In addition, this is achieved with minimal modifications to the implementation of the given baselines. Reducing the computational footprint of LFTs is essential to continue their development and to deploy them on constrained platforms such as mobile devices or robots, in the same line as works such as [33,53] did for other architectures and tasks.\nRay-Patch: An Efficient Querying for Light Field Transformers Supplementary Material" }, { "figure_ref": [], "heading": "Model and training details", "publication_ref": [ "b34", "b43", "b5", "b39", "b18" ], "table_ref": [], "text": "All models were trained using PyTorch and PyTorch Lightning on a distributed set-up comprising 4 NVIDIA Tesla V100 GPUs. We intend to release both the code and checkpoints upon acceptance.\nConvolutional Decoder with Ray-Patch Querying. The convolutional decoder $D_{CNN}$ is based on GIRAFFE's decoder [35]. It is composed of a concatenation of upsampling blocks, each incorporating preliminary outputs. The main block consists of nearest-neighbour up-sampling, a convolutional layer, batch normalization, and a leaky ReLU activation function. Starting with 128 channels, each block doubles the feature map size while halving the channel count. For a patch size $k = 8 = 2^3$ there will be 3 up-sampling blocks.\nTo enhance the multiscale resolution, a convolutional layer preceding each block generates a preliminary output with c channels. This output is added to the preliminary output from the preceding block, and is then up-scaled, doubling its dimensions. Subsequently, a final convolutional layer transforms the output from the last block into a c-channel result, which is added to the preliminary output to generate the final output $\hat{I}_t$. Nevertheless, it is worth noting that the Ray-Patch querying could be employed with alternative up-sampling decoders, e.g. a learned up-sampling decoder [44] or an attention upsampler [16,40].\nImplementation of SRT and OSRT. The trained SRT and OSRT models were based on the implementation by K. Stelzner. The original implementations employed a batch size of 256 on 64 TPUv2 [42]. However, when we reduced the batch size to accommodate our hardware limitations, we observed a decline in the models' stability and convergence rate. To address this issue, we introduced a batch normalization [19] layer after each convolutional layer in the encoder. This adjustment led to improved convergence and performance, as evidenced in Tab. 3. Finally, both models were trained querying 9600 rays at each optimization step.\nMSN-Easy experiments. SRT and RP-SRT were trained using a batch size of 32, while OSRT and RP-OSRT employed a batch size of 64. OSRT and RP-OSRT followed the original training regime [42]. They were trained with an initial learning rate of $1 \times 10^{-4}$, which linearly decayed to $1.6 \times 10^{-5}$ over 4M steps, also incorporating a warmup phase of 2.5k steps. 
On the other hand, for SRT and RP-SRT we adapted the training schedule to accelerate convergence, with a linear decay to $4 \times 10^{-5}$ over 300k steps. Since all models had reached a plateau by 300k steps, we concluded the training and evaluated the best checkpoint for each model. It is important to emphasize that the experiments are focused on comparing a baseline model to its adapted version integrating the Ray-Patch querying. Therefore, the different training schedules for SRT and OSRT do not exert any influence on the results and conclusions of the proposed method.\nScanNet experiments. DeFiNe was implemented following the original paper [18]. Both DeFiNe and RP-DeFiNe were trained using virtual camera projection and canonical jittering, where the projection noise was set to $\sigma_v = 0.25$ and the canonical jittering noise to $\sigma_t = \sigma_r = 0.1$. Regarding the training loss, we adopted DeFiNe's training loss\n$\mathcal{L} = \mathcal{L}_d + \lambda_{rgb} \mathcal{L}_{rgb} + \lambda_v \left(\mathcal{L}_{d,v} + \lambda_{rgb} \mathcal{L}_{rgb,v}\right),$\nwhere the sub-index v refers to virtual cameras, with $\lambda_{rgb} = 5.0$ and $\lambda_v = 0.5$. We employed the AdamW optimizer [28] with $\beta_1 = 0.99$, $\beta_2 = 0.999$, weight decay $w = 10^{-4}$, and an initial learning rate of $2 \times 10^{-4}$. We trained for 200 epochs (600k steps), halving the learning rate every 80 epochs. We did not fine-tune the model at higher accuracy.\nTo ensure convergence stability, 1) we used gradient clipping with norm 1; and 2) we trained with a batch size of 16 and gradient accumulation of 2 to simulate a batch size of 32. Given the 128 × 192 stereo input images, RP-DeFiNe was trained to directly generate two 480 × 640 output images. DeFiNe, instead, was trained querying 32768 of the total rays to accommodate memory constraints. " }, { "figure_ref": [], "heading": "Additional images", "publication_ref": [], "table_ref": [], "text": "" } ]
In this paper we propose the Ray-Patch querying, a novel model to efficiently query transformers to decode implicit representations into target views. Our Ray-Patch decoding reduces the computational footprint and increases inference speed up to one order of magnitude compared to previous models, without losing global attention, and hence maintaining specific task metrics. The key idea of our novel querying is to split the target image into a set of patches, then querying the transformer for each patch to extract a set of feature vectors, which are finally decoded into the target image using convolutional layers. Our experimental results, implementing Ray-Patch in 3 different architectures and evaluating it in 2 different tasks and datasets, demonstrate and quantify the effectiveness of our method, specifically a notable boost in rendering speed for the same task metrics.
Ray-Patch: An Efficient Querying for Light Field Transformers
[ { "figure_caption": "Figure 1 .1Figure 1. Light Field Networks sample a ray per pixel to render the target image (left). Our Ray-Patch (right) groups pixels in k × k patches and samples a ray per patch, reducing the querying cost by a factor of k 2 without loosing accuracy.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. (a) Peak GPU vRAM usage due to attention for decoding a single image. vRAM usage scales linearly with the number of pixels (quadratically with resolution). Ray-Patch querying reduces ×10 required resources on standard resolutions. Note that x-axis is in logarithmic scale. (b) Single image rendering speed scaling. The use of the Ray-Patch decoder increase rendering speed at high resolutions up to real-time for DeFiNe. To keep a fixed rendering speed, the patch size should increase at the same pace as the number of pixels.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Ray-Patch querying. Given a latent representation of a scene Z, in order to render an image It of shape h × w, a query is performed for each patch p. Each patch is parametrized with a ray rtp that passes through it and the the camera position ot. The queries are encoded with multiple Fourier frequencies, and fed to the attention decoder to compute a feature vector per query. The feature vectors of the image are re-shaped as a rectangle and forward-passed through the convolutional decoder to obtain the target image render Ît", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Training time on one V100 comparison. For both 60 × 80 (△) and 120 × 160 (•) resolution, Ray-Patch (blue) configurations, k = {2, 4} and k = {4, 8} respectively, achieve similar or better rendering performance than SRT (red), with 60-70% cost reduction.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Novel view synthesis results on MSN-Easy. Given an input image, the models are queried to decode target images at 120º (first row) and 240º (second row). Both SRT and Ray-Patch models (RP-SRT k = {4, 8}) encode a coherent representation, with slight differences on colour and edges. For RP-SRT it can be seen how the bigger the patch size the more diffuse the image looks.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 .Figure 8 .78Figure 7. Novel view synthesis results on MSN-Easy for SRT and RP-SRT.", "figure_data": "", "figure_id": "fig_5", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Quantitative results on MSN-Easy. Evaluation of new scene novel view synthesis and computational performance on a simple dataset. 
While SRT's performance is surpassed only by the configuration with patch size k = 2, Ray-Patch increases ×3 and ×10 the rendering speed with minimum impact.", "figure_data": "MSN-Easy60 × 80120 × 160120 × 160SRT 30.98 0.903 0.173 -5.6 days 1.7 day 0.7 days RP-SRT k = 2 k = 4 31.16 30.92 0.906 0.901 0.163 0.175 --48.2 15.8 7.3 ↑ Rendering speed 117 fps 288 fps 341 fps ↑ PSNR ↑ SSIM ↓ LPIPS ↑ FG-ARI ↓ Training time ↓ Giga FLOPsSRT 32.842 0.934 0.250 -7.4 days 1.7 days RP-SRT k = 4 k = 8 32.818 32.306 0.935 0.929 0.254 0.274 --1 day 192.1 28.5 19.7 30 fps 275 fps 305 fpsOSRT 30.95 0.916 0.287 0.958 25 days 278.6 21 fpsRP-OSRT k = 8 31.03 0.915 0.303 0.914 3.7 days 24.7 278 fps", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Stereo depth results on ScanNet. Given two input images, models decode both input images and an estimation of corresponding depth maps. Note how our k = 16 Ray-Patch querying (RP-DeFiNe) generates sharper edges in both RGB and depth images. Quantitative results on ScanNet. Evaluation of stereo depth and RGB rendering on a realistic dataset. The integration of a Ray-Patch decoder with patch size k = 16 increases rendering speed by 2 orders of magnitude, while also outperforming DeFiNe's rendering and depth metrics.", "figure_data": "InputDeFiNeRP-DeFiNeTargetDeFiNeRP-DeFiNeFigure 6. DeFiNe ↑ PSNR 23.46 ↑ SSIM 0.783 ↓ LPIPS 0.495 ↓RMSE 0.275 ↓Abs.Rel 0.108 ↓Sq.Rel 0.053 ↓ Giga FLOPs 801 ↑ Rendering speed 7 fpsRP-DeFiNe k=16 24.54 0.801 0.453 0.263 0.103 0.050 81 208 fps", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Quantitative results of using Batch Normalization on MSN-Easy at 60 × 90 resolution with batch 32. For the three different configurations using normalization after the convolutional layers speeds-up convergence and elevates the plateu value.", "figure_data": "with Batch Norm SRT RP-SRT k = 2 k = 4 32 ↑ PSNR 30.98 31.16 30.92 Batch ↑ SSIM 0.903 0.906 0.901 ↓ LPIPS 0.173 0.163 0.175MSN-Easy SRT 29.549 29.576 29.576 w/o Batch Norm RP-SRT k = 2 k = 4 32 0.875 0.875 0.875 0.237 0.230 0.224SRT 256 29.32 0.876 0.200", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Tomás Berriel Martins; Javier Civera
[ { "authors": "Dejan Azinović; Ricardo Martin-Brualla; Dan B Goldman; Matthias Nießner; Justus Thies", "journal": "", "ref_id": "b0", "title": "Neural rgb-d surface reconstruction", "year": "2022" }, { "authors": "Miguel Angel Bautista; Pengsheng Guo; Samira Abnar; Walter Talbott; Alexander Toshev; Zhuoyuan Chen; Laurent Dinh; Shuangfei Zhai; Hanlin Goh; Daniel Ulbricht", "journal": "", "ref_id": "b1", "title": "A neural architect for immersive 3d scene generation", "year": "2022" }, { "authors": "Michel Breyer; Jen ; Jen Chung; Lionel Ott; Roland Siegwart; Juan Nieto", "journal": "PMLR", "ref_id": "b2", "title": "Volumetric grasping network: Real-time 6 dof grasp detection in clutter", "year": "2021" }, { "authors": "Cesar Cadena; Luca Carlone; Henry Carrillo; Yasir Latif; Davide Scaramuzza; José Neira; Ian Reid; John J Leonard", "journal": "IEEE Transactions on robotics", "ref_id": "b3", "title": "Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age", "year": "2016" }, { "authors": "Carlos Campos; Richard Elvira; Juan J Gómez Rodríguez; José Mm Montiel; Juan D Tardós", "journal": "IEEE Transactions on Robotics", "ref_id": "b4", "title": "Orb-slam3: An accurate open-source library for visual, visual-inertial, and multimap slam", "year": "2021" }, { "authors": "Anh-Quan Cao; Raoul De Charette", "journal": "", "ref_id": "b5", "title": "Monoscene: Monocular 3d semantic scene completion", "year": "2022" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b6", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "Anpei Chen; Zexiang Xu; Fuqiang Zhao; Xiaoshuai Zhang; Fanbo Xiang; Jingyi Yu; Hao Su", "journal": "", "ref_id": "b7", "title": "Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo", "year": "2021" }, { "authors": "Zhiqin Chen; Hao Zhang", "journal": "", "ref_id": "b8", "title": "Learning implicit fields for generative shape modeling", "year": "2019" }, { "authors": "Angela Dai; X Angel; Manolis Chang; Maciej Savva; Thomas Halber; Matthias Funkhouser; Nießner", "journal": "", "ref_id": "b9", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Tri Dao", "journal": "", "ref_id": "b10", "title": "Flashattention-2: Faster attention with better parallelism and work partitioning", "year": "2023" }, { "authors": "Tri Dao; Dan Fu; Stefano Ermon; Atri Rudra; Christopher Ré", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b11", "title": "Flashattention: Fast and memory-efficient exact attention with io-awareness", "year": "2022" }, { "authors": "Kangle Deng; Andrew Liu; Jun-Yan Zhu; Deva Ramanan", "journal": "", "ref_id": "b12", "title": "Depth-supervised nerf: Fewer views and faster training for free", "year": "2022" }, { "authors": "Arnab Dey; Yassine Ahmine; Andrew I Comport", "journal": "", "ref_id": "b13", "title": "Mipnerf rgb-d: Depth assisted fast neural radiance fields", "year": "2022" }, { "authors": "Danilo Sm Ali Eslami; Frederic Jimenez Rezende; Fabio Besse; Ari S Viola; Marta Morcos; Avraham Garnelo; Andrei A Ruderman; Ivo Rusu; Karol Danihelka; Gregor", "journal": "Science", "ref_id": "b14", "title": "Neural scene representation and rendering", "year": "2018" }, { "authors": "Patrick Esser; Robin Rombach; Bjorn Ommer", "journal": "", "ref_id": "b15", 
"title": "Taming transformers for high-resolution image synthesis", "year": "2021" }, { "authors": "Jiatao Gu; Lingjie Liu; Peng Wang; Christian Theobalt", "journal": "", "ref_id": "b16", "title": "Stylenerf: A style-based 3d-aware generator for high-resolution image synthesis", "year": "2021" }, { "authors": "Vitor Guizilini; Igor Vasiljevic; Jiading Fang; Rares Ambrus; Greg Shakhnarovich; Matthew Walter; Adrien Gaidon", "journal": "", "ref_id": "b17", "title": "Depth field networks for generalizable multi-view scene representation", "year": "2022" }, { "authors": "Sergey Ioffe; Christian Szegedy", "journal": "", "ref_id": "b18", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "year": "2015" }, { "authors": "Andrew Jaegle; Sebastian Borgeaud; Jean-Baptiste Alayrac; Carl Doersch; Catalin Ionescu; David Ding; Skanda Koppula; Daniel Zoran; Andrew Brock; Evan Shelhamer", "journal": "", "ref_id": "b19", "title": "Perceiver io: A general architecture for structured inputs & outputs", "year": "2021" }, { "authors": "Wonbong Jang; Lourdes Agapito", "journal": "", "ref_id": "b20", "title": "Codenerf: Disentangled neural radiance fields for object categories", "year": "2021" }, { "authors": "Abhijit Kundu; Kyle Genova; Xiaoqi Yin; Alireza Fathi; Caroline Pantofaru; Leonidas J Guibas; Andrea Tagliasacchi; Frank Dellaert; Thomas Funkhouser", "journal": "", "ref_id": "b21", "title": "Panoptic neural fields: A semantic object-aware neural scene representation", "year": "2022" }, { "authors": "Uday Kusupati; Shuo Cheng; Rui Chen; Hao Su", "journal": "", "ref_id": "b22", "title": "Normal assisted stereo depth estimation", "year": "2020" }, { "authors": "Verica Lazova; Vladimir Guzov; Kyle Olszewski; Sergey Tulyakov; Gerard Pons-Moll", "journal": "", "ref_id": "b23", "title": "Control-nerf: Editable feature volumes for scene rendering and manipulation", "year": "2022" }, { "authors": "Haotong Lin; Sida Peng; Zhen Xu; Yunzhi Yan; Qing Shuai; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b24", "title": "Efficient neural radiance fields for interactive free-viewpoint video", "year": "2022" }, { "authors": "Lingjie Liu; Jiatao Gu; Kyaw Zaw Lin; Tat-Seng Chua; Christian Theobalt", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b25", "title": "Neural sparse voxel fields", "year": "2020" }, { "authors": "Francesco Locatello; Dirk Weissenborn; Thomas Unterthiner; Aravindh Mahendran; Georg Heigold; Jakob Uszkoreit; Alexey Dosovitskiy; Thomas Kipf", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b26", "title": "Objectcentric learning with slot attention", "year": "2020" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b27", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b28", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "Mateusz Michalkiewicz; K Jhony; Dominic Pontes; Mahsa Jack; Anders Baktashmotlagh; Eriksson", "journal": "", "ref_id": "b29", "title": "Implicit surface representations as layers in neural networks", "year": "2019" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Springer", "ref_id": "b30", "title": "Nerf: Representing scenes as neural radiance fields for view 
synthesis", "year": "2020" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Trans. Graph", "ref_id": "b31", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b32", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "Thomas Neff; Pascal Stadlbauer; Mathias Parger; Andreas Kurz; H Joerg; Mueller; R Alla Chakravarty; Anton S Chaitanya; Markus Kaplanyan; Steinberger", "journal": "Computer Graphics Forum", "ref_id": "b33", "title": "DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks", "year": "2021" }, { "authors": "Michael Niemeyer; Andreas Geiger", "journal": "", "ref_id": "b34", "title": "Giraffe: Representing scenes as compositional generative neural feature fields", "year": "2021" }, { "authors": "Helen Oleynikova; Zachary Taylor; Marius Fehr; Roland Siegwart; Juan Nieto", "journal": "IEEE", "ref_id": "b35", "title": "Voxblox: Incremental 3d euclidean signed distance fields for on-board mav planning", "year": "2017" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b36", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Songyou Peng; Michael Niemeyer; Lars Mescheder; Marc Pollefeys; Andreas Geiger", "journal": "Springer", "ref_id": "b37", "title": "Convolutional occupancy networks", "year": "2020" }, { "authors": "Barbara Roessle; Jonathan T Barron; Ben Mildenhall; Matthias Pratul P Srinivasan; Nießner", "journal": "", "ref_id": "b38", "title": "Dense depth priors for neural radiance fields from sparse input views", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b39", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Kevin J David M Rosen; Antonio Terán Doherty; John J Espinoza; Leonard", "journal": "Annual Review of Control, Robotics, and Autonomous Systems", "ref_id": "b40", "title": "Advances in inference and representation for simultaneous localization and mapping", "year": "2021" }, { "authors": "S M Mehdi; Daniel Sajjadi; Aravindh Duckworth; Mahendran; Filip Sjoerd Van Steenkiste; Mario Pavetić; Leonidas J Lučić; Klaus Guibas; Thomas Greff; Kipf", "journal": "", "ref_id": "b41", "title": "Object scene representation transformer", "year": "2008" }, { "authors": "S M Mehdi; Henning Sajjadi; Etienne Meyer; Urs Pot; Klaus Bergmann; Noha Greff; Suhani Radwan; Mario Vora; Daniel Lučić; Alexey Duckworth; Dosovitskiy", "journal": "", "ref_id": "b42", "title": "Scene representation transformer: Geometry-free novel view synthesis through set-latent scene representations", "year": "2022" }, { "authors": "Wenzhe Shi; Jose Caballero; Ferenc Huszár; Johannes Totz; Rob Andrew P Aitken; Daniel Bishop; Zehan Rueckert; Wang", "journal": "", "ref_id": "b43", "title": "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network", "year": "2016" }, { "authors": "Semon Vincent Sitzmann; Bill Rezchikov; Josh Freeman; Fredo Tenenbaum; Durand", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b44", 
"title": "Light field networks: Neural scene representations with single-evaluation rendering", "year": "2021" }, { "authors": "Karl Stelzner; Kristian Kersting; Adam R Kosiorek", "journal": "", "ref_id": "b45", "title": "Decomposing 3d scenes into objects via unsupervised volume segmentation", "year": "2021" }, { "authors": "Edgar Sucar; Shikun Liu; Joseph Ortiz; Andrew J Davison", "journal": "", "ref_id": "b46", "title": "imap: Implicit mapping and positioning in real-time", "year": "2021" }, { "authors": "Yi Tay; Mostafa Dehghani; Samira Abnar; Yikang Shen; Dara Bahri; Philip Pham; Jinfeng Rao; Liu Yang; Sebastian Ruder; Donald Metzler", "journal": "", "ref_id": "b47", "title": "Long range arena: A benchmark for efficient transformers", "year": "2020" }, { "authors": "Zachary Teed; Jia Deng", "journal": "Advances in neural information processing systems", "ref_id": "b48", "title": "Droid-slam: Deep visual slam for monocular, stereo, and rgb-d cameras", "year": "2021" }, { "authors": "Ayush Tewari; Justus Thies; Ben Mildenhall; Pratul Srinivasan; Edgar Tretschk; W Yifan; Christoph Lassner; Vincent Sitzmann; Ricardo Martin-Brualla; Stephen Lombardi", "journal": "Wiley Online Library", "ref_id": "b49", "title": "Advances in neural rendering", "year": "2022" }, { "authors": "Alex Trevithick; Bo Yang", "journal": "", "ref_id": "b50", "title": "Grf: Learning a general radiance field for 3d representation and rendering", "year": "2021" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b51", "title": "Attention is all you need", "year": "2017" }, { "authors": "Chien-Yao Wang; Alexey Bochkovskiy; Hong-Yuan Mark Liao", "journal": "", "ref_id": "b52", "title": "Yolov7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors", "year": "2023" }, { "authors": "Qianqian Wang; Zhicheng Wang; Kyle Genova; P Pratul; Howard Srinivasan; Jonathan T Zhou; Ricardo Barron; Noah Martin-Brualla; Thomas Snavely; Funkhouser", "journal": "", "ref_id": "b53", "title": "Ibrnet: Learning multi-view image-based rendering", "year": "2021" }, { "authors": "Yi Wei; Shaohui Liu; Yongming Rao; Wang Zhao; Jiwen Lu; Jie Zhou", "journal": "", "ref_id": "b54", "title": "Nerfingmvs: Guided optimization of neural radiance fields for indoor multi-view stereo", "year": "2021" }, { "authors": "Yunyang Xiong; Zhanpeng Zeng; Rudrasis Chakraborty; Mingxing Tan; Glenn Fung; Yin Li; Vikas Singh", "journal": "", "ref_id": "b55", "title": "Nyströmformer: A nyström-based algorithm for approximating self-attention", "year": "2021" }, { "authors": "Bangbang Yang; Yinda Zhang; Yinghao Xu; Yijin Li; Han Zhou; Hujun Bao; Guofeng Zhang; Zhaopeng Cui", "journal": "", "ref_id": "b56", "title": "Learning object-compositional neural radiance field for editable scene rendering", "year": "2021" }, { "authors": "Alex Yu; Vickie Ye; Matthew Tancik; Angjoo Kanazawa", "journal": "", "ref_id": "b57", "title": "pixelnerf: Neural radiance fields from one or few images", "year": "2021" }, { "authors": "Manzil Zaheer; Guru Guruganesh; Avinava Kumar; Joshua Dubey; Chris Ainslie; Santiago Alberti; Philip Ontanon; Anirudh Pham; Qifan Ravula; Li Wang; Yang", "journal": "Advances in neural information processing systems", "ref_id": "b58", "title": "Big bird: Transformers for longer sequences", "year": "2020" }, { "authors": "Shuaifeng Zhi; Tristan Laidlow; Stefan Leutenegger; 
Andrew J Davison", "journal": "", "ref_id": "b59", "title": "In-place scene labelling and understanding with implicit scene representation", "year": "2021" }, { "authors": "Zihan Zhu; Songyou Peng; Viktor Larsson; Weiwei Xu; Hujun Bao; Zhaopeng Cui; Martin R Oswald; Marc Pollefeys", "journal": "", "ref_id": "b60", "title": "Nice-slam: Neural implicit scalable encoding for slam", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 335.91, 458.31, 209.2, 25.41 ], "formula_id": "formula_0", "formula_text": "Attention h (Q, K, V ) = softmax QK T √ d k V,(1)" }, { "formula_coordinates": [ 4, 129.59, 361.5, 156.77, 12.05 ], "formula_id": "formula_1", "formula_text": "Z = E ({I n , P n }) ,(2)" }, { "formula_coordinates": [ 4, 125.45, 634.12, 160.91, 25.51 ], "formula_id": "formula_2", "formula_text": "O N hw 64 2 d k .(3)" }, { "formula_coordinates": [ 4, 187.32, 262.82, 357.79, 450.68 ], "formula_id": "formula_3", "formula_text": "{•n} ≡ {• 1 , . . . , • N } plexity is O N (hw) 2 64 d k .(4)" }, { "formula_coordinates": [ 4, 392.85, 583.92, 152.27, 22.31 ], "formula_id": "formula_4", "formula_text": "O N hw 64 n l d k (5)" }, { "formula_coordinates": [ 4, 399.83, 641.8, 145.28, 12.05 ], "formula_id": "formula_5", "formula_text": "O (hwn l d k ) ,(6)" }, { "formula_coordinates": [ 5, 50.11, 239.46, 236.25, 22.89 ], "formula_id": "formula_6", "formula_text": "{I tp ∈ R h k × w k ×3 }." }, { "formula_coordinates": [ 5, 77.6, 351.79, 208.77, 13.59 ], "formula_id": "formula_7", "formula_text": "r Ct tp = K -1 n • x tp = [x tp /z tp , y tp /z tp , 1, 1] T ,(7)" }, { "formula_coordinates": [ 5, 129.71, 388.39, 156.65, 13.02 ], "formula_id": "formula_8", "formula_text": "r W tp = W T Ct • r Ct tp .(8)" }, { "formula_coordinates": [ 5, 111.76, 448.33, 174.6, 12.05 ], "formula_id": "formula_9", "formula_text": "{Q tp } = {γ (o t ) ⊕ γ (r tp )}(9)" }, { "formula_coordinates": [ 5, 130.5, 492.17, 155.87, 12.66 ], "formula_id": "formula_10", "formula_text": "D = (D CNN • D A )(10)" }, { "formula_coordinates": [ 5, 118.54, 564.06, 167.82, 12.05 ], "formula_id": "formula_11", "formula_text": "{Z tp } = D A ({Q tp }, Z)(11)" }, { "formula_coordinates": [ 5, 50.11, 593.54, 49.22, 12.54 ], "formula_id": "formula_12", "formula_text": "Z t ∈ R h k × w" }, { "formula_coordinates": [ 5, 135.04, 623.89, 151.32, 12.78 ], "formula_id": "formula_13", "formula_text": "Ît = D CNN (Z t ) ,(12)" }, { "formula_coordinates": [ 5, 369.89, 519.07, 175.22, 26.87 ], "formula_id": "formula_15", "formula_text": "L rgb = 1 hw ij Ît -I t 2 ,(14)" }, { "formula_coordinates": [ 5, 355.74, 571.71, 189.37, 26.35 ], "formula_id": "formula_16", "formula_text": "L d = 1 n tp tp | log Dtp -log D tp |,(15)" }, { "formula_coordinates": [ 5, 387.66, 687.02, 157.45, 25.28 ], "formula_id": "formula_17", "formula_text": "O N (hw) 2 64k 2 d k ,(16)" }, { "formula_coordinates": [ 6, 136.43, 317.32, 149.93, 22.31 ], "formula_id": "formula_18", "formula_text": "O hw k 2 n l d k ,(17)" }, { "formula_coordinates": [ 12, 331.55, 488.67, 190.88, 9.65 ], "formula_id": "formula_19", "formula_text": "L = L d + λ rgb L rgb + λ v (L d,v + λ rgb L rgb,v ) ," } ]
10.18653/v1/2020.coling-main.343
2023-05-16
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b4", "b0", "b5", "b18", "b17", "b42", "b23", "b27", "b16", "b9", "b10", "b9", "b2", "b37", "b11", "b1", "b22", "b3" ], "table_ref": [], "text": "Event extraction is an essential yet challenging task for natural language understanding. Given a piece of text, event extraction systems discover the event mentions and then recognize event triggers and their event arguments according to pre-defined event schema (Doddington et al., 2004;Ahn, 2006). As shown in Figure 1, the sentence \"Capture of the airport by American and British troops in a facility that has been airlifting American troops to Baghdad.\" contains two events, a Movement:Transport event triggered by \"airlifting\" and a Transaction:Transfer-Ownership event triggered by \"Capture\".\nIn the Movement:Transport event, three event roles are involved, i.e., Artifact, Destination, Capture of the airport would give American and British troops a facility for airlifting equipment and troops to Baghdad. and Origin, and their arguments are troops, airports, and Baghdad, respectively. As to the Transaction:Transfer-Ownership event, the event roles are Beneficiary, Origin, and Artifact. Accordingly, the arguments are troops, Baghdad, and airports.\nTraditional event extraction methods regard the task as a trigger classification sub-task and several arguments classification sub-tasks (Du and Cardie, 2020;Liu et al., 2020;Lin et al., 2020;Zhang and Ji, 2021;Nguyen et al., 2021Nguyen et al., , 2022a,b),b), while some of the recent research casting the task as a sequence generation problem (Paolini et al., 2021;Li et al., 2021;Hsu et al., 2022;Huang et al., 2023). Compared with classification-based methods, the latter line is more data-efficient and flexible. Whereas, the data containing event records are scarce, and the performance is influenced by the amount of data as the results shown in Hsu et al. (2022).\nAs constructing large-scale labeled data is of great challenge, data augmentation plays an important role here to alleviate the data deficient prob-lem. There are three main augmentation methods, i.e., Rule-based augmentation method (Wei and Zou, 2019b;Dai and Adel, 2020), generative method (Wu et al., 2019;Kumar et al., 2020;Anaby-Tavor et al., 2020;Wei and Zou, 2019a;Ng et al., 2020), and text-aware method (Ding et al., 2020). However, they have different drawbacks. 1) Grammatical Incorrectness. Rule-based methods expand the original training data using automatic heuristic rules, such as randomly synonyms replacement, which effectively creates new training instances. As the example of Rule-based Aug illustrated in Figure 1, these processes may distort the text, making the generated syntactic data grammatically incorrect. 2) Structure Misalignment. Triggers and arguments are key components of event records, whether for both the original one and the augmented one. Nonetheless, triggers and arguments may not always exist in previous augmentation methods. As the example of Generative Aug illustrated in Figure 1, even though the meaning of the generated augmented sentence is quite similar to the original one, the important argument \"airport\" is missing. This may mislead the model to weaken the recognition of the DESTINATION role.\n3) Semantic Drifting. Another important aspect of data augmentation is semantic alignment. The generated text needs to express the original event content without semantic drifting. 
However, this problem is commonly met in the Text-aware Aug method. As the example illustrated in Figure 1, the sentence completely contains all the triggers and arguments. But instead of Baghdad, Iraq is regarded as the ORIGIN in generated sentences, which may confuse the model to recognize the correct ORIGIN role.\nIn order to solve the aforementioned problem when applying data augmentation to event extraction, we proposed a denoised structure-to-text augmentation framework for event extraction (DAEE). For structure misalignment problems, a knowledgebased structure-to-text generation model is proposed. It is equipped with an additional argumentaware loss to generate augmentation samples that exhibit features of the target event. For the Semantic Drift problem, we designed a deep reinforcement learning (RL) agent. It distinguishes whether the generated text expresses the corresponding event based on the performance variation of the event extraction model. At the same time, the agent further guides the generative model to pay more attention to the samples with the Structure Misalignment and Grammatical Incorrectness problems and thus affords the Event-aware Aug text that both contain important elements and represent appropriate semantics. Intuitively, our agent is able to select effective samples from the combination of generated text and its event information to maximize the reward based on the event extraction model.\nThe key contributions of this paper are threefold:\n• We proposed a denoised structure-to-text augmentation framework. It utilizes an RL agent to select the most effective subset from the augmented data to enhance the quality of the generated data.\n• Under the proposed framework, a knowledgebased structure-to-text generation model is proposed to satisfy the event extraction task, which generates high-quality training data containing corresponding triggers and arguments.\n• Experimental results on widely used benchmark datasets prove that the proposed method achieves superior performance over stateof-the-art event extraction methods on one dataset and comparable results on the other datasets.\n2 Related Work" }, { "figure_ref": [], "heading": "Event Extraction", "publication_ref": [ "b26", "b32", "b40", "b31", "b20", "b17", "b15", "b39", "b27", "b21", "b16", "b9", "b19", "b6" ], "table_ref": [], "text": "Many existing methods use classification-based models to extract events (Nguyen et al., 2016;Wang et al., 2019;Yang et al., 2019;Wadden et al., 2019;Liu et al., 2018). And some global features are introduced to make an enhancement for joint inference (Lin et al., 2020;Li et al., 2013;Yang and Mitchell, 2016). With the large-scale use of PLMs, some of the researchers dedicated to developing generative capabilities for PLMs in event extraction, i.e., transforming into translation tasks (Paolini et al., 2021), generating with constrained decoding methods (Lu et al., 2021), and template-based conditional generation (Li et al., 2021;Hsu et al., 2022;Liu et al., 2022;Du et al., 2022). Compare with the above method directly uses a limited number of the training set, we use a denoised structure-to-text augmentation method to alleviate the problem of insufficient data." 
}, { "figure_ref": [], "heading": "Data Augmentation", "publication_ref": [ "b1", "b7", "b38", "b37", "b11", "b33", "b34", "b3" ], "table_ref": [], "text": "Rather than starting from an existing example and modifying it, some model-based data augmentation approaches directly estimate a generative process and produce new synthetic data by masking randomly chosen words from the training set and sampling from it (Anaby-Tavor et al., 2020;Hou et al., 2018;Xia et al., 2019;Wu et al., 2019;Kumar et al., 2020).\nOther research designs prompts (Wang et al., 2022, 2021) or uses conditional generation (Ding et al., 2020) for data augmentation. However, the above methods are mainly applied to generation tasks or comprehension tasks with simpler goals, such as text classification. When faced with complex structured extraction tasks, post-processing screening becomes a cumbersome problem. Inspired by RL, we use a policy model to automatically sift through the generated data for valid and semantically consistent samples." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this paper, we focus on generating an additional training set from structured event records for augmentation. Previous augmentation methods usually suffer from the Structure Misalignment, Grammatical Incorrectness, and Semantic Drifting problems mentioned in the introduction. Instead, we introduce a policy-based RL strategy to select intact augmentation sentences." }, { "figure_ref": [], "heading": "Task Definition", "publication_ref": [], "table_ref": [], "text": "The components of our proposed method will be described in the following." }, { "figure_ref": [], "heading": "Reinforcement Learning components", "publication_ref": [], "table_ref": [], "text": "The definitions of the fundamental components are introduced in the following. The States include the information from the current sentence and the corresponding golden event records. Both parts are converted into sentence vectors through PLMs for the action decision. We update the states after re-generating the text guided by the previous action probability. At each iteration, the Actions decided by the policy model are whether to remove or retain the generated instances, according to whether the generated sentences express the corresponding event records. We use the enhancement of the F1 score as the Rewards for the actions decided by the policy model. Specifically, the F1 score of argument classification $F_i$ at the i-th epoch on the development set is adopted as the performance evaluation criterion. Thus, the reward $R_i$ can be formulated as the difference between adjacent epochs:\n$R_i = \alpha(F_i - F_{i-1}), \quad (1)$\nwhere α is a scaling factor to convert the reward into a numeric result for the RL agent. " }, { "figure_ref": [], "heading": "Event Extraction Model", "publication_ref": [ "b19", "b16" ], "table_ref": [], "text": "We use the generation-based method GTEE-BASE (Liu et al., 2022) as the event extraction model, where [SEP] is the corresponding separator marker. 
Following (Li et al., 2021), we reuse the predefined argument templates; the prompt $P_e$ contains the type instruction and the template, and the event records are parsed by template matching and slot mapping according to their own event description templates." }, { "figure_ref": [ "fig_3" ], "heading": "Structure-to-text Generation Model", "publication_ref": [ "b29" ], "table_ref": [], "text": "As to the structure-to-text generation model, T5 (Raffel et al., 2020) is used because of its outstanding generation performance. Similar to its original setting, we define the task as a sequence transformation task by adding the prefix \"translate knowledge into sentence\" at the beginning as $P_g$ to guide the generation model. It is difficult to directly generate text from structured event records with limited training data, so we randomly mask the original sentence with the special token [M] to produce the masked sentence $C'$, with mask rate λ. $C'$ is used as the background in the input of the generation model $X_g$. As shown in Figure 3, the structured information annotated in the training set is transformed into an event description $D_g$ and a relation description $R_g$, respectively. They are further used as background knowledge to assist in the structure-to-text generation, and the original sentence C is regarded as the generation target $Y_g$. Given the previously generated tokens $y_{<s}$ and the input $X_g$, the entire probability $p(Y_g \mid X_g)$ is calculated as:\n$p(Y_g \mid X_g) = \prod_{s=1}^{|Y_g|} p(y_s \mid y_{<s}, X_g), \qquad X_g = [P_g; D_g; R_g; C']. \quad (2)$\nIn addition, an argument-aware loss $\mathcal{L}_a$ is added to enforce the model to pay more attention to the event arguments during the generation process. For all event arguments that have not been generated, we search for the text spans in the generated text most similar to the remaining event arguments. In detail, we aggregate the triggers and arguments which are not included in the generated text. These triggers and arguments are transformed into a one-hot embedding set A, whose elements are denoted as $a_m \in A$. The probability of selecting each token at each position in the generation model is extracted for matching the optimal related position. By setting the window size to the number of words in $a_m$, we divide the probability sequence into pieces using a sliding window and obtain the candidate set $K_m$ for each $a_m$ in A. We first calculate the L1 distance between $a_m$ and each element in $K_m$ as the distance score between them. Then, all distance scores are mixed together after completely traversing A. While avoiding conflicts between matching positions, a greedy search is finally utilized to assign each element in A to the position with the lowest distance score. Together with the original language model loss function $\mathcal{L}_{lm}$, the loss function of the generation model $\mathcal{L}_g$ is defined as:\n$\mathcal{L}_{lm} = \sum_{s=1}^{|Y_g|} y_s \log p(y_s \mid y_{<s}, X_g), \qquad \mathcal{L}_a = \sum_{t=1}^{T} \sum_{k=k_t}^{k'_t} y_k \log p(y_k \mid y_{<k}, X_g), \qquad \mathcal{L}_g = -\frac{1}{N}\sum_{n=1}^{N} (\beta \mathcal{L}_{lm} + \gamma \mathcal{L}_a), \quad (3)$\nwhere N is the number of instances, T is the number of elements contained in the current unmatched set, $k_t$ and $k'_t$ denote the start and end positions of the t-th unmatched element in the original sentence, and $y_k$ is the k-th corresponding trigger or argument word." }, { "figure_ref": [], "heading": "Policy Model", "publication_ref": [], "table_ref": [], "text": "For each input sentence, our policy model is required to determine whether it expresses the target event records. 
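Before detailing the policy model, the sliding-window matching used by the argument-aware loss above can be sketched as follows. This is an illustrative reading of the described procedure (one-hot targets, L1 distance, greedy conflict-free assignment), not the authors' released code; tensor shapes and names are assumptions:

```python
import torch

def match_missing_spans(token_probs, missing_spans):
    """Greedy, conflict-free matching of missing trigger/argument token ids to the
    sliding-window positions whose predicted distributions are closest in L1 distance.

    token_probs: (L, V) tensor with the generator's output distribution per position.
    missing_spans: list of token-id lists, one per trigger/argument not yet generated.
    Returns {span_index: start_position}.
    """
    L, V = token_probs.shape
    candidates = []                                   # (distance, span index, start position)
    for s_idx, ids in enumerate(missing_spans):
        one_hot = torch.zeros(len(ids), V)
        one_hot[torch.arange(len(ids)), torch.tensor(ids)] = 1.0
        for start in range(L - len(ids) + 1):         # sliding window of the span's length
            window = token_probs[start:start + len(ids)]
            candidates.append(((window - one_hot).abs().sum().item(), s_idx, start))

    assignment, used = {}, set()
    for dist, s_idx, start in sorted(candidates):     # greedy: lowest distance first
        span = set(range(start, start + len(missing_spans[s_idx])))
        if s_idx not in assignment and not (span & used):
            assignment[s_idx] = start
            used |= span
    return assignment
```

Under this reading, the matched positions are where the unmatched trigger and argument tokens are enforced by the cross-entropy term of the argument-aware loss in Eq. (3).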
Thus, the policy model makes a removal action if the sentence is irrelevant to the target event records, and it is analogous to a binary classifier. For each generated sentence $G \in \mathcal{G}_i$, the input of the policy model $X_p$ consists of G and the corresponding event description $D_g$. The symbolic representation of the input is formulated as\n$X_p = [D_g; [\mathrm{SEP}]; G],$\nand the loss function of the policy model is\n$\mathcal{L}_p = -\frac{1}{N}\sum_{n=1}^{N} y_n \log p(y_n \mid X_p), \quad (4)$\nwhere $y_n$ is the golden action for the n-th sample, and N is the number of instances." }, { "figure_ref": [], "heading": "Training Strategy", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Pre-training", "publication_ref": [], "table_ref": [], "text": "The three components, i.e., the event extraction model, the structure-to-text generation model, and the policy model, are pre-trained with different strategies. Since the policy model has no task-specific information at the very beginning, the generation model is trained for several epochs at first to establish the training set for the policy model. We stop training the generation model once more than 70% of the triggers and arguments can be generated. The generated sentences containing their corresponding triggers and arguments are considered positive samples for the policy model, while the others are treated as negative samples. To get a balance between positive and negative samples, we also randomly select some event descriptions and sentences irrelevant to the event descriptions as negative samples. We early-stop training the policy model when the precision reaches 80%∼90%. This preserves the information entropy of the results predicted by the policy model and extends the exploration space. Then we continue to pre-train the generation model and the event extraction model with the original training set for a fixed number of epochs. These two pre-trained models are used as our initialized generation model and extraction model in the retraining process, respectively." }, { "figure_ref": [], "heading": "Retraining with Rewards", "publication_ref": [ "b28" ], "table_ref": [], "text": "For the i-th epoch of retraining the agent, the policy model selects actions for each element in the generated dataset $\mathcal{G}_i$. According to the actions, $\mathcal{G}_i$ is divided into a negative sample set $N_i$ and a positive sample set $P_i$. Then we sample a subset $T_o$ from the original training data, and $T_o$ is mixed with $P_i$ as the reconstructed training set $T_i$, which is used to retrain the event extraction model. Besides the improvement of the argument F1 score, growth in the trigger F1 score is also beneficial for the model. Therefore, we update the checkpoint when either the trigger or the argument F1 score improves, to avoid falling into a local optimum. Following (Qin et al., 2018), we employ two sets for training the policy model,\n$D_{i-1} = N_{i-1} - (N_{i-1} \cap N_i), \qquad D_i = N_i - (N_{i-1} \cap N_i). \quad (5)$\nSince we cannot explore all directions to get the maximum reward in a single step, we select a constant number of samples from $D_{i-1}$ and $D_i$ for training, respectively, named $\hat{D}_{i-1}$ and $\hat{D}_i$. Referring to Equation (4), the retraining loss function of our policy model $\mathcal{L}'_p$ is defined as:\n$\mathcal{L}'_p = \sum_{\hat{D}_i} y_n \log p(y_n \mid X_p) R_i + \sum_{\hat{D}_{i-1}} y_n \log p(y_n \mid X_p)(-R_i). \quad (6)$\nThe probability of being considered an invalid sample is taken as the weight for retraining the corresponding instance in the generation model. 
So we use the probability of removing the sample, $w_n = 1 - \log p(y_n \mid X_p)$, as the sample weight and retrain the generation model with the following retraining loss function $\mathcal{L}'_g$, referring to Equation (3):\n$\mathcal{L}'_g = -\frac{1}{N}\sum_{n=1}^{N} \left(\beta w_n \mathcal{L}^n_{lm} + \gamma w_n \mathcal{L}^n_a\right), \quad (7)$\nwhere $\mathcal{L}^n_{lm}$ and $\mathcal{L}^n_a$ are the language model loss and the argument-aware loss for the n-th sample, respectively. The details of the retraining algorithm are shown in Appendix A. Following previous work (Zhang et al., 2019;Wadden et al., 2019), we use precision (P), recall (R), and F1 scores to evaluate the performance. More specifically, we report the performance on both trigger classification (Trig-C) and argument classification (Arg-C). In the task of trigger classification, if the event type and the offset of the trigger are both correctly identified, the sample is denoted as correct. Similarly, correct argument classification means correctly identifying the event type, the role type, and the offset of the argument. Following (Lu et al., 2021;Liu et al., 2022), the offset of extracted triggers is decoded by string matching in the input context one by one. For the predicted argument, the nearest matched string is used as the predicted trigger for offset comparison. " }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b31", "b41", "b17", "b5", "b13", "b27", "b16", "b21", "b9", "b19" ], "table_ref": [], "text": "We compare the event extraction results of our proposed DAEE with baselines from two categories, i.e., classification-based models and generation-based models.\nThe first category consists of classification-based models. DYGIE++ (Wadden et al., 2019): a joint model with contextualized span representations. GAIL (Zhang et al., 2019): an RL model jointly extracting entities and events. ONEIE (Lin et al., 2020): a joint neural model for information extraction with several global features and beam search. BERT_QA (Du and Cardie, 2020): a method using separated question-answering pairs for event extraction. MQAEE (Li et al., 2020): a question-answering system with multi-turn asking.\nThe other category is generation-based methods, to which our proposed DAEE belongs. TANL (Paolini et al., 2021): a method that uses translation tasks to model event extraction in a trigger-argument pipeline. BART-GEN (Li et al., 2021): a document-level event extraction method through conditional generation. TEXT2EVENT (Lu et al., 2021): a method that directly generates structures from the text. DEGREE-E2E (Hsu et al., 2022): a method using discrete prompts and end-to-end conditional generation to extract events. GTEE-DYNPREF (Liu et al., 2022): a generative template-based event extraction method using dynamic prefix-tuning." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_3", "tab_4" ], "text": "The performance comparison on the dataset ACE05-E+ is shown in Table 1, which indicates that DAEE is able to guide the generation model to generate text containing events and to select suitable samples that improve the effectiveness of the event extraction model.\nTable 2 presents the performance of the baselines and DAEE on ERE-EN. The performance of DAEE decreases compared with GTEE-DYNPREF, but it is still higher than that of the other methods, which may be affected by the fact that ERE-EN contains more pronoun arguments. 
The pronoun roles would offer less information for the generation model thus reducing the role of structured text in guiding the generation model.\nComparing the results on ACE05-E as Table 3 shows, we gain an improvement of 1.1% on Trg-C and a competitive F1 score on Arg-C with the SOTA classification-based method ONEIE, outperforming the others. This observation supports that structured information used in the knowledgebased generation model makes up for the information gap used by multi-task extraction." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "We further conducted an ablation study by removing each module at a time. The experimental results on ACE05-E + are presented in Table 4. We can see that the F1 score of Arg-C decreases by 0.4% and 0.8% when removing the argument-aware loss L a and stopping retraining the generation model, respectively. The results indicate that the deployment of argument-aware loss and retraining strategy is conducive to the generation module in our framework. Then, we remove the RL strategy, which means that the generated samples are directly mixed with the original training samples for training the event extraction model from scratch. The F1 score of Trg-C and Arg-C decreases by 1.6% and 1.0%, respectively. This demonstrates that the RL strategy could ensure that the generated data is more suitable for downstream event extraction tasks and guide the improvement on both Trg-C " }, { "figure_ref": [ "fig_5" ], "heading": "Iterative Generation Discussion", "publication_ref": [ "b30", "b2" ], "table_ref": [], "text": "To illustrate our framework is able to enhance the quality of generated sentences, we calculate the masked language model score pseudolog-likelihood scores (PLLs)1 following (Salazar et al., 2020) for each training epoch. The token w s in the sentence is masked and predicted using all past and future tokens W \\s := (w 1 , . . . , w s-1 , w s+1 , . . . , w |W | ), and the PLLs for each sentence is calculated as\nPLLs(W ) := 1 |W | |W | t=1 log P MLM (w s | W \\s ; Θ).\nThe results for each epoch are the average of sentence scores over the entire training set as shown in Figure 4. PLLs is declining with the iterative process, which demonstrates that DAEE enhances the fluency of generated data and improves the effect of event extraction under the guidance of RL agent. Furthermore, we compare DAEE with a rule-based sequence labeling data augment method SDANER (Dai and Adel, 2020). SDANER contains four rule-based augmentation methods. Synonym replacement is selected according to its lowest average PLLs. DAEE generates sentences with lower PLLs compared with the rule-based method.\nThe results demonstrate that DAEE generates more fluency and grammatically correct data." }, { "figure_ref": [], "heading": "Argument Loss Analysis", "publication_ref": [], "table_ref": [], "text": "To verify the effectiveness of argument-aware loss L a in reducing mismatches triggers and arguments, we alter the hyperparameter γ and explore the change of the unmatched number of arguments Meanwhile, the number of unmatched arguments converges around 30 after adding L a , while the number converges to around 120 without L a ." }, { "figure_ref": [], "heading": "Diversity Analysis", "publication_ref": [ "b14" ], "table_ref": [ "tab_9" ], "text": "Intuitively, diverse sentence description in the training set is able to enhance the model performance. We thus verify the diversity of the generated text. 
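Before reporting the numbers, a rough sketch of the distinct-n-gram measure used next is given below (in the spirit of Li et al., 2016); the whitespace tokenization and the exact normalization are assumptions rather than the paper's implementation details.

```python
def distinct_ngrams(sentences, n):
    """Set of distinct word n-grams over a list of sentences."""
    grams = set()
    for s in sentences:
        tokens = s.split()
        grams.update(zip(*(tokens[i:] for i in range(n))))
    return grams

def novel_ngram_ratio(generated, original, n):
    """Share of distinct n-grams in the generated text that never occur in the
    original training text (higher = more diverse synthetic data)."""
    gen, ref = distinct_ngrams(generated, n), distinct_ngrams(original, n)
    return len(gen - ref) / max(len(gen), 1)

gen = ["the government says civilians were killed in the war"]
ref = ["the iraqi government reports 1252 civilians have been killed in the war"]
print(novel_ngram_ratio(gen, ref, 2), novel_ngram_ratio(gen, ref, 3))
```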
The degree of diversity is reported by calculating the number of distinct bigrams and trigrams in the generated text which has not appeared in the original text and the results are shown in Table 6. In the following, we use GENERATION MODEL to represent the directly trained structure-to-text generation model. Referring to the indicators proposed in (Li et al., 2016), The diversity, the argument-aware loss L a helps the GENERATION MODEL to produce more diverse synthetic data, which is because the argument-aware loss makes the model focus more on retaining the triggers and arguments rather " }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work was supported by the Joint Funds of the National Natural Science Foundation of China (Grant No. U19B2020). We would like to thank the anonymous reviewers for their thoughtful and constructive comments. " }, { "figure_ref": [], "heading": "A Details of Methods", "publication_ref": [], "table_ref": [], "text": "The detail of the retraining algorithm is shown in Algorithm 1." }, { "figure_ref": [], "heading": "B Details of Experiments B.1 Data Statistics", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "In this paper, we use the three datasets to verify our proposed method, the statistics of the datasets are shown in Table 7." }, { "figure_ref": [], "heading": "B.2 Implementation Details", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "All experiments were conducted with NVIDIA A100 Tensor Core GPU 40GB. For the pre-trained language model, we reuse the three English models released by Huggingface2 . Specifically, γ and β are set to 0.1 and 0.9 in Equation ( 2), respectively, the RL training epoch is set to 80, the reward scale α is set to 10, the sample ratio from original event extraction training set is set to 0.5, the negative sample ratio for GTEE-BASE in training is set to 12% for event extraction, and the other hyperparameters used are shown in Table 8." }, { "figure_ref": [], "heading": "B.3 Generation Reliability Discussion", "publication_ref": [], "table_ref": [], "text": "To verify the verifies the convince of the generated data, we train GTEE-BASE through the samples Retrain policy through D i and D i-1 according Equation 621:\nUpdate training weight 1 -log p(Y p | X p ) → w n for each sample in Y g , 22:\nRetrain the generation model through weighted Y g according Equation 323:\nUpdate θ g and generate G i 24: end for" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "facility that airlifting has American troops to Baghdad.\nTroops in a facility that has been airlifting American military supplies to Baghdad. Capture of the airport by Iraqi forces." } ]
Event extraction aims to recognize pre-defined event triggers and arguments from texts, a task that suffers from a lack of high-quality annotations. In most NLP applications, incorporating large amounts of synthetic training data is a practical and effective way to alleviate data scarcity. However, when applied to event extraction, recent data augmentation methods often neglect the problems of grammatical incorrectness, structural misalignment, and semantic drift, leading to unsatisfactory performance. To address these problems, we propose a denoised structure-to-text augmentation framework for event extraction (DAEE), which generates additional training data with a knowledge-based structure-to-text generation model and iteratively selects an effective subset of the generated data with a deep reinforcement learning agent. Experimental results on several datasets demonstrate that the proposed method generates more diverse text representations for event extraction and achieves results comparable to the state of the art.
Boosting Event Extraction with Denoised Structure-to-Text Augmentation
[ { "figure_caption": "Figure 1 :1Figure 1: Example of text data augmentation methods.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The proposed policy-based RL framework.cording to the action selected by the policy-based agent. Thus, we obtain the denoised augmentation training data for event extraction model. We use the filtered training data to retrain the event extraction model and the enhancement of the F1 score is regarded as a reward to retrain the policy model. The guidance of the event extraction model further helps the policy model select efficient samples. Finally, the generation model is retrained according to the weighted training data, and the weight is the removing action probability calculated by the retrained policy model. The retraining captain the generation model produces superior-quality sentence and consequently help the other components. The components of our proposed method will be described in the following.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "of the [M] would give [M] and British [M] a facility [M] [M] [M] and troops to Baghdad.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Example of structured information representations and structure-to-text generation.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "with the trained irrelevance classifiers as the event extraction model. The event extraction model is based on BART (Lewis et al., 2020), the entire probability p(Y e | X e ) is calculated through formulated input X e = [P e ; [SEP]; C], where [ ; ] denotes the sequence concatenation operation, and [SEP]", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Results for the PLLs of DAEE and SDANER on ACE05-E + .", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Unmatched arguments numbers of different training epochs.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "...............Policy Model...the generation-based event extraction task, theextraction process is divided into several subtasksaccording to event types E. For each event type e ∈E, the purpose of the event extraction model is togenerate Y e according to the predefined prompt P eand context C, where Y e is the answered promptscontaining extracted event records. Except for theoriginal data T o , we use a policy model as RL agentto select the effective subset P i from the generateddata G i in the i-th epoch, thus improving the dataefficiency by filtering the generated samples.3.2 FrameworkOur proposed denoised structure-to-text augmenta-tion framework is mainly composed of the event ex-traction model, structure-to-text generation model,and policy model. As the policy-based RL processshown in Figure 2, the event record is first fed intothe structure-to-text generation model to obtain theadditional training data. Then they are filtered ac-", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "with the separate marker[SEP]. We fine-tune the BERT model by feeding the [CLS] vector into the MLP layer. 
And then a softmax function is utilized to calculate the decision probability for retaining the sample G. A binary cross-entropy loss function is introduced for this classifier,", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "±0.4 75.1 ±5.0 76.9 ±0.4 58.5 ±1.5 54.4 ±0.4 56.3 ±0.2 Results on ACE05-E + . We reported the average result of eight runs with different random seeds, our results are like \"a ±b \", where \"a\" and \"b\" represents the mean and the variance, respectively. We bold the highest scores and underline the second highest scores.", "figure_data": "ModelPTrg-C RF1PArg-C RF1ONEIE72.173.672.855.454.354.8TEXT2EVENT71.272.571.854.054.854.4DEGREE-E2E --72.7--55.0GTEE-DYNPREF 67.383.074.349.860.754.7DAEE 78.8 Model PTrg-C RF1PArg-C RF1ONEIE58.459.959.151.849.250.5TEXT2EVENT59.259.659.449.447.248.3DEGREE-E2E --57.1--49.6GTEE-DYNPREF 61.972.866.951.958.855.1DAEE68.7 ±0.8 61.6 ±0.5 65.0 ±0.4 57.7 ±0.8 46.7 ±0.4 51.6 ±0.3", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on ERE-EN.", "figure_data": "4 Experiments4.1 Experimental Settings4.1.1 Datasets and Evaluation MetricsFollowing the previous work (Zhang et al., 2019;Wadden et al., 2019; Du and Cardie, 2020; Lu et al.,2021; Hsu et al., 2021; Liu et al., 2022), We prepro-cess the two widely used English event extractionbenchmarks, ACE 2005 (LDC2006T06) and ERE(LDC2015E29, LDC2015E68, and LDC2015E78)into ACE05-E and ERE-EN. ACE 2005 is furtherpreprocessed into ACE05-E + following (Lin et al.,2020). Statistics of the datasets are further shownin Appendix B.1.", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results on ACE05-E. The first group is the classification-based methods and the second group is the generation-based methods.", "figure_data": "ModelPTrg-C RF1PArg-C RF1DYGIE++--69.7--48.8GAIL74.869.472.061.645.752.4ONEIE--74.7--56.8BERT_QA71.173.772.356.850.253.3MQAEE--71.7--53.4TANL--68.5--48.5BART-GEN69.572.871.156.051.653.7TEXT2EVENT67.571.269.246.753.449.8DEGREE-E2E --70.9--54.4GTEE-DYNPREF 63.784.472.649.064.855.8DAEE75.1 ±1.7 76.6 ±4.1 75.8 ±0.6 55.9 ±3.6 57.2 ±1.8 56.5 ±0.3", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "It can be observed that DAEE achieves the SOTA F1 score on ACE05-E + and obtain 1.1% and 0.7% gain of F1 scores forTrg-C and Arg-C, respectively. The improvement indi-", "figure_data": "ModelPTrg-C RF1PArg-C RF1DAEE78.8 75.1 76.9 58.5 54.4 56.3w/o AL 78.0 75.5 76.7 56.2 55.6 55.9w/o RG 78.8 75.5 77.1 56.2 54.9 55.5w/o RL 79.0 71.9 75.3 56.3 54.3 55.3", "figure_id": "tab_5", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation Study on ACE05-E + for event extraction. AL denotes the argument-aware loss L a , RG denotes the process of retraining the generation model, and RL denotes the reinforcement learning strategy.", "figure_data": "", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "we got uh purchased by our strategic partner, so um GENERATION MODEL (w/o La) yeah , we bought from our partner, um, um GENERATION MODEL well , we purchased our partner purchased, um DAEE yeah, we got uh purchased by our partner, Event type Life:Die & Conflict:Attack Original sentence the iraqi government reports 1252 civilians have been killed in the war. GENERATION MODEL (w/o La) the iraqi government says more than 200 civilians have been killed in this war . 
GENERATION MODEL the iraqi government killed civilians in the war . DAEE the iraqi government says more than 200 civilians have been killed the war .", "figure_data": "Event typeTransaction:Transfer-OwnershipOriginal sentenceyes,", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Efficient generated synthetic data from our proposed methods and simple generated Sentence. Text chunks in Blue and Red are the event triggers for different event type, text chunks in Green are the event arguments.", "figure_data": "", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results of diversity analysis on ACE05-E + .than generating more similar content to the original text. The diversity is affected by the RL strategy due to the concentration on the effect of event extraction. Horizontally compared to Table4, the experimental results demonstrate that diversified text can enable the model to obtain more information based on similar event records.", "figure_data": "Modelbigrams trigramsGENERATION MODEL0.1600.398GENERATION MODEL (w/o La)0.1250.323DAEE0.1430.3654.2.6 Synthetic Data Case StudyTable 5 shows representative examples generatedby our proposed DAEE and other methods and wecan see the following comparative phenomena. Inthe case of comparing whether to add the argument-aware loss, the GENERATION MODEL generatesall the triggers and arguments in three examples,which demonstrate the generation model withoutL a shuffles the text leaking problem. There is amisalignment in the first example for the text gener-ated through GENERATION MODEL. The originalsentence contains two roles, i.e., ARTIFACT andBUYER, and their arguments are we and partner,but the two arguments have been swapped in thesynthetic text. In the second example, the govern-ment should play the role of AGENT in LIFE:DIEevent according to the output of GENERATIONMODEL, which is not appeared in the golden eventrecord and resulting in redundancy. Neither of theabove errors occurs in DAEE shown in the table,which proves the RL strategy could also be guid-ance for improving the effectiveness of generativemodels.", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Dataset statistics.", "figure_data": "DatasetSplit #Sents #Events #RolesTrain 17,1724,2024,859ACE05-EDev923450605Test832403576Train 19,2164,4196,607ACE05-E +Dev901468759Test676424689Train 14,7366,2088,924ERE-ENDev1,209525730Test1,163551822NameEEPOLICYGENlearning rate (pretrain)1e-51e-53e-5learning rate (retrain)1e-61e-63e-5train batch size32*23232epochs (pretrain)15-20epochs (retrain)211weight decay (pretrain)1e-51e-51e-5gradient clip5.05.05.0warm-up ratio (pretrain)10%--optimizerAdamWAdamAdam", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Hyperparameter setting for our models, EE denotes the event extraction model, POLICY denotes the policy model, GEN denotes the generation model.", "figure_data": "", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "The experimental results on ACE05-E + ,DD denotes using the generated data though DAEE, while GD denotes the data from GENERATION MODEL without RL, DD denotes the data from the original training set.with event record, which is because that only the samples with event record are used for data augmentation. The results are shown in Table9. 
The F 1 score trained on DD increases by 1.1% and 2.5% compared with the results trained on OD and GD, respectively. The data generated by DAEE achieves a closer effect to original data, which thus could be utilized for training the competitive event extraction models. Algorithm 1 The process of retraining the reinforcement learning framework. Parameter:The original event extraction training set T o , parameters of policy model θ p , event extraction model θ e , generation model θ g , generated sentence set, n-th generated sentence G n , positive samples set P i , negative samples set N i 1: Initialize trigger F 1 score F t max and role F 1 score F a max through θ e 2: for epoch i in 1 → K do 3: for G n in G i-1 do 4: Calculate [D g ; [SEP ]; G n ] → X p 5: Sample action according p(y n | X p , θ p ) Sample T sub from T o and concatnate {T sub , P i } → T i", "figure_data": "ModelPTrg-C RF1PArg-C RF1DD69.3 79.7 74.1 47.6 56.5 51.7GD68.5 81.4 74.4 42.3 58.6 49.2OD66.3 80.7 72.8 43.1 61.2 50.6", "figure_id": "tab_12", "figure_label": "9", "figure_type": "table" } ]
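Putting Algorithm 1 (whose steps are listed in the captions above) into code form, the outer retraining loop could be organized roughly as follows; all model objects and their methods are duck-typed placeholders, not the actual interfaces of the released implementation.

```python
import random

def rl_retraining_loop(extractor, generator, policy, original_data,
                       epochs=80, alpha=10.0, sample_ratio=0.5):
    """Rough sketch of Algorithm 1: alternate between filtering generated data,
    retraining the extractor, rewarding the policy, and reweighting the generator."""
    prev_f1 = extractor.argument_f1()                  # step 1: initial score
    generated = generator.generate()                   # G_0
    neg_prev = set()
    for _ in range(epochs):                            # step 2
        keep_prob = {g["id"]: policy.keep_probability(g) for g in generated}  # steps 3-5
        positives = [g for g in generated if keep_prob[g["id"]] >= 0.5]
        neg_curr = {g["id"] for g in generated if keep_prob[g["id"]] < 0.5}
        t_sub = random.sample(original_data, int(sample_ratio * len(original_data)))
        extractor.train(t_sub + positives)             # retrain on T_i = T_sub ∪ P_i
        f1 = extractor.argument_f1()                   # (trigger-F1 bookkeeping omitted)
        reward = alpha * (f1 - prev_f1)                # R_i = α (F_i − F_{i−1})
        policy.retrain(neg_prev, neg_curr, reward)     # Eq. (5)-(6)
        weights = {g["id"]: 1.0 - keep_prob[g["id"]] for g in generated}
        generator.retrain(generated, weights)          # weighted loss, Eq. (7)
        generated = generator.generate()               # regenerate G_i
        neg_prev, prev_f1 = neg_curr, f1
    return extractor
```

In the paper, the checkpoint is additionally updated whenever either the trigger or the argument F1 improves; that bookkeeping is omitted here for brevity.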
Bo Wang; Heyan Huang; Xiaochi Wei; Ge Shi; Xiao Liu; Chong Feng; Tong Zhou; Shuaiqiang Wang; Dawei Yin
[ { "authors": "David Ahn", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "The stages of event extraction", "year": "2006" }, { "authors": "Ateret Anaby-Tavor; Boaz Carmeli; Esther Goldbraich; Amir Kantor; George Kour; Segev Shlomov; Naama Tepper; Naama Zwerdling", "journal": "AAAI Press", "ref_id": "b1", "title": "Do Not Have Enough Data? Deep Learning to the Rescue!", "year": "2020" }, { "authors": "Xiang Dai; Heike Adel", "journal": "International Committee on Computational Linguistics", "ref_id": "b2", "title": "An Analysis of Simple Data Augmentation for Named Entity Recognition", "year": "2020" }, { "authors": "Bosheng Ding; Linlin Liu; Lidong Bing; Canasai Kruengkrai; Hai Thien; Shafiq Nguyen; Luo Joty; Chunyan Si; Miao", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks", "year": "2020" }, { "authors": "George R Doddington; Alexis Mitchell; Mark A Przybocki; Lance A Ramshaw; Stephanie M Strassel; Ralph M Weischedel", "journal": "European Language Resources Association", "ref_id": "b4", "title": "The Automatic Content Extraction (ACE) Program -Tasks, Data, and Evaluation", "year": "2004" }, { "authors": "Xinya Du; Claire Cardie", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Event Extraction by Answering (Almost) Natural Questions", "year": "2020" }, { "authors": "Xinya Du; Sha Li; Heng Ji", "journal": "", "ref_id": "b6", "title": "Dynamic Global Memory for Document-level Argument Extraction", "year": "2022" }, { "authors": "Yutai Hou; Yijia Liu; Wanxiang Che; Ting Liu", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Sequence-to-Sequence Data Augmentation for Dialogue Language Understanding", "year": "2018" }, { "authors": "I-Hung Hsu; Kuan-Hao Huang; Elizabeth Boschee; Scott Miller; Prem Natarajan; Kai-Wei Chang; Nanyun Peng", "journal": "", "ref_id": "b8", "title": "DEGREE: A Data-Efficient Generative Event Extraction Model", "year": "2021" }, { "authors": "I-Hung Hsu; Kuan-Hao Huang; Elizabeth Boschee; Scott Miller; Prem Natarajan; Kai-Wei Chang; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "DEGREE: A Data-Efficient Generation-Based Event Extraction Model", "year": "2022" }, { "authors": "Heyan Huang; Xiao Liu; Ge Shi; Qian Liu", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b10", "title": "Event extraction with dynamic prefix tuning and relevance retrieval", "year": "2023" }, { "authors": "Ashutosh Kumar; Kabir Ahuja; Raghuram Vadapalli; Partha P Talukdar", "journal": "Trans. Assoc. Comput. 
Linguistics", "ref_id": "b11", "title": "Syntax-Guided Controlled Generation of Paraphrases", "year": "2020" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b12", "title": "BART: Denoising Sequence-to-Sequence Pretraining for Natural Language Generation, Translation, and Comprehension", "year": "2020" }, { "authors": "Fayuan Li; Weihua Peng; Yuguang Chen; Quan Wang; Lu Pan; Yajuan Lyu; Yong Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Event Extraction as Multi-turn Question Answering", "year": "2020" }, { "authors": "Jiwei Li; Michel Galley; Chris Brockett; Jianfeng Gao; Bill Dolan", "journal": "The Association for Computational Linguistics", "ref_id": "b14", "title": "A Diversity-Promoting Objective Function for Neural Conversation Models", "year": "2016" }, { "authors": "Qi Li; Ji Heng; Liang Huang", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Joint Event Extraction via Structured Prediction with Global Features", "year": "2013" }, { "authors": "Sha Li; Ji Heng; Jiawei Han", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Document-Level Event Argument Extraction by Conditional Generation", "year": "2021" }, { "authors": "Ying Lin; Heng Ji; Fei Huang; Lingfei Wu", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "A Joint Neural Model for Information Extraction with Global Features", "year": "2020" }, { "authors": "Jian Liu; Yubo Chen; Kang Liu; Wei Bi; Xiaojiang Liu", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Event Extraction as Machine Reading Comprehension", "year": "2020" }, { "authors": "Xiao Liu; Heyan Huang; Ge Shi; Bo Wang", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Dynamic Prefix-Tuning for Generative Templatebased Event Extraction", "year": "2022" }, { "authors": "Xiao Liu; Zhunchen Luo; Heyan Huang", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Jointly Multiple Events Extraction via Attentionbased Graph Information Aggregation", "year": "2018" }, { "authors": "Yaojie Lu; Hongyu Lin; Jin Xu; Xianpei Han; Jialong Tang; Annan Li; Le Sun; Meng Liao; Shaoyi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Text2Event: Controllable Sequenceto-Structure Generation for End-to-end Event Extraction", "year": "2021" }, { "authors": "Nathan Ng; Kyunghyun Cho; Marzyeh Ghassemi", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "SSMBA: self-supervised manifold based data augmentation for improving out-of-domain robustness", "year": "2020-11-16" }, { "authors": "Minh Van Nguyen; Viet Dac Lai; Thien Huu Nguyen", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Cross-task instance representation interactions and label dependencies for joint information extraction with graph convolutional networks", "year": "2021" }, { "authors": "Minh Van Nguyen; Bonan Min; Franck Dernoncourt; Thien Nguyen", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Joint extraction of entities, relations, and events via modeling interinstance and inter-label dependencies", "year": "2022" }, { "authors": "Minh Van Nguyen; Bonan Min; Franck Dernoncourt; Thien Nguyen", "journal": "Association for 
Computational Linguistics", "ref_id": "b25", "title": "Learning cross-task dependencies for joint extraction of entities, events, event arguments, and relations", "year": "2022" }, { "authors": "Thien Huu Nguyen; Kyunghyun Cho; Ralph Grishman", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Joint Event Extraction via Recurrent Neural Networks", "year": "2016" }, { "authors": "Giovanni Paolini; Ben Athiwaratkun; Jason Krone; Jie Ma; Alessandro Achille; Rishita Anubhai; Cícero Nogueira Dos Santos; Bing Xiang; Stefano Soatto", "journal": "", "ref_id": "b27", "title": "Structured Prediction as Translation between Augmented Natural Languages", "year": "2021" }, { "authors": "Pengda Qin; Weiran Xu; William Yang; Wang ", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning", "year": "2018" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b29", "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", "year": "2020" }, { "authors": "Julian Salazar; Davis Liang; Toan Q Nguyen; Katrin Kirchhoff", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Masked Language Model Scoring", "year": "2020" }, { "authors": "David Wadden; Ulme Wennberg; Yi Luan; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Entity, Relation, and Event Extraction with Contextualized Span Representations", "year": "2019" }, { "authors": "Xiaozhi Wang; Ziqi Wang; Xu Han; Zhiyuan Liu; Juanzi Li; Peng Li; Maosong Sun; Jie Zhou; Xiang Ren", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "HMEAE: Hierarchical Modular Event Argument Extraction", "year": "2019" }, { "authors": "Yufei Wang; Can Xu; Qingfeng Sun; Huang Hu; Chongyang Tao; Xiubo Geng; Daxin Jiang", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks", "year": "2022" }, { "authors": "Zirui Wang; Adams Wei Yu; Orhan Firat; Yuan Cao", "journal": "", "ref_id": "b34", "title": "Towards Zero-Label Language Learning", "year": "2021" }, { "authors": "Jason Wei; Kai Zou; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks", "year": "2019" }, { "authors": "Jason W Wei; Kai Zou", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks", "year": "2019" }, { "authors": "Xing Wu; Shangwen Lv; Liangjun Zang; Jizhong Han; Songlin Hu", "journal": "Springer", "ref_id": "b37", "title": "Conditional BERT Contextual Augmentation", "year": "2019" }, { "authors": "Mengzhou Xia; Xiang Kong; Antonios Anastasopoulos; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Generalized Data Augmentation for Low-Resource Translation", "year": "2019" }, { "authors": "Bishan Yang; Tom M Mitchell", "journal": "", "ref_id": "b39", "title": "Joint Extraction of Events and Entities within a Document Context", "year": "2016" }, { "authors": "Sen Yang; Dawei Feng; Linbo Qiao; Zhigang Kan; Dongsheng 
Li", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Exploring Pre-trained Language Models for Event Extraction and Generation", "year": "2019" }, { "authors": "Tongtao Zhang; Ji Heng; Avirup Sil", "journal": "Data Intell", "ref_id": "b41", "title": "Joint Entity and Event Extraction with Generative Adversarial Imitation Learning", "year": "2019" }, { "authors": "Zixuan Zhang; Heng Ji", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Abstract Meaning Representation guided graph encoding and decoding for joint information extraction", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 369.95, 723.6, 154.46, 10.63 ], "formula_id": "formula_0", "formula_text": "R i = α(F i -F i-1 ),(1)" }, { "formula_coordinates": [ 4, 340.11, 201.22, 180.06, 50.97 ], "formula_id": "formula_1", "formula_text": "p(Y g | X g ) = |Yg| s=1 p (y s | y <s , X g ) X g = P g ; D g ; R g ; C . (2" }, { "formula_coordinates": [ 4, 520.17, 221.96, 4.24, 9.46 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 4, 331.97, 630.01, 192.44, 114.33 ], "formula_id": "formula_3", "formula_text": "L lm = |Yg| s=1 y s log p(y s | y <s , X g ) L a = T t=1 k t k=kt y k log p(y k | y <k , X g ) L g = - 1 N N n=1 (βL lm + γL a ) (3)" }, { "formula_coordinates": [ 5, 177.15, 260.67, 90.48, 10.63 ], "formula_id": "formula_4", "formula_text": "X p = [D g ; [SEP]; G]" }, { "formula_coordinates": [ 5, 108.24, 364.22, 180.89, 33.33 ], "formula_id": "formula_5", "formula_text": "L p = - 1 N N n=1 y n log p(y n | X p ),(4)" }, { "formula_coordinates": [ 5, 350.77, 359.4, 125.99, 27.17 ], "formula_id": "formula_6", "formula_text": "D i-1 = N i-1 -(N i-1 ∩ N i ) D i = N i -(N i-1 ∩ N i )" }, { "formula_coordinates": [ 5, 338.43, 499.61, 185.98, 55.42 ], "formula_id": "formula_7", "formula_text": "L p = D i y n log p(y n | X p )R i + D i-1 y n log p(y n | X p )(-R i ).(6)" }, { "formula_coordinates": [ 5, 337.07, 679.47, 187.34, 33.33 ], "formula_id": "formula_8", "formula_text": "L g = - 1 N N n=1 (βw n L n lm + γw n L n a )(7)" }, { "formula_coordinates": [ 7, 307.61, 419.99, 215.33, 34.6 ], "formula_id": "formula_9", "formula_text": "PLLs(W ) := 1 |W | |W | t=1 log P MLM (w s | W \\s ; Θ)." } ]
10.1016/j.cag.2022.12.010
[ { "figure_ref": [ "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b2", "b8", "b10", "b9", "b14", "b15", "b16", "b17", "b8", "b10", "b11", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b21" ], "table_ref": [], "text": "Synthetic data produced by graphics engines or driving simulators has become prevalent [1,2] because it can model new sensors and driving situations providing large amounts of training and validation data to autonomous driving AI models. However, synthetic images are not sufficiently photorealistic, and models trained on synthetic data alone do not readily generalize to real data. Utilizing Generative Adversarial Networks (GANs) [3] is a possible alternate approach, as they have demonstrated huge success, achieving more photorealism than computer graphics in some applications like human face generation [4]. The success of GANs has mostly been achieved on images with single objects and is yet to reach multi-object complex scenes. While some GAN models can generate highquality street-level scenes like the StyleGAN series [4][5][6][7] and ProjectedGAN [8], they do not provide any level of controllability for localized edits in the image. Other models provide slightly more controllability, but their quality lags behind. Most importantly, SB-GAN [9] used a two-stage approach to generate complex images by first generating a semantic layout from a latent code, then using the layout as a prior to generate an image with a conditional GAN like SPADE [10]. However, image manipulation can only be achieved by manually editing the generated layout from the first stage. Semantic Palette [11] has built upon this approach by providing the label generator with a conditional input of class distributions (40% sky, 10% cars...). However, it still does not reach the quality of StyleGAN and provides a small level of controllability over the generation, as specifying the class statistics only does not offer fine-grained edits like changing the size, number, and/or location of a class in the generated image. For instance, increasing the percentage of the class 'cars' in an image can be achieved either by increasing the number or size of cars in the scene, and both can happen randomly as this is not controlled in Semantic Palette.\nIn this work, we aim to develop a generative framework that simultaneously achieves a high level of photorealism and controllability. Controllability is highly desired for autonomous driving applications, especially for model validation, because it enables counterfactual reasoning: for example, we wish to know how a perception model would behave had there been more cars on the road. To this end, we propose a novel approach, Urban-StyleGAN, to learn the generation and manipulation of urban scenes (see Figure 1). Contrary to previous approaches, which only use latent codes for the global image, Urban-StyleGAN is based on the idea that a disentangled latent code allows localized edits in the image. Specifically, we first adopt a recent work from human face generation, namely, SemanticStyleGAN [12] (SSG) but find that a straightforward application leads to training divergence. We hypothesize that, the large number of classes in urban scenes increases the number of local generators, and, consequently, the generator's overall learning capacity and the latent space dimension. This imposes a more complicated learning problem. 
As a remedy, we propose a class grouping strategy to learn super-classes, effectively reducing the number of local generators and speeding up the training convergence. Moreover, to allow localized semantic edits, we employ recent unsupervised latent exploration algorithms on the S-space of the disentangled class codes of SSG, which contrasts the common use of these approaches on the W + -space. Our contributions can be summarized as follows:\n• We propose a framework that can simultaneously generate high-quality images of urban scenes and enables fine-grained control over the image post-synthesis. Key to our method is a pre-training class grouping strategy for limiting the number of local generators and, as a result, the generator's total learning capacity. This allows a better exploitation of SSG's disentangled latent space. • To promote more controllability on the image content, we employ Principal Component Analysis (PCA) in the lower dimensional disentangled S-space for each class rather than the W + -space, which has often been used in previous publications. To the best of our knowledge, this is the first paper to explore directions of control in the latent space of a GAN trained on urban scenes. • Experiments on Cityscapes and Mapillary datasets show that the proposed model offers fine-grained localized semantic edits (e.g., the number, size, and position of objects in a scene) and outperforms previous urban scene generative models by a large margin in generation quality and controllability.\nII. RELATED WORKS Generative Adversarial Networks (GANs) have revolutionized image generation by employing an adversarial learning scheme between a generator and a discriminator network [3]. Since then, a plethora of works [4-8, 13, 14] has progressively enhanced the diversity and quality of generated images. While a high-quality synthesis can be achieved on urban scenes, the latent codes of these frameworks affect the image globally and do not provide mechanisms for localized edits. Semantic prior for image generation. Two recent works have emerged that tackle urban scene generation: SB-GAN [9] and Semantic Palette [11]. A two-stage approach is used in both models: the first stage generates semantic layouts, and the second stage translates the layouts into images, using a conditional network like [10,[15][16][17][18]. While they provide more controllability than general-purpose GANs (like StyleGAN), they lag in generation quality. We hypothesize that a limiting factor in both frameworks [9,11] is the conditional network used in the second stage and argue that the joint generation of images and layouts can boost performance. Most recently, SemanticStyleGAN [12] (SSG) proposed to generate images and layouts in one stage, with end-to-end training, but it was designed and applied for human face generation. Image manipulation in GANs has been enabled in 3 different ways: using synthetic data to provide additional supervision [19][20][21], exploring directions of control in the latent space in an unsupervised manner [22][23][24][25], and using language prompts [26,27]. These methods have been applied to singleobject datasets, we found no previous works on exploring latent space for generative models on urban scenes. Our work belongs to the last category (unsupervised latent exploration) and is based on GANSpace [22]. We opt not to adopt any synthetic datasets for additional supervision in order to generate photorealistic data only. We leave the third method (language prompts) for future works." 
}, { "figure_ref": [], "heading": "III. METHODOLOGY", "publication_ref": [ "b11" ], "table_ref": [], "text": "Our key idea is to build a framework with a disentangled latent code to learn the different factors of variation in an urban scene. For this, we adopt a recent baseline on human face generation, SSG [12]. SSG learns to generate semantic layouts using different local generators to model the different classes in the image. Similar to SB-GAN and Semantic Palette in urban scene generation, the learning of semantic labels is used as a prior for generating images and learning the shape of different classes, but SSG learns both images and layouts in an end-to-end manner instead of using a two-stage approach. We argue that an end-to-end approach can lead to a faster training convergence as the generator learns shape and texture jointly. However, we show that a straightforward application of SSG does not lead to convergence or high quality. As a remedy, we propose Urban-StyleGAN to generate and edit high-quality images of urban scenes. This is realized by learning classes in groups, which enables a more compact representation in the latent space and a faster training convergence. Moreover, after training, we choose to explore and find the main directions of variation in the class-specific S-space, instead of the W +space, to allow for more localized semantic edits. In test-time, we apply GANSPace on the disentangled latent s c,l -vectors to discover meaningful directions of control." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "A. Overview of the framework", "publication_ref": [ "b3", "b4", "b5", "b6", "b11", "b5", "b11" ], "table_ref": [], "text": "The StyleGAN series of models [4][5][6][7] is based on the idea of injecting multiple latent codes in different generator layers to modulate conv layers at different scales. To obtain these latent codes, a multilayer perceptron (MLP) maps vectors from a normal distribution called the Z-space to vectors w in an intermediate space called W-space. As the w vectors affect the image globally, localized edits are not possible. In this paper, we challenge this paradigm and draw inspiration from SSG [12] for the design of the generator's architecture. We employ multiple local generators g c , where each g c maps a latent code w and Fourier features p [6] to a class-specific depth map d c and class-specific features f c for a class c in the image. The features {f c } C c=1 are then fused in a compositional manner resulting in an intermediary semantic mask m and features f :\nf c , d c = g c (w, p), m c i,j = e d c i,j C k=1 e d k i,j , f = C k=1 m k • f k .(1)\nA renderer network R upsamples f to a high-resolution image x and a final semantic mask m (see Figure 2). For this, the generator (R and {g c } C c=1 ) trains adversarially on a labeled dataset with images and semantic layouts. The discriminator architecture has two branches for the image and mask, which are fused early in the network. A realism score is given at the end using a fully convolutional layer (for more details about the discriminator's architecture, readers are referred to SSG [12]).\nThe w vectors are factorized in three components: w = (w base , w shape , w texture ), where each modulates different layers in g c . w base determines the coarse-level shape of the semantic class and is shared across all classes to model their spatial dependencies. The vector w shape determines the class shape and w texture its texture. 
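For reference, the compositional fusion in Eq. (1) reduces to a softmax over per-class pseudo-depths followed by a mask-weighted sum of features. A minimal PyTorch sketch is given below; the tensor shapes (16 super-classes, 64 feature channels, an 8×8 grid) are illustrative assumptions.

```python
import torch

def fuse_local_generators(depths, feats):
    """Eq. (1): softmax the per-class depth maps into masks and blend features.

    depths: (B, C, H, W)    pseudo-depth d_c from each local generator g_c
    feats:  (B, C, F, H, W) per-class feature maps f_c
    returns the coarse mask m (B, C, H, W) and fused features f (B, F, H, W)
    """
    m = torch.softmax(depths, dim=1)            # m^c = exp(d^c) / sum_k exp(d^k)
    f = (m.unsqueeze(2) * feats).sum(dim=1)     # f = sum_k m^k * f^k
    return m, f

m, f = fuse_local_generators(torch.randn(2, 16, 8, 8), torch.randn(2, 16, 64, 8, 8))
print(m.shape, f.shape)   # torch.Size([2, 16, 8, 8]) torch.Size([2, 64, 8, 8])
```

Which parts of w can influence the depth maps versus the features is fixed by the architecture, as discussed next.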
This is enforced by architectural design, as the depth map, which determines the mask's shape, only depends on w base and w shape (Figure 2). Each wsubvector (base, shape, texture) is passed to a set of layers inside each generator g c and transformed to a style vector, s c,l , through an MLP inside the layer l of g c (see Figure 2). The style vectors modulate the conv layers inside each generator. Formally, the mapping from z to s c,l can be denoted as follows:\nMLP z→w : z -→ w = (w base , w shape , w texture ), (2) MLP c,l : w base -→ s c,l=0,1 , w shape -→ s c,l=2:5 ,(3)\nw texture -→ s c,l=6:9 .\nNote that during training, we sample one w-vector for all classes, while during inference, we can sample different w for different classes. The ensemble of all w vectors is called the W + -space and the ensemble of all s-vectors in the generator is called the S -space and contains all vectors {s c,l } C,L c=1,l=1 ." }, { "figure_ref": [], "heading": "B. Learning high-quality generation of urban scenes", "publication_ref": [ "b27", "b28" ], "table_ref": [ "tab_0" ], "text": "We find that this architecture, originally designed for human face generation, does not readily extend to the more complex street-level images. As autonomous driving datasets typically contain a high number of classes (30-60 compared to 8 in human faces datasets), the number of local generators increases significantly. This, in turn, leads to 2 problems. First, the generator has a larger learning capacity and overpowers the discriminator, introducing overfitting (see Figure 3). Second, as some classes are less frequent in the dataset, allocating a local generator to learn each class makes the mapping task from latent code to feature substantially harder because the latent space has to be more descriptive. To mitigate these problems, we propose to remap the original classes into a smaller number of super-classes, so they can be learned in groups. This reduces the number of local generators to the number of super-classes and consequently reduces the generator's overall learning capacity. Moreover, it enables a compact yet disentangled latent representation of the objects in the scene. However, we do not want to reduce the number of classes drastically in a way that impedes controllability and disentanglement, and we find that 16 super-classes is a good balance (see Table I for an example of grouping on the Cityscapes dataset [28]). To further regularize the adversarial training, we add spectral normalization [29] to the discriminator weights, except for the last fully-connected layers, and increase the batchsize in the training pipeline. While spectral normalization regularizes large changes in the weights leading to more stable updates, a larger batchsize fosters the discriminator's training as it can observe all of the 16 classes in one forward pass. Otherwise, the discriminator learns to neglect some classes, especially the less frequent ones in the dataset." }, { "figure_ref": [ "fig_4", "fig_3", "fig_4" ], "heading": "C. Exploiting the disentangled representation of super-classes", "publication_ref": [ "b21", "b20", "b23", "b21", "b27", "b29", "b10", "b12", "b4", "b30", "b7", "b31", "b10", "b8", "b8", "b10", "b32", "b12", "b30", "b8", "b31", "b10", "b4", "b7" ], "table_ref": [], "text": "The architecture of SSG enables to have control over the shape and texture of different classes but does not offer by design any clear directions of control. 
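(As a brief aside on Sec. III-B: the class grouping of Table I amounts to a lookup applied to every semantic layout before training. The sketch below hard-codes a few representative Cityscapes label ids; the chosen super-class ids are purely illustrative and not an official mapping file.)

```python
import numpy as np

# Illustrative subset of the Table I remapping (Cityscapes label ids -> super-class ids)
SUPER_CLASS = {
    7: 1, 9: 1, 10: 1,     # road, parking, rail track -> drive-able
    8: 2,                  # sidewalk                   -> side walk
    11: 3, 15: 3, 16: 3,   # building, bridge, tunnel   -> building
    21: 12, 22: 12,        # vegetation, terrain        -> greenery
    26: 7,                 # car                        -> car
}

def remap_layout(layout, default=0):
    """Map an (H, W) uint8 array of original label ids to super-class ids."""
    lut = np.full(256, default, dtype=np.uint8)
    for original_id, super_id in SUPER_CLASS.items():
        lut[original_id] = super_id
    return lut[layout]

layout = np.array([[7, 8], [26, 21]], dtype=np.uint8)
print(remap_layout(layout))   # [[ 1  2] [ 7 12]]
```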
Simply increasing or decreasing the latent code can have multiple simultaneous effects on a class shape but we wish to find fine-grained directions (like only changing the number of cars on the right or changing the time of day...). To this end, explore latent directions in an unsupervised way like [22], as we do not have additional supervision from synthetic data like [21] or attribute classifier networks like [24]. Specifically, we sample N vectors from the normal distribution in the Z-space and compute the corresponding class-specific style vectors s 1:N c,l=5,9 , where l = 5, 9 for shape and texture changes respectively. Then, to find the principal axes of the probability distribution of the s-vectors, we perform PCA on each subset of vectors, s 1:N c,l and obtain two bases per class, V c,l=5 for shape editing and V c,l=9 for texture editing. Note that we only choose these 2 layers, l = 5, 9, inside each local generator to perform shape or texture changes. The layer l = 5 corresponds to the last layer in the subnetwork of g c responsible for the generation of d c , while l = 9 is the last layer which outputs the feature map f c . We found that these downstream layers provide more meaningful directions of control than the earlier layers (Figure 5). A possible explanation is that they operate on a larger receptive field of view, generating more high-level features.\nFormally, editing can be expressed as changing the s-vector in the desired direction(s) to a new vector s c,l :\ns c,l = s c,l + V c,l • y, (5\n)\nwhere y is a vector of real-valued coordinates, which determines the specified change by the user in the desired direction. Note that in contrast to the original GANSpace algorithm [22], we do not apply PCA in the W + -space. Instead, we seek to edit the image by manipulating the s-vector. Our motivation for this is that the s c,l are more class-specific and layer-specific than the w-vectors, as they are generated by different MLP c,l located inside the different generators g c , whereas the wvectors are originated from z through one layer, MLP shared and the W-space is shared across all classes.\nIV. EXPERIMENTS Datasets. We use two datasets for our experiments, Cityscapes [28] and Mapillary Vistas [30]. Cityscapes contains street-level images from different cities in Germany, annotated with 34 semantic classes. The original training split has 3k annotated images and 20k images without annotations. Similar to Semantic Palette [11], we employ a segmentation network to generate annotations for the unlabeled images and use the 23k images and labels for training. Mapillary Vistas is larger and more diverse than Cityscapes and contains 25k annotated images with 66 semantic classes in diverse cities worldwide. All images are generated with a resolution of 256 × 256. Baselines and Metrics. In terms of generation quality, we compare with general-purpose generative models like Pro-GAN [13], StyleGAN2 [5], VQGAN [31], ProjectedGANs [8], SAGAN [32], and generative models for urban scenes namely: Semantic Palette [11] and SB-GAN [9]. Following Style-GAN2, we measure the Frechet Inception Distance (FID) between 50k generated images and all images from the corresponding real dataset. Note that the last two baselines [9,11] generate label maps and images like our approach, so we also measure the mIoU between generated images and generated layouts using pre-trained segmentation networks on the real datasets [33]. 
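As an illustration of the procedure in Sec. III-C: PCA is fit on N sampled style vectors s^{1:N}_{c,l}, and the edit of Eq. (5) moves a given s along the chosen principal directions. In the sketch below, scikit-learn is assumed for the PCA, and the sampling of style vectors from the trained generator is abstracted into a random array.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_basis(style_vectors, n_components=10):
    """Fit principal directions V_{c,l} on N sampled style vectors (N, dim)."""
    pca = PCA(n_components=n_components)
    pca.fit(style_vectors)
    return pca.components_                      # (n_components, dim)

def edit_style(s, V, y):
    """Eq. (5): s' = s + V y, moving along user-chosen principal directions."""
    return s + V.T @ y

# Example: 1000 sampled 512-d style vectors for the 'car' class at layer l = 5
S = np.random.randn(1000, 512)
V = pca_basis(S)
edited = edit_style(S[0], V, y=np.array([3.0] + [0.0] * 9))   # push along direction 1
print(edited.shape)                                            # (512,)
```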
Note that after performing the class grouping strategy, the semantic layouts contain a different number of classes. For a fair comparison, we train SP and SB-GAN on the same layouts and measure the FID and mIoU. Baselines to assess controllability. In terms of controllability, we compare with SP and the original SSG baseline (after introducing our changes to generate high-quality images) in Figure 4. SP offers controllability by changing either the zvector or a condition vector containing the class distributions. For example, to generate more buildings, we would change this vector to increase the percentage of buildings and reduce the Method FID ↓ mIoU ↑ ProGANs [13] 63.87 -VQGAN [31] 173.80 -SB-GAN [9] 62.97 30.04 SAGAN [32] 12.81 -SP [11] 52.5 21.40 StyleGAN2 [5] 8.35 -ProjectedGANs [8] 3.41 -Ours 9.8 34.0 percentage of trees in the image. SSG native controllability is to sample different w-vectors for different classes and increase/decrease the value of these vectors with a constant parameter. In Figure 5, we also explore applying GANSpace in the W + -space of Urban-StyleGAN and in the S-space of different layers.\nV. RESULTS" }, { "figure_ref": [], "heading": "A. Generation results", "publication_ref": [ "b7", "b8", "b10", "b8", "b11", "b15", "b23" ], "table_ref": [ "tab_1", "tab_2" ], "text": "Main comparisons. In Table II, we compare with generalpurpose generative models on the Cityscapes benchmark. The proposed approach outperforms most previous methods and is almost on par with StyleGANv2 and only inferior to Project-edGAN [8]. However, both ProjectedGAN and StyleGAN2 do not provide controllability over the image generation as they use global latent codes. On the other hand, Urban-StyleGAN largely outperforms approaches that provide controllability through the use of a semantic prior [9,11] on both metrics (FID and mIoU). In this work, our primary objective is not to achieve the highest generative quality but to achieve high controllability in the generation process with strong quality and fidelity metrics.\nAblation on generator's architecture and training settings.\nWe perform ablation studies on the architecture and regularization strategy on the Cityscapes dataset and report the results in Table III. The FID plot for different experiments is shown in Figure 3. First, we show that running SSG with the full number of classes reaches a high FID and diverges after a short training time. Grouping the classes into 16 superclasses, we reduce the number of local generators g c , providing an architectural regularization to the adversarial framework. The FID drops significantly to 36.6, but the training still diverges after some time. Hypothesizing that the generator still overpowers the discriminator, we regularize the discriminator by applying spectral normalization on the convolutional weights, dropping the FID further to 23.8 and leading to training convergence. We also show that a large batchsize is fundamental to reaching a better FID. Our intuition for this is that the discriminator can observe more classes in one forward pass with a larger batchsize, which leads to stronger feedback to the generator. We also experiment with a different number of super-classes (9,12,16,24) and notice that it is inversely proportional to the FID. Note that reducing the number of super-classes to a certain point would compromise controllability. We reach the best performance with 16 classes when we increase the batchsize to 16 and use spectral normalization. 
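For reference, the spectral normalization used in this ablation can be added with the standard PyTorch utility; the two-layer trunk below is only a stand-in for the two-branch discriminator, and in line with the text the helper leaves fully-connected layers untouched.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def add_spectral_norm(module):
    """Wrap every (transposed) conv layer with spectral normalization;
    nn.Linear layers (e.g., a fully-connected head) are left untouched."""
    for name, child in module.named_children():
        if isinstance(child, (nn.Conv2d, nn.ConvTranspose2d)):
            setattr(module, name, spectral_norm(child))
        else:
            add_spectral_norm(child)
    return module

# Stand-in discriminator trunk over concatenated image (3) and mask (16) channels
disc = add_spectral_norm(nn.Sequential(
    nn.Conv2d(3 + 16, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 3, padding=1),   # fully convolutional realism score
))
```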
Note that we choose to have 16 classes (instead of 9 or 12) to allow for more controllability." }, { "figure_ref": [], "heading": "B. Controllability results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3", "fig_3", "fig_4" ], "heading": "Main Comparisons", "publication_ref": [ "b10", "b21" ], "table_ref": [], "text": "In Figure 4, we compare our approach with 2 baselines in terms of the quality and diversity of local edits. The first baseline is Semantic Palette [11], which offers controllability by explicitly increasing or decreasing the class distribution in the conditional input vector. We notice that while Semantic Palette is able to match the given conditional distribution, the effect of this manipulation is often unpredictable on other classes in the image. For instance, increasing the car distribution in row 1 Figure 4 leads to the appearance of more buildings and traffic signs, which should not be affected. Similarly, increasing the tree distribution leads to the car's disappearance on the left, although it is far from the trees' position in the original image. The second baseline is the native editing of SSG applied to our framework, which consists in increasing or decreasing the class latent vectors by a constant value. The edits look much more realistic and localized, proving that disentanglement in the latent code is key to controllability. However, we still observe minor inconsistencies, such as a slight change in the car's shape when the road changes, or the car in the middle becomes smaller while the cars on the side become larger. In contrast, our approach of applying PCA in the S-space of the network shows more localized and meaningful edits in the image.\nWhich latent space should be manipulated? In Figure 5, we show the merits of applying PCA in the S-space of a disentangled latent space. For this, we first show the directions obtained by GANSpace [22] applied to StyleGAN2. We note that no interpretable directions could be obtained, and the edits affect multiple parts of the image. Applying GANSpace in the W + -space of the class 'car' gives only one meaningful direction of control (increasing car number and size on both sides). On the other hand, the S-space reveals more directions (increasing cars on one side or multiple specific parts of the image), with the late layer in the generator showing the largest number of meaningful edits. This confirms our assumption that the S-space of the latest layers is the most disentangled. Further qualitative results. We show further qualitative results in Figure 6 on multiple classes in Mapillary and Cityscapes. As Mapillary is more diverse than Cityscapes, more interesting directions, such as the sky size, cloudiness, and time of day, can be found. Moreover, we notice interesting directions that can be attributed to the effect of super-classes. The super-class 'vegetation' in Mapillary is the combination of the 'tree' and 'bush' classes, and we found one direction of control that increases 'trees' while decreasing 'bushes' in the image. Another direction can increase the 2 simultaneously. This shows that the latent space is more compact and disentangled." }, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we investigate the effect of latent code disentanglement on generating urban scenes. 
We find that a disentangled latent code is vital for high-quality scene generation and that the key to exploiting disentanglement is to learn groups of classes together. This allows a more compact latent representation and regularizes the adversarial training by reducing the generator's learning capacity. We also find that the key to finding more disentangled directions of control is to explore the S-space of the downstream class-specific layers. Our approach outperforms models for urban scene generation and is on par with or slightly inferior to state-of-the-art general-purpose models, which allow very little controllability. Our work is not without limitations. One drawback of our approach is that it needs label maps for training, which can be expensive and time-consuming to obtain. (Fig. 6: More results on Mapillary (upper part) and Cityscapes (lower part). Each row provides multiple directions of control for one class in the image; two changes are shown per direction, e.g., car size: big/small.) Future work can focus on how to provide a disentangled generation process in a fully unsupervised manner. On the other hand, we believe even more controllability can be achieved on the object level (as opposed to the class level in our framework) by using instance segmentation maps instead of semantic segmentation maps. While instance maps are more costly to annotate, they can lead to even more control over the generation process. Future directions include exploring diffusion models for synthesis or CLIP-based methods for editing with language prompts." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The research leading to these results is funded by the German Federal Ministry for Economic Affairs and Energy within the project \"AI Delta Learning.\" The authors would like to thank the consortium for the successful cooperation.\nCode is available at https://github.com/GeorgeEskandar/UrbanStyleGAN" } ]
A promise of Generative Adversarial Networks (GANs) is to provide cheap photorealistic data for training and validating AI models in autonomous driving. Despite their huge success, their performance on complex images featuring multiple objects is understudied. While some frameworks produce highquality street scenes with little to no control over the image content, others offer more control at the expense of high-quality generation. A common limitation of both approaches is the use of global latent codes for the whole image, which hinders the learning of independent object distributions. Motivated by SemanticStyleGAN (SSG), a recent work on latent space disentanglement in human face generation, we propose a novel framework, Urban-StyleGAN, for urban scene generation and manipulation. We find that a straightforward application of SSG leads to poor results because urban scenes are more complex than human faces. To provide a more compact yet disentangled latent representation, we develop a class grouping strategy wherein individual classes are grouped into super-classes. Moreover, we employ an unsupervised latent exploration algorithm in the Sspace of the generator and show that it is more efficient than the conventional W + -space in controlling the image content. Results on the Cityscapes and Mapillary datasets show the proposed approach achieves significantly more controllability and improved image quality than previous approaches on urban scenes and is on par with general-purpose non-controllable generative models (like StyleGAN2) in terms of quality.
Urban-StyleGAN: Learning to Generate and Manipulate Images of Urban Scenes
[ { "figure_caption": "Fig. 1 :1Fig. 1: Our framework can generate and/or edit high-quality images. An example of a generated scene from our model is shown in the top left. After generation, we can edit the image in multiple different ways: increase the car size in the scene (Top Right), have trees with fewer leaves (bottom left), or have a wider road (bottom right).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Architecture of the proposed framework, Urban-StyleGAN. The number of local generators is controlled by the number of super-classes defined by the class-remapping table.In test-time, we apply GANSPace on the disentangled latent s c,l -vectors to discover meaningful directions of control.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "33 Fig. 3 :333Fig. 3: Convergence of SSG architecture before and after the proposed modifications. SN refers to spectral normalization in the discriminator and B the batchsize.", "figure_data": "", "figure_id": "fig_2", "figure_label": "333", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Single Class manipulation on Cityscapes. Generated Images and labels are shown. Red Boxes denote inconsistencies between manipulated images.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: Exploration of directions in different latent spaces.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Mapping table from 34 classes to 16 super-classes in the Cityscapes dataset. This is executed as a preprocessing step on the semantic layouts.", "figure_data": "Super-ClassOriginal ClassesVoidClasses 0-6 in CityscapesDrive-ableRoad, Parking, Rail TrackSide WalkSide WalkBuildingBuilding, Bridge, TunnelWallWall, Guard RailFenceFencePersonPersonCarCarOther Vehicles Bus, Train, Truck, Caravan, TrailerBikeBicycle, MotorcycleRiderRiderSkySkyGreeneryTerrain, VegetationTraffic LightTraffic LightTraffic SignTraffic SignPolesPoles, Poles Group", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Image Synthesis on Cityscapes, Resolution 256 2 .", "figure_data": "Super-classes SpectralNorm Batchsize FID ↓344280.516436.616423.816423.8161221.416169.8241238.3161221.4121213.591211.416169.8", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Ablation Study on Cityscapes, Resolution 256 2 .", "figure_data": "", "figure_id": "tab_2", "figure_label": "III", "figure_type": "table" } ]
George Eskandar; Youssef Farag; Tarun Yenamandra; Daniel Cremers; Karim Guirguis; Bin Yang
[ { "authors": "Alexey Dosovitskiy", "journal": "PMLR", "ref_id": "b0", "title": "CARLA: An open urban driving simulator", "year": "2017" }, { "authors": "Stephan R Richter", "journal": "Springer International Publishing", "ref_id": "b1", "title": "Playing for Data: Ground Truth from Computer Games", "year": "2016" }, { "authors": "I Goodfellow", "journal": "", "ref_id": "b2", "title": "Generative Adversarial Nets", "year": "2014" }, { "authors": "S Tero Karras; Timo Laine; Aila", "journal": "", "ref_id": "b3", "title": "A Style-Based Generator Architecture for Generative Adversarial Networks", "year": "2019" }, { "authors": "Tero Karras", "journal": "", "ref_id": "b4", "title": "Analyzing and Improving the Image Quality of StyleGAN", "year": "2020" }, { "authors": "Tero Karras", "journal": "Curran Associates, Inc", "ref_id": "b5", "title": "Alias-Free Generative Adversarial Networks", "year": "2021" }, { "authors": "Axel Sauer; Katja Schwarz; Andreas Geiger", "journal": "", "ref_id": "b6", "title": "Styleganxl: Scaling stylegan to large diverse datasets", "year": "2022" }, { "authors": "Axel Sauer", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "Projected gans converge faster", "year": "2021" }, { "authors": "Samaneh Azadi", "journal": "", "ref_id": "b8", "title": "Semantic bottleneck scene generation", "year": "2019" }, { "authors": "Taesung Park", "journal": "", "ref_id": "b9", "title": "Semantic image synthesis with spatiallyadaptive normalization", "year": "2019" }, { "authors": "Guillaume Le; Moing ", "journal": "", "ref_id": "b10", "title": "Semantic Palette: Guiding Scene Generation with Class Proportions", "year": "2021" }, { "authors": "Yichun Shi", "journal": "", "ref_id": "b11", "title": "SemanticStyleGAN: Learning Compositional Generative Priors for Controllable Image Synthesis and Editing", "year": "2022" }, { "authors": "Tero Karras", "journal": "", "ref_id": "b12", "title": "Progressive Growing of GANs for Improved Quality, Stability, and Variation", "year": "" }, { "authors": "Bingchen Liu", "journal": "", "ref_id": "b13", "title": "Towards faster and stabilized gan training for high-fidelity few-shot image synthesis", "year": "2021" }, { "authors": "Edgar Schönfeld", "journal": "", "ref_id": "b14", "title": "You Only Need Adversarial Supervision for Semantic Image Synthesis", "year": "2021" }, { "authors": "Xihui Liu", "journal": "NeurIPS", "ref_id": "b15", "title": "Learning to predict layout-to-image conditional convolutions for semantic image synthesis", "year": "2019" }, { "authors": "Ting-Chun Wang", "journal": "", "ref_id": "b16", "title": "High-resolution image synthesis and semantic manipulation with conditional GANs", "year": "2018" }, { "authors": "George Eskandar", "journal": "Computers & Graphics", "ref_id": "b17", "title": "USIS: Unsupervised Semantic Image Synthesis", "year": "2023" }, { "authors": "Yu Deng", "journal": "", "ref_id": "b18", "title": "Disentangled and controllable face image generation via 3d imitative-contrastive learning", "year": "2020" }, { "authors": "Marek Kowalski", "journal": "Springer", "ref_id": "b19", "title": "Config: Controllable neural face image generation", "year": "2020" }, { "authors": "Alon Shoshan", "journal": "", "ref_id": "b20", "title": "Gan-control: Explicitly controllable gans", "year": "2021" }, { "authors": "Erik Härkönen", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Ganspace: Discovering interpretable gan controls", "year": 
"2020" }, { "authors": "Yujun Shen", "journal": "", "ref_id": "b22", "title": "Interpreting the latent space of gans for semantic face editing", "year": "2020" }, { "authors": "Rameen Abdal", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b23", "title": "Styleflow: Attribute-conditioned exploration of stylegan-generated images using conditional continuous normalizing flows", "year": "2021" }, { "authors": "Yujun Shen; Bolei Zhou", "journal": "", "ref_id": "b24", "title": "Closed-form factorization of latent semantics in gans", "year": "2021" }, { "authors": "Umut Kocasari", "journal": "", "ref_id": "b25", "title": "StyleMC: multi-channel based fast textguided image generation and manipulation", "year": "2022" }, { "authors": "Or Patashnik", "journal": "", "ref_id": "b26", "title": "Styleclip: Text-driven manipulation of stylegan imagery", "year": "2021" }, { "authors": "Marius Cordts", "journal": "", "ref_id": "b27", "title": "The cityscapes dataset for semantic urban scene understanding", "year": "2016" }, { "authors": "Takeru Miyato", "journal": "", "ref_id": "b28", "title": "Spectral normalization for generative adversarial networks", "year": "2018" }, { "authors": "Gerhard Neuhold", "journal": "", "ref_id": "b29", "title": "The mapillary vistas dataset for semantic understanding of street scenes", "year": "2017" }, { "authors": "Patrick Esser; Robin Rombach; Bjorn Ommer", "journal": "", "ref_id": "b30", "title": "Taming transformers for high-resolution image synthesis", "year": "2021" }, { "authors": "Han Zhang", "journal": "PMLR", "ref_id": "b31", "title": "Self-attention generative adversarial networks", "year": "2019" }, { "authors": "Liang-Chieh Chen", "journal": "", "ref_id": "b32", "title": "Rethinking atrous convolution for semantic image segmentation", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 48.96, 481.32, 251.57, 40.72 ], "formula_id": "formula_0", "formula_text": "f c , d c = g c (w, p), m c i,j = e d c i,j C k=1 e d k i,j , f = C k=1 m k • f k .(1)" }, { "formula_coordinates": [ 3, 335.82, 363.64, 227.21, 24.63 ], "formula_id": "formula_1", "formula_text": "MLP z→w : z -→ w = (w base , w shape , w texture ), (2) MLP c,l : w base -→ s c,l=0,1 , w shape -→ s c,l=2:5 ,(3)" }, { "formula_coordinates": [ 4, 394.85, 167.8, 164.32, 10.65 ], "formula_id": "formula_3", "formula_text": "s c,l = s c,l + V c,l • y, (5" }, { "formula_coordinates": [ 4, 559.16, 168.14, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" } ]
2023-05-16
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b0", "b7", "b8", "b9", "b10", "b10", "b11" ], "table_ref": [], "text": "F EDERATED Learning (FL) is a recent distributed Machine Learning (ML) paradigm that aims to collaboratively train an ML model using data owned by clients, without those clients sharing their training data with a central server or other participating clients. Practical applications of FL range from 'crossdevice' scenarios, with a huge number of unreliable clients each possessing a small number of samples, to 'cross-silo' scenarios with fewer, more reliable clients possessing more data [1]. FL has huge economic potential, with cross-device tasks including mobile-keyboard next-word prediction [2], voice detection [3], and even as proof-of-work for blockchain systems [4]. Cross-silo tasks include hospitals jointly training healthcare models [5] and financial institutions creating fraud detectors [6]. FL has been of particular interest for training large Deep Neural Networks (DNNs) due to their state-of-the-art performance across a wide range of tasks.\nDespite FL's great potential for privacy-preserving ML, there exist significant challenges to address before FL can be more widely adopted at the network edge. These include:\n• Heterogeneous client data: each client device generates its own data, and cannot share it with any other device. The data between clients is therefore highly heterogeneous, which has been shown theoretically and empirically to harm the convergence and final performance of the FL model. • High communication costs: many FL algorithms operate in rounds that involve sending the FL model parameters between the clients and the coordinating server thousands of times.\nConsidering the bandwidth constraints of wireless edge clients, communication represents a major hindrance to training. • High computation costs: training ML models has a high computational cost (especially for modern DNNs with a huge number of parameters). FL clients are typically low-powered (often powered by battery), so computing the updates to the FL model is a substantial bottleneck. • Wireless edge constraints: clients are connected to the network edge and can range from modern smartphones to Internet-of-Things (IoT) devices. They are highly unreliable and can leave and join the training process at any time.\nTo address some of the above challenges, McMahan et al. proposed the Federated Averaging (FedAvg) algorithm [7]. FedAvg is an iterative algorithm that works in communication rounds, where in each round clients download a copy of the 'global model' to be trained, perform K steps of Stochastic Gradient Descent (SGD) on their local data, then upload their models to the coordinating server, which averages them to produce the next round's global model. Therefore, FedAvg works similarly to distributed-SGD (dSGD) as used in the datacentre, but more than one gradient is calculated by clients per communication round. 
Using K > 1 local steps improves the per-round convergence rate compared to dSGD (hence saving on communication), and FedAvg only requires a fraction of all clients to participate in each round, mitigating the impact of unreliable clients and stragglers.
Increasing K improves the convergence rate (in terms of communication rounds) of FedAvg; however, it has been demonstrated to come at the cost of harming the minimum training error and maximum validation accuracy that can be achieved, especially when client data is heterogeneous [1], and large values of K show diminishing returns for convergence speed. Therefore the total amount of computation performed to reach a given model error can be significantly greater compared to datacentre training, leading to concerns over the energy cost of FL [8]. Furthermore, the computation time on low-powered FL clients is not negligible, so improving communication-efficiency by using larger K can lead to a long training procedure [9], [10].
The primary reason behind the performance degradation with increasing K is 'client-drift' [11]: as the data between clients is non-Independent and Identically Distributed (non-IID), the minimum point(s) of each client's objective will be different. During local training, client models diverge (drift) towards their disparate minimisers, and the average of these disparate models may not have good performance. The extent of client-drift has been shown theoretically to be proportional to the level of heterogeneity between client data, the client learning rate (η), and K [11], [12].
One theoretically-justified method of addressing the problem of client-drift is to reduce η during training. Intuitively, if the learning rate is smaller, then client models can move less far apart during the local update. Previous works have shown that decaying η is required for the error of the global model to converge to 0. In this paper, we propose instead decaying K to achieve a similar goal. Decreasing K addresses client-drift whilst reducing the real-time and computational cost of each FedAvg round. We show in experiments using benchmark FL datasets that decaying K can match or outperform decaying η in terms of time to converge to a given error, total computational cost, and maximum validation accuracy achieved by the model. The main contributions of this paper are as follows:
• We analyse the convergence of FedAvg when using a decreasing value of K for strongly-convex objectives, which provides novel insight into the constraints on K and η, and intuitively demonstrates the impact of K > 1 on convergence. • We derive the optimal value of K for any point during the training runtime, and use this optimal value to propose two theoretically-motivated approaches for decaying K based either on the communication round or the relative FL model error. We also use the analysis to derive the optimal value of η for later comparison. • We perform extensive experiments using four benchmark FL datasets (FEMNIST, CIFAR100, Sentiment140, Shakespeare) to show that the proposed decaying-K scheme can reduce the amount of real-time taken to achieve a given model error, as well as improving final model validation performance.
• We present a further practical heuristic for decaying K based on training error which also shows excellent performance in terms of improving the validation performance of the FL model on the four benchmark datasets.\nThe rest of this paper is organised as follows: in Section 2 we cover related works that analyse the convergence properties of FedAvg, algorithms designed to address client-drift, and relevant developments in datacentre-based training; in Section 3 we formalise the FL training objective, analyse the convergence of FedAvg using our proposed decaying-K schedule, derive the optimal value of K during training, and use this to motivate three K-decay schemes; in Section 4 we present an experimental evaluation of the proposed schemes; and in Section 5 we conclude the paper." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "In this section, we cover works that study the theoretical convergence properties of FedAvg (and related algorithms) and client-drift, algorithms that improve the convergence of FL, and works that study related problems in the datacentre setting." }, { "figure_ref": [], "heading": "Analysis of FedAvg", "publication_ref": [ "b11", "b10", "b12", "b13", "b14", "b15" ], "table_ref": [], "text": "There has been significant research efforts in theoretically analysing the convergence of FedAvg. Li et al. [12] proved a convergence rate of O(1/T ) (where T is equal to the number of total iterations, rather than communications rounds) on stronglyconvex objectives. Their analysis suggests that an optimal number of local steps (K) exists to minimise the number of communication rounds to reach -precision, and the authors highlighted the need to decay the learning rate (η) during training. Non-dominant convergence in terms of total iterations remains an open problem within FL. Karimireddy et al. [11] added a server learning rate to FedAvg to prove its convergence for nonconvex objectives.\nCharles and Konecný [13] analysed the convergence of Local-SGD methods (including FedAvg) for quadratic objectives to gain insights into the trade-off between convergence rate and final model accuracy. Malinovsky et al. [14] generalised Local-SGD methods to generic fixed-point functions to analyse the effect of K on the -accuracy. Yang et al. [15] were the first to achieve linear speedup in terms of number of participating workers for FedAvg on nonconvex objectives. However, when considering partial worker participation (which is a key element of the FL scenario), their analysis does not show speedup with respect to K. Previous works have also analysed the convergence of FedAvg from perspectives such as minimising the total energy cost and optimal resource allocation [16].\nWhile the above works analyse the convergence of FedAvg in terms of total iterations and/or communication rounds, the runtime of FedAvg is affected by multiple factors: model convergence rate, total number of local SGD steps, communication bandwidth, model size, and the compute power of client devices. These factors must all be considered if the objective is to improve the runtime of FedAvg, as we do in this work." }, { "figure_ref": [], "heading": "Novel FL Algorithms", "publication_ref": [ "b16", "b10", "b9", "b17", "b18", "b19" ], "table_ref": [], "text": "Due to FL's long training times and the challenging distributed edge environment, a large number of novel algorithms have been designed to improve the convergence rate of FedAvg. Li et al. 
[17] proposed FedProx, which adds a proximal term to client objectives penalising the distance to the current global model. Karimireddy et al. [11] added Stochastic Variance-Reduced Gradients (SVRG) to FedAvg in SCAFFOLD, demonstrating significant speedup on popular FL benchmarks. Empirical convergence rates have also been improved by adding adaptive optimisation to FedAvg both locally [10] and globally [18]. Adaptive optimisation has also been implemented during the server-update of FedAvg [19], which can accelerate convergence without increasing the perround communication or computation costs for clients. A recent survey covering developments in FL algorithms and their relation to the communications properties of FL is given in [20].\nThe above algorithms can be considered variants of FedAvg in that they perform rounds of local training and model averaging. Our proposed method of decaying the number of local steps during training could in principle be used with any FedAvg variant, which is a potential avenue for future research." }, { "figure_ref": [], "heading": "Datacentre Training", "publication_ref": [ "b20", "b21", "b22", "b23", "b24" ], "table_ref": [], "text": "Distributed training in the datacentre shares similarities with the FL scenario, and there exists a substantial body of work studying datacentre training. The classic datacentre-based algorithm is distributed-SGD (dSGD), where nodes each compute a single (often very large) minibatch gradient and send it to the parameter server for aggregation. Woodworth et al. [21] proved for quadratic objectives that Local-SGD methods (which perform multiple steps of SGD between aggregations) converge at least as fast as dSGD (in terms of total iterations), but that Local-SGD does not dominate for more general convex problems. Similarly, Wang et al. [22] unified the analysis of various algorithms related to Local-SGD, covering different communication topologies and non-IID clients, achieving state-of-the-art rates for some settings. Lin et al. [23] presented a thorough empirical study showing that local-SGD methods generalise better than large-batch dSGD, motivating their approach of switching from dSGD to local-SGD during the later stages of training. Another approach to improve the generalisation performance of large-batch dSGD are 'extra-gradient' methods that compute gradient updates after a step of SGD before applying them to the global model [24].\nWhile these works present methods that variously improve runtime or generalisation performance, their findings cannot be directly applied to FL. From a theoretical perspective, the primary differences are FL's highly non-IID clients and very low perround participation rates (which can be as low as 0.1% [25]). FL client also have much lower communication bandwidth and computational power compared to datacentre compute nodes." }, { "figure_ref": [], "heading": "FEDAVG WITH DECAYING LOCAL STEPS", "publication_ref": [], "table_ref": [], "text": "We now formally describe the FL optimisation problem, theoretically analyse the convergence of FedAvg with a decaying number of local SGD steps, and present theoretically-motivated schedules based upon the analysis." }, { "figure_ref": [], "heading": "Problem Setup", "publication_ref": [ "b25", "b6" ], "table_ref": [], "text": "In FL there are a large number of clients that each possess a small number of local samples. 
The objective is to train model x to minimise the expected loss over all samples and over all clients, namely:\nF (x) = C c=1 p c f c (x) = C c=1 p c nc n=1 f (x, ξ c,n ) , (1\n)\nwhere C is the total number of FL clients, p c is the fraction of all samples owned by client c (such that C c=1 p c = 1), f is the loss function used on clients, and {ξ c,1 , • • • , ξ c,nc } represent the training samples owned by client c.\nTo minimise F (x) in a communication-efficient manner, Fe-dAvg (presented in Algorithm 1) performs multiple steps of SGD on each client between model averaging. FedAvg operates in communication rounds, where in each round r a subset of clients C r download the current global model x r (line 5), perform K r steps of SGD on their local dataset (lines 6-8), and then upload their new models to the coordinating server (line 9). The server averages the received client models to produce the next round's global model (line 11).\nFedAvg is typically described and analysed as selecting a subset of clients uniformly at random to participate in each communication round (line 3). However in real-world FL deployments, clients generally do not participate uniformly at random due to their behaviour, communication and compute capabilities. The non-uniform participation of FL clients has lead to the research direction of 'fair' FL [26].\nAlgorithm 1: Federated Averaging [7] 1 input: initial global model x 0 , learning rate schedule {η r }, local steps schedule {K r } 2 for round r = 1 to R do The updates to clients models within the FedAvg process can be viewed from the perspective of communication rounds (as shown in most FL works and presented in Algorithm 1), but can also be reformulated in terms of a continuous sequence of SGD steps on each client, with updates periodically being replaced by averaging. Suppose we reindex the client models from x c r,k to x c t where t is the global iteration, t ∈ {1, • • • , T }. Note that r {K r } = T . For a given FL client i and local SGD step t the update to the local model x i t can be given as:\ny i t+1 = x i t -η t ∇f (x i t , ξ i t ), x i t+1 = c∈Ct p c y c t+1 if t ∈ I, y i t+1 otherwise, (2\n)\nwhere I is the set of indexes denoting the iterations at which model communication occurs (which will be equal to the cumulative sum of {K r }). This formulation states that clients not participating in the current round compute and then discard some local updates, which is not true in reality but makes analysis more amenable and is theoretically equivalent to FedAvg as presented in Algorithm 1. We define the average client model at any given iteration t using (2) as:\nxt = C c=1 p c x t c ." }, { "figure_ref": [], "heading": "Runtime Model of FedAvg", "publication_ref": [ "b6", "b8" ], "table_ref": [], "text": "Inspecting Algorithm 1 shows that the nominal wall-clock time for each client c to complete a communication round r is:\nW c r = |x| D c + K r β c + |x| U c , (3\n)\nwhere |x| is the size of the FL model (in megabits), U c and D c are the upload and download bandwidth of client c in megabits per second, and β c is the per-minibatch computation time of client c. The nominal time to complete a round for client c is therefore the sum of the download, local compute, and upload times. Furthermore, as FL clients are usually connected wirelessly at the network edge and geographically dispersed, we assume that U c and D c are independent of the total number of participating clients. 
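To make the terms in (3) concrete, the short sketch below evaluates the per-client round time for two values of K. The model size and link speeds loosely follow the simulation settings used later in the paper, while the per-minibatch compute time is a purely illustrative (assumed) value.

```python
# Numerical illustration of the per-client round-time model in Eq. (3); the
# example bandwidth/compute values are assumptions chosen only to show how the
# download, K local steps and upload contribute to the round time.
def client_round_time(model_size_mbit, download_mbps, upload_mbps, K, batch_time_s):
    """W^c_r = |x|/D_c + K_r * beta_c + |x|/U_c  (all times in seconds)."""
    return (model_size_mbit / download_mbps
            + K * batch_time_s
            + model_size_mbit / upload_mbps)

# e.g. a 40 Mb model, a 20/5 Mbps down/up link, and an assumed 0.1 s minibatch:
print(client_round_time(40, 20, 5, K=50, batch_time_s=0.1))  # 2.0 + 5.0 + 8.0 = 15.0 s
print(client_round_time(40, 20, 5, K=10, batch_time_s=0.1))  # 2.0 + 1.0 + 8.0 = 11.0 s
```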
For wireless connections, typically $D_c \gg U_c$.
For a single round, the server must wait for the slowest client (straggler) to send its update. Therefore the time taken to complete a single round, $W_r$, is:
$W_r = \max_{c \in \mathcal{C}_r} \{W_r^c\}$. (4)
To simplify the FedAvg runtime model, we assume that all clients have the same upload bandwidth, download bandwidth, and per-minibatch compute time. That is, $U_c = U$, $D_c = D$, and $\beta_c = \beta$, $\forall c$. Using these simplifications, the total runtime $W$ for $R$ communication rounds of FedAvg is:
$W = \sum_{r=1}^{R} W_r = R\left(\frac{|x|}{D} + \frac{|x|}{U}\right) + \beta \sum_{r=1}^{R} K_r$. (5)
Previous works consider a fixed number of local steps during training: $K_r = K$, $\forall r$. There are extensive works showing that a larger K can lead to an increased convergence rate of the global model [7]. However, large K means that fewer communication rounds can be completed in a given timeframe. Previous works have shown that due to the low computational power of FL clients, the value of β can dominate the per-round runtime [9]. Therefore by decaying $K_r$ during training, a balance between fast convergence and a higher round-completion rate can be achieved, which is the primary focus of this work." }, { "figure_ref": [], "heading": "Convergence Analysis", "publication_ref": [ "b11", "b11", "b15", "b10", "b11", "b10", "b11", "b20", "b24" ], "table_ref": [], "text": "We now present a convergence analysis of FedAvg using a decaying number of local steps $K_r$ and a constant learning rate η.
We make the following assumptions, which are typical within the theoretical analysis of FL.
Assumption 1: Client objective functions are L-smooth:
$f_c(x) \le f_c(y) + (x - y)^\top \nabla f_c(y) + \frac{L}{2}\|x - y\|^2$.
As F(x) is a convex combination of the $f_c$, it is also L-smooth.
Assumption 2: Client objective functions are µ-strongly convex:
$f_c(x) \ge f_c(y) + (x - y)^\top \nabla f_c(y) + \frac{\mu}{2}\|x - y\|^2$,
with minima $f_c^* = \min f_c$. As F(x) is a convex combination of the $f_c$, it is also µ-strongly convex.
Assumption 3: For uniformly sampled data points $\xi_k^c$ on client c, the variance of stochastic gradients on c is bounded by:
$\mathbb{E}\|\nabla f_c(x; \xi_k^c) - \nabla f_c(x)\|^2 \le \sigma_c^2$.
Due to analysing gradient descent on an L-smooth function, the magnitude of the gradient is naturally bounded by the distance between the first iterate $x_1$ and the minimiser:
$\|\nabla F(x)\|^2 \le L^2 \|x_1 - x^*\|^2$.
In our later analysis we define $G^2 = L^2\|x_1 - x^*\|^2$ for convenience, i.e. the maximum norm of the gradient during training.
As per [12], we quantify the extent of non-IID client data with:
$\Gamma = F^* - \sum_{c=1}^{C} p_c f_c^*$,
where $F^*$ is the minimum value of F(x). $\Gamma \neq 0$ when the minimiser of the global objective is not the same as the average of the client minimisers, and $\Gamma = 0$ if the FL data is IID over the clients. Assumption 2 states that our analysis considers strongly-convex objectives. Although FL is typically used to train large DNNs (with nonconvex objectives), strongly-convex models are widely used, for example in Support Vector Machines. Furthermore, the state-of-the-art in analysing FedAvg's convergence behaviour lags behind the empirical developments, with contemporary analyses also making the convex assumption [12], [16]. The experimental evaluations in Section 4 consider one convex model (Sentiment 140) and three nonconvex DNNs. We leave it to future work to derive optimal K schedules for nonconvex objectives.
Theorem 1: Let Assumptions 1-3 hold, and define κ = L/µ.
The expected minimum gradient norm of FedAvg using a monotonically decreasing number of local SGD steps $K_r$ and fixed stepsize $\eta \le \frac{1}{4L}$ after T total iterations is given by:
$\min_t \{\mathbb{E}\|\nabla F(\bar{x}_t)\|^2\} \le \frac{2\kappa(\kappa F(\bar{x}_0) - F^*)}{\eta T} + \eta\kappa L \left[ \sum_{c=1}^{C} p_c^2 \sigma_c^2 + 6L\Gamma + \left(8 + \frac{4}{N}\right) G^2 \frac{\sum_{r=1}^{R} K_r^3}{\sum_{r=1}^{R} K_r} \right]$. (6)
Proof: See Appendix A.2.
The above theorem provides some useful insights into the convergence properties of FedAvg when using multiple local steps. Some of these are detailed below.
Remark 1.1: relation to centralised SGD. With a fixed learning rate and decreasing $K_r$, Theorem 1 shows that FedAvg converges with $O(1/T) + O(\eta)$. This result reflects the classical result of centralised SGD with a fixed learning rate (albeit with different constants due to non-IID clients and $K_r > 1$). Previous works have shown (like in centralised SGD) the requirement for η to be decayed to allow FedAvg to converge arbitrarily close to the global minimum [11], [12]. However, in this work we are interested in the runtime and computational savings available when decaying $K_r$, so we do not reprove the already-established decaying-η result here.
Remark 1.2: benefit of K > 1. When using a decreasing η, previous analyses have shown that K > 1 acts to reduce the variance introduced by client stochastic gradients (the $\sum_{c=1}^{C} p_c^2 \sigma_c^2$ term) [11], [12]. Dividing (6) by $K_r$ (to achieve the convergence result in terms of total number of rounds) shows the same benefit in our analysis. We also observe empirically that $K_r > 1$ helps to reduce the variance of the global model updates even with a fixed η. Similarly, a large number of clients participating per round (N) helps to reduce the variance that is introduced by performing $K_r > 1$ steps. FedAvg deployments can therefore benefit more from sampling a larger number of clients per round N when the number of local steps $K_r$ is large.
Remark 1.3: drawback of K > 1. The second term of Theorem 1 shows that using $K_r > 1$ harms the convergence of FedAvg in terms of total number of iterations T. This is the case for all state-of-the-art analyses save for quadratic objectives [21]. However, in FL we wish to minimise the number of communication rounds (due to the quantity of communicated data and the impact of stragglers etc.) alongside the total number of iterations (both of which affect the runtime of FedAvg).
Remark 1.4: real-world participation rates. Our formulation of FedAvg and our analysis assume a constant participation rate, but in real-world FL the round participation rate varies [25]. Setting $K_r$ to a large value makes more progress in a round, but fewer clients will be able to complete the round in a given timeframe. This poses an interesting trade-off between $K_r$ and N, which could be a potential avenue for future research." }, { "figure_ref": [], "heading": "Optimal", "publication_ref": [ "b6", "b7", "b26", "b10", "b10" ], "table_ref": [], "text": "When using a fixed number of local steps K, the total runtime of T iterations of FedAvg is:
$W = \frac{T}{K}\left(\frac{|x|}{D} + \frac{|x|}{U} + \beta K\right)$. (7)
Setting $K_r = K$ and substituting (7) into Theorem 1 gives us the convergence of t rounds of FedAvg for fixed K and η in terms of the runtime, starting from an arbitrary round in the training process $\bar{x}_{t_0}$, $\forall t_0 \in \mathcal{I}$, rather than the number of iterations:
$\min_{t>t_0} \{\mathbb{E}\|\nabla F(\bar{x}_t)\|^2\} \le \frac{2\kappa(\kappa F(\bar{x}_{t_0}) - F^*)}{\eta W K}\left(\frac{|x|}{D} + \frac{|x|}{U} + \beta K\right) + \eta\kappa L \left[ \sum_{c=1}^{C} p_c^2 \sigma_c^2 + 6L\Gamma + \left(8 + \frac{4}{N}\right) G^2 K^2 \right]$. (8)
Inequality (8) gives the convergence of t rounds of FedAvg with a fixed K and η, starting from the arbitrary point $\bar{x}_{t_0}$.
If the round index is instead substituted with a time index (with x w corresponding to the value of x t at time w), (8) can be used to determine what the optimal fixed valued of K looking forward would be for any point in time during the training process, K * w .\nTheorem 2: Let Assumptions 1-3 hold and define κ = L /µ. For fixed η ≤ 1 /4L, the optimal number of local SGD steps K to minimise (8) is given by:\nK * w = 3 (κF ( xt0 ) -F * ) 8η 2 L (1 + 1 /2N) ( |x| /D + |x| /U) W .(9)\nProof: See Appendix A. Remark 2.1: relation to other works. Wang and Joshi [27] investigated variable communication intervals for the Periodic Averaging SGD (PASGD) algorithm in the datacentre, and found that the optimal interval decreased with O( 1 / 2 √ W ). K * w decreases slower in FedAvg due to the looser bound on client divergence between averaging (scaling with K 2 rather than K). Remark 2.2: dependence on client participation rate. As the number of clients participating per round (N ) increases, K * w increases. This is because a higher number of participating clients decreases the variance in model updates (which is especially significant considering the non-IID client data).\nRemark 2.3: reformulation using communication rounds. In FL, it is typically assumed that the local computation time is dominated by the communication time due to the low-bandwidth connections to the coordinating server. If we consider the case where ( |x| /D + |x| /U >> βK), then W ≈ R ( |x| /D + |x| /U). This means:\nK * r = 3 κF ( xt0 ) -F * 8η 2 L (1 + 1 /2N) 1 R ≤ 3 κF ( x0 ) -F * 8η 2 L (1 + 1 /2N) 1 R ,(10)\nwhere the inequality comes from the fact that F (x t ) ≤ F (x 0 )\ngiven Assumption 1 and an appropraitely chosen stepsize η. K * r is not dependent on the local computation time, only the total number of rounds R. Using (10) as a decay scheme produces a fairly aggressive decay rate, and is tested experimentally in Section 4 using a variety of model types (which have different communication and computation times).\nA similar approach can be taken to find the optimal value of η * r at each communication round. Although the focus of this paper is on decaying K to improve the convergence speed of FL, we compare it to the effect of decaying η as well as constant η and K.\nCorollary 2.1: Let Assumptions 1-3 hold and define κ = L /µ. Given stepsizes η r ≤ 1 /4L, the optimal value of η at any point in time during training to minimise ( 8) is given by:\nη * w = 2(κF ( xt0 ) -F * ) LZ ( |x| /D + |x| /U + βK) W , where Z = C c=1 p 2 c σ 2 c + 6LΓ + (8 + 4 /N)G 2 K 2 . (11\n)\nProof: See Appendix A.4.\nCorollary 2.1 shows that the optimal value of η decreases with O( 1 / √ W ). Several insights from Corollary 2.1 are given below.\nRemark 2.1.1: impact of round time. (11) shows that η * w is directly affected by the per-round time: as any of the upload, download or computation time increases, η * w increases. This is because less progress is made over time (due to longer rounds) so \nη * r = 2(κF (x t0 ) -F * ) LZ 1 R ≤ 2(κF (x 0 ) -F * ) LZ 1 R , (12\n)\nwhere Z is defined in (11), and the inequality again comes from using F (x t ) ≤ F (x 0 ). This decay schedule is also tested empirically in Section 4." }, { "figure_ref": [], "heading": "Schedules Based on Training Progress", "publication_ref": [], "table_ref": [], "text": "In practice the values of κ, F * , and L are difficult or impossible to evaluate due to complex nonlinear models (i.e. DNNs) and data privacy in FL. 
Therefore, appropriate values of and η are chosen via grid-search or some other method (such as Bayesian Optimisation). Denote K 0 as a 'good' value of K at W = 0 (found via grid search), and K r as the value of K to be used for round r. Each successive round of FedAvg can be considered as a new optimisation procedure with starting model xr . If we make the further assumption that F * = 0, substituting these two sets of values into (9) and dividing gives us K r in terms of K 0 :\nK * r = 3 F ( xr ) F ( x0 ) K 0 .(13)\nA similar process can be applied to find η r in terms of η 0 : Due to only a small fraction of the non-IID clients being sampled per round, the per-round variance of 1 N c∈Cr f c ( xr , ξ c,0 ) can be very high. Therefore, we propose a simple rolling-average estimate using window size s:\nη * r = 2 F ( xr ) F ( x0 ) η 0 . (14\nF ( xr ) ≈ 1 sN r i=r-s c∈Ci f c ( xi , ξ c,0 ). (15\n)\nOur experiments in Section 4 use a window size s = 100, where our experiments run for at least R = 10, 000 communication rounds. For the first s rounds when ( 15) cannot be computed, we keep K r = K 0 . When using a fixed value of K, Theorem 1 shows that the minimum gradient norm converges with O( 1 /T ) + O(ηK 2 ). As noted earlier, this result is analogous to the classical result of dSGD using a fixed learning rate. In the datacentre, the practical heuristic of decaying the learning rate η when the validation error plateaus is commonly used to allow the model to reach a lower validation error. We can therefore use a similar strategy for FedAvg: once the validation error plateaus we decay either K or η. We investigate this heuristic alongside the decay schedules presented above in Section 4." }, { "figure_ref": [], "heading": "EXPERIMENTAL EVALUATION", "publication_ref": [], "table_ref": [], "text": "In this section, we present the results of simulations comparing the three decaying-K schemes proposed in Section 3 to evaluate their benefits in terms of runtime, communicated data and computational cost on four benchmark FL datasets. Code to reproduce the experiments is available from: github.com/JedMills/Faster-FL." }, { "figure_ref": [], "heading": "Datasets and Models", "publication_ref": [ "b27", "b27", "b18", "b18", "b17", "b27" ], "table_ref": [], "text": "To show the broad applicability of our approach, we conduct experiments on 4 benchmark FL learning tasks from 3 Machine Learning domains (sentiment analysis, image classification, sequence prediction) using 4 different model types (simple linear, DNN, Convolutional, Recurrent).\nSentiment 140: a sentiment analysis task of Tweets from a large number of Twitter users [28]. We limited this dataset to users with ≥ 10 samples, leaving 22k total clients, with 336k training and 95k validation samples, with an average of 15 training samples per client. We generated a normalised bag-of-words vector of size 5k for each sample using the 5k most frequent tokens in the dataset. We train a binary linear classifier (i.e. a convex model) using 50 clients per round (0.2% of all clients) and a batch size of 8.\nFEMNIST: an image classification task of (28 × 28) pixel greyscale (flattened) images of handwritten letters and numbers from 62 classes, grouped by the writer of the symbol [28]. We CIFAR100: an image classification task of (32 × 32) pixel RGB images of objects from 100 classes. We use the non-IID partition first proposed in [19], which splits the images into 500 clients based on the class labels. 
There are 50k training and 10k validation samples in the dataset, with each client possessing 100 samples. We select 25 clients per round (5% of all clients). We train a Convolutional Neural Network (CNN) consisting of two (3 × 3) ReLU convolutional + (2 × 2) Max-Pooling blocks, a 512-unit ReLU FC layer, and a softmax output layer. As per other FL works [19], [18] we apply random preprocessing composed of a random horizontal flip and crop of the (28 × 28) pixel sub-image to improve generalisation.\nShakespeare: a next-character prediction task using the complete plays of William Shakespeare [28]. The lines from all plays are partitioned by the speaking part in each play, and clients with ≤ 2 lines are discarded, leaving 660 total clients. Using a sequence length of 80, there are 3.7m training and 357k validation samples, with an average of 5573 training samples per client. We sample 10 clients per round (1.5% of all clients) with a batch size of 32. We train a Gated Recurrent Unit (GRU) DNN comprising a 79 → 8 embedding, two stacked GRUs of 128 units, and a softmax output layer." }, { "figure_ref": [], "heading": "Simulating Communication and Computation", "publication_ref": [ "b28", "b24" ], "table_ref": [ "tab_1" ], "text": "The convergence of FedAvg for the learning tasks was simulated using Pytorch on GPU-equipped workstations. However, realworld FL runs distributed training on low-powered edge clients (such as smartphones and IoT devices). These clients exhibit much lower computational power and lower bandwidth to the coordinating server compared to datacentre nodes.\nTo realistically simulate real-world FedAvg, we use the runtime model presented in Section 3.2 and Equation ( 5). We assume that each client has a download bandwidth of D = 20 Mbps and an upload bandwidth of U = 5 Mbps. These are typical values for wireless devices connected via 4G LTE in the United Kingdom [29]. To determine the runtime of a minibatch of SGD on a typical low-powered edge device (β), we ran 100 steps of SGD for each learning task on a Raspberry Pi 3B+ with the following configuration:\n• 1.4GHz 64-bit quad-core Cortex-A53 processor.\n• 1GB LPDDR2 SDRAM.\n• Ubuntu Server 22.04.1.\n• PyTorch 1.8.2. Table 2 presents the values of β recorded.\nAs shown in Table 2, there is a large difference in the minibatch runtimes between the tasks. This is due to the relative computational costs of the models used: the Sent140 task uses a simple linear model, whereas the Shakespeare GRU model requires a far larger number of matrix multiplications for a single forward-backward pass. For each learning task, we ran FedAvg for 10k communication rounds using fixed K r = K 0 and η r = η 0 (henceforth 'Kηfixed'). The number of rounds reflects typical real-world deployments (which are on the order of thousands of rounds) [25]. We selected K 0 and η 0 via grid-search such that the validation error for each task could plateau within the 10k rounds, and present the values in Table 1. We also ran dSGD (FedAvg with K r = 1) to show the runtime benefit of using K > 1 local steps.\nWe then ran FedAvg using the three schedules for K r and the three schedules for η r as discussed in Section 3.4 and 3.5. Table 3 shows the different decay schedules tested and the name we denote each one by in Section 4.4. We also tested jointly decaying K r and η r during training. However decaying either K r or η r decreases the amount of progress that is made during each training round as the global model changes less. 
We found empirically that decaying both lead to training progress slowing too rapidly, so have not included the results in Section 4.4. Kη-fixed K 0 η 0 K r -rounds (10)\n3 1/r K 0 η 0 K r -error (13) 3 Fr /F0 K 0 η 0 K r -step K0 /10 if converged η 0 η r -rounds (12) K 0 2 1/r η 0 η r -error (14) K 0 2 Fr /F0 η 0 η r -step K 0 η0 /10 if converged" }, { "figure_ref": [ "fig_3", "fig_3", "fig_3", "fig_3", "fig_6", "fig_3" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Figure 1 shows the minimum cumulative training error achieved by FedAvg for the different K r and η r schedules (as shown in Table 3). Confidence intervals for Figure 1 were omitted for clarity due to the larger number of curves. For all tasks other than Shakespeare, FedAvg with Kη-fixed (solid grey curve) increases the convergence rate compared to dSGD (dashed grey curve). For Shakespeare (Figure 1 (d)), Kη-fixed improved the initial convergence rate but was overtaken by dSGD at approximately 2500 minutes. This is likely because of the very high computation time for Shakespeare (see Table 2) relative to the other datasets (due to the very high computational cost of the GRU model).\nFor Sentiment 140 (Figure 1 (a)) and Shakespeare (Figure 1 for FEMNIST and CIFAR100, the K r -rounds scheme lead to lower training error compared to Kη-fixed. For CIFAR100, an improvement was also seen with η r -rounds. Both FEMNIST and CIFAR100 are image classification tasks, so it be may the case that decaying K r or η r during training is beneficial for computer vision tasks, which could be investigated further in future works.\nFigure 2 shows the impact on validation accuracy for the tested decay schedules. The Kη-fixed schedule shows faster initial convergence for all tasks, but it is overtaken by dSGD in the later stages of training. For FEMNIST, CIFAR100 and Shakespeare, the aggressive K r -rounds and K r -step schemes improved the convergence rate compared to dSGD, with very significant improvement for CIFAR100. A marked increase in convergence rate can be seen in Figure 1 (c) at 1000 minutes when K r -step is decayed.\nIn all tasks, all K-decay schemes were able to match or improve the validation accuracy that Kη-fixed achieved whilst performing (often substantially) fewer total steps of SGD within a given runtime. Table 4 shows the total SGD steps performed by the K-decay schemes relative to the total steps performed by Kη-fixed over the 10k communication rounds (all the ηdecay schemes perform the same amount of computation as Kηfixed). The fact that K-decay schemes can outperform Kηfixed with lower total computation indicates that much of the extra computation performed by FedAvg is wasted when considering validation performance. CIFAR100 using K r -rounds for example achieved over 18% higher validation accuracy compared to Kηfixed whilst performing less than 10% of the total steps of SGD. Similarly, K r -step achieved the same validation accuracy as Kηfixed whilst performing only 68% of the total SGD steps." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "The popular Federated Averaging (FedAvg) algorithm is used within the Federated Learning (FL) paradigm to improve the convergence rate of an FL model by performing several steps of SGD (K) locally during each training round. In this paper, we analysed FedAvg to examine the runtime benefit of decreasing (K) during training. 
We set up a runtime model of FedAvg and used this to determine the optimal value of K (and learning rate η) at any point during training under different assumptions, leading to three practical schedules for decaying K as training progresses. Simulated experiments using realistic values for communicationtime and computation-time on 4 benchmark FL datasets from 3 learning domains showed that decaying K during training can lead to improved training error and validation accuracy within a given timeframe, in some cases whilst performing over 10× less computation compared to fixed K." }, { "figure_ref": [], "heading": "APPENDIX A PROOF OF THEOREMS A.1 Key Lemmas", "publication_ref": [ "b11", "b11" ], "table_ref": [], "text": "Previously, Li et al. [12] analysed the per-iteration convergence of FedAvg for µ-strongly convex functions when using a decreasing stepsize. Their result was the first to prove convergence for non-IID clients with partial participation. We make assumptions that are at least as strong as Li et al., so can use their intermediary result bounding the distance to the global minimiser when using partial client participation: Lemma 1: Given Assumptions 1 -3, the expected distance between average client model xt and the global minimiser x * is upper-bounded by:\nE xt+1 -x * 2 ≤ (1 -η t µ)E xt -x 2 2 + η 2 t C c=1 p 2 c σ 2 c + 6LΓ + 8(K t -1) 2 G 2 + C -N N -1 4 N K 2 t G 2 . (16\n)\nProof: See Appendix B.3 of [12].\nLemma 2: Given Assumptions 1 -3, the sum of expected gradient norms over T iterations of the average client model xt is upper- bounded by:\nT t=1 η t E ∇F ( xt ) 2 ≤ 2κ(κF ( x0 ) -F * ) + κL C c=1 p 2 c σ 2 c + 6LΓ + 8(K t -1) 2 G 2 + 4 N K 2 t G 2 T t=1 η 2 t .(17)\nProof : Rearranging Lemma 1 and then defining for notational convenience\nD = C c=1 p 2 c σ 2 c + 6LΓ + 8(K t -2) 2 G 2 + C -N N -1 4 N K 2 t G 2 ,\nthe recursive definition can be written as:\nη t µE xt -x * 2 ≤ E xt -x * 2 -E xt+1 -x * 2 + η 2 t D. (18\n) Using Assumption 1 (L-smoothness), we have:\nη t µ L 2 E ∇F ( xt ) 2 ≤ E xt -x * 2 -E xt+1 -x * 2 + η 2 t D. (19\n) Summing up the T iterations and telescoping the distance terms gives:\nµ L 2 T t=1 η 2 t E ∇F ( xt ) 2 ≤ E x0 -x * 2 -E xT -x * 2 + D T t=1 η 2 t .(20)\nUsing Assumption 1 (L-smoothness) and Assumption 2 (µ-strong convexity) to bound the distance terms now gives:\nµ L 2 T t=1 η 2 t E ∇F ( xt ) 2 ≤ 2 µ [F ( x0 ) -F * ] - 2 L [F ( xT ) -F * ] + D T t=1 η 2 t ,(21)\nwhich can be simplified by noting that µ ≤ L, so that: \nµ L 2 T t=1 η 2 t E ∇F ( xt ) 2 ≤ 2 µ F(" }, { "figure_ref": [], "heading": "A.3 Proof of Theorem 2", "publication_ref": [ "b24", "b27" ], "table_ref": [], "text": "We start from the bound on gradient norms using a constant K w (within a communication round) and η (8): Taking the second derivative with respect to K w gives: Considering (F ( xt0 ) -F * ) > 0 and all the constants in (25) are > 0, then inspection of (25) shows that the second derivative with respect to K w is greater than 0, and hence ( 23) is convex. Solving As with the proof of Theorem 2, we start with the bound on gradient norms using a constant K and η w (within a communication round) given in (8): Taking the second derivative with respect to η w gives:\nd 2 min t>t0 {E ∇F ( xt0 ) 2 } d η 2 w = 4κ(κF ( xt0 ) -F * ) η 3 w W K |x| D + |x| U + βK .(28)\nNoting that (F ( xt0 ) -F * ) > 0 and all the constants in ( 28) are > 0, then inspection of (28) shows that the second derivative with respect to η w is > 0 and hence ( 27) is convex. Solving " } ]
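To summarise the decay schedules studied above in executable form, the following is a minimal, hedged sketch of the three K schedules (round-based, loss-based and step) together with the rolling estimate of the global training loss from Eq. (15); the class and method names are illustrative only and are not taken from the released code. The corresponding η schedules are analogous, replacing the cube root with a square root.

```python
# Hedged sketch of the K-decay schedules in Table 3 and Eqs. (10), (13), (15):
# cube-root decay in the round index, cube-root decay in the rolling-average
# relative training loss, and a step drop once training plateaus.
from collections import deque

class KSchedule:
    def __init__(self, K0, mode="rounds", window=100, plateau_factor=10):
        self.K0, self.mode = K0, mode
        self.losses = deque(maxlen=window)   # rolling estimate of F(x_bar_r), Eq. (15)
        self.F0 = None
        self.plateaued = False               # set externally when validation plateaus
        self.plateau_factor = plateau_factor

    def record_round_loss(self, mean_first_step_loss):
        """Server-side: average of sampled clients' first-step losses this round."""
        self.losses.append(mean_first_step_loss)
        if self.F0 is None:
            self.F0 = mean_first_step_loss

    def K(self, r):
        if self.mode == "rounds":            # Eq. (10): K_r = (1/r)^(1/3) * K_0
            return max(1, round(self.K0 * (1.0 / max(r, 1)) ** (1 / 3)))
        if self.mode == "error":             # Eq. (13): K_r = (F_r/F_0)^(1/3) * K_0
            if len(self.losses) < self.losses.maxlen:
                return self.K0               # keep K_0 until the window fills
            Fr = sum(self.losses) / len(self.losses)
            return max(1, round(self.K0 * (Fr / self.F0) ** (1 / 3)))
        if self.mode == "step":              # drop to K_0/10 once converged
            return max(1, self.K0 // self.plateau_factor) if self.plateaued else self.K0
        raise ValueError(self.mode)
```

The `plateaued` flag is assumed to be set externally by whatever convergence test is used (for example, the validation error failing to improve), mirroring the step heuristic of Section 3.5.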
In Federated Learning (FL) client devices connected over the internet collaboratively train a machine learning model without sharing their private data with a central server or with other clients. The seminal Federated Averaging (FedAvg) algorithm trains a single global model by performing rounds of local training on clients followed by model averaging. FedAvg can improve the communication-efficiency of training by performing more steps of Stochastic Gradient Descent (SGD) on clients in each round. However, client data in real-world FL is highly heterogeneous, which has been extensively shown to slow model convergence and harm final performance when K > 1 steps of SGD are performed on clients per round. In this work we propose decaying K as training progresses, which can jointly improve the final performance of the FL model whilst reducing the wall-clock time and the total computational cost of training compared to using a fixed K. We analyse the convergence of FedAvg with decaying K for strongly-convex objectives, providing novel insights into the convergence properties, and derive three theoretically-motivated decay schedules for K. We then perform thorough experiments on four benchmark FL datasets (FEMNIST, CIFAR100, Sentiment140, Shakespeare) to show the real-world benefit of our approaches in terms of real-world convergence time, computational cost, and generalisation performance.
Faster Federated Learning with Decaying Number of Local SGD Steps
[ { "figure_caption": "3 select round clients C r 4 for 5 download global model x r 6 for3456client c ∈ C r in parallel do local SGD step k = 1 to K r do 7 x c r,k ← x r -η r ∇f (x c k , ξ c k )", "figure_data": "", "figure_id": "fig_0", "figure_label": "3456", "figure_type": "figure" }, { "figure_caption": "3 . 3 √33Theorem 2 shows that K * w decreases at least as fast as O( 1 / W ), motivating the principal of decreasing the number of local steps during FedAvg. The decreasing value of the global model objective F ( xt0 ) also influences K * w . The implications of Theorem 2 are discussed in the following Remarks.", "figure_data": "", "figure_id": "fig_1", "figure_label": "33", "figure_type": "figure" }, { "figure_caption": ")F( xr ) is the training loss of the global model at the start of round r. Practically, an estimate of F ( xr ) can be obtained by requiring clients c ∈ C r to send their training loss after the first step of local SGD to the server each round: f c ( xr , ξ c,0 ), where E [f c ( xr , ξ c,0 )] = F ( xr ). This is only a single floating-point value that does not require any extra computation and negligibly increases the per-round communication costs.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 1 :1Fig. 1: Cumulative lowest training cross-entropy error over time of FedAvg using different schedules for K r and η r . Curves show mean over 5 random trials. Vertical line shows the communication round where the validation error plateaus.", "figure_data": "", "figure_id": "fig_3", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 : 5 × 10 - 2 4. 3 K251023Fig. 2: Cumulative highest validation top-1 accuracy over time of FedAvg using different schedules for K r and η r . Curves show mean and shaded regions show 95% confidence intervals of the mean over 5 random trials.", "figure_data": "", "figure_id": "fig_4", "figure_label": "251023", "figure_type": "figure" }, { "figure_caption": "TABLE 3 :3Values of K r and η r for given communication round r as tested in the experimental evaluation.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "2 ≤..2of the inequality by L 2 /µ, using the lowerbound F * ≤ F ( xT ), the definition κ = L/µ, and the fact thatC-N N -1 ≤ 1 completes the proof.A.2 Proof of Theorem 1The bound on gradient norms given in Lemma 2 uses the global index t that denotes the global SGD step that each client evaluates (irrespective of communication rounds). However, FedAvg clients participate in communication rounds. The values of η t and K t are therefore fixed for each communication round. To account for this, Lemma 2 can be reindexed using the given communication round r and local step k: t = I + k, where where I = r i=1 K i . The total number of communication rounds is R, therefore T = R r=1 K r . 
Using this to reindex Lemma 2: 2κ(κF ( x0 ) -F * ) Diving both sides through by r=1 η r K r :Using a fixed η r = η ≤ 1/4L, then the above inequality can be simplified as:Using min{E ∇F ( xt ) 2 } ≤ E ∇F ( xt ) 2 then completes the proof.", "figure_data": "", "figure_id": "fig_6", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "22Taking the first derivative with respect to K gives:d min t>t0 {E ∇F ( xt0 ) 2 } dK w = -2κ(κF ( xt0 ) -F * ) K w .(24)", "figure_data": "", "figure_id": "fig_7", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "2 .2(25) ", "figure_data": "", "figure_id": "fig_8", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "d min t>t 0 A. 404{E[ ∇F ( xt 0 ) 2 ]} /dKw = 0 gives Theorem 2. Proof of Corollary 2.1", "figure_data": "", "figure_id": "fig_9", "figure_label": "04", "figure_type": "figure" }, { "figure_caption": "2 K 2 . ( 26 ) 2 K 2 .222622Taking the first derivative with respect to η w gives:(27) ", "figure_data": "", "figure_id": "fig_10", "figure_label": "222622", "figure_type": "figure" }, { "figure_caption": "d min t>t 00{E[ ∇F ( xt 0 ) 2 ]} /dηw = 0 yields Corollary 2.1.", "figure_data": "", "figure_id": "fig_11", "figure_label": "0", "figure_type": "figure" }, { "figure_caption": "Values of K r and η rFedAvg is an iterative algorithm with each round starting from a new global model. Therefore, each iteration can be viewed as restarting the algorithm, using model x t0 , ∀t 0 ∈ I. Using this formulation, we can indepedently derive what the optimal fixed value of K would be at the start of any communication round during training. As training progresses and new rounds of training are completed, this value of K can therefore vary. When using a fixed number of local steps K", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Datasets and models used in the experimental evaluation (DNN = Deep Neural Network, CNN = Convolutional Neural Network, GRU = Gated Recurrent Network). K 0 and η 0 are the initial number of local steps and initial learning rate used. Remark 2.1.3: reformulation using communication rounds. Making the same assumption as in (10) ( |x| /D + |x| /U >> βK) gives a decay schedule for η r in terms of the number of communication rounds:", "figure_data": "TaskTypeClasses ModelModel Size (Mb)Total ClientsClients per Round per Client SamplesK 0η 0Sent140Sentiment analysis2Linear0.32218765015603.0FEMNISTImage classification62DNN6.71300060170800.3CIFAR100Image classification100CNN40.05002510050 0.01Shakespeare Character prediction79GRU5.21660105573800.1a larger η * + W compensates by making more progress per SGDstep.Remark 2.1.2: dependence on client participation rate. 
Similarto K * w , a larger number of clients participating per round (N ) allows for a smaller η * w by reducing variance due to client-drift.Larger K in (11) also allows for smaller η as more progress ismade per round.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Total SGD steps performed during training for each Kdecay schedule relative to Kη-fixed for different learning tasks.", "figure_data": "TaskScheduleRelative SGD StepsK r -rounds0.21Sentiment 140K r -error0.99K r -step0.68K r -rounds0.11FEMNISTK r -error0.80K r -step0.44K r -rounds0.090CIFAR100K r -error0.57K r -step0.40K r -rounds0.74ShakespeareK r -error0.99K r -step0.96(d)), decaying either K r or η r during training lead to smallerimprovements in the training error that was achieved. However,", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" } ]
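The round structure in the algorithm excerpt and the first-step-loss estimate of F(x̄_r) can be illustrated with a short simulation loop. This is a minimal sketch, not the paper's code: the client objects and their sample_batch/loss_and_grad methods are hypothetical placeholders, parameters are flat numpy vectors, and the uniform server average stands in for the p_c-weighted aggregation.

```python
import numpy as np

def fedavg_round(global_params, clients, K_r, eta_r, n_selected, rng):
    """One FedAvg communication round with a per-round number of local
    steps K_r and learning rate eta_r (sketch with placeholder client API)."""
    selected = rng.choice(len(clients), size=n_selected, replace=False)
    client_params, first_step_losses = [], []
    for idx in selected:                         # run in parallel in practice
        client = clients[idx]
        x = global_params.copy()                 # download global model x_r
        for k in range(K_r):                     # K_r local SGD steps
            batch = client.sample_batch()
            loss, grad = client.loss_and_grad(x, batch)
            if k == 0:
                first_step_losses.append(loss)   # single scalar used to estimate F(x_r)
            x = x - eta_r * grad                 # x <- x - eta_r * grad f(x, xi)
        client_params.append(x)
    # Server aggregation: uniform average here; weighted by p_c in general.
    new_global = np.mean(client_params, axis=0)
    return new_global, float(np.mean(first_step_losses))
```

The per-round mean of the clients' first-step losses is exactly the one extra floating-point value per client that the F(x̄_r) estimate described above requires.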
Jed Mills; Jia Hu; Geyong Min
[ { "authors": "P Kairouz; H B Mcmahan", "journal": "Foundations and Trends in Machine Learning", "ref_id": "b0", "title": "Advances and open problems in federated learning", "year": "2021" }, { "authors": "A Hard; K Rao; R Mathews; S Ramaswamy; F Beaufays; S Augenstein; H Eichner; C Kiddon; D Ramage", "journal": "", "ref_id": "b1", "title": "Federated learning for mobile keyboard prediction", "year": "2018" }, { "authors": "D Leroy; A Coucke; T Lavril; T Gisselbrecht; J Dureau", "journal": "", "ref_id": "b2", "title": "Federated learning for keyword spotting", "year": "2019" }, { "authors": "X Qu; S Wang; Q Hu; X Cheng", "journal": "IEEE Transactions on Parallel and Distributed Systems", "ref_id": "b3", "title": "Proof of federated learning: A novel energy-recycling consensus algorithm", "year": "2021" }, { "authors": "M Sheller; B Edwards; G Reina; J Martin; S Pati; A Kotrotsou; M Milchenko; W Xu; D Marcus; R Colen; S Bakas", "journal": "Scientific Reports", "ref_id": "b4", "title": "Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data", "year": "2020" }, { "authors": "W Yang; Y Zhang; K Ye; L Li; C.-Z Xu", "journal": "Springer International Publishing", "ref_id": "b5", "title": "Ffd: A federated learning based method for credit card fraud detection", "year": "2019" }, { "authors": "B Mcmahan; E Moore; D Ramage; B A Arcas", "journal": "", "ref_id": "b6", "title": "Communication-efficient learning of deep networks from decentralized data", "year": "2017" }, { "authors": "X Qiu; T Parcollet; D J Beutel; T Topal; A Mathur; N D Lane", "journal": "", "ref_id": "b7", "title": "Can federated learning save the planet?", "year": "2020" }, { "authors": "C Wang; Y Yang; P Zhou", "journal": "IEEE Transactions on Parallel and Distributed Systems", "ref_id": "b8", "title": "Towards efficient scheduling of federated mobile devices under computational and statistical heterogeneity", "year": "2021" }, { "authors": "J Mills; J Hu; G Min", "journal": "IEEE Internet of Things Journal", "ref_id": "b9", "title": "Communication-efficient federated learning for wireless edge intelligence in IoT", "year": "2020" }, { "authors": "S P Karimireddy; S Kale; M Mohri; S Reddi; S Stich; A T Suresh", "journal": "", "ref_id": "b10", "title": "SCAFFOLD: Stochastic controlled averaging for federated learning", "year": "2020" }, { "authors": "X Li; K Huang; W Yang; S Wang; Z Zhang", "journal": "", "ref_id": "b11", "title": "On the convergence of fedavg on non-iid data", "year": "2020" }, { "authors": "Z Charles; J Konečný", "journal": "", "ref_id": "b12", "title": "Convergence and accuracy trade-offs in federated learning and meta-learning", "year": "2021" }, { "authors": "G Malinovskiy; D Kovalev; E Gasanov; L Condat; P Richtarik", "journal": "", "ref_id": "b13", "title": "From local SGD to local fixed-point methods for federated learning", "year": "2020" }, { "authors": "H Yang; M Fang; J Liu", "journal": "", "ref_id": "b14", "title": "Achieving linear speedup with partial worker participation in non-IID federated learning", "year": "2021" }, { "authors": "C T Dinh; N H Tran; M N H Nguyen; C S Hong; W Bao; A Y Zomaya; V Gramoli", "journal": "IEEE/ACM Transactions on Networking", "ref_id": "b15", "title": "Federated learning over wireless networks: Convergence analysis and resource allocation", "year": "2021" }, { "authors": "T Li; A K Sahu; M Zaheer; M Sanjabi; A Talwalkar; V Smith", "journal": "", "ref_id": "b16", "title": "Federated optimization in heterogeneous 
networks", "year": "2020" }, { "authors": "S P Karimireddy; M Jaggi; S Kale; M Mohri; S Reddi; S U Stich; A T Suresh", "journal": "", "ref_id": "b17", "title": "Breaking the centralized barrier for crossdevice federated learning", "year": "2021" }, { "authors": "S J Reddi; Z Charles; M Zaheer; Z Garrett; K Rush; J Konečný; S Kumar; H B Mcmahan", "journal": "", "ref_id": "b18", "title": "Adaptive federated optimization", "year": "2021" }, { "authors": "J Mills; J Hu; G Min", "journal": "IEEE Communications Magazine", "ref_id": "b19", "title": "Client-side optimization strategies for communication-efficient federated learning", "year": "2022" }, { "authors": "B Woodworth; K K Patel; S Stich; Z Dai; B Bullins; B Mcmahan; O Shamir; N Srebro", "journal": "", "ref_id": "b20", "title": "Is local SGD better than minibatch SGD?", "year": "2020" }, { "authors": "J Wang; G Joshi", "journal": "Journal of Machine Learning Research (JMLR)", "ref_id": "b21", "title": "Cooperative sgd: A unified framework for the design and analysis of local-update sgd algorithms", "year": "2021" }, { "authors": "T Lin; S U Stich; K K Patel; M Jaggi", "journal": "", "ref_id": "b22", "title": "Don't use large mini-batches, use local sgd", "year": "2020" }, { "authors": "T Lin; L Kong; S Stich; M Jaggi", "journal": "", "ref_id": "b23", "title": "Extrapolation for largebatch training in deep learning", "year": "2020" }, { "authors": "K Bonawitz; H Eichner; W Grieskamp; D Huba; A Ingerman; V Ivanov; C Kiddon; J Konečný", "journal": "", "ref_id": "b24", "title": "Towards federated learning at scale: System design", "year": "2019" }, { "authors": "L Lyu; J Yu; K Nandakumar; Y Li; X Ma; J Jin; H Yu; K S Ng", "journal": "IEEE Transactions on Parallel and Distributed Systems", "ref_id": "b25", "title": "Towards fair and privacy-preserving federated deep models", "year": "2020" }, { "authors": "J Wang; G Joshi", "journal": "", "ref_id": "b26", "title": "Adaptive communication strategies to achieve the best error-runtime trade-off in local-update SGD", "year": "2019" }, { "authors": "S Caldas; P Wu; T Li; J Konecný; H B Mcmahan; V Smith; A Talwalkar", "journal": "", "ref_id": "b27", "title": "Leaf: A benchmark for federated settings", "year": "2019" }, { "authors": "", "journal": "Open Signal, Tech. Rep", "ref_id": "b28", "title": "United kingdom mobile network experience report", "year": "2021-01" } ]
[ { "formula_coordinates": [ 3, 76.46, 528.58, 219.84, 29.45 ], "formula_id": "formula_0", "formula_text": "F (x) = C c=1 p c f c (x) = C c=1 p c nc n=1 f (x, ξ c,n ) , (1" }, { "formula_coordinates": [ 3, 296.31, 539.3, 3.69, 8.24 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 3, 363.61, 405.37, 196.7, 42.98 ], "formula_id": "formula_2", "formula_text": "y i t+1 = x i t -η t ∇f (x i t , ξ i t ), x i t+1 = c∈Ct p c y c t+1 if t ∈ I, y i t+1 otherwise, (2" }, { "formula_coordinates": [ 3, 560.31, 423.61, 3.69, 8.24 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 3, 403.89, 534.83, 70.15, 13.65 ], "formula_id": "formula_4", "formula_text": "xt = C c=1 p c x t c ." }, { "formula_coordinates": [ 3, 384.09, 617.32, 176.21, 22.31 ], "formula_id": "formula_5", "formula_text": "W c r = |x| D c + K r β c + |x| U c , (3" }, { "formula_coordinates": [ 3, 560.31, 624.68, 3.69, 8.24 ], "formula_id": "formula_6", "formula_text": ")" }, { "formula_coordinates": [ 3, 477.67, 735.8, 47.94, 10.43 ], "formula_id": "formula_7", "formula_text": "D c >> U c ." }, { "formula_coordinates": [ 4, 135.73, 84.2, 164.27, 16.58 ], "formula_id": "formula_8", "formula_text": "W r = max c∈Cr {W c r } .(4)" }, { "formula_coordinates": [ 4, 82.71, 170.39, 217.29, 29.29 ], "formula_id": "formula_9", "formula_text": "W = R r=1 W r = R |x| D + |x| U + β R r=1 K r .(5)" }, { "formula_coordinates": [ 4, 71.06, 441.7, 205.89, 22.31 ], "formula_id": "formula_10", "formula_text": "f c (x) ≤ f c (y) + (x -y) ∇f c (y) + L 2 x -y 2 ." }, { "formula_coordinates": [ 4, 48, 506.96, 252, 48.92 ], "formula_id": "formula_11", "formula_text": "f c (x) ≥ f c (y) + (x -y) ∇f c (y) + µ 2 x -y 2 , with minima f * c = min f c . As F (x) is a convex combination of f c , then it is also µ-strongly convex." }, { "formula_coordinates": [ 4, 105.85, 598.43, 136.31, 12.69 ], "formula_id": "formula_12", "formula_text": "E ∇f c (x; ξ c k ) -∇f c (x) 2 ≤ σ 2 c ." }, { "formula_coordinates": [ 4, 118.52, 682.46, 115.95, 11.72 ], "formula_id": "formula_13", "formula_text": "∇F (x) 2 ≤ L 2 x 1 -x * 2 ." }, { "formula_coordinates": [ 4, 397.1, 61.47, 81.81, 29.29 ], "formula_id": "formula_14", "formula_text": "Γ = F * - C c=1 p c f * c ," }, { "formula_coordinates": [ 4, 312, 336.8, 252, 71.9 ], "formula_id": "formula_15", "formula_text": "min t {E ∇F ( xt ) 2 } ≤ 2κ(κF ( x0 ) -F * ) ηT + ηκL C c=1 p 2 c σ 2 c + 6LΓ + 8 + 4 N G 2 R r=1 K 3 r R r=1 K r . (6) Proof: See Appendix A.2." }, { "formula_coordinates": [ 4, 528.19, 630.77, 35.31, 13.65 ], "formula_id": "formula_16", "formula_text": "C c=1 p 2 c σ 2 c" }, { "formula_coordinates": [ 5, 48, 56.13, 163.31, 8.82 ], "formula_id": "formula_17", "formula_text": "Remark 1.3: drawback of K > 1." }, { "formula_coordinates": [ 5, 113.79, 384.26, 186.21, 22.31 ], "formula_id": "formula_18", "formula_text": "W = T K |x| D + |x| U + βK .(7)" }, { "formula_coordinates": [ 5, 57.96, 467.05, 238.34, 75.83 ], "formula_id": "formula_19", "formula_text": "min t>t0 {E ∇F ( xt ) 2 } ≤ 2κ(κF ( xt0 ) -F * ) ηW K |x| D + |x| U + βK + ηκL C c=1 p 2 c σ 2 c + 6LΓ + 8 + 4 N G 2 K 2 . 
(8" }, { "formula_coordinates": [ 5, 296.31, 524.16, 3.69, 8.24 ], "formula_id": "formula_20", "formula_text": ")" }, { "formula_coordinates": [ 5, 87.69, 671.41, 212.31, 23.15 ], "formula_id": "formula_21", "formula_text": "K * w = 3 (κF ( xt0 ) -F * ) 8η 2 L (1 + 1 /2N) ( |x| /D + |x| /U) W .(9)" }, { "formula_coordinates": [ 5, 378.61, 318.26, 185.39, 54.91 ], "formula_id": "formula_22", "formula_text": "K * r = 3 κF ( xt0 ) -F * 8η 2 L (1 + 1 /2N) 1 R ≤ 3 κF ( x0 ) -F * 8η 2 L (1 + 1 /2N) 1 R ,(10)" }, { "formula_coordinates": [ 5, 321.96, 580.4, 238.08, 57.16 ], "formula_id": "formula_23", "formula_text": "η * w = 2(κF ( xt0 ) -F * ) LZ ( |x| /D + |x| /U + βK) W , where Z = C c=1 p 2 c σ 2 c + 6LΓ + (8 + 4 /N)G 2 K 2 . (11" }, { "formula_coordinates": [ 5, 560.04, 618.84, 3.96, 8.24 ], "formula_id": "formula_24", "formula_text": ")" }, { "formula_coordinates": [ 6, 115.2, 323.09, 180.85, 54.91 ], "formula_id": "formula_25", "formula_text": "η * r = 2(κF (x t0 ) -F * ) LZ 1 R ≤ 2(κF (x 0 ) -F * ) LZ 1 R , (12" }, { "formula_coordinates": [ 6, 296.04, 363.05, 3.96, 8.24 ], "formula_id": "formula_26", "formula_text": ")" }, { "formula_coordinates": [ 6, 124.86, 584.56, 175.14, 22.45 ], "formula_id": "formula_27", "formula_text": "K * r = 3 F ( xr ) F ( x0 ) K 0 .(13)" }, { "formula_coordinates": [ 6, 135.45, 636.21, 160.59, 22.45 ], "formula_id": "formula_28", "formula_text": "η * r = 2 F ( xr ) F ( x0 ) η 0 . (14" }, { "formula_coordinates": [ 6, 363.65, 213.59, 196.39, 29.56 ], "formula_id": "formula_29", "formula_text": "F ( xr ) ≈ 1 sN r i=r-s c∈Ci f c ( xi , ξ c,0 ). (15" }, { "formula_coordinates": [ 6, 560.04, 224.16, 3.96, 8.24 ], "formula_id": "formula_30", "formula_text": ")" }, { "formula_coordinates": [ 8, 327.06, 477.37, 223.63, 85.37 ], "formula_id": "formula_31", "formula_text": "3 1/r K 0 η 0 K r -error (13) 3 Fr /F0 K 0 η 0 K r -step K0 /10 if converged η 0 η r -rounds (12) K 0 2 1/r η 0 η r -error (14) K 0 2 Fr /F0 η 0 η r -step K 0 η0 /10 if converged" }, { "formula_coordinates": [ 11, 57.96, 233.82, 238.08, 70.27 ], "formula_id": "formula_32", "formula_text": "E xt+1 -x * 2 ≤ (1 -η t µ)E xt -x 2 2 + η 2 t C c=1 p 2 c σ 2 c + 6LΓ + 8(K t -1) 2 G 2 + C -N N -1 4 N K 2 t G 2 . (16" }, { "formula_coordinates": [ 11, 296.04, 289.13, 3.96, 8.24 ], "formula_id": "formula_33", "formula_text": ")" }, { "formula_coordinates": [ 11, 59.94, 376.01, 240.06, 94.75 ], "formula_id": "formula_34", "formula_text": "T t=1 η t E ∇F ( xt ) 2 ≤ 2κ(κF ( x0 ) -F * ) + κL C c=1 p 2 c σ 2 c + 6LΓ + 8(K t -1) 2 G 2 + 4 N K 2 t G 2 T t=1 η 2 t .(17)" }, { "formula_coordinates": [ 11, 48, 502.9, 258.73, 29.29 ], "formula_id": "formula_35", "formula_text": "D = C c=1 p 2 c σ 2 c + 6LΓ + 8(K t -2) 2 G 2 + C -N N -1 4 N K 2 t G 2 ," }, { "formula_coordinates": [ 11, 57.96, 557.18, 238.08, 27.65 ], "formula_id": "formula_36", "formula_text": "η t µE xt -x * 2 ≤ E xt -x * 2 -E xt+1 -x * 2 + η 2 t D. (18" }, { "formula_coordinates": [ 11, 59.16, 608.35, 236.89, 35.69 ], "formula_id": "formula_37", "formula_text": "η t µ L 2 E ∇F ( xt ) 2 ≤ E xt -x * 2 -E xt+1 -x * 2 + η 2 t D. 
(19" }, { "formula_coordinates": [ 11, 59.16, 682.56, 240.84, 62.02 ], "formula_id": "formula_38", "formula_text": "µ L 2 T t=1 η 2 t E ∇F ( xt ) 2 ≤ E x0 -x * 2 -E xT -x * 2 + D T t=1 η 2 t .(20)" }, { "formula_coordinates": [ 11, 323.16, 75.89, 240.84, 62.02 ], "formula_id": "formula_39", "formula_text": "µ L 2 T t=1 η 2 t E ∇F ( xt ) 2 ≤ 2 µ [F ( x0 ) -F * ] - 2 L [F ( xT ) -F * ] + D T t=1 η 2 t ,(21)" }, { "formula_coordinates": [ 11, 323.16, 163.58, 136.86, 29.29 ], "formula_id": "formula_40", "formula_text": "µ L 2 T t=1 η 2 t E ∇F ( xt ) 2 ≤ 2 µ F(" }, { "formula_coordinates": [ 12, 323.16, 314.16, 240.84, 68.72 ], "formula_id": "formula_41", "formula_text": "d 2 min t>t0 {E ∇F ( xt0 ) 2 } d η 2 w = 4κ(κF ( xt0 ) -F * ) η 3 w W K |x| D + |x| U + βK .(28)" } ]
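To make the decay schedules in Eqs. (12)-(15) and Table 3 concrete, here is a minimal sketch of the K_r-error, η_r-error, and K_r-rounds rules together with the windowed loss estimate. The function names, the rounding to at least one local step, and the window representation are assumptions made for illustration, not the authors' implementation.

```python
from collections import deque

def k_r_error(F_r, F_0, K_0):
    """K_r-error schedule, Eq. (13): K_r = (F_r / F_0)^(1/3) * K_0 (at least 1 step)."""
    return max(1, round((F_r / F_0) ** (1.0 / 3.0) * K_0))

def eta_r_error(F_r, F_0, eta_0):
    """eta_r-error schedule, Eq. (14): eta_r = (F_r / F_0)^(1/2) * eta_0."""
    return (F_r / F_0) ** 0.5 * eta_0

def k_r_rounds(r, K_0):
    """K_r-rounds schedule (Table 3): K_r = (1/r)^(1/3) * K_0 for round r >= 1."""
    return max(1, round((1.0 / r) ** (1.0 / 3.0) * K_0))

class GlobalLossEstimator:
    """Windowed estimate of F(x_r) from clients' first-step losses, Eq. (15)."""
    def __init__(self, window=5):
        self.rounds = deque(maxlen=window)

    def update(self, round_first_step_losses):
        self.rounds.append(list(round_first_step_losses))

    def estimate(self):
        losses = [loss for round_losses in self.rounds for loss in round_losses]
        return sum(losses) / len(losses)

# Usage at the start of each round, with F_0 recorded at round 1:
#   K_r   = k_r_error(estimator.estimate(), F_0, K_0)
#   eta_r = eta_r_error(estimator.estimate(), F_0, eta_0)
```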
10.48550/arXiv.2210.02875
2023-10-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b46", "b42", "b31" ], "table_ref": [], "text": "issues is to augment LLMs with external knowledge resources, so as to amend the incorrect generations. Among these resources, structured data (e.g., knowledge graphs and databases), has been widely used as the carrier of the required knowledge for LLMs. Unlike plain text, structured data is organized in a standardized format, conforming to some logical data model. For example, knowledge graphs (KGs) are often organized as fact triples that state the relations between head entities and tail entities, and data tables are organized in the form of column-indexed records by rows. However, as structured data has special data formats or schemas that LLMs have not seen during pretraining, they may be not fully grasped or understood by LLMs (Wei et al., 2021). A straightforward way to solve this problem is to linearize the structured data into a sentence that LLMs can well understand. While, the amount of structured data is often vast, making it infeasible to include all the data records in the input prompt.\nRegarding the above challenges, we are inspired by the tool manipulation strategy for augmenting the abilities of LLMs (Schick et al., 2023;Nakano et al., 2021). Our basic idea is to incorporate specialized interfaces (e.g., extracting columns from tables) to manipulate the structured data records. With these interfaces, we can effectively reduce the search space of the data records, and more accurately identify the required evidence to fulfill specific tasks. In this way, LLMs can concentrate on reasoning based on the evidence obtained from the interfaces. To implement the interface-augmented approach, there remain two key problems, namely how to design suitable interfaces for specific tasks and how to utilize them for reasoning by LLMs, which are the focus of this work.\nTo design suitable interfaces, we regard multiple types of structured data as black-box systems, and design the interfaces to provide accurate, efficient data access and filtering for LLMs. For each interface, its implementation is dependent on the characteristic of the structured data, while its functionality is general to all LLMs, with just a few arguments for specifying the data requirements. Based on these interfaces, we propose an Iterative Reading-then-Reasoning (IRR) framework for LLMs to utilize the interfaces to solve the tasks based on structured data, namely StructGPT. This framework considers two major functions to fulfill different tasks, namely collecting relevant evidence (reading) and inferring the answer or planning subsequent steps (reasoning). Specifically, we propose an invoking-linearization-generation procedure to support LLMs in reading and reasoning on the structured data with the help of the external interfaces. By iterating this procedure with provided interfaces, we can gradually approach the target answer to a given question.\nTo our knowledge, this is the first work that explores how to support LLMs in reasoning on multiple types of structured data (including tables, KGs, and DBs) in a unified paradigm. To evaluate the effectiveness of our approach, we conduct extensive experiments on a wide range of tasks (e.g., KGbased question answering (KGQA), Table-based question answering (TableQA), and DB-based Textto-SQL). 
Experimental results on 8 datasets demonstrate that our approach can effectively enhance the reasoning performance of LLMs on structured data in zero-shot and few-shot settings, even comparable with competitive full-data supervised-tuning methods. For example, in KGQA, TableQA, and Text-to-SQL tasks, our approach yields an increase of 11.4% of Hits@1 on WebQSP, 4.2% of accuracy in TabFact, and 4.7% of execution accuracy in Spider respectively, compared to directly using ChatGPT in the zero-shot setting." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b43", "b14", "b45", "b39", "b19", "b39", "b17", "b47", "b4", "b5", "b40", "b5", "b1", "b48", "b12", "b23" ], "table_ref": [], "text": "Reasoning over Structured Data. Structured data (e.g., knowledge graphs, tables, and databases) is an important knowledge carrier for a variety of QA and reasoning tasks. Early work focuses on designing specific model architectures tailored for each type of structured data, such as graph neural networks (Sun et al., 2018), table Transformers (Herzig et al., 2020), and tree-structured decoder (Wang et al., 2020). While achieving remarkable performance, these approaches lack generality for various types of structured data and are hard to be transferred across different tasks. Re-cently, with the success of pre-trained language models (PLMs) (e.g., T5 (Raffel et al., 2020), BART (Lewis et al., 2020)), several methods (Raffel et al., 2020;Khashabi et al., 2020) have adopted PLMs as the general encoder or solver for different structured data and tasks. Among them, Unified-SKG (Xie et al., 2022) unifies a number of reasoning tasks over structured data into a text-to-text format, which concatenates the question and the linearized structured data as input, and then finetunes T5 to learn to generate the answer. However, UnifiedSKG also requires to tune the model parameters, and is unable to handle large-scale structured data under the limitation of the maximum input length. Instead, our method can utilize the LLM to perform reasoning on structured data without training, and also leverage the interfaces of structured data to better manipulate vast structured data.\nLLMs for Structured Data. Benefitting from the strong few-shot and zero-shot capability, recent studies have leveraged LLMs to perform reasoning over structured data (Chen et al., 2023;Li et al., 2023a;Cheng et al., 2022;Rajkumar et al., 2022). Existing work can be roughly divided into two types. The first type of method linearizes the structured data into a sentence (e.g., table rows), and feeds it into the LLMs to generate the answer according to in-context exemplars (Cheng et al., 2022;Chen, 2023). For complex questions or structured data, they first decompose it into multiple simple and short ones and then perform linearization and generation (Ye et al., 2023). Another type of method leverages LLMs to evaluate the plausibility of the solution plan based on the knowledge base (Gu et al., 2023), or first generate a solution draft with in-context exemplars and then revise the draft grounding on the knowledge base (Li et al., 2023c). However, most of them only focus on a specific type of structured data, and are lack of generality across various data and tasks. In StructGPT, we provide a unified paradigm that is general to various structured data and downstream tasks." 
}, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [ "b47", "b3" ], "table_ref": [], "text": "In this section, we introduce the definition of structured data, which mainly consists of three commonly used types. Then we present the unified problem statement.\nStructured Data. Structured data (e.g., data tables and knowledge graphs) refers to the data that is in a standardized format, conforming to some logical data model (Xie et al., 2022;Chen et al., 2009). Due to the formal structure, it is easy and efficient to access and query structured data using formal languages (e.g., SQL and SPARQL for databases) or specific algorithms (e.g., triples search for knowledge graphs). In this work, we mainly focus on three types of structured data, namely knowledge graphs (KG), data tables (Table), and databases (DB), since they play an important role as the knowledge source in helping solve complex reasoning tasks, described as follows.\n• Knowledge Graph. A knowledge graph (KG) consists of a number of triples to store the factual knowledge, denoted as G = {⟨e, r, e ′ ⟩|e, e ′ ∈ E, r ∈ R}, where E and R denote the set of entities and relations, respectively. A triple ⟨e, r, e ′ ⟩ represents the fact that there is a relation r between the head entity e and the tail entity e ′ .\n• Data Table . A data table T (table in short) contains multiple columns {c i } C i=1 and rows {l j } R j=1 , where each row l j denotes a data record formatted by the attributes indexed by columns {c i } C i=1 , and v i,j denotes the content in the cell corresponding to the position at column i and row j.\n• Database. A database (DB) typically consists of N data tables, denoted as D = {T 1 , T 2 , ..., T N }. Besides the column names, the foreign keys across all tables are also available to link the data from two tables, denoted as {(c\n(k) i , c (h) j )}, where c (k) i and c (h) j\ndenote the i-th and j-th columns in the k-th and h-th tables, respectively.\nProblem Statement. This work mainly focuses on using LLMs to solve complex reasoning tasks based on structured data. Formally, it can be described as a question answering task: given a natural language question q and an accessible structured data S (e.g., a knowledge graph, a table, or database), the LLM needs to extract useful evidence from S and then generates the expected result to answer the question q based on the extracted evidence. According to the task requirement, the generated result can be either free-form answers in natural language or structured expressions (e.g., SQL statements) to be executed for obtaining the answer from S. Since we consider three types of structured data (Section 4), our tasks can be instantiated as follows:\n• KG based question answering (KGQA)\n• Table based question answering (TableQA)\n• DB based semantic parsing (Text-to-SQL)\n4 Approach" }, { "figure_ref": [], "heading": "Overview", "publication_ref": [ "b15", "b32", "b30", "b8", "b31", "b42" ], "table_ref": [], "text": "In this work, we assume that LLMs have to rely on the evidence contained in the structured data to solve the three tasks described in Section 3. An intuitive idea is to conduct a two-stage framework as prior studies on retrieval-augmented approaches (Izacard et al., 2022;Oguz et al., 2022), in which LLMs are employed to first collect sufficient evidence relating to the question and then figure out the answer by the LLMs. However, such an approach is not directly applicable to structured data. 
Although LLMs are capable of solving diverse tasks in natural language, they have limited capacities in accurately representing and understanding structured data, especially for their contained domain-specific knowledge (Moiseev et al., 2022;Emelin et al., 2022).\nTo address this difficulty, our solution is inspired by the use of specialized tools in solving complex tasks for LLMs (Nakano et al., 2021;Gao et al., 2022b;Schick et al., 2023). We noted that structured data is well organized and supports easy access via formal language or queries (called interface for generality). The basic idea of our approach is to disentangle the two processes of reading and reasoning for LLMs: we utilize the interface of structure data to implement accurate, efficient data access and filtering (obtaining the relevant evidence), and further utilize the reasoning ability of LLMs to figure out the final plan or result for the question (fulfilling the task). In this way, LLMs can concentrate on the reasoning process in answering the question, without considering the specialized approach to reading the structure data.\nSpecially, in our framework, we encapsulate the structure data as a black-box system, and provide specific interfaces for LLMs to access the contained data. Further, we propose an invokinglinearization-generation procedure that enables LLMs to read and extract useful evidence from structured data via the corresponding interface. By iterating the above procedure with provided interfaces, we can gradually obtain the answers by leveraging the superior reasoning abilities of LLMs." }, { "figure_ref": [], "heading": "Interfaces for Structured Data", "publication_ref": [ "b43" ], "table_ref": [], "text": "Due to the standardized data formats, structured data is often equipped with efficient data management ways, e.g., SQL for the database. In our approach, we aim to provide LLMs with special- The overview of the proposed iterative reading-then-reasoning approach. We design specialized interfaces for reading structured data, and iterate the invoking-linearization-generation procedure to utilize LLMs for performing reasoning on the interfaces, until deriving the final answer or executable SQL.\nized interfaces, helping LLMs to read and utilize the structured data. Next, we present the specially designed interfaces for KG, table, and DB.\nInterfaces for Knowledge Graph. When performing complex reasoning on a KG, existing work (Sun et al., 2018) typically starts from a certain entity (about the question topic), and jumps along with the relations until reaching the answer.\nIn this process, LLMs should be aware of the neighboring relations of the current entity, and the neighboring triples with certain relations to the current entity. Based on it, LLMs can select the relevant relations and triples from them to find the answer.\nFor this purpose, we devise two functions for assisting LLMs to accomplish the above operations.\n• Extract_Neighbor_Relations (e): extracts all the neighboring relations of the entity e.\n• Extract_Triples (e, {r}): extracts all the triples with the relation in {r} and head entity e." }, { "figure_ref": [], "heading": "Interfaces for Table.", "publication_ref": [], "table_ref": [], "text": "Given a data table, LLMs need to know its contained column names, and can access the content by row or column, enabling LLMs to extract its sub-table containing relevant columns and rows. 
Thus, we define three functions:\n• Extract_Column_Name (T ): extracts all the column names of a table T .\n• Extract_Columns (T , {c}): extracts the contents of columns from a table T by indices {c}.\n• Extract_SubTable (T , {c}, {j}): extracts the sub-table specified by the column indices {c} and row indices {j} from a table T .\nInterfaces for Database. Considering a simplified setting when querying the database, LLMs should be aware of all the contained tables and columns (by name) for relevant tables selection, and can also acquire the detailed columns and foreign keys from the selected tables to search for the answer. Thus, we devise two functions as follows:\n• Extract_Table&Column_Name (D): extracts the names of all the tables and their contained columns from the database.\n• Extract_Tables_Information ({T }): extracts the table names, column names, and foreign keys from a set of tables {T }." }, { "figure_ref": [], "heading": "Reading and Reasoning with Interfaces", "publication_ref": [ "b36" ], "table_ref": [], "text": "Based on the above interfaces, we propose a general invoking-linearization-generation procedure that can be iterated in multiple turns for utilizing LLMs to perform reading and reasoning on structured data. For each iteration, based on the currently collected data, we first invoke an interface to extract relevant evidence from structure data, then linearize it into a textual prompt, and finally feed the prompt into the LLM for generation (selecting useful data or predicting the answer).\nInvoking an Interface. In this step, we aim to invoke an interface for extracting the relevant information from the structured data. According to the designed interfaces in Section 4.2, we construct the input based on the currently available data (e.g., entity and table), and then invoke the interface to obtain more detailed relevant information (e.g., neighboring relations and column names), which will be fed into LLMs for collecting useful information or generating the answer.\nInformation Linearization. Given the extracted information, we convert it into a textual sentence that can be understood by LLMs. For the information from KG (i.e., relations and triples), we concatenate them into a long sentence marked by specific separation and boundary symbols. For table and database, we leverage the same way to linearize the extracted table names or column names. While for contents in columns and rows, we follow existing work (Pasupat and Liang, 2015) that first converts them into triples, where head entities are the row indices, relations are column names, and tail entities are the content in the cell, e.g., \"(row 1, year, 1896)\" and \"(row 1, city, Athens)\". Then, for each row, we extract the row indices in the front and omit it in the triples, to compose a simplified sentence, e.g., \"row 1: (year, 1896), (city, Athens)\". For multiple rows, we concatenate them into a long sentence via a special separation symbol.\nLLM for Generation. After linearization, we design two types of input prompts for LLMs to fulfill different purposes1 :\n• The first type of prompts mostly adopts the following pattern: \"Here are [Y]. Which [X] are most relevant to answer the question [Q]\". It aims to elicit the ability of LLMs to select useful evidence (i.e., [X]) from linearized extracted information (i.e., [Y]), according to the question (i.e., [Q]).\n• The second type of prompt follows the pattern: \"Based on [Y], please generate [Z] for the question [Q]\". 
It aims to predict the targeted results (i.e., [Z]) for the given question (i.e., [Q]) based on the linearized extracted information (i.e., [Y]). Note that the targeted results can be either the answer string or executable formal language (e.g., SQL) that can lead to the final answer.\nBy iterating the above invoking-linearizationgeneration procedure on designed interfaces, LLMs can progressively capture more useful evidence for deriving the final answer." }, { "figure_ref": [], "heading": "Instantiated Downstream Tasks", "publication_ref": [ "b43", "b20", "b18" ], "table_ref": [], "text": "In the following, we describe the instances of the above general workflow for the tasks described in Section 3, since they deal with very different structure data and vary in the task settings.\nKG-based Question Answering (KGQA). This task aims to find the answer entities for the question based on the KG. Following existing work (Sun et al., 2018), we denote the mentioned entity in the given question q as the topic entity e T , and assume it has been linked to some specific entity on the KG through existing linking tools (e.g., Google Knowledge Graph Search API) or models (e.g., ELQ (Li et al., 2020)). Starting from e T , we perform the invoking-linearization-generation procedure two times using the two interfaces in KG sequentially. First, we invoke the interface Extract_Neighbor_Relation(e T ) to extract the candidate one-hop relations, linearize them to compose the input prompt, and then leverage the LLM to select the useful relations {r} according to the question. Then, based on {r}, we invoke the Ex-tract_Triples (e T , {r}) interface to collect the relevant triples for the head entity e T and relation in {r}, then linearize this information, and finally employ the LLM to select the most relevant triples, whose tail entities will be considered as the final answer. Besides, we can also consider the multihop KGQA task (Lan et al., 2021), where after selecting the triples of the current hop, the LLM should assess whether the current information is sufficient to answer the question. Then, LLMs will make according actions based on the assessment, i.e., stopping the iterations for producing the answer or continuing the iterations on next-hop tail entities from selected triples." }, { "figure_ref": [], "heading": "Table-based Question Answering (TableQA).", "publication_ref": [], "table_ref": [], "text": "For TableQA, we typically need to answer the question according to the content in the given table. We also perform the above procedure by using the three interfaces in turn. Concretely, first, we invoke Extract_Column_Name (T ) to extract all column names of a table, linearize them, and leverage LLMs to select the relevant ones {c} according to the question. Then, we invoke Ex-tract_Columns (T , {c}) to extract the contents of all relevant columns, and select the useful row indices {j} by LLMs. Subsequently, we further invoke Extract_SubTable (T , {c}, {j}) to generate the sub-table for the question. Based on the linearized sub-table, the LLM finally generates the answer to the question." }, { "figure_ref": [], "heading": "DB-based Semantic Parsing (Text-to-SQL).", "publication_ref": [], "table_ref": [], "text": "This task focuses on generating a SQL query that can be executed to obtain the required information from a database. 
To achieve this goal, first, we invoke Extract_Table&Column_Name (D) to obtain all the table names and their column names in the DB, linearize them, and utilize the LLM to select the relevant table names. Then, we invoke Extract_Tables_Information ({T }) to obtain all the relevant information (i.e., column names and foreign keys) from these tables. Similarly, by linearizing this information and composing the input prompt, the LLM can generate an executable SQL for the given question." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "We conduct experiments on three complex reasoning tasks over structured data, i.e., KGQA, TableQA, and DB based text-to-SQL." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b49", "b52", "b54", "b36", "b2", "b50", "b9", "b6" ], "table_ref": [], "text": "For KG based QA (KGQA), we adopt two benchmark datasets, i.e., WebQuestionsSP (We-bQSP) (Yih et al., 2016) and MetaQA (Zhang et al., 2018) for evaluation. The answer entities in We-bQSP require up to 2-hop reasoning on the Freebase KG. In contrast, MetaQA contains questions in the movie domain, whose answer entities are up to 3 hops away from the topic entities on a movie KG (based on OMDb). According to the number of hops, it is split into three sub-datasets, i.e., MetaQA-1hop, MetaQA-2hop, and MetaQA-3hop.\nFor Table based QA (TableQA), we adopt three widely-used datasets, weakly-supervised Wik-iSQL (WikiSQL) (Zhong et al., 2017), WikiTable-Questions (WTQ) (Pasupat and Liang, 2015), and TabFact (Chen et al., 2020). The first two are typical table-based question answering datasets, and the third one is a multiple-choice dataset that concentrates on table fact verification. WikiSQL requires filtering and aggregating information over the table content, and the WTQ demands more advanced reasoning capabilities (e.g., sorting). Tab-Fact needs to judge whether the provided statement agrees with the facts stored in a table.\nFor DB based semantic parsing (Text-to-SQL), we adopt three public datasets, i.e., Spider (Yu et al., 2018), Spider-SYN (Gan et al., 2021), and Spider-Realistic (Deng et al., 2021). Spider is a typical Text-to-SQL dataset covering 20 databases with a set of 1034 evaluation samples. Spider-SYN and Spider-Realistic are two more challenging datasets derived from Spider. Concretely, Spider-SYN manually substitutes the synonyms in natural language questions, while Spider-Realistic removes the questions in the evaluation set that explicitly mention the required columns' names." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b44" ], "table_ref": [], "text": "For KGQA, we employ Hits@1 which assesses whether the top-1 predicted answer is correct. In our approach, we focus on generating the most confident answer and then checking if the prediction hits any target. As LLMs may generate multiple answers, we also conducted a manual double-check finally (Tan et al., 2023), to judge if wrong answers are included. For TableQA, we adopt two evaluation metrics, namely denotation accuracy and accuracy. In WTQ and WikiSQL, denotation accuracy is employed to evaluate whether the predicted answer is the same as the gold answer based on set-level equivalence. In TabFact, we adopt accuracy to assess the correctness of the prediction. For Text-to-SQL, we adopt the execution accuracy (EX) to assess whether the execution results of the predicted SQL and the gold SQL are the same." 
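As a concrete illustration of the KGQA instantiation described in Section 4.4 above, the following sketch wires the two KG interfaces into the iterative invoking-linearization-generation loop. The interface names follow the paper, but the kg and call_llm objects, the exact prompt wording, and the CONTINUE convention for deciding whether another hop is needed are hypothetical simplifications rather than the authors' implementation.

```python
def kgqa_iterative_reasoning(question, topic_entity, kg, call_llm, max_hops=3):
    """Sketch of the invoking-linearization-generation loop for multi-hop KGQA.
    `kg` is assumed to expose the paper's two interfaces; `call_llm` wraps the LLM."""
    current_entities = [topic_entity]
    response = ""
    for _ in range(max_hops):
        # 1) Invoke: extract candidate one-hop relations of the current entities.
        relations = sorted({r for e in current_entities
                            for r in kg.extract_neighbor_relations(e)})
        # 2) Linearize + generate: let the LLM select the useful relations.
        prompt = (f"Here are the candidate relations: {', '.join(relations)}. "
                  f"Which relations are most relevant to answer the question: {question}?")
        selected_relations = call_llm(prompt).split(", ")
        # 3) Invoke: collect triples for the selected relations, then linearize them.
        triples = [t for e in current_entities
                   for t in kg.extract_triples(e, selected_relations)]
        linearized = "; ".join(f"({h}, {r}, {t})" for h, r, t in triples)
        # 4) Generate: either answer now or continue with the next-hop tail entities.
        prompt = (f"Based on the triples: {linearized}, please generate the answer "
                  f"for the question: {question}, or reply CONTINUE if more hops are needed.")
        response = call_llm(prompt)
        if "CONTINUE" not in response:
            return response
        current_entities = [t for _, _, t in triples]  # move to next-hop tail entities
    return response
```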
}, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b29", "b43", "b41", "b13", "b16", "b24", "b14", "b47", "b27", "b48", "b45", "b39", "b38", "b44" ], "table_ref": [], "text": "We compare our method with competitive fulldata supervised-tuning baselines tailored to these tasks. Specifically, our method is a general iterative reading-then-reasoning (IRR) framework that can be used for different LLMs. And we test our IRR with two different LLMs, i.e., Davinci-003 (textdavinci-003 (Ouyang et al., 2022b)) and Chat-GPT (i.e., gpt-3.5-turbo2 ), under zero-shot and few-shot settings3 . Considering the evolution of the closed large language model, e.g., ChatGPT, we have further conducted supplementary experiments on three datasets (i.e., WebQSP, WTQ, and Spider) using the latest August version of ChatGPT. The results are presented in Appendix A. For KGQA, we select KV-Mem (Miller et al., 2016), GragtNet (Sun et al., 2018), EmbedKGQA (Saxena et al., 2020), NSM (He et al., 2021), and UniKGQA (Jiang et al., 2023). For TableQA, we select MAPO (Liang et al., 2018), TAPAS (Herzig et al., 2020;Eisenschlos et al., 2020), UnifiedSKG (T5-3B) (Xie et al., 2022), TAPEX (Liu et al., 2022), and DATER (Ye et al., 2023). For Text-to-SQL, we select RAT-SQL+BERT Large (Wang et al., 2020), TKK-Large (Gao et al., 2022a), T5-3B+PICARD (Raffel et al., 2020), RASAT+PICARD (Qi et al., 2022), and RESDSQL-3B+NatSQL (Li et al., 2023a).\nAdditionally, we incorporate baselines that employ Davinci-003 and ChatGPT directly for achieving the aforementioned tasks in a zero-shot setting. To ensure a fair comparison, we utilize the same instruction prompt to evaluate them, ensuring that the only difference with our method is the usage of structured data. Specifically, in KGQA datasets, we follow existing work (Tan et al., 2023) that utilizes LLMs to answer the questions without using KG. In TableQA and Text-to-SQL, we feed the required information of tables with questions into LLMs (Liu et al., 2023c,a), without special treatment for the overlength problem." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [ "b13", "b16" ], "table_ref": [ "tab_3" ], "text": "We show the results on KGQA, TableQA, and Textto-SQL tasks and analyze them respectively.\nEvaluation on KGQA. Table 1 shows the results on KGQA datasets. First, LLMs can achieve performance comparable to the supervised learning model (i.e., 61.2 of ChatGPT v.s. 66.4 of GraftNet and 48.3 of Davinci-003 v.s. 46.7 of KV-Mem) on the WebQSP dataset, in a zero-shot setting without using KGs. It demonstrates that LLMs indeed grasp a certain amount of knowledge that can help them answer complex questions. However, on more difficult datasets that require multi-hop reasoning (e.g., MetaQA-2hop and MetaQA-3hop), the two LLMs perform not well. It indicates that Table 1: Results of different methods for KGQA (Hits@1 in percent). We copy the results in the first block from He et al. (2021) and Jiang et al. (2023) LLMs can not solely rely on their own knowledge to answer difficult questions, and their augmentation with KGs is necessary. In contrast, when incorporating our proposed method to access KG, the performance of Davinci-003 and ChatGPT can be both substantially improved, indicating the effectiveness of our proposed method for supporting LLMs reasoning over KG. By adding a few incontext exemplars (i.e., 15 for WQSP and 32 for MQA) to LLMs, we can further improve the model performance. 
In our approach, we devise interfaces for KG to efficiently read the relevant information, and leverage LLMs to extract useful parts and perform reasoning. We leverage the IRR procedure on devised interfaces sequentially, which can progressively capture more useful detailed evidence for finally obtaining the answer.\nEvaluation on TableQA. iteratively access and utilize the relevant information from the table, which reduces the influence of irrelevant and redundant information.\nEvaluation on Text-to-SQL. Table 3 shows the results on DB-based datasets. First, with all the information from DB (table names, column names, and foreign keys) as the prompt, the LLMs have the capability of directly generating a suitable SQL query of the question, performing well on all three datasets. Whereas, the performance of LLMs is not better than competitive full-data supervisedtuning methods, showing the difficulty of this task.\nAs our proposed method can extract relevant tables and columns, it also alleviates the influence of irrelevant information for LLMs to generate the SQL query. Simultaneously, with the assistance of 32 in-context exemplars, LLMs exhibit enhanced comprehension of the mapping between natural language questions and their corresponding SQL queries. The consistent performance improvements over the three datasets whenever in zero-shot or few-shot settings also indicate the effectiveness of our proposed method.\nCase Study. We show an example of KGQA in Figure 2, to help understand the working process of our method. Given the question, the interfaces of the structured data are sequentially invoked to iteratively extract more useful and detailed information. In each iteration, we first invoke the Ex-tract_Neighbor_Relations function to extract the neighboring relations (e.g., birthplace, residence, and education) of the topic entity \"Harper Lee\", Error Analysis. To systemically analyze the shortcomings of our approach, we first select three datasets (i.e., WebQSP, WTQ, and Spider) with different types of structured data, and randomly sample 100 error cases from each dataset. Then, we manually examine these failures and classify them into five categories:\n• Selection Error: the relevant information has not been selected by the LLM.\n• Reasoning Error: given the extracted relevant information, the LLM fails to generate the groundtruth answer or SQL.\n• Generation Format Error: the generated answer is in an abnormal format that fails to be identified by our result parser.\n• Hallucination: the generated results are inconsistent with the extracted information.\n• Other Errors: other uncategorizable errors. To answer … first look at the available columns in the table: \"District\", \"Incumbent\", ... Invoke:" }, { "figure_ref": [], "heading": "Extract_SubTable (table, [District…], [19])", "publication_ref": [], "table_ref": [], "text": "Return: [(row 1,(District,19th), …]\nLinearize: \"District, …, 2007 Result\" Linearize: \"(row 1, (District, 19th), …)\"\nLinearize: \"row 1: (District, 1st); …\"" }, { "figure_ref": [], "heading": "Generate", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Generate", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Generate", "publication_ref": [], "table_ref": [], "text": "Rows: row 19 We show the statistics in Figure 3. First, for the three datasets, the distributions of occurring errors are different. 
In WikiSQL, the frequencies of generation format, selection, and reasoning errors are relatively uniform. Whereas, in WebQSP, the selection error is the major error type (74%), since the KGQA task requires selecting the most relevant one from thousands of relations, which is not easy work. In Spider, reasoning error occurs more (62%), since the Text-to-SQL task requires LLMs to generate a SQL that can be executed to obtain the answer, which is also hard for LLMs.\nAnswer: 19th • • • • • • • • • • • • District Incumbent 2007 Result • • • 1st • • • • • • 19st charles hawkins Marty Williams John Miller … Robert Hurt ... • • • • • • • • • • • • District Incumbent 2007 Result • • • 1st • • • • • • 19st charles hawkins Marty Williams John Miller … Robert Hurt ... • • • • • • • • • • • • District Incumbent 2007 Result • • • (b) Case of TableQA (KGQA)\nAccording to the error distributions, it is promising to refine the major error cases to specifically improve the performance on each dataset. Concretely, we can devise more high-quality prompts that elicit LLMs to carefully make decisions when selecting and reasoning on KGQA and Text-to-SQL tasks, respectively. Besides, we also consider adding more interfaces and iteration turns for decomposing the hard problem into multiple simple ones, to simplify the complex reasoning task for better performance. We will try the above solutions in our future work." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b47" ], "table_ref": [], "text": "In this work, we proposed a general framework for improving the zero-shot reasoning ability of LLMs over structured data, namely StructGPT. In our approach, we first constructed the specialized interfaces that support accurate and efficient data access, and then proposed an invoking-linearizationgeneration procedure that leverages LLMs to read and perform reasoning based on the interface. By iterating the above procedure using the interfaces sequentially, LLMs can progressively capture more useful and detailed evidence and finally generate the answer. To verify the effectiveness of our approach, we implemented our approach on KG based QA, table based QA and DB based semantic parsing tasks. Experimental results on 8 datasets show that our approach can boost the zero-shot performance of LLMs by a large margin, and achieve comparable performance as full-data supervisedtuning methods. We also provide detailed error analysis to point out the weakness of our approach, for enlighting other researchers in related areas.\nAlthough StructGPT demonstrates remarkable performance across tasks over structured data, there are some limitations of our method. First, the two LLMs used in our model, i.e., ChatGPT and Davinci-003, have a strong capability of following instructions. Hence, more experiments are required to evaluate our method with in-context learning on other LLMs that perform poorly at instruction following. Similarly, we only evaluate question answering tasks based on structured data. Future work should include wider evaluation scenarios to evaluate the universality of our method, e.g., data-to-text and formal-language-to-text (Xie et al., 2022). Finally, since it is difficult to control the answer format during the generation process of LLMs in different datasets, there are several format errors in generated texts as shown in Section 5. Therefore, the performance of our method can be further improved by meticulously designing the prompt and answer parsing for different datasets. 
" }, { "figure_ref": [], "heading": "B Case Study", "publication_ref": [], "table_ref": [], "text": "Here, we select one representative example for each type of structured data and present the case study in Figure 4. For KG, we first invoke the Ex-tract_Neighbor_Relations function to extract the neighboring relations (e.g., birthplace, residence, and education) of the topic entity \"Harper Lee\", then linearize them and compose the input prompt.\nIn the prompt, we utilize the instruction (i.e., provide only one relevant relation that's present in the candidate) to elicit the LLM to generate the most relevant relation, i.e., education. Based on the selected relation, we further invoke the Ex-tract_Triples function to extract the triples with the relation to the topic entity. After linearization, another instruction (i.e., you just need to provide only one answer entity), is adopted for guiding the LLM to generate the final answer, i.e., Monroe County High School.\nFor table, we first invoke the Ex-tract_Column_Name function to extract the column names from the table for linearization, and then design the prompt (i.e., which columns are most relevant to answering the question?) for the LLM to select the useful columns, i.e., District and Incumbent. Then, by using the Extract_Columns and Extract_SubTable functions and proper instructions, we elicit the LLM to select the useful row indices (i.e., item 8) and finally generate the answer (i.e., 19th).\nFor database, we also first invoke the Ex-tract_Table&Column_Name to extract all the table names and column names, linearize them and utilize the instruction (i.e., which tables do you need to complete the SQLite SQL query?) to prompt the LLM. Then, based on the selected tables (i.e., Dogs and Breeds), we further invoke the Extract_Tables_Information function and prompt the LLM via an instruction (i.e., complete sqlite SQL query only with no explanation) to generate the SQL for the question, which can be executed to obtain the final answer." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was partially supported by National Natural Science Foundation of China under Grant No. 62222215, Beijing Natural Science Foundation under Grant No. 4222027 and L233008. And this work is also partially supported by the Outstanding Innovative Talents Cultivation Funded Programs 2022 of Renmin University of China. Xin Zhao is the corresponding author." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "To answer … first look at the available columns in the table: \"District\", \"Incumbent\", ... Which columns are most relevant to answering the question? …\nColumns: District, Incumbent.\nTo answer … Below is the list of rows …. Invoke:" }, { "figure_ref": [], "heading": "Extract_SubTable (table, [District…], [19])", "publication_ref": [], "table_ref": [], "text": "Return: [(row 1,(District,19th), …]\nLinearize: \"District, …, 2007 Result\" Linearize: \"(row 1, (District, 19th), …)\"\nLinearize: \"row 1: (District, 1st); …\"" }, { "figure_ref": [], "heading": "Generate", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Generate", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Generate", "publication_ref": [], "table_ref": [], "text": "Rows: row 19 " } ]
In this paper, we aim to improve the reasoning ability of large language models (LLMs) over structured data in a unified way. Inspired by the studies on tool augmentation for LLMs, we develop an Iterative Reading-then-Reasoning (IRR) framework to solve question answering tasks based on structured data, called StructGPT. In this framework, we construct the specialized interfaces to collect relevant evidence from structured data (i.e., reading), and let LLMs concentrate on the reasoning task based on the collected information (i.e., reasoning). Specially, we propose an invokinglinearization-generation procedure to support LLMs in reasoning on the structured data with the help of the interfaces. By iterating this procedure with provided interfaces, our approach can gradually approach the target answers to a given query. Experiments conducted on three types of structured data show that StructGPT greatly improves the performance of LLMs, under the few-shot and zero-shot settings. Our codes and data are publicly available at https://github.com/RUCAIBox/StructGPT.
StructGPT: A General Framework for Large Language Model to Reason over Structured Data
[ { "figure_caption": "Figure1: The overview of the proposed iterative reading-then-reasoning approach. We design specialized interfaces for reading structured data, and iterate the invoking-linearization-generation procedure to utilize LLMs for performing reasoning on the interfaces, until deriving the final answer or executable SQL.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Question:What is the name of the breed with the most dogs? Invoke: Extract_Table&Column_Name (database) Dogs | Breeds ### Complete sqlite SQL query only and …. # Breeds(*, breed_code, breed_name); … ### What is the name of the breed with the most dogs? SELECT Breeds.breed_name FROM Dogs JOIN … ### Here are the SqliteSQL tables … # Breeds(*, breed_code, breed_name); … ### What is the name of the breed with the most dogs? Which tables do you need to complete the SQLite Case of Spider (Text-to-SQL) Dogs.breed_code → Breed.breed_code", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Generate\"(AnswerHarper Lee, education, CVT_0); … \" Extract_Triples (Harper Lee, [education]) The candidate relations: education … birthplace. The question is … Provide only one relevant relation that's present in the candidates … The relevant relation: education. The triples are: (Harper Lee, education, CVT_0) ... Based on these triples … give me the final answer entity. You just need to provide only one answer entity. If you think … In what district was the incumbent charles hawkins? Invoke: Extract_Column_Name (table)", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Which columns are most relevant to answering the question? … Columns: District, Incumbent. To answer … Below is the list of rows …. row 1: (District, 1st); (Incumbent, Marty Williams) … (2007 Result … ) which rows should be considered? … The table contains: row 1: (District, 19th); (Incumbent, Charles Hawkins). Using this information, In what district was the incumbent Charles Hawkins? … Resturn: [Didtrict, …, 2007 Result] Return: [(row 1, (District, 19th)…)…]", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :Figure 3 :23Figure 2: Case study of our method on WebQSP.", "figure_data": "", "figure_id": "fig_4", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": ". The best results of each block are highlighted in bold.", "figure_data": "MethodsWQSPMQA 1hopMQA 2hopMQA 3hopKV-Mem46.796.282.748.9GraftNet66.497.094.877.7EmbedKGQA66.697.598.894.8NSM68.797.199.998.9UniKGQA75.197.599.099.1Davinci-00348.352.125.342.5+ IRR (ours)71.994.459.570.2+ IRR (ours, few-shot) 71.097.193.575.3ChatGPT61.261.931.043.2+ IRR (ours)72.694.293.980.2+ IRR (ours, few-shot) 69.697.197.387.0", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Table2shows the results on three TableQA datasets. First, with the full table as the prompt, ChatGPT can also achieve comparable performance on WTQ and TabFact as full-data supervised-tuning methods, but performs not well on more difficult WikiSQL datasets. It also indicates that LLMs have the capability of understanding the knowledge within table data to some extent. Second, our proposed method can consistently improve the performance of two LLMs a lot in both three datasets. 
At the same time, when adding 32 in-context exemplars to the LLMs, they can obtain further performance improvements. It indicates the effectiveness of our proposed method in helping LLMs reasoning over Table.Our approach provides a more effective way for LLMs to", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results of different methods for TableQA (denotation accuracy for WTQ and WikiSQL, accuracy for TabFact). We copy the results of TAPAS on TabFact from Eisenschlos et al. (2020), and others in the first block from their original papers. The best results of each block are highlighted in bold.", "figure_data": "MethodsWTQ WikiSQL TabFactMAPO43.872.6-TAPAS48.883.681.0UnifiedSKG (T5-3B)49.386.083.7TAPEX57.589.584.2DATER65.9-93.0Davinci-00334.849.180.7+ IRR (ours)39.251.876.5+ IRR (ours, few-shot) 57.064.687.3ChatGPT43.351.682.9+ IRR (ours)48.454.487.1+ IRR (ours, few-shot) 52.265.687.6", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance comparison of different methods for Text-to-SQL (execution accuracy in percent). We copy the results of RAT-SQL+BERT Large and TKK-Large fromDeng et al. (2021) andGao et al. (2022a), respectively. And we copy the results of the other three methods in the first block fromLiu et al. (2023b). The best results of each block are highlighted in bold.then linearize them and compose the input prompt. Here, we utilize the instruction (i.e., provide only one relevant relation that's present in the candidate) to elicit the LLM to generate the most relevant relation, i.e., education. Based on the selected relation, we further invoke the Extract_Triples function to extract the triples with the relation to the topic entity. After linearization, another instruction (i.e., you just need to provide only one answer entity), is adopted for guiding the LLM to generate the final answer, i.e., Monroe County High School. Besides, we show the representative examples of TableQA and Text-to-SQL in Appendix B.", "figure_data": "MethodsSpiderSpider-SYNSpider-RealisticRAT-SQL + BERTLarge72.3-62.1TKK-Large73.260.564.4T5-3B + PICARD79.369.871.4RASAT + PICARD80.570.771.9RESDSQL-3B + NatSQL84.176.981.9Davinci-00368.860.163.2+ IRR (ours)69.560.364.2+ IRR (ours, few-shot)72.763.270.7ChatGPT70.158.663.4+ IRR (ours)74.862.070.3+ IRR (ours, few-shot)77.864.072.0", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results of different version of ChatGPT forWebQSP, WTQ, and Spider.We have noted that the ChatGPT is continuously evolving. Furthermore, we have conducted supplementary experiments on three datasets using the latest August version of LLM. The results are presented in the Table4. It is noteworthy that Chat-GPT indeed continuously evolves, as evidenced by its distinct performance compared to that of the June version. Although the evolved ChatGPT underperforms compared to the June version on the WTQ dataset, our approach can consistently further enhances the ChatGPT performance with the evolved version on all three tasks. It indicates the robustness of our proposed method.", "figure_data": "MethodsWebQSP WTQ SpiderChatGPT (June)61.243.370.1ChatGPT (June) + IRR72.648.474.8ChatGPT (August)62.141.175.2ChatGPT (August) + IRR75.350.477.1A Experiment With Latest Version ofLLM", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" } ]
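For the Text-to-SQL setting evaluated in Table 3, the two database interfaces can be approximated with SQLite schema introspection, as sketched below. The use of sqlite3 PRAGMA queries, the schema-comment layout, and the exact prompt wording are assumptions for illustration, not necessarily how the authors implemented the interfaces.

```python
import sqlite3

def extract_table_and_column_names(conn: sqlite3.Connection) -> dict:
    """Extract_Table&Column_Name(D): table names mapped to their column names."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    return {t: [c[1] for c in conn.execute(f"PRAGMA table_info({t})")] for t in tables}

def extract_tables_information(conn: sqlite3.Connection, tables: list) -> str:
    """Extract_Tables_Information({T}): columns and foreign keys of the selected
    tables, linearized in a schema-comment style similar to the paper's example."""
    lines = []
    for t in tables:
        cols = ", ".join(c[1] for c in conn.execute(f"PRAGMA table_info({t})"))
        lines.append(f"# {t}(*, {cols})")
        for fk in conn.execute(f"PRAGMA foreign_key_list({t})"):
            # foreign_key_list row: (id, seq, ref_table, from_col, to_col, ...)
            lines.append(f"# {t}.{fk[3]} -> {fk[2]}.{fk[4]}")
    return "\n".join(lines)

def build_text_to_sql_prompt(schema_text: str, question: str) -> str:
    """Second prompt pattern: generate SQL from the linearized schema and question."""
    return ("### Complete sqlite SQL query only and with no explanation.\n"
            f"{schema_text}\n"
            f"### {question}")
```

In this sketch the LLM would first be shown the output of extract_table_and_column_names to pick the relevant tables, and then receive the prompt built from extract_tables_information, mirroring the two-step procedure in Section 4.4.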
Jinhao Jiang; Kun Zhou; Zican Dong; Keming Ye; Wayne Xin Zhao; Ji-Rong Wen
[ { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "Wenhu Chen", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Large language models are few(1)-shot table reasoners", "year": "2023-05-02" }, { "authors": "Wenhu Chen; Hongmin Wang; Jianshu Chen; Yunkai Zhang; Hong Wang; Shiyang Li; Xiyou Zhou; William Yang; Wang ", "journal": "", "ref_id": "b2", "title": "Tabfact: A large-scale dataset for table-based fact verification", "year": "2020-04-26" }, { "authors": "Yi Chen; Wei Wang; Ziyang Liu; Xuemin Lin", "journal": "ACM", "ref_id": "b3", "title": "Keyword search on structured and semi-structured data", "year": "2009-06-29" }, { "authors": "Zhikai Chen; Haitao Mao; Hang Li; Wei Jin; Hongzhi Wen; Xiaochi Wei; Shuaiqiang Wang; Dawei Yin; Wenqi Fan; Hui Liu; Jiliang Tang", "journal": "", "ref_id": "b4", "title": "Exploring the potential of large language models (llms) in learning on graphs", "year": "2023" }, { "authors": "Zhoujun Cheng; Tianbao Xie; Peng Shi; Chengzu Li; Rahul Nadkarni; Yushi Hu; Caiming Xiong; Dragomir Radev; Mari Ostendorf; Luke Zettlemoyer; Noah A Smith; Tao Yu", "journal": "CoRR", "ref_id": "b5", "title": "Binding language models in symbolic languages", "year": "2022" }, { "authors": "Xiang Deng; Ahmed Hassan Awadallah; Christopher Meek; Oleksandr Polozov; Huan Sun; Matthew Richardson", "journal": "", "ref_id": "b6", "title": "Structure-grounded pretraining for text-to-sql", "year": "2021-06-06" }, { "authors": "Julian Martin Eisenschlos; Syrine Krichene; Thomas Müller", "journal": "", "ref_id": "b7", "title": "Understanding tables with intermediate pre-training", "year": "2020-11" }, { "authors": "Denis Emelin; Daniele Bonadiman; Sawsan Alqahtani; Yi Zhang; Saab Mansour", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Injecting domain knowledge in language models for task-oriented dialogue systems", "year": "2022-12-07" }, { "authors": "Yujian Gan; Xinyun Chen; Qiuping Huang; Matthew Purver; John R Woodward; Jinxia Xie; Pengsheng Huang", "journal": "", "ref_id": "b9", "title": "Towards robustness of text-tosql models against synonym substitution", "year": "2021-08-01" }, { "authors": "Chang Gao; Bowen Li; Wenxuan Zhang; Wai Lam; Binhua Li; Fei Huang; Luo Si; Yongbin Li", "journal": "", "ref_id": "b10", "title": "Towards generalizable and robust text-to-sql parsing", "year": "2022" }, { "authors": "Luyu Gao; Aman Madaan; Shuyan Zhou; Uri Alon; Pengfei Liu; Yiming Yang; Jamie Callan; Graham Neubig", "journal": "CoRR", "ref_id": "b11", "title": "PAL: program-aided language models", "year": "2022" }, { "authors": "Yu Gu; Xiang Deng; Yu Su", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Don't generate, discriminate: A proposal for grounding language models to real-world environments", "year": "2023-07-09" }, { "authors": "Gaole He; Yunshi Lan; Jing Jiang; Wayne Xin Zhao; Ji-Rong Wen", "journal": "ACM", "ref_id": "b13", 
"title": "Improving multi-hop knowledge base question answering by learning intermediate supervision signals", "year": "2021-03-08" }, { "authors": "Jonathan Herzig; Krzysztof Pawel; Thomas Nowak; Francesco Müller; Julian Piccinno; Eisenschlos Martin", "journal": "", "ref_id": "b14", "title": "Tapas: Weakly supervised table parsing via pre-training", "year": "2020-07-05" }, { "authors": "Gautier Izacard; S H Patrick; Maria Lewis; Lucas Lomeli; Fabio Hosseini; Timo Petroni; Jane Schick; Armand Dwivedi-Yu; Sebastian Joulin; Edouard Riedel; Grave", "journal": "", "ref_id": "b15", "title": "Few-shot learning with retrieval augmented language models", "year": "2022" }, { "authors": "Jinhao Jiang; Kun Zhou; Xin Zhao; Ji-Rong Wen", "journal": "", "ref_id": "b16", "title": "Unikgqa: Unified retrieval and reasoning for solving multi-hop question answering over knowledge graph", "year": "2023-05-01" }, { "authors": "Daniel Khashabi; Sewon Min; Tushar Khot; Ashish Sabharwal; Oyvind Tafjord; Peter Clark; Hannaneh Hajishirzi", "journal": "", "ref_id": "b17", "title": "Unifiedqa: Crossing format boundaries with a single QA system", "year": "2020-11" }, { "authors": "Yunshi Lan; Gaole He; Jinhao Jiang; Jing Jiang; Wayne Xin Zhao; Ji-Rong Wen", "journal": "", "ref_id": "b18", "title": "A survey on complex knowledge base question answering: Methods, challenges and solutions", "year": "2021-08" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b19", "title": "BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020-07-05" }, { "authors": "Belinda Z Li; Sewon Min; Srinivasan Iyer; Yashar Mehdad; Wen-Tau Yih", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Efficient one-pass end-to-end entity linking for questions", "year": "2020-11-16" }, { "authors": "Haoyang Li; Jing Zhang; Cuiping Li; Hong Chen", "journal": "", "ref_id": "b21", "title": "Decoupling the skeleton parsing and schema linking for text-to-sql", "year": "2023" }, { "authors": "Junyi Li; Xiaoxue Cheng; Wayne Xin Zhao; Jian-Yun Nie; Ji-Rong Wen", "journal": "", "ref_id": "b22", "title": "Halueval: A largescale hallucination evaluation benchmark for large language models", "year": "2023" }, { "authors": "Tianle Li; Xueguang Ma; Alex Zhuang; Yu Gu; Yu Su; Wenhu Chen", "journal": "CoRR", "ref_id": "b23", "title": "Few-shot in-context learning for knowledge base question answering", "year": "2023" }, { "authors": "Chen Liang; Mohammad Norouzi; Jonathan Berant; Quoc V Le; Ni Lao", "journal": "", "ref_id": "b24", "title": "Memory augmented policy optimization for program synthesis and semantic parsing", "year": "2018-12-03" }, { "authors": "Aiwei Liu; Xuming Hu; Lijie Wen; Philip S Yu", "journal": "", "ref_id": "b25", "title": "a. 
A comprehensive evaluation of chatgpt's zeroshot text-to-sql capability", "year": "2023" }, { "authors": "Aiwei Liu; Xuming Hu; Lijie Wen; Philip S Yu", "journal": "", "ref_id": "b26", "title": "A comprehensive evaluation of chatgpt's zero-shot text-to-sql capability", "year": "2023" }, { "authors": "Qian Liu; Bei Chen; Jiaqi Guo; Morteza Ziyadi; Zeqi Lin; Weizhu Chen; Jian-Guang Lou", "journal": "", "ref_id": "b27", "title": "TAPEX: table pre-training via learning a neural SQL executor", "year": "2022-04-25" }, { "authors": "Qian Liu; Fan Zhou; Zhengbao Jiang; Longxu Dou; Min Lin", "journal": "", "ref_id": "b28", "title": "From zero to hero: Examining the power of symbolic tasks in instruction tuning", "year": "2023" }, { "authors": "Alexander H Miller; Adam Fisch; Jesse Dodge; Amir-Hossein; Antoine Karimi; Jason Bordes; Weston", "journal": "", "ref_id": "b29", "title": "Key-value memory networks for directly reading documents", "year": "2016-11-01" }, { "authors": "Fedor Moiseev; Zhe Dong; Enrique Alfonseca; Martin Jaggi", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "SKILL: structured knowledge infusion for large language models", "year": "2022-07-10" }, { "authors": "Reiichiro Nakano; Jacob Hilton; Suchir Balaji; Jeff Wu; Long Ouyang; Christina Kim; Christopher Hesse; Shantanu Jain; Vineet Kosaraju; William Saunders; Xu Jiang; Karl Cobbe; Tyna Eloundou; Gretchen Krueger; Kevin Button; Matthew Knight; Benjamin Chess; John Schulman", "journal": "CoRR", "ref_id": "b31", "title": "Webgpt: Browserassisted question-answering with human feedback", "year": "2021" }, { "authors": "Barlas Oguz; Xilun Chen; Vladimir Karpukhin; Stan Peshterliev; Dmytro Okhonko; Michael Sejr Schlichtkrull; Sonal Gupta; Yashar Mehdad; Scott Yih", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Unik-qa: Unified representations of structured and unstructured knowledge for opendomain question answering", "year": "2022-07-10" }, { "authors": " Openai", "journal": "", "ref_id": "b33", "title": "GPT-4 technical report", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul F Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b34", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul F Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b35", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Panupong Pasupat; Percy Liang", "journal": "", "ref_id": "b36", "title": "Compositional semantic parsing on semi-structured tables", "year": "2015-07-26" }, { "authors": "Baolin Peng; Michel Galley; Pengcheng He; Hao Cheng; Yujia Xie; Yu Hu; Qiuyuan Huang; Lars Liden; Zhou Yu; Weizhu Chen; Jianfeng Gao", "journal": "", "ref_id": "b37", "title": "Check your facts and try again: Improving large language models with external knowledge and automated feedback", "year": "2023" }, { "authors": "Jiexing Qi; Jingyao Tang; Ziwei He; Xiangpeng Wan; Chenghu Zhou; 
Xinbing Wang; Quanshi Zhang; Zhouhan Lin", "journal": "", "ref_id": "b38", "title": "Rasat: Integrating relational structures into pretrained seq2seq model for text-tosql", "year": "2022" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b39", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Nitarshan Rajkumar; Raymond Li; Dzmitry Bahdanau", "journal": "CoRR", "ref_id": "b40", "title": "Evaluating the text-to-sql capabilities of large language models", "year": "2022" }, { "authors": "Apoorv Saxena; Aditay Tripathi; Partha P Talukdar", "journal": "", "ref_id": "b41", "title": "Improving multi-hop question answering over knowledge graphs using knowledge base embeddings", "year": "2020-07-05" }, { "authors": "Timo Schick; Jane Dwivedi-Yu; Roberto Dessì; Roberta Raileanu; Maria Lomeli; Luke Zettlemoyer; Nicola Cancedda; Thomas Scialom", "journal": "CoRR", "ref_id": "b42", "title": "Toolformer: Language models can teach themselves to use tools", "year": "2023" }, { "authors": "Haitian Sun; Bhuwan Dhingra; Manzil Zaheer; Kathryn Mazaitis; Ruslan Salakhutdinov; William W Cohen", "journal": "", "ref_id": "b43", "title": "Open domain question answering using early fusion of knowledge bases and text", "year": "2018-10-31" }, { "authors": "Yiming Tan; Dehai Min; Yu Li; Wenbo Li; Nan Hu; Yongrui Chen; Guilin Qi", "journal": "", "ref_id": "b44", "title": "Evaluation of chatgpt as a question answering system for answering complex questions", "year": "2023" }, { "authors": "Bailin Wang; Richard Shin; Xiaodong Liu; Oleksandr Polozov; Matthew Richardson", "journal": "", "ref_id": "b45", "title": "Rat-sql: Relation-aware schema encoding and linking for textto-sql parsers", "year": "2020" }, { "authors": "Xiaokai Wei; Shen Wang; Dejiao Zhang; Parminder Bhatia; Andrew O Arnold", "journal": "", "ref_id": "b46", "title": "Knowledge enhanced pretrained language models: A compreshensive survey", "year": "2021" }, { "authors": "Tianbao Xie; Chen Henry Wu; Peng Shi; Ruiqi Zhong; Torsten Scholak; Michihiro Yasunaga; Chien-Sheng Wu; Ming Zhong; Pengcheng Yin; I Sida; Victor Wang; Bailin Zhong; Chengzu Wang; Connor Li; Ansong Boyle; Ziyu Ni; Dragomir Yao; Caiming Radev; Lingpeng Xiong; Rui Kong; Noah A Zhang; Luke Smith; Tao Zettlemoyer; Yu", "journal": "", "ref_id": "b47", "title": "Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models", "year": "2022-12-07" }, { "authors": "Yunhu Ye; Binyuan Hui; Min Yang; Binhua Li; Fei Huang; Yongbin Li", "journal": "CoRR", "ref_id": "b48", "title": "Large language models are versatile decomposers: Decompose evidence and questions for table-based reasoning", "year": "2023" }, { "authors": "Wen-Tau Yih; Matthew Richardson; Christopher Meek; Ming-Wei Chang; Jina Suh", "journal": "The Association for Computer Linguistics", "ref_id": "b49", "title": "The value of semantic parse labeling for knowledge base question answering", "year": "2016-08-07" }, { "authors": "Tao Yu; Rui Zhang; Kai Yang; Michihiro Yasunaga; Dongxu Wang; Zifan Li; James Ma; Irene Li; Qingning Yao; Shanelle Roman; Zilin Zhang; Dragomir R Radev", "journal": "", "ref_id": "b50", "title": "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task", "year": "2018-10-31" }, { "authors": 
"Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona T Diab; Xian Li; Xi Victoria Lin; Todor Mihaylov; Myle Ott; Sam Shleifer; Kurt Shuster; Daniel Simig; Punit Singh Koura; Anjali Sridhar; Tianlu Wang; Luke Zettlemoyer", "journal": "", "ref_id": "b51", "title": "OPT: open pre-trained transformer language models", "year": "2022" }, { "authors": "Yuyu Zhang; Hanjun Dai; Zornitsa Kozareva; Alexander J Smola; Le Song", "journal": "", "ref_id": "b52", "title": "Variational reasoning for question answering with knowledge graph", "year": "2018-02-02" }, { "authors": "Kun Wayne Xin Zhao; Junyi Zhou; Tianyi Li; Xiaolei Tang; Yupeng Wang; Yingqian Hou; Beichen Min; Junjie Zhang; Zican Zhang; Yifan Dong; Chen Du; Yushuo Yang; Zhipeng Chen; Jinhao Chen; Ruiyang Jiang; Yifan Ren; Xinyu Li; Zikang Tang; Peiyu Liu; Jian-Yun Liu; Ji-Rong Nie; Wen", "journal": "CoRR", "ref_id": "b53", "title": "A survey of large language models", "year": "2023" }, { "authors": "Victor Zhong; Caiming Xiong; Richard Socher", "journal": "CoRR", "ref_id": "b54", "title": "Seq2sql: Generating structured queries from natural language using reinforcement learning", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 70.87, 463.73, 217.77, 33.53 ], "formula_id": "formula_0", "formula_text": "(k) i , c (h) j )}, where c (k) i and c (h) j" }, { "formula_coordinates": [ 9, 123.78, 309.27, 385.14, 232.98 ], "formula_id": "formula_1", "formula_text": "Answer: 19th • • • • • • • • • • • • District Incumbent 2007 Result • • • 1st • • • • • • 19st charles hawkins Marty Williams John Miller … Robert Hurt ... • • • • • • • • • • • • District Incumbent 2007 Result • • • 1st • • • • • • 19st charles hawkins Marty Williams John Miller … Robert Hurt ... • • • • • • • • • • • • District Incumbent 2007 Result • • • (b) Case of TableQA (KGQA)" } ]
2023-08-04
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b50", "b72", "b19", "b36", "b52", "b51", "b60", "b32", "b52", "b49", "b0" ], "table_ref": [], "text": "What can you do in Figure 1? This single RGB image conveys a rich, interactive 3D world where you can interact with many objects. For instance, if you grab the chair with two hands, you can move it as a rigid object; the pillow can be picked up freely and squished; and door can be moved, but only rotated. This ability to recognize and interpret potential affordances in scenes helps humans plan our interactions and more quickly learn to interact with objects. The goal of this work is to give the same ability to computers.\nObtaining such an understanding of potential interac-tions from a single 3D image is beyond the current state of the art in scene understanding because it spans multiple disparate subfields of computer vision. For instance, single image 3D has made substantial progress [54,51,73,20], but primarily focuses on the scene as it exists, as opposed to as it could be. There has been an increasing interest in understanding articulation [37,70,53], but these works primarily focus on articulation as it occurs in a 3D model or carefully collected demonstrations, instead of as it could occur. Finally, while there is long-standing work on enabling robots to learn interaction and potential interaction points [52,61], these works focus primarily on evaluation in primarily the same environment (e.g. the lab) and do not focus on applying the understanding in entirely new environments.\nWe propose to bootstrap this interactive understanding by developing (1) a problem formulation, (2) a rich dataset of annotations on challenging images, and (3) a transformer-based approach. We frame the problem of recognizing the articulation as a prediction-at-a-query-location problem: given an image and 2D location, our method aims to answer \"what can I do here?\" in the style of classic pointand-click games like Myst. We frame \"what can I do here\" via a set of common questions: whether the object can be moved, its extent when moved and location in 3D, rigidity, whether there are constrains on its motion, as well as estimates of how one would interact the object. To maximize the potential for downstream transfer, our questions are chosen to be generic rather than specific to particular hands or end-effectors: knowing where to act or the degrees of freedom of an object may accelerate reinforcement learning even if one must still learn end-effector-specific skills.\nIn order to tackle the task, we introduce a transformerbased model. Our approach, described in Section 5 builds on a detection backbone such as Segment-Anything [33] in order to build on the advances and expertise of object detection. We extend the backbone with additional heads that predict each of our \"what I can I do here\" tasks, and which can be trained end-to-end. As an advantage of our formulation, we can train the system on sparse annotations; we believe this will be helpful for eventually converting our direct supervision to supervision via video.\nPowering our approach is a new dataset, described in Section 4, which we name the 3D Object Interaction dataset (3DOI). In order to maximize the likelihood of generalizing to new environments, the underlying data comes from diverse sources, namely Internet and egocentric videos as well as 3D renderings of scene layouts. 
We provide annotations of our tasks on this data and, due to the source of the data, we also naturally obtain 3D supervision in the form of depth and normals. In total, the dataset has over 50K objects across 10K images, as well as over 31K annotations of non-interactable objects (e.g., floor, wall).\nOur experiments in Section 6 test how well our approach recognizes potential interaction, testing on both unseen data in 3DOI as well as robotics data. We compare with a number of alternatives, including generalizing from data of demonstrations [53,50] and synthetic data [70], as well as alternate network designs. Our approach outperforms these models and shows strong generalization to the robotics dataset WHIRL [1].\nTo summarize, we see our primary contributions as: (1) the novel task of detecting 3D object interactions from a single RGB image; (2) the 3D Object Interaction dataset, which is the first large-scale dataset containing objects that can be interacted with and their corresponding locations, affordance and physical properties; (3) a transformer-based model to tackle this problem, which has strong performance on the 3DOI dataset and robotics data." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b4", "b18", "b57", "b47", "b48", "b28", "b36", "b65", "b52", "b23", "b20", "b49", "b44", "b35", "b29", "b52", "b23", "b20", "b28", "b61", "b25", "b33", "b55", "b31", "b66", "b6", "b32", "b6", "b32", "b32", "b72", "b5", "b14", "b63", "b15", "b42", "b41", "b30", "b8", "b45", "b19", "b50", "b60", "b51", "b9", "b11", "b68", "b22", "b70", "b71", "b64", "b0" ], "table_ref": [], "text": "Our paper proposes to extract 3D object interaction from a single image. This problem lies at the intersection of 3D vision, object detection, human-object interaction and scene understanding. It is also closely related to downstream robotics applications. Interactive scene understanding. Recently, the computer vision community has become increasingly interested in understanding the 3D dynamics of objects. It is motivated by human-object interaction [5,19,58], although humans do not need to be present in our setting. Researchers try to understand the 3D shapes, axes, movable parts and affordance on synthetic data [48,70,49,29,68,37,66], videos [53,24,21,50,45,36] or point clouds [30,28]. Our work is mainly related to [53,24,21] since they work on real images, but differs from them in two aspects: (1) they need video or multi-view inputs, but our input is only a single image; (2) their approaches recover objects which are being interacted with, while our approach understands potential interactions before any interactions happen. Finally, OPD [29,62] tackles a similar problem for articulated objects, but ours also works for non-articulated objects. Object detection. Anchor-based object detection pipelines largely follow Mask R-CNN [26,34,56,32]. With the development of transformer-based models, DETR [4], AnchorDETR [67] and MaskFormer [7] approach object detection as a direct set prediction problem. Recently, Kirillov et al. propose the Segment Anything Model [33], which predicts object masks from input prompts such as points or boxes. Our network needs to be built on decoder-based backbones [4, 7,33], and we choose SAM [33] due to its state-of-the-art performance. Single image 3D. Since our problem requires us to recover 3D object interaction instead of 2D from a single image, it is also related to single image 3D. 
In recent years, researchers have developed many different approaches to recover 3D from a single image, including depth [73,54,39,6,15], surface normals [64,16], 3D planes [43,42,31] and shapes [9,46,20,51]. Our work builds upon these works. In particular, our architecture is motivated by DPT [54], which trains a ViT for both segmentation and depth estimation. Robotics manipulation. Manipulation of objects is a long-term goal of robotics. Researchers have developed various solutions for different kinds of objects in different scenes, ranging from articulated objects [61,52,10,12,69,23] to deformable objects [71,72,65,8]. While manipulation is not the goal of our paper, understanding objects and the environment in 3D is typically an important part of a manipulation pipeline. Our paper mainly improves the perception part, which can potentially improve manipulation. Therefore, we also test our approach on robotics data [1], to show it can generalize." }, { "figure_ref": [], "heading": "Overview", "publication_ref": [ "b52", "b13" ], "table_ref": [], "text": "Given a single image, our goal is to be able to answer \"What could I do here?\" with the object at a query point. We introduce annotations in Section 4 as well as a method for the task in Section 5. Before we do so, we present a unified explanation for the questions we answer as well as the rationale for choosing these questions. We group our questions into six property types, some of which are further subdivided. Not all objects support all questions: objects that cannot be moved, for instance, do not have other properties, and objects that can be freely moved do not have rotation axes. We further note that some objects defy these properties -ball joints, for example, permit a 2D subspace of motion -our goal is to identify a large subspace of potential interactions.\nFigure 2. Example annotations of our 3DOI dataset. Our images come from Internet videos [53], egocentric videos [11] and renderings of a 3D dataset [14]. is the query point, and ▼ is the affordance." }, { "figure_ref": [], "heading": "Movable", "publication_ref": [ "b59", "b57" ], "table_ref": [], "text": "The most important subdivision is whether the object at the query point can be moved. This follows work in both 3D scene understanding [60] and human-object interaction [58] that subdivides objects by how movable they are. We group objects into three categories based on how easily the object can be moved: (1) fixtures which effectively cannot be moved, such as walls and floors; (2) one hand objects that can be moved with a single hand, such as a water bottle or cabinet door; (3) two hand objects that require two hands to move, such as a large TV. We frame the task as three-way classification." }, { "figure_ref": [], "heading": "Localization", "publication_ref": [ "b25", "b72" ], "table_ref": [], "text": "Understanding the extent of an object is important, and so we localize the object in the world. Since our objects consist of a wide variety of categories, we frame localization as 2D instance segmentation as in [26,4], as well as a depth map to localize the object in 3D [54,73]. These properties can be estimated for most objects." }, { "figure_ref": [], "heading": "Rigidity", "publication_ref": [ "b37" ], "table_ref": [], "text": "To understand action, one primary distinction is rigid-vs-non-rigid, since rigid objects are subject to substantially simpler rules of motion [38]. We therefore classify whether the object is rigid or not."
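To make the six property types described in this overview concrete, here is an illustrative sketch of the per-query-point record they imply; the field and enum names are ours, not the released 3DOI schema, and the articulation, action and affordance fields correspond to the property types detailed in the following sections.

```python
# Illustrative (not official) representation of one "what can I do here?" annotation.
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

import numpy as np


class Movable(Enum):
    FIXTURE = 0       # effectively cannot be moved (wall, floor)
    ONE_HAND = 1      # movable with a single hand
    TWO_HANDS = 2     # requires two hands (e.g., a large TV)


class Articulation(Enum):
    FREEFORM = 0      # no motion constraint
    ROTATION = 1      # revolute motion about an axis
    TRANSLATION = 2   # prismatic motion


class Action(Enum):
    PULL = 0
    PUSH = 1
    OTHER = 2


@dataclass
class QueryPointAnnotation:
    query_point: Tuple[float, float]                      # 2D pixel where the question is asked
    movable: Movable
    mask: np.ndarray                                      # instance mask of the movable part
    rigid: Optional[bool] = None                          # undefined for fixtures
    articulation: Optional[Articulation] = None           # rigid objects only
    rotation_axis: Optional[Tuple[float, float]] = None   # (theta, r) 2D line, rotation only
    action: Optional[Action] = None
    affordance: Optional[Tuple[float, float]] = None      # where one would interact (e.g., handle)
```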
}, { "figure_ref": [], "heading": "Articulation", "publication_ref": [ "b60", "b52" ], "table_ref": [], "text": "Most rigid objects can further decomposed as permitting freeform, rotational / revolute, or translation / prismatic motion [61]. Each of these requires different endeffector interactions to effectively interact with. We frame the articulation category as a three-way classification problem, and recognizing the rotation axis as a line prediction problem following [53]." }, { "figure_ref": [], "heading": "Action", "publication_ref": [], "table_ref": [], "text": "We also want to understand what the potential action could be to interact with the object. Here we focus on three types of actions: pull, push or other.\nAffordance Finally, we want to know where we should interact with the object. For example, we need to manipulate the handle if we want to open a door. We predict a probability map which is over the location of the affordance." }, { "figure_ref": [], "heading": "3D Object Interaction Dataset", "publication_ref": [ "b52", "b73", "b13", "b73" ], "table_ref": [], "text": "One critical component of our contribution is accurate annotations of object interactions, as there is no publicly available data. In this paper, we introduces 3D Object Interaction dataset (3DOI), which is the first dataset. We picked data that can can be easily integrated with 3D, including a 3D dataset, so that we have accurate 3D ground truth to train our approach. Examples of our data are shown in Figure 2.\nImages. Our goal is to pick up diverse images representing real-world scenarios. In particular, we want our images contain a lot of everyday objects we can interact with. Therefore, we sample 10K images from a collection of publicly available datasets: (1) Articulation [53] comes from thirdperson Creative Commons Internet videos. Typically, a video clip contains humans manipulating an articulated objects in households. We randomly sample 3K images from the articulation dataset; (2) EpicKitchen [11] contains egocentric videos making foods in kitchen environments. We sample 2K images from EpicKitchen; (3) Taskonomy [74] is an indoor 3D dataset with real 2D image and corresponding 3D ground truth. We use the renderings by Omnidata [14]. We sample 5k images from the taskonomy split of Omnidata starter dataset. Overall, there are 10K images.\nAnnotation. With a collection of images with potential objects we can interact, we then turn to manual annotation. For a single image, we select around 5 interactable query points, including both large and small objects. For each query point, we annotate: (Movable ) one hand, two hand, or fixture. (Localization ) The bounding box and mask of the part this point belonging to. (Rigidity ) Rigid, or nonrigid. (Articulation ) Rotation, translation or freeform. We also annotate their rotation axes. (Action ) Pull, push or others. (Affordance ) A keypoint which indicates where we should interact with the object. At the same time, our taskonomy [74] images come with 3D ground truth, including depth and surface normals. We also annotate 31K query points of fixtures. Finally, we split 10K images into a train/val/test set of 8k/1k/1k split, respectively. " }, { "figure_ref": [], "heading": "Image Encoder", "publication_ref": [], "table_ref": [], "text": "Depth Query Depth Figure 3\n. Overview of our approach. The inputs of our network is a single image and a set of query points . 
For each query point, it predicts the potential 3D interaction, in terms of movable, location, rigidity, articulation, action and affordance. In addition, the input of the transformer decoder includes a learnable depth query, which estimates the dense depth to recover 3D object interaction for articulated objects.\nAvailability and Ethics. Our images come from three publicly available datasets. Taskonomy does not contain any humans. The video articulation dataset comes from Creative Commons Internet videos. We do not foresee any ethical issues in our dataset." }, { "figure_ref": [], "heading": "Approach", "publication_ref": [ "b32", "b25" ], "table_ref": [], "text": "We now introduce a model which can take an image and a set of query points and answer all of the questions we asked in Section 3, including movable, localization, rigidity, articulation, action and affordance. A brief overview of our approach is shown in Figure 3.\nSince our inputs include a set of query points and our outputs include both bounding boxes and segmentation masks, we mainly extend SAM [33] to build our model. Compared with traditional detection pipelines such as Mask R-CNN [26], we can use a query point to naturally guide SAM to detect the corresponding object. Mask R-CNN generates thousands of anchors for each image, which makes it challenging to find the correct match. However, we also compare with alternative network architectures in our experiments for completeness. We find they also work, despite being worse than SAM. For simplicity, we assume there is only a single query point, but our model can accept hundreds of query points at a time." }, { "figure_ref": [], "heading": "Backbone", "publication_ref": [ "b24", "b12", "b32" ], "table_ref": [], "text": "The goal of our backbone is to map an image I and a query point [x, y] to a pooled feature h = f (I; [x, y]). Full details are in the supplemental. Image Encoder. Our image encoder is a MAE [25] pretrained Vision Transformer (ViT) [13], following SAM [33]. It maps a single image I to the memory of the transformer decoder. Query Point Encoder. We transform the query point [x, y] into positional encodings [63], which are then fed into the transformer decoder. We use the embedding k to guide the transformer to produce the feature h for different query points. Transformer Decoder. The decoder accepts as inputs the memory from the encoder and an embedding k of the query point. It produces an embedding h for each query point, which we use to predict all the properties, like an ROI feature." }, { "figure_ref": [], "heading": "Prediction Heads", "publication_ref": [], "table_ref": [], "text": "We now describe how to map from the pooled feature h to the predictions. Each prediction is done by a separate head that handles each output type." }, { "figure_ref": [], "heading": "Movable", "publication_ref": [], "table_ref": [], "text": "We add a linear layer and map the hidden embedding h to the prediction of movable. We use the standard cross entropy loss to train it." }, { "figure_ref": [], "heading": "Localization", "publication_ref": [ "b39", "b46", "b72" ], "table_ref": [], "text": "We follow standard SAM practice to predict segmentation masks. We predict segmentation masks using the mask decoder and train them using focal loss [40] and DICE loss [47]. For depth, we have a separate depth transformer decoder with a corresponding learnable depth query. We train depth using a scale- and shift-invariant L1 loss and a gradient-matching loss following [73,54,39]. 
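As a concrete illustration of the depth objective just mentioned, here is a minimal sketch of a scale- and shift-invariant L1 term with MiDaS-style per-image normalization (median shift, mean-absolute-deviation scale); the exact variant and the gradient-matching term used in the paper may differ.

```python
# Sketch of a scale- and shift-invariant L1 depth loss; the normalization scheme is
# an assumption (MiDaS-style), and the gradient-matching term is omitted here.
import torch


def ssi_l1_loss(pred: torch.Tensor, gt: torch.Tensor, valid: torch.Tensor) -> torch.Tensor:
    """pred, gt: (B, H, W) depth maps; valid: (B, H, W) boolean mask of supervised pixels."""
    losses = []
    for p, g, m in zip(pred, gt, valid):
        p, g = p[m], g[m]
        # normalize each image to zero median and unit mean absolute deviation
        t_p, s_p = p.median(), (p - p.median()).abs().mean().clamp(min=1e-6)
        t_g, s_g = g.median(), (g - g.median()).abs().mean().clamp(min=1e-6)
        losses.append(((p - t_p) / s_p - (g - t_g) / s_g).abs().mean())
    return torch.stack(losses).mean()
```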
The shift and scale are normalized per image." }, { "figure_ref": [], "heading": "Rigidity", "publication_ref": [], "table_ref": [], "text": "Similar to movable, we add a linear layer to predict whether the object is rigid or not. We train the linear layer using a standard binary cross entropy loss." }, { "figure_ref": [], "heading": "Articulation", "publication_ref": [ "b52", "b74", "b75", "b34", "b39" ], "table_ref": [], "text": "We first add a linear layer to predict whether the motion of the interactive object is rotation, translation or freeform, and we use the standard cross entropy loss to train it. For the rotation axis, we follow [53,75] to represent an axis as a 2D line (θ, r). Any point on this line satisfies x cos(θ) + y sin(θ) = r, where θ represents the angle and r represents the distance from the object center to the line. In training, we represent the 2D line as (sin 2θ, cos 2θ, r), so that the axis angle is in a continuous space [76]. We use a 3-layer MLP to predict the axis, similar to bounding boxes, as both tasks require localization. We use an L1 loss to train it. Action Similar to movable, we add a linear layer to predict what the potential action is to interact with the object. We train the linear layer using a standard binary cross entropy loss. Affordance Our prediction of affordance is a probability map, while our annotation is a single keypoint. However, affordance can have multiple solutions. Therefore, we transform the annotation of affordance into a 2D Gaussian bump [35] and train the network using a binary focal loss [40]. We set the weight of positive examples to 0.95 and that of negative ones to 0.05 to balance positives and negatives, as there are more negatives than positives.\nOur total loss is a weighted linear combination of all losses mentioned above. Details are in the supplemental." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b32" ], "table_ref": [], "text": "Full architectural details of our approach are in the supplemental. In practice, we use three different transformer decoders for mask, depth and affordance. The image encoder, query point encoder and mask decoder are pretrained on SAM [33]. Other parts, including the affordance head and depth head, are trained from scratch. We use an AdamW optimizer with a learning rate of 10⁻⁴, and train our model for 200 epochs." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b0" ], "table_ref": [], "text": "We have introduced an approach that can localize and predict the properties of the moving part from an image. In the experiments, we aim to answer the following questions:\n(1) how well can one localize and predict the properties of the moving part from an image; (2) how well do alternative approaches to the problem perform? We evaluate our approach on our 3DOI dataset and test the generalization to the robotics data WHIRL [1]." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b0", "b0", "b40", "b14", "b52", "b74", "b74", "b49", "b2", "b35", "b44", "b52", "b49", "b26", "b58", "b52", "b25" ], "table_ref": [], "text": "We first describe the setup of our experiments. Our method aims to look at a single RGB image and infer information about the moving part given a keypoint. We therefore evaluate our approach on two challenging datasets, using metrics that capture various aspects. Datasets. We train and validate our approach on two datasets: the 3DOI dataset (described in Section 4) and the WHIRL dataset [1]. 
WHIRL [1] is a robotics dataset including everyday objects and settings, for example drawers, dishwashers and fridges in different kitchens, and doors to various cabinets. We use WHIRL to validate the generalization of our approach and downstream applications in robotics settings. We extract the first frames of all WHIRL videos and annotate them using the same pipeline as our dataset. Typically, humans are not present in the first frame, which precedes any manipulation. Metrics. We follow standard evaluation practice for all of our predictions. For all metrics, the higher the better. These metrics are detailed as follows:\n• Movable, Rigidity, and Action: We report accuracy as these are multiple choice questions.\n• Localization: We report Intersection-over-Union (IoU) for our predictions of bounding boxes and masks [41]. We report threshold accuracy for depth [15]. • Articulation: We report accuracy for articulation type. The rotation axis is a 2D line. Therefore, we report the EA-Score between the prediction and the ground truth, following [53,75]. EA-Score [75] is a score in [0, 1] that measures the angle and Euclidean distance between two lines. • Affordance: This is a probability map, and we report the histogram intersection (or SIM) following [50,3,36,45]. Baselines. We compare our approach with a series of baselines, to evaluate how well alternative approaches work on our problem. We first evaluate 3DADN [53], SAPIEN [70], and InteractionHotspots [50] using their pretrained checkpoints, to test how well learning from videos or synthetic data works on our problem. We then train two query-point-based models, ResNet MLP [27] and COHESIV [59], to test how well alternative network architectures work on our problem. The details are introduced as follows.\n• (3DADN [53]): 3DADN detects articulated objects which humans are interacting with, extending Mask R-CNN [26]. It is trained on Internet videos. We drop the temporal optimization part since we work on a single image. For each image, it can detect articulated objects, as well as the type (translation or rotation), bounding boxes, masks and axes. Since the inputs of 3DADN do not include a query point, we compare the predicted bounding boxes and the ground truth to find the matching detection, and evaluate other metrics. We lower the detection threshold to 0.05 to ensure we have enough detections to match our ground truth." }, { "figure_ref": [], "heading": "• (SAPIEN [70]):", "publication_ref": [ "b52", "b52", "b49", "b26", "b58" ], "table_ref": [], "text": "The training frames of 3DADN [53] typically have human activities. However, our dataset does not require humans to be present, which may lead to generalization issues. Alternatively, we are interested in whether we can just learn the skill from synthetic data. We train 3DADN [53] on renderings of synthetic objects generated by SAPIEN. SAPIEN is a simulator which contains a large-scale set of articulated objects. We use the renderings provided by 3DADN and the same evaluation strategies. • (InteractionHotspots [50]): While 3DADN and SAPIEN can detect articulated objects as well as their axes, they cannot predict the affordance. InteractionHotspots learns affordance from watching OPRA [17] or Epic-Kitchen [11] videos. Since InteractionHotspots cannot detect objects, we apply a center crop of the input image based on the query point, and resize it to the standard input shape (224, 224). 
We use the model trained on Epic-Kitchen as it transfers better than OPRA.\nAdditionally, we want to test alternative network architectures trained on our 3DOI dataset. We use the same losses as ours to train them on 3DOI, to ensure a fair comparison. • (ResNet MLP [27]): ResNet MLP uses a ResNet-50 encoder to extract features from input images. We then sample the corresponding spatial features from the feature map using the 2D coordinates of the keypoints. We train ResNet MLP on all tasks except mask, affordance and depth, as these tasks require dense predictions for each pixel. Adding a separate decoder to ResNet makes it a UNet-like architecture [57], which is beyond the scope of ResNet.\n• (COHESIV [59]): We also pick another model, COHESIV, which is designed for the prediction-at-a-query-location problem. Given an input image and a corresponding hand location as a query, COHESIV predicts the segmentation of hands and hand-held objects. We adopt the network, as it produces a feature map of queries. We sample an embedding from the feature map according to the query point, concatenate it with image features, and produce multiple outputs." }, { "figure_ref": [ "fig_1", "fig_2", "fig_3", "fig_4" ], "heading": "Results", "publication_ref": [ "b52", "b49", "b26", "b58", "b25", "b72", "b13" ], "table_ref": [], "text": "First, we show qualitative results in Figure 4. For articulated objects (drawers, cabinets, etc.), our approach can recognize their location, kinematic model (rotation or translation), axes and handles. It can also recognize rigid or nonrigid objects, as well as light or heavy ones. It works on both third-person images and egocentric videos. And all of these are achieved in a single model. For articulated objects, we utilize the outputs and further show their potential 3D interaction in Figure 5. Full details are in the supplemental.\nWe then compare our approach with a set of baselines. The quantitative results are reported in Table 1.\nTable 1. Quantitative results on our 3DOI dataset. Cat. means category. We report accuracy for all category classifications, including movable, rigid, articulation and action. We report mean IoU for box and mask, EA-Score for the articulation axis, and SIM for affordance. For all metrics, the higher the better.\n3DADN [53] is much worse than our approach, since it can only detect objects which are being articulated. It fails to detect objects humans are not interacting with. Instead, our approach can detect any object that can be interacted with, regardless of human activities. SAPIEN is worse than 3DADN, which suggests learning from synthetic objects has a huge domain gap. This is consistent with the observation of 3DADN. Visual comparisons are shown in Figure 6. We compare our prediction of the affordance map with InteractionHotspots [50]. Our approach outperforms InteractionHotspots significantly, with a 3.5x improvement. A visual comparison is shown in Figure 7. While InteractionHotspots predicts a cloud-like probability map, our approach is typically very confident about its prediction. 
However, the overall performance is relatively low, mainly due to the ambiguity of affordance on deformable objects.\nTo explore alternative network architectures, we compare our approach with ResNet MLP [27] and COHESIV [59], which are trained on our data with the same loss functions. ResNet MLP is reasonable on movable, rigidity, and action. It is especially bad on bounding box localization, which is why we typically rely on a detection pipeline such as Mask R-CNN [26]. COHESIV learns reasonable bounding boxes and masks, which is a huge improvement over ResNet MLP. The performance of movable drops compared with ResNet MLP, while that of kinematic and action improves. Overall, our approach outperforms both ResNet MLP and COHESIV, mainly due to the introduction of transformers.\nFinally, we evaluate depth on our data. Having state-of-the-art depth estimation is orthogonal to our goal, since we only need reasonable depth to localize objects in 3D and render potential 3D interactions. In fact, state-of-the-art depth estimation models are trained on over ten datasets and one million images [54,73,14], while our dataset only has 5K images with depth ground truth. We just report the evaluation of depth estimation, in order to show our model has learned reasonable depth. On our data, 96.7% of pixels are within the 1.25 threshold, and 99.3% are within the 1.25² threshold." }, { "figure_ref": [ "fig_5" ], "heading": "Generalization Results", "publication_ref": [ "b0", "b52", "b49", "b0", "b26", "b57" ], "table_ref": [], "text": "To test whether our approach and models trained on our 3DOI dataset can generalize, we further evaluate our approach on WHIRL [1], a robotics dataset manipulating everyday objects. Since WHIRL is a small-scale dataset, we test our model on WHIRL without finetuning. Our results are shown in Figure 8. For both articulated objects and deformable objects, our approach can successfully recover their kinematic model, location and affordance.\nWe also quantitatively evaluate our approach on WHIRL. We report our results in Table 2. Similar to our 3DOI dataset, our approach outperforms 3DADN [53], SAPIEN [70] and InteractionHotspots [50] significantly.\nTable 2. Quantitative results on robotics data [1]. Cat. means category. We report accuracy for all category classifications, including movable, rigid, articulation and action. We report mean IoU for the boxes and masks, EA-Score for the articulation axis, and SIM for the affordance probability map. For all metrics, the higher the better.\nThe performance gap is even larger. We believe it is because humans are not present in most images of the dataset. We compare our approach with ResNet MLP [27] and COHESIV [58], which are also trained on our 3DOI dataset. Our model outperforms both ResNet MLP and COHESIV consistently. The improvement on dense predictions (Localization and Affordance) is significant, due to the design of the mask decoder. The improvement on other properties is relatively small. It illustrates that models trained on our 3DOI dataset generalize well to robotics data, regardless of network architectures." }, { "figure_ref": [ "fig_6" ], "heading": "Limitations and Failure Modes", "publication_ref": [], "table_ref": [], "text": "We finally discuss our limitations and failure modes. In Figure 9, we show some predictions are hard to make from visual cues: some articulated objects are symmetric, and humans rely on common sense to guess their rotation axes. There are also hard examples when predicting the rigidity and movable properties. 
Finally, we only annotate a single keypoint for each object instance as affordance. However, some objects may have multiple keypoints as affordance." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b52", "b17", "b42", "b41", "b52", "b71", "b54" ], "table_ref": [], "text": "We have presented a novel task of predicting 3D object interactions from a single RGB image. To solve the task, we collected the 3D Object Interaction dataset, and proposed a transformer-based model which predicts the potential interactions of any object according to query points. Our experiments show that our approach outperforms existing approaches on our data and generalizes well to robotics data.\nOur approach can have positive impacts by helping build smart robots that are able to understand the 3D scene and manipulate everyday objects. On the other hand, our approach may be useful for surveillance activities. Rendering 3D Interaction. Given all these predictions, we are able to predict the potential 3D object interaction of articulated objects from a single image. For articulated objects with a rotation axis, we first backproject the predicted 2D axis to 3D, based on the predicted depth [53]. We then rotate the object point cloud along the 3D axis and project it back to 2D. We fit a homography between the rotated object points and the original ones, using RANSAC [18]. Finally, we use the homography to warp the original object mask.\nThere is a similar procedure for articulated objects with a translation axis. Instead, we estimate an average surface normal of the object, and use it as the direction of the translation axis [43,42,53]. Moreover, the interaction of deformable objects is highly dependent on their material, which is difficult to predict from pure visual cues [72]. On the other hand, freeform objects can be moved without any constraints. Therefore, in this paper, we only render 3D interaction for articulated objects. We use PyTorch3D [55] and OpenCV to implement the projection and homography fitting.\nFinal results are shown in the animation video." }, { "figure_ref": [ "fig_8", "fig_9" ], "heading": "B. Data Collection", "publication_ref": [ "b13" ], "table_ref": [], "text": "In this section, we introduce the steps of the data annotation. We show the statistics of our dataset in Figure 10. We also show additional annotations in Figure 11. Selecting query points. We first ask workers to select approximately five query points for each image. The query point should be on an interactive object. Some query points should be on large objects, while others should be on small objects. We annotate more query points of fixtures later, as fixtures do not need additional annotations. Bounding boxes. According to the query point, we ask workers to draw a bounding box. The bounding box should only cover the movable part of an object. For example, if the query point is on the door of a refrigerator, the bounding box should only cover the door, instead of the whole refrigerator. This is because we are asking \"what can I do here\". Properties of the object. We then annotate properties of the object. This is a series of multiple-choice questions: (1) can the object be moved by one hand, or two hands? (2) is the object rigid or not? (3) if it is rigid, is it articulated or freeform? (4) if it is articulated, is the motion rotation or translation? (5) if we want to interact with the articulated object, should we push or pull? 
Rotation Axes. For objects which can be rotated, we ask workers to draw a 2D line to represent the rotation axis.\nSegmentation Masks. For all objects, we further ask workers to draw the segmentation mask of the movable part. Fixtures. Finally, we collect another 10K images and randomly sample 5 query points for each image. We ask workers to annotate whether they are fixtures or not. We mix the dataset with these annotations." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgments This work was supported by the DARPA Machine Common Sense Program. This material is based upon work supported by the National Science Foundation under Grant No. 2142529. We thank Shikhar Bahl and Deepak Pathak for their help with WHIRL data, Georgia Gkioxari for her help with the figure, and Tiange Luo, Ang Cao, Cheng Chi, Yixuan Wang, Mohamed El Banani, Linyi Jin, Nilesh Kulkarni, Chris Rockwell, Dandan Shan, Siyi Chen for helpful discussions." } ]
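As a rough sketch of the axis-based rendering procedure described under "Rendering 3D Interaction" above (lift the predicted 2D axis to 3D with the predicted depth, rotate the object points, re-project, fit a homography with RANSAC, and warp the mask), the code below uses OpenCV and NumPy. The pinhole intrinsics K, the z-depth convention, and the choice of two pixels to lift the axis are assumptions; the paper's implementation (with PyTorch3D) may differ.

```python
# Hypothetical sketch of rendering a rotation for an articulated object; K, the
# depth convention, and the axis lifting are assumptions, not the paper's exact code.
import cv2
import numpy as np


def backproject(uv: np.ndarray, depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Lift pixels (N, 2) with per-pixel z-depth (N,) to camera-frame 3D points (N, 3)."""
    ones = np.ones((uv.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.concatenate([uv, ones], axis=1).T).T
    return rays * depth[:, None]


def render_rotation(mask: np.ndarray, depth_map: np.ndarray, K: np.ndarray,
                    axis_uv: np.ndarray, angle: float) -> np.ndarray:
    """Warp `mask` as if the object rotated by `angle` (radians) about the predicted
    2D axis, given as two pixels `axis_uv` with shape (2, 2)."""
    cols, rows = axis_uv[:, 0].astype(int), axis_uv[:, 1].astype(int)
    a, b = backproject(axis_uv.astype(float), depth_map[rows, cols], K)  # 3D axis endpoints
    direction = (b - a) / np.linalg.norm(b - a)
    R, _ = cv2.Rodrigues(direction * angle)            # rotation about the axis direction

    ys, xs = np.nonzero(mask)                          # pixels belonging to the object part
    pts = backproject(np.stack([xs, ys], 1).astype(float), depth_map[ys, xs], K)
    rotated = (R @ (pts - a).T).T + a                  # rotate about a point on the axis
    proj = (K @ rotated.T).T
    proj = proj[:, :2] / proj[:, 2:3]                  # perspective projection back to 2D

    # robustly fit a homography between original and rotated pixels, then warp the mask
    H, _ = cv2.findHomography(np.stack([xs, ys], 1).astype(np.float32),
                              proj.astype(np.float32), cv2.RANSAC)
    h, w = mask.shape
    return cv2.warpPerspective(mask.astype(np.uint8), H, (w, h))
```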
Figure 1. Given a single image and a set of query points , our approach predicts: (a) whether the object at the location can be moved , its rigidity and articulation class , and location ; (b) an affordance and action ; and (c) potential 3D interaction for articulated objects. This ability can assist intelligent agents to better manipulate objects or explore the 3D scene.
Understanding 3D Object Interaction from a Single Image
[ { "figure_caption": "Figure 4 .4Figure 4. Results on our 3DOI dataset. indicates the query point. (Row 1, 2)Our approach can correctly recognize articulated objects, as well as its type (rotation or translation), axes, and affordance. (Row 3, 4) Our approach can recognize rigid and nonrigid objects in egocentric video. (Row 5) Our approach can recognize objects need to be moved by two hands, such as a TV. We note that the affordance of these objects have multiple solutions. Affordance is zoomed manually for better visualization. Affordance colormap: min max.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Prediction of 3D potential interaction of articulated objects. indicates the query point. In prediction 1, 2, and 3, we rotate the object along its rotation axis, or translate the object along its normal direction.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Comparison of 3DADN [53], SAPIEN [70] and our approach. indicates the query point. 3DADN has a strong performance when humans are present. However, it has difficulty detecting objects without human activities. SAPIEN does not generalize well to real images. However, it is sometimes better than 3DADN when humans are not present.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Comparison of InteractionHotspots[50] and our approach. indicates the query point. We find InteractionHotspots typically makes a cloud like probability map on our data. Our model is very confident about its prediction, while there can be multiple solutions. Prediction and GT are zoomed manually for better visualization. Affordance colormap: min max.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure8. Results on robotics data[1]. indicates the query point. Without finetuning, our approach generalizes well to robotics data, which indicates its potential to help intelligent agents to better manipulate objects. Row 1 and 2 are articulated objects. Row 3 and Row 4 are deformable objects. Affordance is zoomed manually for better visualization. Affordance colormap: min max.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Typical failure modes of our approach. indicates the query point. Row 1: Our predicted rotation axis is on the wrong side when the objects look symmetric. Row 2: Our predicted mask is partial when the scissors are occluded. Row 3: Our model thinks the trash bin can be picked up by 1 hand, potentially since its material looks plastic.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "The transformer decoder D takes the memory m from encoder and a set of queries, including N point queries k p and one depth query k d . It predicts a set of point pooled features h 1 , . . . , h N and depth pooled features h d , i.e. h 1 , h 2 , . . . , h N , h d = D(m; k (1) p , k (2) p , . . . k (N ) p , k d ) (1) We set N = 15, as all images have lower than 15 query points. For images without 15 query points, we pad the input to 15 and do not train on these padding examples. The depth query k d is a learnable embedding, similar to object queries in DETR [4]. 
All queries are feed into the decoder in parallel, as they are indepedent of each other. Prediction heads. DETR [4] uses a linear layer to predict the object classes and a three-layer MLP to regress the bounding boxes, based on h. Motivated by DETR, we use a linear layer for the prediction of movable, rigidity, articulation class and action. We use a three-layer MLP to predict the bounding boxes and rotation axes, as they require localization. We add a gaussian bump [35] for affordance ground truth, where the radius is 5. Balance of loss functions. Since we use multiple loss functions for each prediction and each loss has a different range, they need to be balanced. We treat the weights of losses as hyperparameters and tune them accordingly. The weights of movable, rigidity, articulation class, and action losses are 0.5. The weights of mask losses (both focal loss [40] and DICE [47]) are 2.0. The weights of box L1 loss is 5.0 and generalized IoU loss is 2.0. The weights of axis angle loss is 1.0 and axis offset loss is 10.0. The weights of affordance loss is 100.0. The weights of depth losses are 1.0. For both focal losses of segmentation masks and affordance map, we use γ = 2. For the focal loss of segmentation mask, we use α = 0.25 to balance positive and negative examples. In affordance we use the standard α = 0.95 since there are much more negatives than positives. Training details. The image encoder, prompt encoder and the mask decoder are pretrained on Segment-Anything [33]. To save gpu memory, we use SAM-ViT-b as the image encoder, which is the lightest pretrained model. The other heads (e.g. affordance) are trained from scratch. We use an AdamW optimizer [44] of the learning rate 10 -4 and train the model for 200 epochs. The input and output resolution is 768×1024. The batch size is 2. We train the model on four NVIDIA A40 gpu, with distributed data parallel.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Statistics of our 3DOI dataset. (Row 1) We show the distribution of query points, box centers, and affordance in normalized image coordinates, similar to LVIS [22] and Omni3D [2]. (Row 2) We show the distribution of object types, articulation types and movable types.", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Example annotations of our 3DOI dataset. Row 1-2 come from Internet videos [53]. Row 3-4 come from egocentric videos [11].Row 5-6 come from renderings of 3D dataset[14]. is the query point, and ▼ is the affordance.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" } ]
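To illustrate the affordance supervision described in the architecture notes above (a Gaussian bump of radius 5 around the annotated keypoint, trained with a binary focal loss using gamma = 2 and alpha = 0.95), here is a small sketch; the bump profile and the soft-target weighting are simplifying assumptions.

```python
# Sketch of the affordance target and focal loss; the Gaussian profile and the way
# the soft target weights positives/negatives are assumptions, not the exact recipe.
import torch


def gaussian_bump(h: int, w: int, cx: float, cy: float, radius: float = 5.0) -> torch.Tensor:
    ys = torch.arange(h, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(w, dtype=torch.float32).view(1, -1)
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    return torch.exp(-d2 / (2.0 * (radius / 3.0) ** 2))   # ~0 outside the radius


def affordance_focal_loss(logits: torch.Tensor, target: torch.Tensor,
                          alpha: float = 0.95, gamma: float = 2.0) -> torch.Tensor:
    p = torch.sigmoid(logits)
    pos = -alpha * (1 - p) ** gamma * torch.log(p.clamp(min=1e-6)) * target
    neg = -(1 - alpha) * p ** gamma * torch.log((1 - p).clamp(min=1e-6)) * (1 - target)
    return (pos + neg).mean()
```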
Shengyi Qian; David F Fouhey
[ { "authors": "Shikhar Bahl; Abhinav Gupta; Deepak Pathak", "journal": "", "ref_id": "b0", "title": "Humanto-robot imitation in the wild", "year": "2022" }, { "authors": "Garrick Brazil; Abhinav Kumar; Julian Straub; Nikhila Ravi; Justin Johnson; Georgia Gkioxari", "journal": "", "ref_id": "b1", "title": "Omni3d: A large benchmark and model for 3d object detection in the wild", "year": "2023" }, { "authors": "Zoya Bylinskii; Tilke Judd; Aude Oliva; Antonio Torralba; Frédo Durand", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b2", "title": "What do different evaluation metrics tell us about saliency models", "year": "2018" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "", "ref_id": "b3", "title": "End-toend object detection with transformers", "year": "2020" }, { "authors": "Yu-Wei Chao; Zhan Wang; Yugeng He; Jiaxuan Wang; Jia Deng", "journal": "", "ref_id": "b4", "title": "Hico: A benchmark for recognizing human-object interactions in images", "year": "2015" }, { "authors": "Weifeng Chen; Shengyi Qian; Jia Deng", "journal": "", "ref_id": "b5", "title": "Learning singleimage depth from videos using quality assessment networks", "year": "2019" }, { "authors": "Bowen Cheng; Alex Schwing; Alexander Kirillov", "journal": "", "ref_id": "b6", "title": "Perpixel classification is not all you need for semantic segmentation", "year": "2021" }, { "authors": "Cheng Chi; Dmitry Berenson", "journal": "", "ref_id": "b7", "title": "Occlusion-robust deformable object tracking without physics simulation", "year": "2019" }, { "authors": "Danfei Christopher B Choy; Junyoung Xu; Kevin Gwak; Silvio Chen; Savarese", "journal": "", "ref_id": "b8", "title": "3D-R2N2: A unified approach for single and multi-view 3d object reconstruction", "year": "2016" }, { "authors": "Cristina Garcia Cifuentes; Jan Issac; Manuel Wüthrich; Stefan Schaal; Jeannette Bohg", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b9", "title": "Probabilistic articulated real-time tracking for robot manipulation", "year": "2016" }, { "authors": "Dima Damen; Hazel Doughty; Giovanni Maria Farinella; Antonino Furnari; Jian Ma; Evangelos Kazakos; Davide Moltisanti; Jonathan Munro; Toby Perrett; Will Price; Michael Wray", "journal": "International Journal of Computer Vision (IJCV)", "ref_id": "b10", "title": "Rescaling egocentric vision: Collection, pipeline and challenges for epic-kitchens-100", "year": "2022" }, { "authors": "Karthik Desingh; Shiyang Lu; Anthony Opipari; Odest Chadwicke Jenkins", "journal": "", "ref_id": "b11", "title": "Factored pose estimation of articulated objects using efficient nonparametric belief propagation", "year": "2019" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b12", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Ainaz Eftekhar; Alexander Sax; Jitendra Malik; Amir Zamir", "journal": "", "ref_id": "b13", "title": "Omnidata: A scalable pipeline for making multi-task mid-level vision datasets from 3d scans", "year": "2021" }, { "authors": "David Eigen; Rob Fergus", "journal": "", "ref_id": "b14", "title": "Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture", "year": "2015" }, { 
"authors": "Rui Fan; Hengli Wang; Bohuan Xue; Huaiyang Huang; Yuan Wang; Ming Liu; Ioannis Pitas", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b15", "title": "Three-filters-to-normal: An accurate and ultrafast surface normal estimator", "year": "2021" }, { "authors": "Kuan Fang; Te-Lin Wu; Daniel Yang; Silvio Savarese; Joseph J Lim", "journal": "", "ref_id": "b16", "title": "Demo2vec: Reasoning object affordances from online videos", "year": "2018" }, { "authors": "A Martin; Robert C Fischler; Bolles", "journal": "Communications of the ACM", "ref_id": "b17", "title": "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography", "year": "1981" }, { "authors": "Georgia Gkioxari; Ross Girshick; Piotr Dollar; Kaiming He", "journal": "", "ref_id": "b18", "title": "Detecting and recognizing human-object interactions", "year": "2018" }, { "authors": "Georgia Gkioxari; Jitendra Malik; Justin Johnson", "journal": "", "ref_id": "b19", "title": "Mesh r-cnn", "year": "2019" }, { "authors": "Mohit Goyal; Sahil Modi; Rishabh Goyal; Saurabh Gupta", "journal": "", "ref_id": "b20", "title": "Human hands as probes for interactive object understanding", "year": "2022" }, { "authors": "Agrim Gupta; Piotr Dollar; Ross Girshick", "journal": "", "ref_id": "b21", "title": "Lvis: A dataset for large vocabulary instance segmentation", "year": "2019" }, { "authors": "Arjun Gupta; Max E Shepherd; Saurabh Gupta", "journal": "", "ref_id": "b22", "title": "Predicting motion plans for articulating everyday objects", "year": "2023" }, { "authors": "Sanjay Haresh; Xiaohao Sun; Hanxiao Jiang; Angel X Chang; Manolis Savva", "journal": "", "ref_id": "b23", "title": "Articulated 3d human-object interactions from rgb videos: An empirical analysis of approaches and challenges", "year": "2022" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b24", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "Kaiming He; Georgia Gkioxari", "journal": "", "ref_id": "b25", "title": "Piotr Dollár, and Ross Girshick", "year": "2017" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b26", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Cheng-Chun Hsu; Zhenyu Jiang; Yuke Zhu", "journal": "", "ref_id": "b27", "title": "Ditto in the house: Building articulation models of indoor scenes through interactive perception", "year": "2023" }, { "authors": "Hanxiao Jiang; Yongsen Mao; Manolis Savva; Angel X Chang", "journal": "", "ref_id": "b28", "title": "Opd: Single-view 3d openable part detection", "year": "2022" }, { "authors": "Zhenyu Jiang; Cheng-Chun Hsu; Yuke Zhu", "journal": "", "ref_id": "b29", "title": "Ditto: Building digital twins of articulated objects from interaction", "year": "2022" }, { "authors": "Linyi Jin; Shengyi Qian; Andrew Owens; David F Fouhey", "journal": "", "ref_id": "b30", "title": "Planar surface reconstruction from sparse views", "year": "2021" }, { "authors": "Alexander Kirillov; Kaiming He; Ross Girshick; Carsten Rother; Piotr Dollár", "journal": "", "ref_id": "b31", "title": "Panoptic segmentation", "year": "2019" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b32", "title": 
"Segment anything", "year": "2023" }, { "authors": "Alexander Kirillov; Yuxin Wu; Kaiming He; Ross Girshick", "journal": "", "ref_id": "b33", "title": "Pointrend: Image segmentation as rendering", "year": "2020" }, { "authors": "Hei Law; Jia Deng", "journal": "", "ref_id": "b34", "title": "Cornernet: Detecting objects as paired keypoints", "year": "2018" }, { "authors": "Gen Li; Varun Jampani; Deqing Sun; Laura Sevilla-Lara", "journal": "", "ref_id": "b35", "title": "Locate: Localize and transfer object parts for weakly supervised affordance grounding", "year": "2023" }, { "authors": "Xiaolong Li; He Wang; Li Yi; Leonidas J Guibas; Lynn Abbott; Shuran Song", "journal": "", "ref_id": "b36", "title": "Category-level articulated object pose estimation", "year": "2020" }, { "authors": "Yunzhu Li; Jiajun Wu; Russ Tedrake; Joshua B Tenenbaum; Antonio Torralba", "journal": "", "ref_id": "b37", "title": "Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids", "year": "2018" }, { "authors": "Zhengqi Li; Noah Snavely", "journal": "", "ref_id": "b38", "title": "Megadepth: Learning singleview depth prediction from internet photos", "year": "2018" }, { "authors": "Tsung-Yi Lin; Priya Goyal; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b39", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "", "ref_id": "b40", "title": "Microsoft COCO: Common objects in context", "year": "2014" }, { "authors": "Chen Liu; Kihwan Kim; Jinwei Gu; Yasutaka Furukawa; Jan Kautz", "journal": "", "ref_id": "b41", "title": "PlaneRCNN: 3D plane detection and reconstruction from a single image", "year": "2019" }, { "authors": "Chen Liu; Jimei Yang; Duygu Ceylan; Ersin Yumer; Yasutaka Furukawa", "journal": "", "ref_id": "b42", "title": "Planenet: Piece-wise planar reconstruction from a single rgb image", "year": "2018" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b43", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Hongchen Luo; Wei Zhai; Jing Zhang; Yang Cao; Dacheng Tao", "journal": "", "ref_id": "b44", "title": "Learning affordance grounding from exocentric images", "year": "2022" }, { "authors": "Tiange Luo; Honglak Lee; Justin Johnson", "journal": "", "ref_id": "b45", "title": "Neural shape compiler: A unified framework for transforming between text, point cloud, and program", "year": "2022" }, { "authors": "Fausto Milletari; Nassir Navab; Seyed-Ahmad Ahmadi", "journal": "", "ref_id": "b46", "title": "V-net: Fully convolutional neural networks for volumetric medical image segmentation", "year": "2016" }, { "authors": "Kaichun Mo; Leonidas Guibas; Mustafa Mukadam; Abhinav Gupta; Shubham Tulsiani", "journal": "", "ref_id": "b47", "title": "Where2act: From pixels to actions for articulated 3d objects", "year": "2021" }, { "authors": "Jiteng Mu; Weichao Qiu; Adam Kortylewski; Alan Yuille; Nuno Vasconcelos; Xiaolong Wang", "journal": "", "ref_id": "b48", "title": "A-sdf: Learning disentangled signed distance functions for articulated shape representation", "year": "2021" }, { "authors": "Tushar Nagarajan; Christoph Feichtenhofer; Kristen Grauman", "journal": "", "ref_id": "b49", "title": "Grounded human-object interaction hotspots from video", "year": "2019" }, { "authors": "Yinyu Nie; Xiaoguang Han; Shihui Guo; Yujian Zheng; Jian Chang; Jian Jun 
Zhang", "journal": "", "ref_id": "b50", "title": "Total3dunderstanding: Joint layout, object pose and mesh reconstruction for indoor scenes from a single image", "year": "2020" }, { "authors": "Sudeep Pillai; Matthew R Walter; Seth Teller", "journal": "RSS", "ref_id": "b51", "title": "Learning articulated motions from visual demonstration", "year": "2014" }, { "authors": "Linyi Shengyi Qian; Chris Jin; Siyi Rockwell; David F Chen; Fouhey", "journal": "", "ref_id": "b52", "title": "Understanding 3d object articulation in internet videos", "year": "2022" }, { "authors": "René Ranftl; Alexey Bochkovskiy; Vladlen Koltun", "journal": "", "ref_id": "b53", "title": "Vision transformers for dense prediction", "year": "2021" }, { "authors": "Nikhila Ravi; Jeremy Reizenstein; David Novotny; Taylor Gordon; Wan-Yen Lo; Justin Johnson; Georgia Gkioxari", "journal": "", "ref_id": "b54", "title": "Accelerating 3d deep learning with pytorch3d", "year": "2020" }, { "authors": "Kaiming Shaoqing Ren; Ross He; Jian Girshick; Sun", "journal": "", "ref_id": "b55", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b56", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Dandan Shan; Jiaqi Geng; Michelle Shu; David Fouhey", "journal": "", "ref_id": "b57", "title": "Understanding human hands in contact at internet scale", "year": "2020" }, { "authors": "Dandan Shan; Richard E L Higgins; David F Fouhey", "journal": "NeurIPS", "ref_id": "b58", "title": "COHESIV: Contrastive object and hand embedding segmentation in video", "year": "2021" }, { "authors": "Nathan Silberman; Derek Hoiem; Pushmeet Kohli; Rob Fergus", "journal": "Springer", "ref_id": "b59", "title": "Indoor segmentation and support inference from rgbd images", "year": "2012" }, { "authors": "Jürgen Sturm; Cyrill Stachniss; Wolfram Burgard", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b60", "title": "A probabilistic framework for learning kinematic models of articulated objects", "year": "2011" }, { "authors": "Xiaohao Sun; Hanxiao Jiang; Manolis Savva; Angel Xuan; Chang ", "journal": "", "ref_id": "b61", "title": "Opdmulti: Openable part detection for multiple objects", "year": "2023" }, { "authors": "Matthew Tancik; Pratul Srinivasan; Ben Mildenhall; Sara Fridovich-Keil; Nithin Raghavan; Utkarsh Singhal; Ravi Ramamoorthi; Jonathan Barron; Ren Ng", "journal": "NeurIPS", "ref_id": "b62", "title": "Fourier features let networks learn high frequency functions in low dimensional domains", "year": "2020" }, { "authors": "Xiaolong Wang; David F Fouhey; Abhinav Gupta", "journal": "", "ref_id": "b63", "title": "Designing deep networks for surface normal estimation", "year": "2015" }, { "authors": "Yixuan Wang; Dale Mcconachie; Dmitry Berenson", "journal": "", "ref_id": "b64", "title": "Tracking partially-occluded deformable objects while enforcing geometric constraints", "year": "2021" }, { "authors": "Yian Wang; Ruihai Wu; Kaichun Mo; Jiaqi Ke; Qingnan Fan; Leonidas J Guibas; Hao Dong", "journal": "", "ref_id": "b65", "title": "Adaafford: Learning to adapt manipulation affordance for 3d articulated objects via few-shot interactions", "year": "2022" }, { "authors": "Yingming Wang; Xiangyu Zhang; Tong Yang; Jian Sun", "journal": "", "ref_id": "b66", "title": "Anchor detr: Query design for transformer-based object detection", "year": 
"2021" }, { "authors": "Fangyin Wei; Rohan Chabra; Lingni Ma; Christoph Lassner; Michael Zollhöfer; Szymon Rusinkiewicz; Chris Sweeney; Richard Newcombe; Mira Slavcheva", "journal": "", "ref_id": "b67", "title": "Self-supervised neural articulated shape and appearance models", "year": "2022" }, { "authors": "Ruihai Wu; Yan Zhao; Kaichun Mo; Zizheng Guo; Yian Wang; Tianhao Wu; Qingnan Fan; Xuelin Chen; Leonidas Guibas; Hao Dong", "journal": "", "ref_id": "b68", "title": "Vat-mart: Learning visual action trajectory proposals for manipulating 3d articulated objects", "year": "2021" }, { "authors": "Fanbo Xiang; Yuzhe Qin; Kaichun Mo; Yikuan Xia; Hao Zhu; Fangchen Liu; Minghua Liu; Hanxiao Jiang; Yifu Yuan; He Wang", "journal": "", "ref_id": "b69", "title": "Sapien: A simulated part-based interactive environment", "year": "2020" }, { "authors": "Zhenjia Xu; Cheng Chi; Benjamin Burchfiel; Eric Cousineau; Siyuan Feng; Shuran Song", "journal": "", "ref_id": "b70", "title": "Dextairity: Deformable manipulation can be a breeze", "year": "2022" }, { "authors": "Fengyu Yang; Chenyang Ma; Jiacheng Zhang; Jing Zhu; Wenzhen Yuan; Andrew Owens", "journal": "NeurIPS", "ref_id": "b71", "title": "Touch and go: Learning from human-collected vision and touch", "year": "2022" }, { "authors": "Wei Yin; Jianming Zhang; Oliver Wang; Simon Niklaus; Long Mai; Simon Chen; Chunhua Shen", "journal": "", "ref_id": "b72", "title": "Learning to recover 3d scene shape from a single image", "year": "2021" }, { "authors": "Alexander Amir R Zamir; William Sax; Leonidas J Shen; Jitendra Guibas; Silvio Malik; Savarese", "journal": "", "ref_id": "b73", "title": "Taskonomy: Disentangling task transfer learning", "year": "2018" }, { "authors": "Kai Zhao; Qi Han; Chang-Bin Zhang; Jun Xu; Ming-Ming Cheng", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b74", "title": "Deep hough transform for semantic line detection", "year": "2021" }, { "authors": "Yi Zhou; Connelly Barnes; Jingwan Lu; Jimei Yang; Hao Li", "journal": "", "ref_id": "b75", "title": "On the continuity of rotation representations in neural networks", "year": "2019" } ]
[]
2023-05-16
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b2", "b23", "b24", "b1", "b10", "b9", "b4", "b8", "b14", "b15", "b6", "b13", "b13", "b30", "b1", "b3", "b11", "b20", "b22", "b36", "b37", "b40", "b21", "b34", "b4", "b10" ], "table_ref": [], "text": "While significant progress has been made in object detection [2,17,23,24,28], with the development of deep neural networks, less attention has been paid to its challenging variant in the Mobile User Interface (MUI) domain [1]. Instead of personal computers and books, people nowadays spend more time on mobile phones due to the convenience of various apps for daily life. However, there may exist some risks, including illegal gambling [10,19], malware [31,32], security [4,8], privacy [14,15], copy/fake [27] and fraudulent behaviors [6,13] in apps, which need to be detected and alarmed as required by government authorities and app markets. In apps, these risks may occur in one element or even hide in the subpage after clicking one element. As a result, it is in great need of an accurate, robust, and even open-vocabulary MUI element detection approach in practice. Such technology can benefit a great variety of sce- Compared to VINS, we additionally obtain the OCR descriptions as supplemental information in MUI-zh. Moreover, we further link OCR descriptions and element annotations with the same color. narios as mentioned above, towards building a better mobile ecosystem [13,30].\nThis paper proposes MUI element detection as a variant object detection task and develops a corresponding dataset named MUI-zh. In general, object detection aims to classify and locate each object, such as an animal or a tool, in one raw image. While in MUI data, our primary concern is detecting elements, e.g., products and clickable buttons in the screenshots. The main difference between the two tasks is that MUI data often have discriminative OCR descriptions as supplemental information for every element, significantly influencing detection results. To better explain it, we put two MUI data examples from VINS [1] and our MUI-zh in Figure 1. VINS only provides the category annotation and bounding box for every element, as the object detection dataset does. At the same time, MUI-zh additionally obtains the OCR descriptions and links them with elements for further usage. Since the OCR descriptions are texts and will be an additional input modality, it is natural to leverage recent Open-Vocabulary object Detection (OVD) models [3,11,20,22,36,37,40] as the MUI element detection baseline because of their rich vision-language knowledge learned from pretrained CLIP [21].\nOVD detectors usually detect and classify objects by cal- culating the similarity between visual embeddings and textual concepts split from captions. However, according to our experiments, existing OVD methods can not achieve satisfactory performances on MUI datasets. The reason mainly comes from two aspects: Firstly, the samples for training OVD detectors are appearance-centric, while MUI data is not. Besides the appearance, the category of one MUI element is often closely related to its textual explanations obtained by OCR tools. Thus, OCR descriptions of one element can be viewed as a discriminative modality to distinguish itself from other categories, but neither exists nor is used in OVD models; Secondly, the category prompts with only category name is not optimal for vision-language alignment since they may not be precise enough to describe an MUI element. 
For example, we show four buttons (blue) and one icon (red) in Figure 2. The baseline (OVD detector) only uses \"a photo of category name\" to perform alignment and misclassify button 1 as an icon.\nTo alleviate the above issues, we propose a novel lightweight and plug-and-play Adaptively Prompt Tuning (APT) module in MUI element detection. Firstly, it takes OCR descriptions as input, using a unimodal block to obtain rich elements' information (e.g., content and function) for vision-language alignment; Secondly, it adaptive encodes vision and OCR description features into embeddings to adjust the representation of frozen category prompts, which further reduces the impact of language ambiguity during matching. As shown in Figure 2, the gray dotted lines indicate the decision boundaries of the OVD baseline and its variant with APT during the recognizing phase. Element 1 is misclassified by the baseline since its embedding is close to the frozen category prompt of \"icon\" and far away from its groundtruth \"button\". Our APT adaptively tunes two category prompts (noted by the green arrow) for every element and successfully recognizes element 1. As a result, we demonstrate that the APT can achieve noticeable performance gains based on previous OVD detectors, which will benefit many mobile layout analyses [34,35] and risk hunters [4,10]. We summarize our contributions as follows.\n• We develop a high-quality MUI dataset (called MUIzh) containing 18 common categories with OCR descriptions as the supplemental information. Besides MUI-zh, we will also provide the OCR descriptions of the existing dataset VINS to facilitate future research.\n• Inspired by the MUI data characteristics, we further proposed a novel Adaptive Prompt Tuning (APT) module to finetune category prompts for standard and open-vocabulary MUI element detection.\n• Experiments on two datasets demonstrate that our APT, as a plug-and-play module, achieves competitive improvements upon four recent CLIP-based detectors." }, { "figure_ref": [], "heading": "Related Works 2.1. Object Detection", "publication_ref": [ "b2", "b24", "b23" ], "table_ref": [], "text": "Object detection aims to detect and represent objects at a bounding box level. There are two kinds of object detection methods, i.e., two-stage [2,24], and single-stage [17,23,28]. Two-stage methods first detect objects, then crop their region features to further classify them into the foreground or background. In contrast, the one-stage detectors directly predict the category and bounding box at each location." }, { "figure_ref": [], "heading": "Open-vocabulary Object Detection", "publication_ref": [ "b21", "b3", "b11", "b20", "b22", "b36", "b37", "b40", "b24", "b21", "b11", "b24", "b11", "b40", "b25", "b36", "b3", "b40", "b22", "b37" ], "table_ref": [], "text": "Relying heavily on visual-language pretrained models [21], open-vocabulary object detection approaches aim to locate and classify novel objects that are not included in the training data. Recently, OVD methods [3,7,11,20,22,36,37,40] follow two-stage fashion: class-agnostic proposals are firstly generated by RPN [24] trained on base categories, then the classification head is required to recognize novel classes with the knowledge from pretrained CLIP [21].\nThe representative solutions include OVR-CNN [33] and ViLD [11]. 
Taking Faster RCNN [24] as the backbone, OVR-CNN [33] trains a projection layer on image-text pairs with contrastive learning, while ViLD [11] proposes to explicitly distill the knowledge from the pretrained CLIP visual encoder. Advanced to them, Detic [40] tries to selftrain the detector on ImageNet21K [25] for OVD. Recently, VL-PLM [36] use self-training in both two stages on unlabeled data and MEDet [3] proposes an online proposal mining method to refine the vision-language alignment. Following Detic [40], Object-centric OVD [22] combines knowledge distillation and contrastive learning, achieving the best performance on COCO [16] with the extra weakly supervised data from ImageNet21K. One closely related work is RegionCLIP [37], which leverages a CLIP model to match image regions with template texts on large-scale data from the web and then uses pseudo pairs to train the finegrained alignment between image regions and text spans." }, { "figure_ref": [], "heading": "Prompts Learning", "publication_ref": [ "b21", "b18", "b39", "b21", "b39", "b38", "b39", "b38" ], "table_ref": [], "text": "The large vision-language model, e.g., CLIP [21], has significantly improved many few-shot or zero-shot computer vision tasks. They are often pretrained on a large amount of image-text pairs collected from the web and can be easily transferred to numerous downstream tasks with either finetuning [18,26] or prompt learning [39]. From [21], we can observe that a task-specific prompt can boost performance significantly but needs carefully tuning prompts by humans. As its extension, CoOp [39] proposes context optimization with learnable vectors for automating prompt learning in few-shot classification, relieving the burden of designing hand-craft prompts by humans. Moreover, its further extension CoCoOp [38] learns a lightweight neural network to generate for each image an input-conditional token, which improves the generalization ability to wider novel categories in image classification tasks.\nRecently, DetPro [7] and PromptDet [9] adapt CoOp [39] to OVD by designing particular strategies to handle foreground and background proposals within images. Although the vision embeddings learned in our APT are somehow inspired by CoCoOp [38], we are the first to propose a unified module for tuning prompts on two modalities, i.e., OCR descriptions and vision features." }, { "figure_ref": [], "heading": "Mobile User Interface Dataset", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce the existing MUI datasets and our developed MUI-zh. Then we briefly describe how to match OCR descriptions and elements." }, { "figure_ref": [], "heading": "Dataset Preparation", "publication_ref": [ "b5", "b1", "b1" ], "table_ref": [], "text": "Early work on the MUI dataset explored how to support humans in designing applications. For example, Rico [5], a dataset of Android apps, was released five years ago. It consists of 72k MUI examples from 9722 apps, spanning 27 categories in the Google Play Store. However, the annotations of Rico are noisy, sometimes even incorrect, according to [1]. As its extension, VINS [1] uses MUI designs and wireframes to enable element detection and retrieval.\nNowadays, more and more tinyapps (in apps) are developed by merchants, and their elements, also as MUI data, have a noticeable domain gap with the elements in Rico and VINS. In order to fully understand MUI data, we develop MUI-zh, an MUI detection dataset from tinyapp screenshots. 
MUI-zh has 5769 images of tinyapp screenshots, including 50k elements within 18 categories. Besides element location and category, we also provide essential OCR descriptions and locations for every screenshot as supplemen-tal information for classification. Another reason for developing MUI-zh is that the existing language of MUI datasets is English. Detectors trained on them can not be used in another language, such as Chinese, due to the domain gap/bias during vision-language alignment. Our MUI-zh collects high-quality tinyapp screenshots in Chinese, which enriches the MUI data for different languages." }, { "figure_ref": [], "heading": "OCR Descriptions Matching", "publication_ref": [], "table_ref": [], "text": "After we collect and annotate enough MUI screenshots, we have to link the OCR descriptions and elements for further usage. How to relate OCR and elements with their locations is an open question. Intuitively, it is possible to link them by calculating and ranking their Intersection Over Union (IoU) according to two series bounding boxes inspired by non-maximum suppression (NMS). For every element box, we select the OCR boxes whose IoU scores are larger than a threshold (e.g., 0.5) as its descriptions without replacement. Note that OCR tools may separate one sentence into many phrases, and as a result, an element may also be linked to more than one OCR description. Another special case is when an element box does not have any description, we assign it an empty word.\nGenerally speaking, IoU measures how much two proposals overlap and whether they can be assigned with the same instance in the object detection task. However, MUI elements like products and buttons are more likely to include their OCR descriptions (often occupy only a small region, e.g., 10% of element) within the box. In this case, the IoU (0.1) is smaller than the threshold and this element fails to match its description, which is unacceptable. To tackle this problem, we utilize Intersection Over Minimum (IoM) instead of IoU during OCR matching. IoM replacing the area of union with the area of the minimum box in IoU is suitable for MUI data. For the case mentioned above, the IoM is 1, which means we successfully link the element and its OCR descriptions. Note that we also conduct OCR matching on VINS and release the results." }, { "figure_ref": [ "fig_3" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we first briefly present how existing OVD models detect elements of MUI data in Section 4.1 and then show the architecture of APT and how it works in Section 4.2. Finally, we claim how to assemble APT on four existing detectors in Section 4.3. The whole pipeline of MUI element detection is shown in Figure 3." }, { "figure_ref": [], "heading": "Detectors on MUI", "publication_ref": [ "b24", "b22", "b37" ], "table_ref": [], "text": "Preprocessing: Given a batch of MUI data, the training pipeline of recent two-stage CLIP-based detectors follows almost the same scheme (detect-then-classify). They first use a class-agnostic RPN [24] to obtain element proposals and perform their innovations and improvements during the Finally, the classifier learns to match these pair-wise embeddings via contrastive learning and cross-entropy loss. Specifically, assuming an image I has n element proposals obtained by RPN, and we notate their features as {r i } n i=1 ∈ R d . The classifier's goal is to match the element proposals with category prompts {c j } m j=1 for m different categories. 
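Stepping back to the OCR matching step described in Section 3.2, a minimal sketch of IoM-based linking between element boxes and OCR boxes could look as follows. Boxes are assumed to be (x1, y1, x2, y2) tuples, the function names are illustrative rather than the authors' implementation, the 0.5 threshold mirrors the one mentioned above, and the "without replacement" bookkeeping across elements is omitted for brevity.

```python
# Illustrative sketch of Intersection-over-Minimum (IoM) matching between
# one element box and a list of (box, text) OCR items.
def box_area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def iom(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    denom = min(box_area(a), box_area(b))
    return inter / denom if denom > 0 else 0.0

def match_ocr(element_box, ocr_items, thr=0.5):
    """Link every OCR snippet whose IoM with the element exceeds thr."""
    texts = [text for box, text in ocr_items if iom(element_box, box) >= thr]
    return " ".join(texts)  # empty string when no description matches
```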
Relying on the powerful CLIP, the text (vision) embedding $t_j$ ($f_i$) of category $j$ (proposal $i$) is generated by feeding $c_j$ ($r_i$) into the corresponding encoder:
$$t_j = T(c_j), \qquad f_i = I(r_i). \tag{1}$$
For a paired proposal $i$ and its groundtruth category $j$ during training, we can calculate the predicted probability as
$$p_{ij} = \frac{\exp(\cos(t_j, f_i)/\tau)}{\sum_{k=1}^{m} \exp(\cos(t_k, f_i)/\tau)}, \tag{2}$$
where $\tau$ is a temperature hyper-parameter. Finally, the cross-entropy loss is applied to optimize the network parameters except for $T$ on proposal $i$:
$$L_i = -\log(p_{ij}). \tag{3}$$
The reason for freezing $T$ is to fully utilize the knowledge learned by CLIP, which is pretrained on large-scale data, following [7,22]. We also conduct experiments to verify this choice.
Inference: OVD models predict the category with the probability obtained by Equation 2 for the element detection task. When performing experiments in the open-vocabulary setting, we extend the category prompts to cover both base and novel classes following [37]." }, { "figure_ref": [ "fig_3" ], "heading": "Adaptively Prompt Tuning", "publication_ref": [ "b38", "b29" ], "table_ref": [], "text": "As we mentioned in Section 1, existing CLIP-based detectors do not generalize well to MUI categories because they ignore OCR descriptions and because it is difficult to align elements of varying appearance to a single frozen manual category prompt. To deal with these two weaknesses, we propose an Adaptively Prompt Tuning (APT) module that maps OCR descriptions (red) and vision embeddings (green) into the space of text embeddings to adaptively tune the category prompts for every element proposal, as shown in Figure 3. The figure shows that the mapped embeddings (red and white) are fused to adjust the frozen text embeddings (blue) for the final alignment with vision embeddings (green).
For simplicity, we use $\phi(\cdot)$ to denote the APT and formulate the training pipeline for image $I$ as
$$\hat{o}_i = \phi(T(o_i)); \quad \hat{v}_i = \phi(f_i); \quad \tilde{t}_{ji} = t_j + \hat{o}_i + \hat{v}_i; \tag{4}$$
$$\tilde{p}_{ij} = \frac{\exp(\cos(\tilde{t}_{ji}, f_i)/\tau)}{\sum_{k=1}^{m} \exp(\cos(\tilde{t}_{ki}, f_i)/\tau)}, \tag{5}$$
where $\{o_i\}_{i=1}^{n}$ are the OCR descriptions for the $n$ proposals. In this way, we can optimize the whole model except for $T$ with the cross-entropy loss:
$$\tilde{L}_i = -\log(\tilde{p}_{ij}). \tag{6}$$
Note that during inference, we also tune the text embeddings in the same way as during training, using Equation 4. Since our goal is to map supplemental information into the embedding space for prompt tuning, it is natural to encode OCR descriptions and vision embeddings uniformly, so as to encourage knowledge sharing and interaction between the two modalities. To our knowledge, APT is the first prompt tuning method that encodes both modalities with a single unified network, and it achieves higher performance than encoding the two modalities with separate network parameters, as shown in our experiments.
Inspired by CoCoOp [38], we construct APT as a lightweight network with only two bottlenecks, each consisting of a fully-connected layer (fc) followed by a batch norm (bn) and a ReLU activation. It follows the standard encoder-decoder fashion, and the fc layers are used to reduce/enlarge the number of feature channels (16x). Since the input channel of the visual feature is 1024, the total number of parameters of APT is about 128k, including weights and biases, which has little influence on training and inference speed.
At the end of APT, we also explore how to fuse modality information in three ways: element-wise sum, element-wise multiply, and fusion with fc. 
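As a concrete illustration of Equations 4-6 and the two-bottleneck design described above, a minimal PyTorch-style sketch of APT could look like the following. The class and function names are our own placeholders, and details such as the exact accents on the tuned embeddings, normalization, and how proposals are batched are assumptions rather than the released implementation; the temperature of 0.01 matches the value reported in the implementation details.

```python
import torch.nn as nn
import torch.nn.functional as F

class APT(nn.Module):
    """Sketch of the two-bottleneck Adaptively Prompt Tuning module.

    A single shared network (fc + bn + relu, 16x channel reduction then
    expansion) encodes both the CLIP text embedding of the OCR description
    and the region visual embedding into the category-prompt space.
    """
    def __init__(self, dim=1024, reduction=16):
        super().__init__()
        hidden = dim // reduction
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, dim), nn.BatchNorm1d(dim), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

def apt_logits(apt, t, f, o, tau=0.01):
    """t: (m, d) frozen category prompt embeddings, f: (n, d) region features,
    o: (n, d) CLIP text embeddings of the OCR descriptions.
    Returns (n, m) logits implementing the element-wise-sum fusion of Eq. 4-5."""
    tuned = t.unsqueeze(0) + apt(o).unsqueeze(1) + apt(f).unsqueeze(1)   # (n, m, d)
    sim = F.cosine_similarity(tuned, f.unsqueeze(1).expand_as(tuned), dim=-1)
    return sim / tau

# Training would then apply the cross-entropy of Eq. 6, e.g.:
# loss = F.cross_entropy(apt_logits(apt, t, f, o), gt_labels)
```

With element-wise sum as the fusion, the tuned prompt is simply the frozen prompt plus the two encoded offsets, which is exactly what the sketch computes before the cosine-similarity scoring.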
Recall that the attention mechanism [29] is also influential in modality fusion and feature extraction. When we choose element-wise sum as the fusion function, our APT works as an attention layer for different modalities except for the self-attention part calculated on t j in equation 4, which is a constant. If we use fc to learn the weights for fusion, then t j can also be learned, which means our APT, in this case, has the same function of attention layers. According to our experiments, we use element-wise sum as the fusion function due to the slightly higher performance and lower calculating complexity.\nIn conclusion, we highlight that our goal of APT is to adaptively tune frozen category prompts with the context from every element's OCR description and specific vision information. Another interesting thing is that there exist many variants of APT. For example, what if we tune the category prompts only with vision embeddings and tune vision embeddings with OCR descriptions? Moreover, can we tune vision embeddings by self-attention and OCR descriptions while leaving category prompts fixed? To explore the influence of different tuning methods mentioned above, we conduct experiments in Section 5.3." }, { "figure_ref": [], "heading": "Assembling APT to CLIP-based Detectors", "publication_ref": [ "b22", "b37", "b37", "b22", "b39" ], "table_ref": [], "text": "DetPro [7], PromptDet [9], Object-centric [22], and Re-gionCLIP [37] are recent CLIP-based frameworks for OVD. As we mentioned, our APT tunes category prompts without changing model architectures and thus can be used directly by many OVD methods. Here we explain how and where to equip them with APT in detail. Firstly, RegionCLIP [37] and Object-centric [22] use the fixed manual prompts, and we can easily add APT upon them at the end of the network during classification. For PromptDet [9] and DetPro [7], they both use the CoOp [39] to generate trainable category prompts instead of manual ones. Our APT adjusts that trainable category prompts for fair comparisons in this case." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b5", "b1" ], "table_ref": [], "text": "In this section, we first introduce the implementation details for datasets and models in Section 5.1. Our main results are APT upon CLIP-based detectors for both standard and open-vocabulary MUI element detection as shown in Section 5.2. Moreover, we evaluate the ablations to study model components in Section 5.3. Since the bounding boxes annotated by Rico [5] are noisy according to [1], we only conduct experiments on MUI-zh and VINS for comparison. Finally, we evaluate our APT for the object detection task on COCO [16] in Section 5.4." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b1", "b1", "b12", "b21", "b22", "b24", "b2" ], "table_ref": [], "text": "Datasets. We evaluate our method on two MUI element detection datasets, namely MUI-zh and VINS [1]. MUI-zh is a high-quality MUI element detection dataset with screenshots collected from mobile tinyapps. Its training set contains 4769 images and 41k elements, while the validation set has 1000 images and 9k elements within 18 categories. Another popular MUI dataset is VINS [1], which contains 3826 training and 981 validation images with 20 categories. For open-vocabulary element detection, we set the product, icon, button, card, tips, and menu as the base categories and the remaining 12 elements as novel ones on MUI-zh. 
As for VINS, we set background-image, card, text and spinner as four novel categories and others as base categories. Training details and metrics. We evaluate MUI element detection performance on MUI-zh and VINS for both standard and open-vocabulary settings. During training, the default visual encoder of all models we used in the experiments is ResNet50 [12] from pretrained CLIP [21]. Note that the language encoder is frozen following [7,22]. For MUI element detection, SGD is used with a batch size of 64, an initial learning rate of 0.002, and a maximum iteration of 12 epochs on 8 A100 GPUs. For open-vocabulary element detection, RPN is trained with the base categories of two datasets. The temperature τ is 0.01. The widely-used object detection metrics, including Mean Average Precision (mAP) for novel and all categories are used. The first group is standard object detection methods like Faster RCNN [24] and Cascaded RCNN [2], while the second group contains four CLIP-based models." }, { "figure_ref": [], "heading": "Main Results of MUI Element Detection", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "MUI element detection. As shown in", "publication_ref": [ "b22", "b37", "b37", "b22", "b12" ], "table_ref": [ "tab_3" ], "text": "The table shows that recently proposed Object-centric OVD [22] and RegionCLIP [37] achieve much better performances than standard object detection models since MUI data need more attention on vision-language alignment. Moreover, our APT improves about 4-5% (6-9%) mAP on CLIP-based detectors on MUI-zh (VINS), which is a significant enhancement and shows APT's effectiveness in the MUI element detection task. Among these detectors, Re-gionCLIP [37] equipped with APT achieves the best performances (51.23% and 80.84% on MUI-zh and VINS). Open-vocabulary MUI element detection. One more advantage of CLIP-based detectors compared to object detection ones is that they can detect objects not in the predefined categories. To this end, we also conduct experiments on open-vocabulary MUI element detection, and the results are in Table 2. Here we compare four recent methods with and without our APT on two datasets. The table shows that APT achieves noticeable improvements upon the listed methods. More specifically, among four CLIP-based methods, Object-centric OVD [22] with APT outperforms others on the MUI-zh, while RegionCLIP associated with APT gets the best performance on VINS.\nNote that even though we have 80% (16/20) base categories on VINS, the performances of these methods on novel categories still need improvement compared to OVD detectors on COCO novel categories. There are mainly two reasons. Firstly, compared to COCO, the category names of MUI data have much less relation, which causes difficulty for knowledge transfer and embedding alignment. For example, the knowledge of recognizing cats can be easily transferred to classify dogs, while it is challenging to utilize the knowledge of recognizing cards for classifying icons in MUI data. For example, Drawer and Switch are Methods MUI-zh VINS Novel (12) All( 18) Novel( 4) All( 20 " }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [ "b22" ], "table_ref": [ "tab_4", "tab_4", "tab_4", "tab_4", "tab_4" ], "text": "We perform experiments for the ablation studies on two datasets. First, we show the impact of progressively integrating our two tuning modalities: the OCR descriptions o i and vision embeddings v i , to the baseline RegionCLIP in Table 3. 
Then we explore different settings of weights, layers, tuning methods, and fusion functions, respectively. Analysis for Components. As shown in Table 3, we first use only the vision embeddings v i to tune the category prompts, which decreases about 3.3% (4.9%) mAP on MUI-zh (VINS). It means that the OCR descriptions of one element contribute a lot to its classification result. In the next row, we only equip RegionCLIP with OCR descriptions o i . Removing v i leads to a 2.3% (4.5%) decrease in mAP for two datasets, which means adaptively tuning prompts according to the appearance is also crucial to the final performance. Overall, the whole improvements of APT upon baseline RegionCLIP indicate its effectiveness.\nAnalysis for weights sharing. Our APT is suitable for two different modalities, as verified in Table 3. We can observe that using a unified network for encoding OCR and vision embeddings is slightly better than two individual ones with the same architecture. The reason may be that OCR descriptions of one element often describe its appearance. Thus, a lightweight unified network can naturally map two modalities into one semantic space for prompt tuning. Analysis for layers. Besides the weights of APT, we also want to explore its layer numbers. We compare two settings: 2 or 3 bottlenecks (fc+bn+relu) as presented in Table 3. With one extra layer, the network performance decreases a little. So our APT can be lightweight (only two layers with 128k parameters) and not time-consuming.\nAnalysis for tuning methods. An important part of APT is how to tune the prompts. Since our objective function is the similarity between category prompts and vision embeddings, there are four main ways: tuning only category prompts, tuning only vision embeddings and tuning both as shown in Table 3. We choose only to tune the category prompts with both OCR and vision embeddings, which gets the best performance. We believe the frozen category prompts rather than the trainable vision embeddings should be tuned adaptively in the MUI data domain.\nAnalysis for fusion functions. How to fuse the embeddings from different modalities also impacts element detection results. We compare element-wise sum, multiply and concentration with fc. Among them, the element-wise sum obtains the best performance with no extra parameters. We also believe employing element-wise sum for embedding fusion makes our APT work like an attention layer, excluding the self-attention part calculated on fixed text embeddings.\nAnalysis for text encoder. We also show the results of whether to freeze the text encoder T in this table. While we train T with MUI data, a large performance drop appears.\nAs a result, we follow [7,22] to freeze T in this paper." }, { "figure_ref": [], "heading": "Generalization on Object Detection", "publication_ref": [ "b37", "b37" ], "table_ref": [ "tab_6" ], "text": "Although our APT is specially designed for MUI element detection with extra OCR information, it can also be modified to tune the category prompts on object detection tasks. To this end, we additionally conduct OVD experiments on COCO [16]. We follow the data split of [37] with 48 base categories and 17 novel categories and we also use the processed data from [37] with 110k training images and 4836 test images. 
Since objects in COCO usually have no OCR descriptions, we directly use their category names as the OCR descriptions, and thus we can build APT on Re-gionCLIP for the OVD task.\nAs shown in Table 4, our APT slightly outperforms Re-gionCLIP on all metrics (e.g., 31.7 vs. 31.4 on novel categories) in the generalized setting. Compared with Region-CLIP in the standard OVD setting, our APT improves novel categories by about 0.7 mAP but only helps a little on the base categories. We find that the improvements in novel categories are larger than base ones, which indicates the effectiveness of APT in knowledge transfer. With these studies, we conclude that our APT positively impacts MUI element detection and object detection tasks." }, { "figure_ref": [], "heading": "GroundTruth", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "RegionCLIP with APT", "publication_ref": [], "table_ref": [], "text": "Figure 5. Visualizations of MUI element detection. We successively visualize the images and element bounding boxes of ground truth, RegionCLIP and with our APT. Note that we highlight the differences with the red dotted circle. Best viewed in color." }, { "figure_ref": [ "fig_4" ], "heading": "Visualizations", "publication_ref": [], "table_ref": [], "text": "T-SNE plots of region vision embeddings. We have shown that APT can significantly improve performance over the baseline RegionCLIP. However, because the CLIP-based models implicitly learn the alignments by calculating similarity, it is interesting to see their region vision embeddings after training. We show the t-SNE plots of RegionCLIP and APT region embeddings (after non-linear dimensionality reduction) of MUI categories on two validation datasets in Figure 4. We can observe that APT promotes intra-class compactness and inter-class separation, which benefits the vision-language alignment. For example, in MUI-zh, our APT separates products and banners better than Region-CLIP. As for VINS, our model can successfully classify edittexts and textbuttons, while RegionCLIP can not. Detection on MUI data. The detection visualizations of RegionCLIP and our APT on two MUI datasets are shown in Figure 5. We successively visualize the images, ground truth element boxes, RegionCLIP and ours. The red dotted circles in this figure highlight the differences. For example, RegionCLIP misclassifies texts in the fifth column and modal in the sixth column, while ours does not. It shows that our APT can better detect elements in MUI datasets." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b1" ], "table_ref": [], "text": "In this work, we introduced APT, a lightweight and effective prompts tuning module for MUI element detection.\nOur APT contains two modality inputs, i.e., element-wise OCR descriptions and visual features. They are fused and encoded within the APT to obtain embeddings for category prompt tuning. It significantly improves performance on existing CLIP-based models and achieves competitive results on two MUI datasets. We also released MUI-zh, a new MUI dataset with matched OCR descriptions. In summary, our model and dataset can benefit various real-world domains, such as robot interaction, information retrieval, targeted advertising, and attribute extraction on mobile phones. We hope our work could inspire designing new frameworks to tackle the challenging MUI element detection tasks. Limitations. Our work has several limitations that can be further investigated. 
(1) The open-vocabulary capabilities of existing models on MUI data could be further improved compared to their results on OVD datasets, as mentioned in Section 5.2. (2) Existing methods all rely on the frozen language encoder from CLIP; we believe the performance drop observed when unfreezing the language encoder may be due to the small dataset size." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Qiangqiangzhu and his Tinyapp Ecological Business Team at Ant Group for their contribution in establishing the MUI-zh dataset. We also thank the Business Risk Management-eKYB Team at Ant Group for their help with this paper." } ]
Recent object detection approaches rely on pretrained vision-language models for image-text alignment. However, they often fail to detect Mobile User Interface (MUI) elements, since an MUI element carries additional OCR information that describes its content and function but is usually ignored. In this paper, we develop a new MUI element detection dataset named MUI-zh and propose an Adaptively Prompt Tuning (APT) module to take advantage of this discriminative OCR information. APT is a lightweight and effective module that jointly optimizes category prompts across different modalities. For every element, APT uniformly encodes its visual features and OCR descriptions to dynamically adjust the representation of the frozen category prompts. We evaluate the effectiveness of our plug-and-play APT on several existing CLIP-based detectors for both standard and open-vocabulary MUI element detection. Extensive experiments show that our method achieves considerable improvements on two datasets. The dataset is available at github.com/antmachineintelligence/MUI-zh.
Mobile User Interface Element Detection Via Adaptively Prompt Tuning
[ { "figure_caption": "Figure 1 .1Figure 1. Two MUI samples from VINS and MUI-zh dataset.Compared to VINS, we additionally obtain the OCR descriptions as supplemental information in MUI-zh. Moreover, we further link OCR descriptions and element annotations with the same color.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Original prompt of {Button} Original prompt of {Icon} 1 Prompt of {Icon} tuned with 1, 2 Prompt of {Button} tuned with 1 Figure 2 .1212Figure 2. Decision boundaries of baseline and adding APT during vision-language alignment. The stars are category prompts, and the circles are element vision embeddings. Element 1 is misclassified by baseline while our APT tunes its category prompts adaptively and thus successfully matches it and its category.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1212", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Overview of MUI element detection pipeline associated with our proposed APT. We first use OCR tools and class-agnostic RPN for input images to obtain OCR descriptions and element proposals. An IoM module matches and links the elements and OCR descriptions in the preprocessing phase. Existing CLIP-based detectors usually encode element proposals and category prompts into vision (green) and text (blue) embeddings by image encoder I and text encoder T for similarity calculation. Our APT additionally uses OCR descriptions (red) and vision embeddings to tune the text embedding for better alignment. Best viewed in color.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. T-SNE visualizations. We perform RegionCLIP without and with APT on two datasets. T-SNE is utilized to visualize their region embeddings. It shows that APT contributes a lot to vision-language alignment. Best viewed in color and in-zoom.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "we list two groups of detection approaches on both MUI-zh and VINS.", "figure_data": "MethodsPublicationMUI-zhVINSVINS [1]CHI'21-63.21Faster RCNN [24]NeurIPS'1544.6368.89Cascaded RCNN [2] + COCO pretrainCVPR'1846.76 48.8072.85 75.77DetPro [7]CVPR'2244.5571.67[7]+APT-48.62(+4.07) 77.73(+6.06)PromptDet [9]ECCV'2240.1468.94[9]+APT-45.07(+4.93) 76.43(+7.49)Object-centric [22]NeurIPS'2245.8772.36[22] +APT-50.78(+4.91) 79.48(+7.12)RegionCLIP [37]CVPR'2245.5171.53[37]+APT-51.23(+5.72) 80.84(+9.31)", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results (mAP%) of MUI element detection. We list the performance of six popular object detection approaches based on ResNet50. Besides them, we additionally report the performance gains of our APT module over four recent CLIP-based models.", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results (mAP%) of open-vocabulary MUI element detection. We report the performances of four CLIP-based methods on two datasets. Note that the number of novel categories of MUIzh is 12, while VINS has four novel classes. Our APT improves the results of both novel and all categories. + o i vs. f i + v i 51.08(-0.15) 78.23 (-2.61) t j + v i vs. f i + o i 50.23(-0.10) 78.35(-2.49) t j vs. 
f i + v i + o i", "figure_data": "Various APT architecturesMUI-zhVINSAPT(v i + o i )51.2380.84Ablationw/o o i47.96 (-3.27) 75.97(-4.87)w/o v i48.91 (-2.32) 76.32(-4.52)WeightsShare weights Individual weights51.23 51.01(-0.22) 79.65 (-1.19) 80.84Layers2 (fc+bn+relu) 3 (fc+bn+relu)51.23 51.19 (-0.04) 80.79 (-0.05) 80.84t j + v i + o i vs. f i51.2380.84Tuningt j 51.00 (-0.23) 77.97 (-2.87)Element-wise sum51.2380.84FusionElement-wise multi48.83 (-2.40) 77.65 (-3.19)Attention(Concat + fc) 51.17 (-0.06) 80.59 (-0.25)EncoderFreeze T Trainable T51.23 45.11(-6.12) 73.13(-7.71) 80.84", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation studies of APT and its variants.", "figure_data": "We evalu-", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results (mAP%) of OVD on COCO dataset. We evaluate APT upon RegionCLIP (backbone ResNet50) following the standard base/novel split setting for a fair comparison.", "figure_data": "", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" } ]
Zhangxuan Gu; Zhuoer Xu; Haoxing Chen; Jun Lan; Changhua Meng; Weiqiang Wang; Tiansuan Lab; Ant Group
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "VINS dataset MUI-zh dataset References", "year": "" }, { "authors": "Sara Bunian; Kai Li; Chaima Jemmali; Casper Harteveld; Yun Fu; Magy Seif; Seif El-Nasr", "journal": "", "ref_id": "b1", "title": "Vins: Visual search for mobile user interface design", "year": "2021" }, { "authors": "Zhaowei Cai; Nuno Vasconcelos", "journal": "", "ref_id": "b2", "title": "Cascade r-cnn: Delving into high quality object detection", "year": "2018" }, { "authors": "Peixian Chen; Kekai Sheng; Mengdan Zhang; Yunhang Shen; Ke Li; Chunhua Shen", "journal": "", "ref_id": "b3", "title": "Open vocabulary object detection with proposal mining and prediction equalization", "year": "2022" }, { "authors": "Zhuo Chen; Jie Liu; Yubo Hu; Lei Wu; Yajin Zhou; Xianhao Liao; Ke Wang", "journal": "", "ref_id": "b4", "title": "Illegal but not malware: An underground economy app detection system based on usage scenario", "year": "2022" }, { "authors": "Biplab Deka; Zifeng Huang; Chad Franzen; Joshua Hibschman; Daniel Afergan; Yang Li; Jeffrey Nichols; Ranjitha Kumar", "journal": "", "ref_id": "b5", "title": "Rico: A mobile app dataset for building datadriven design applications", "year": "2017" }, { "authors": "Feng Dong; Haoyu Wang; Li Li; Yao Guo; F Tegawendé; Tianming Bissyandé; Guoai Liu; Jacques Xu; Klein", "journal": "", "ref_id": "b6", "title": "Frauddroid: Automated ad fraud detection for android apps", "year": "2018" }, { "authors": "Yu Du; Fangyun Wei; Zihe Zhang; Miaojing Shi; Yue Gao; Guoqi Li", "journal": "", "ref_id": "b7", "title": "Learning to prompt for open-vocabulary object detection with vision-language model", "year": "2022" }, { "authors": "Parvez Faruki; Ammar Bharmal; Vijay Laxmi; Vijay Ganmoor; Manoj Singh Gaur; Mauro Conti; Muttukrishnan Rajarajan", "journal": "IEEE communications surveys & tutorials", "ref_id": "b8", "title": "Android security: a survey of issues, malware penetration, and defenses", "year": "2014" }, { "authors": "Chengjian Feng; Yujie Zhong; Zequn Jie; Xiangxiang Chu; Haibing Ren; Xiaolin Wei; Weidi Xie; Lin Ma", "journal": "", "ref_id": "b9", "title": "Promptdet: Towards open-vocabulary detection using uncurated images", "year": "2022" }, { "authors": "Yuhao Gao; Haoyu Wang; Li Li; Xiapu Luo; Guoai Xu; Xuanzhe Liu", "journal": "", "ref_id": "b10", "title": "Demystifying illegal mobile gambling apps", "year": "2021" }, { "authors": "Xiuye Gu; Tsung-Yi Lin; Weicheng Kuo; Yin Cui", "journal": "", "ref_id": "b11", "title": "Open-vocabulary object detection via vision and language knowledge distillation", "year": "2021" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b12", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Yangyu Hu; Haoyu Wang; Ren He; Li Li; Gareth Tyson; Ignacio Castro; Yao Guo; Lei Wu; Guoai Xu", "journal": "", "ref_id": "b13", "title": "Mobile app squatting", "year": "2020" }, { "authors": "Eunhoe Kim; Sungmin Kim; Jaeyoung Choi", "journal": "IEEE", "ref_id": "b14", "title": "Detecting illegally-copied apps on android devices", "year": "2013" }, { "authors": "Jialiu Lin; Shahriyar Amini; Jason I Hong; Norman Sadeh; Janne Lindqvist; Joy Zhang", "journal": "", "ref_id": "b15", "title": "Expectation and purpose: understanding users' mental models of mobile app privacy through crowdsourcing", "year": "2012" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C 
Lawrence; Zitnick ", "journal": "", "ref_id": "b16", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Wei Liu; Dragomir Anguelov; Dumitru Erhan; Christian Szegedy; Scott Reed; Cheng-Yang Fu; Alexander C Berg", "journal": "", "ref_id": "b17", "title": "Ssd: Single shot multibox detector", "year": "2016" }, { "authors": "Jiasen Lu; Dhruv Batra; Devi Parikh; Stefan Lee", "journal": "Advances in neural information processing systems", "ref_id": "b18", "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "year": "2019" }, { "authors": "Qian Luo; Jiajia Liu; Jiadai Wang; Yawen Tan; Yurui Cao; Nei Kato", "journal": "IEEE Internet of Things Journal", "ref_id": "b19", "title": "Automatic content inspection and forensics for children android apps", "year": "2020" }, { "authors": "Zongyang Ma; Guan Luo; Jin Gao; Liang Li; Yuxin Chen; Shaoru Wang; Congxuan Zhang; Weiming Hu", "journal": "", "ref_id": "b20", "title": "Openvocabulary one-stage detection with hierarchical visuallanguage knowledge distillation", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b21", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Hanoona Rasheed; Muhammad Maaz; Muhammad Uzair Khattak; Salman Khan; Fahad Shahbaz Khan", "journal": "NIPS", "ref_id": "b22", "title": "Bridging the gap between object and image-level representations for open-vocabulary detection", "year": "2022" }, { "authors": "Joseph Redmon; Santosh Divvala; Ross Girshick; Ali Farhadi", "journal": "", "ref_id": "b23", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "Kaiming Shaoqing Ren; Ross He; Jian Girshick; Sun", "journal": "", "ref_id": "b24", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev Satheesh; Sean Ma; Zhiheng Huang; Andrej Karpathy; Aditya Khosla; Michael Bernstein", "journal": "International Journal of Computer Vision", "ref_id": "b25", "title": "Imagenet large scale visual recognition challenge", "year": "2015" }, { "authors": "Weijie Su; Xizhou Zhu; Yue Cao; Bin Li; Lewei Lu; Furu Wei; Jifeng Dai", "journal": "", "ref_id": "b26", "title": "Vl-bert: Pre-training of generic visuallinguistic representations", "year": "2019" }, { "authors": "Chongbin Tang; Sen Chen; Lingling Fan; Lihua Xu; Yang Liu; Zhushou Tang; Liang Dou", "journal": "", "ref_id": "b27", "title": "A large-scale empirical study on industrial fake apps", "year": "2019" }, { "authors": "Chunhua Zhi Tian; Hao Shen; Tong Chen; He", "journal": "", "ref_id": "b28", "title": "Fcos: Fully convolutional one-stage object detection", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Attention is all you need", "year": "2017" }, { "authors": "Nicolas Viennot; Edward Garcia; Jason Nieh", "journal": "", "ref_id": "b30", "title": "A measurement study of google play", "year": "2014" }, { "authors": "Haoyu Wang; Junjun Si; Hao Li; Yao Guo", "journal": "IEEE", "ref_id": "b31", "title": "Rmvdroid: towards a 
reliable android malware dataset with app metadata", "year": "2019" }, { "authors": "Liu Wang; Ren He; Haoyu Wang; Pengcheng Xia; Yuanchun Li; Lei Wu; Yajin Zhou; Xiapu Luo; Yulei Sui; Yao Guo", "journal": "", "ref_id": "b32", "title": "Beyond the virus: A first look at coronavirus-themed mobile malware", "year": "2020" }, { "authors": "Alireza Zareian; Kevin Dela Rosa; Derek Hao Hu; Shih-Fu Chang", "journal": "", "ref_id": "b33", "title": "Open-vocabulary object detection using captions", "year": "2021" }, { "authors": "Haoran Zhang; Xuan Song; Yin Long; Tianqi Xia; Kai Fang; Jianqin Zheng; Dou Huang; Ryosuke Shibasaki; Yongtu Liang", "journal": "Applied Energy", "ref_id": "b34", "title": "Mobile phone gps data in urban bicycle-sharing: Layout optimization and emissions reduction analysis", "year": "2019" }, { "authors": "Mingming Zhang; Guanhua Hou; Yeh-Cheng Chen", "journal": "", "ref_id": "b35", "title": "Effects of interface layout design on mobile learning efficiency: a comparison of interface layouts for mobile learning platform", "year": "2022" }, { "authors": "Shiyu Zhao; Zhixing Zhang; Samuel Schulter; Long Zhao; Anastasis Stathopoulos; Manmohan Chandraker; Dimitris Metaxas", "journal": "", "ref_id": "b36", "title": "Exploiting unlabeled data with vision and language models for object detection", "year": "2022" }, { "authors": "Yiwu Zhong; Jianwei Yang; Pengchuan Zhang; Chunyuan Li; Noel Codella; Liunian Harold Li; Luowei Zhou; Xiyang Dai; Lu Yuan; Yin Li", "journal": "CVPR", "ref_id": "b37", "title": "Regionclip: Region-based language-image pretraining", "year": "2007" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "", "ref_id": "b38", "title": "Conditional prompt learning for vision-language models", "year": "2022" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "International Journal of Computer Vision", "ref_id": "b39", "title": "Learning to prompt for vision-language models", "year": "2022" }, { "authors": "Xingyi Zhou; Rohit Girdhar; Armand Joulin; Philipp Krähenbühl; Ishan Misra", "journal": "", "ref_id": "b40", "title": "Detecting twenty-thousand classes using image-level supervision", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 277.91, 92.81, 240.71, 128.78 ], "formula_id": "formula_0", "formula_text": "𝐨 ! 𝐨 ! 𝐟 \" 𝐟 # 𝐭 $ 𝐭 % 𝐨 ! 𝐯 ! 𝐯 ! 𝐯 ! % p !$ c $ 𝑚 o ! 𝐫 !" }, { "formula_coordinates": [ 4, 120.7, 570.58, 165.66, 9.68 ], "formula_id": "formula_1", "formula_text": "t j = T (c j ); f i = I(r i ).(1)" }, { "formula_coordinates": [ 4, 101.86, 624.49, 180.63, 24.83 ], "formula_id": "formula_2", "formula_text": "p ij = exp(cos(t j , f i )/τ ) m k=1 exp(cos(t k , f i )/τ ) , (2" }, { "formula_coordinates": [ 4, 282.49, 631.58, 3.87, 8.64 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 4, 134.14, 704.2, 152.23, 9.65 ], "formula_id": "formula_4", "formula_text": "L i = -log(p ij ).(3)" }, { "formula_coordinates": [ 4, 325.21, 651.47, 219.91, 65.86 ], "formula_id": "formula_5", "formula_text": "o i = φ(T (o i )); v i = φ(f i ); tji = t j + o i + v i ; (4) pij = exp(cos( tji , f i )/τ ) m k=1 exp(cos( tki , f i )/τ ) ,(5)" }, { "formula_coordinates": [ 5, 135.31, 117.39, 151.05, 12.17 ], "formula_id": "formula_6", "formula_text": "Li = -log(p ij ).(6)" } ]
10.1109/ICASSP43922.2022.9746759
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [], "table_ref": [], "text": "Semantic Image Synthesis is a subclass of Image-to-Image translation (I2I) where an image is generated from a semantic layout. In the context of autonomous driving, SIS is a promising method for generating diverse training and validation data because it can provide photorealism and controllability, two essential qualities for data augmentation schemes. Photorealism means that the generated images have the same texture and appearance as images recorded in real life; otherwise, a domain gap will ensue from the difference between training and test time distributions. On the other hand, controllability Fig. 1: Overview of different SIS and I2I paradigms: (a) Paired SIS, (b) Unpaired SIS: the common setting in the previous works was to assume images and labels come from the same distribution but they are unpaired, (c) Synthetic-to-Real SIS: the proposed task, labels come from a synthetic data, images originate from a real dataset, (d) Synthetic-to-Real I2I: source and target domains are different, but images are used in both domains, which is considerably easier than using labels as input, but does not provide controllability." }, { "figure_ref": [], "heading": "Task", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b0" ], "table_ref": [ "tab_0" ], "text": "Photorealism Controllability Cross-Domain Cheap data Syn-to-Real I2I Paired SIS Unpaired SIS Syn-to-Real SIS means that the user can generate or edit an image with predefined characteristics (changing weather, adding or removing cars, changing road width, increasing the number of pedestrians ...). Having user control over the input RGB space is of special interest for model validation [1] because it helps understand an AI model's decision through counterfactual reasoning: for instance, we wish to know whether a perception algorithm would behave correctly had there been more cars or pedestrians in a given scene. In this regard, SIS allows easy and explicit user control of scene editing through the manipulation of the input semantic map. For instance, one can add, delete or alter the style of different objects. SIS has been pivotal in several frameworks [2][3][4] where semantic layouts are used as a prior for urban scene generation.\nHowever, the necessity for a significant amount of paired training data undermines the initial intent of SIS, which was to provide inexpensive data augmentation. Unsupervised approaches [5][6][7][8][9][10][11][12][13][14][15][16] have shown impressive results without needing paired data, but because they rely on ideal assumptions, their usefulness in the actual world is limited. Mainly, they employ paired datasets (like Cityscapes) and present the images and labels to the model in an unpaired way in each training iteration mimicking a real unpaired setting. This has brought us to ask the following questions: what would a truly unpaired training setting for SIS look like? and how to pragmatically collect cheap semantic layouts for training?\nIn this work, we propose a new task, Synthetic-to-Real SIS, where a mapping is learned from a set of synthetic labels to a set of real images (Figure 1). Since labels and images originate from 2 different domains, not only is the setting truly unpaired but also pragmatic because producing semantic layouts from a driving simulator [17], or a graphics engine [18] is inexpensive (Table I). 
However, this task also introduces a new challenge: it is almost inevitable that the source and target datasets (labels and images, respectively) will have different class distributions. For instance, the synthetic dataset might contain a large proportion of buildings, while the real dataset has fewer buildings and more trees. Cross-domain differences could potentially lead to undesired semantic misalignments in the generated images (such as generating trees instead of buildings) because the model tries to imitate the target domain and ignores the source label map. This would undermine the utility of unpaired generative models.\nWe show that unpaired GANs underperform on the Synthetic-to-Real SIS task, as they ignore cross-domain semantic mismatches. To this end, we introduce a new framework that bypasses this limitation. The key idea of our framework is to learn to generate an image with the appearance of real images but with the content of the synthetic image that corresponds to the input label. In other words, we use the synthetic image as a guide to the content of the generated image by leveraging high-level features of a pre-trained network. Our contributions can be summarized as follows: (1) We propose Synthetic-to-Real SIS, a new task that allows training an SIS model in a pragmatic and lowcost way, (2) we develop a new framework for this task that exploits the similarity between generated and synthetic patches to preserve alignment with the input layout, and (3) in contrast to previous works which use one discriminator, we employ multiple discriminators on both the global and local image contexts to prevent overfitting on simple visual cues in the target domain. Experiments on 2 benchmarks, GTA-V -→ Cityscapes and GTA-V -→ Mapillary, show the superior performance of our model compared to state-of-theart approaches." }, { "figure_ref": [], "heading": "II. RELATED WORKS", "publication_ref": [ "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b4", "b5", "b10", "b26" ], "table_ref": [], "text": "Semantic image synthesis is the task of translating semantic layouts to images [19]. In contrast to I2I [20], SIS is a severely under-constrained task because the input layout has fewer details than the output image, which is rich in high spatial Fig. 2: Performance of state-of-the-art unpaired models on Syntheticto-Real USIS. Semantic inconsistencies (in red) can appear between the label and images. frequencies like edges, corners, and texture. SPADE [21] has made a breakthrough in this task by designing spatially adaptive normalization layers. Since then, a plethora of frameworks [22][23][24][25][26] has presented progressive architectural improvements to enhance the fidelity and alignment of generated images. Unpaired image-to-image translation is the translation of one image collection (source domain) to another (called target domain). There have been 2 main approaches to unpaired I2I: cycle consistency losses [5][6][7] and relationship preservation constraint [8][9][10][11][12][13][14]. In our previous work, we designed a framework, USIS [15,16], that achieves state-of-the-art results on the unpaired label-to-image translation task. In this work, we try to extend USIS to a more realistic unpaired scenario for urban scene generation. 
Synthetic to real translation is one application of unpaired I2I, where a synthetic image (produced from a simulation environment) is translated to a photorealistic image [5,6,11,27]. Our approach is aligned with this line of work but uses synthetic layouts instead of images as input. This is because semantic layouts are more abstract and, thus, more manipulable than images, which opens the door to many applications such as semantic editing, model validation, and domain adaptation." }, { "figure_ref": [], "heading": "III. METHODOLOGY", "publication_ref": [ "b16", "b17", "b2", "b20" ], "table_ref": [], "text": "Problem Formulation. In SIS, our goal is to learn a mapping from a semantic label map, m ∈ M and a noise vector z ∈ N (0, 1) to a photorealistic RGB image, x R ∈ X R . The labels m are one-hot encoded with C classes. In Syntheticto-Real SIS, we exploit synthetic data from graphics engines and driving simulator to obtain images x S and labels m S in a cheap and automated fashion [17,18], as the real images x R are much more costly to annotate. Thus, we are given two datasets for training: a synthetic dataset, and a real dataset,\nD S = {(x S i , m S i )} N S\nD R = {x R j } N R j=1 .\nA model is trained to map a label m S to an image x R . Note that, unlike I2I models, we do not use synthetic images x S as input. Conditioning the model on semantic layouts only allows maintaining easy controllability over scene generation, which is the goal of SIS [3,21]. The output of the generator can be expressed as: xR = G(m, z)." }, { "figure_ref": [ "fig_0" ], "heading": "A. Effect of semantic mismatch on model performance", "publication_ref": [ "b10", "b13", "b17", "b27" ], "table_ref": [], "text": "Since these two datasets are collected independently, there exist discrepancies in the class distribution. For instance, D R can have more trees, while D S can have more buildings. As the labels for real images,m R , are absent during training, we do not have an a priori knowledge of the class distribution of X R . To study the effect of cross-domain differences on generative models, we train 2 state-of-the-art models, CUT [11] and SRC [14], on the Synthetic-to-Real task. We use GTA-V [18] and Cityscapes [28] as the source and target domains. Results show that many semantic inconsistencies, such as generating trees or buildings instead of sky, appear in the generated images. Looking at the Cityscapes dataset shows that, in fact, the class 'sky' occupies a small part of the image on average, while this is not the case in GTA-V. This would suggest that previous works learn to sometimes ignore the semantic layout instead to better approximate the target domain distribution.\nOverview of the proposed framework. Motivated by these findings, we present a framework named to bypass crossdomain differences. The framework trains a generator G to satisfy two objectives: realism and alignment. Realism means that the generated images should have the appearance and texture of real images x R , while the alignment objective states that the image should be aligned to the input semantic layout (see Figure 3). In the following, we describe how we accomplish the 2 objectives for the challenging use case of synthetic to real SIS." }, { "figure_ref": [], "heading": "B. 
Realism Objective", "publication_ref": [ "b4", "b10", "b11", "b12", "b13", "b19", "b15", "b15", "b28", "b29", "b30", "b26", "b31", "b26", "b31", "b26", "b18", "b32", "b26", "b31" ], "table_ref": [], "text": "In previous works on unpaired GANs [5,[11][12][13][14], a discriminator is used to increase the realism of generated images.\nAn important design element in discriminators is its visual receptive field: what should the discriminator look at in the generated and real images to judge their realism? In previous works, there have been two different answers to this question: patch discrimination and whole image discrimination. The first approach outputs a matrix of realism scores, while the latter outputs only one score per image. In I2I, many works [5-7, 11, 12, 14, 27] used a patch discriminator [20] that judges the quality of individual image patches. On the other hand, it has been shown that whole image discrimination is more suitable for under-constrained conditional GANs like SIS [16]. Underconstrained means that the output image contains more details than the input layout. In this case, the discriminator needs to look at a larger spatial context to give stronger feedback to the generator.\nHowever, both strategies become suboptimal in unpaired SIS when the distribution of X R is different than M S . We find that the whole image discriminator of USIS focuses on a small subset of visual cues that characterizes the distribution of X R . For instance, the discriminator might observe that X R contains a lot of trees and thus would push the generator to generate images with many trees, regardless of the input layout. This creates a semantic mismatch between the label and generated image, which is undesirable. On the other hand, we still observe that the patch discriminator does not encourage the generation of images with realistic texture. Instead, it is desirable that the discriminator learns the appearance or texture of the real images, not their semantic content, which should be determined by the input semantic map.\nTo overcome the shortcomings of both approaches, we draw inspiration from how a human would qualitatively judge the realism of an image. A human would first glance at the overall image and intuitively feel whether it is real or fake. Then, the human would closely inspect details of different scales, starting from more obvious and larger ones to smaller ones, to judge their quality. To realize this strategy, we first use a wavelet-based unconditional discriminator [16,29], which we denote by D u to evaluate the realism of the generated images. Then, we employ a discriminator ensemble, {D l } L l=1 , to evaluate the realism of feature maps extracted from a pretrained VGG network [30], which is frozen during training. Each discriminator processes its input feature map to a onechannel tensor featuring a realism score for each pixel. We use one discriminator on each of the last L ReLu layers of VGG. Each discriminator contains 5 layers consisting of (convolution-group normalization-ReLu). We use spectral normalization on all discriminators D l of the high-feature space and R1 regularization on the whole-image discriminator [31].\nDiscriminating high-level features has been used in previous works [27,32] to focus on semantics instead of lowlevel details. While our work shares the same discriminator design as EPE [27], it differs in the regularization method. 
Specifically, instead of adding 1 × 1 convolution like [32], or using the adaptive backpropagation of [27], we regularize the discriminators by changing the input to the VGG extractor from the whole image to a stack of patches of the image. We define P as the patch operator of the image x, which divides it into 4 patches with equal areas and stacks them along the batch dimension of the tensor. We define φ l (x) = VGG l (P(x)), as the feature of the l-th layer of the VGG network resulting from P(x). We argue that by providing smaller patches of the image, the VGG features are more fine-grained and more descriptive, because they depict a fewer number of objects than the entire image. This allows the discriminator ensemble to focus on more varied features, so it does not overfit the features of frequent and/or large classes. The adversarial learning objectives for the discriminators are:\nL adv D l = -E x R log(D l (φ l (x R ))) -E m S log(1 -D l (φ l (x R ))) , L adv D 0 = -E x R log(Du(x R )) -E m S log(1 -Du(x R )) .(1)\nThe generator's adversarial loss becomes:\nL adv G = -E m S log(Du(x R )) + L l=1 log(D l (φ l (x R ))) .(2)\nWe summarize our technical contributions in this part as follows:\n• Unlike multi-scale discriminators in previous I2I applications [19,33], which only consider patches of different resolutions, we employ a council of discriminators for the whole image and each of its patches simultaneously. • Different than [27,32], we provide regularization for the high-level feature discriminators by reducing the input to the VGG network though the defined operator P." }, { "figure_ref": [], "heading": "C. Alignment Objective", "publication_ref": [ "b15", "b33", "b34", "b35" ], "table_ref": [ "tab_7" ], "text": "In SIS, it is necessary to preserve the faithfulness of the generated image to the semantic layout. In our previous work, USIS [16], this objective was achieved through a U-Net [34] segmentation network S that learns to segment the generated image into its input layout. However, when a domain shift is introduced between source and target domains, the cycle segmentation loss becomes weaker than the adversarial loss, leading to deterioration in the mIoU score (see Table VI).\nOur key idea to remedy this issue is to rely on the synthetic image x S as a conditional guide for the generated image. Most importantly, we only wish to transfer the content of x S to xR , not its texture. To this end, we use a perceptual loss, aka LPIPS loss, between xR and x S . LPIPS has been extensively used [35,36] to penalize structural differences between a generated and a reference image. It computes the similarity between the VGG features extracted from the 2 images.\nApplying LPIPS loss between x S and xR instead of the cycle segmentation loss of USIS leads to an improvement in the alignment score. However, we find that the alignment loss might often ignore small objects in the image. As a remedy, we propose to employ perceptual loss on a patch level instead of the global level. The alignment objective then becomes:\nL LPIPS = l φ l (P(x S )) -φ l (P(x R )) 2 , (3\n)\nwhere φ l is the activation of the patches of the image from layer l in the VGG-network. Our motivation for applying a patchwise LPIPS is to amplify the alignment loss for smaller classes. A small object in an image would have a negligible contribution in the high-level representation of the whole image but would have a bigger contribution in the high-level representation of only a local part of that image." 
}, { "figure_ref": [], "heading": "IV. EXPERIMENTS", "publication_ref": [ "b27", "b17", "b36", "b14", "b37", "b10" ], "table_ref": [], "text": "Datasets. We establish 2 benchmarks using 3 datasets: Cityscapes [28], GTA-V [18] and Mapillary [37]. In the first benchmark, GTA-V -→ Cityscapes, we use 2 training sets: GTA-V labels and Cityscapes images. In the second benchmark, we use GTA-V labels and Mapillary images. Cityscapes contains street scenes in German cities with pixellevel annotations, while Mapillary contains more diverse street scenes around the world. GTA-V is a dataset containing 25k synthetic annotated images extracted from a computer game. For all 3 datasets, we use only the 34 classes defined by Cityscapes. For both experiments, we use the same image resolution (256×512) as USIS [15]. We use the last 5k images in the GTA-V dataset as a test split. We use a batchsize of 2, and a learning rate is 0.0001 in all experiments. Metrics. A good generative model should generalize well on both synthetic and real labels, meaning it should perform well, whether the input to the model is a synthetic map m S , or a real map, m R . We use the Frechet Inception Distance (FID) and Kernel Inception Distance (KID) to measure image fidelity and mean Intersection over Union to measure the alignment between the generated images and the input labels. For mIoU calculation, a DRN-D-105 [38] pre-trained on the corresponding real dataset is used. We perform 2 sets of experiments: in the first, we train using GTA-V labels and Cityscapes images, and in the second, we use GTA-V labels and Mapillary images. To test the performance of each set of experiments, we use two test splits: 1) the first test split consists of the labels and images in the official validation splits Method GTA-V -→ Cityscapes Cityscapes Val. Split GTA-V -→ Mapillary Mapillary Val. Split FID↓ mIoU↑ KID↓ FID ↓ mIoU↑ FID↓ mIoU ↑ KID↓ FID↓ mIoU↑ CUT [11] of the corresponding real dataset (Cityscapes/Mapillary), 2) the second test split consists of GTA-V labels in the test split (last 5k labels in the GTA-V dataset) and all images in the validation split of the real dataset. mIoU is always computed between the generated images and input labels, while FID is computed between the generated images and the images in the test split. In the tables, we denote test split 1 by the corresponding real dataset's name and test split 2 by GTA-V → Cityscapes/Mapillary." }, { "figure_ref": [], "heading": "V. RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "A. Comparison against state-of-the-art", "publication_ref": [ "b10", "b11", "b12", "b13", "b14" ], "table_ref": [], "text": "We compare against state-of-the-art unpaired generative models [11][12][13][14][15]. Our method is able to outperform existing methods in alignment in all settings by a large margin and is the best or second best in terms of image quality (FID, KID). We also notice that the proposed approach generalizes well when the input is a label from the real dataset (see Results on Cityscapes and Mapillary Val. splits). We show qualitative results on Cityscapes and Mapillary in Figure 4." }, { "figure_ref": [], "heading": "B. Ablation Studies", "publication_ref": [], "table_ref": [ "tab_3", "tab_5" ], "text": "How to preserve alignment with the input label? In Table III, we compare between different alignment strategies. 
In Config A, we perform LPIPS on the whole image instead of patches; in Config B and C, LPIPS is performed on P(x) with 4 and 16 patches, respectively. All patches in the same experiment have equal size, i.e., in experiment B, patches have an area of 25% of the total image's area. In Config D, we try a different approach: instead of generating whole images and aligning individual patches, we generate and align individual patches. In this setting, the training is done only on the patch level, which takes a substantially longer time to train. Results show that the hybrid approach of generating images and aligning patches is more optimal than generating and aligning images only or patches only. We find that the patch size is also a very important design parameter; very small patches do not provide enough context for alignment. IV, we compare different choices for image discrimination. In Config A, we present whole images to the wavelet-based discriminator D u and to the discriminator ensemble {D l } L l=1 . In Config B, we present only patches as input to both discriminators, while in Config C and D, we experiment with hybrid approaches. Interestingly, whole image discrimination only (Config A) leads to a smaller FID and KID than Config B and C, but it demonstrates the poorest alignment. Config C and D achieve high alignment scores, but again a larger patch size proves to be essential for better diversity and alignment." }, { "figure_ref": [], "heading": "Config", "publication_ref": [ "b24", "b38" ], "table_ref": [], "text": "How to use the synthetic image as an intermediary for generating a photorealistic image? A key idea of our approach is to use the synthetic image as a guide to the content in the generated image because it can provide helpful spatial information not present in the input label map (edges and corners, boundaries between different objects, shadows, texture). We have used the synthetic image in the alignment loss through a perceptual loss function. However, a very straightforward method to exploit the label map, is to learn the mapping between synthetic labels and images in a supervised way, then learn the mapping between the generated synthetic images and real images, using unpaired I2I model. In this \"two-stage\" approach, we employ OASIS [25] as the supervised SIS model and VSAIT [39] as the unpaired Synthetic-to-Real I2I model. Results in Table V reveal that while the alignment of this simple two-stage approach is better than our model, the FID is very high. Moreover, it doesn't generalize well to real labels, as the FID remains very high, and the difference in mIoU is reduced, which justifies our approach.\nOn the utility of the proposed approach and task We compare the performance of our model against the state-ofthe-art trained on the Synthetic-to-Real dataset and on the Real dataset only in an unpaired manner. We report the performance on the validation split of Mapillary. Results show that the performance of previous works substantially drops when their training and test distributions differ. However, the proposed approach achieves high image fidelity and strong alignment on the real labels in test-time, although they were unseen during training. More interestingly, the performance of our model surpasses 4 state-of-the-art models trained on the original dataset and comes only second to USIS. This implies that a carefully designed generative model trained on synthetic data can generalize well to real data." }, { "figure_ref": [], "heading": "VI. 
CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we explore a new task that exploits synthetic data to train a generative model for SIS, substantially alleviating labeling costs without sacrificing photorealism. Compared to synthetic-to-real I2I, this task has the advantage of allowing easier manipulation in the image space post-generation. We presented a framework that outperforms previous works on this task and has interestingly shown a strong generalization ability when the test-time input labels are drawn from a different distribution than the labels seen during training. Most importantly, we have shown that a hybrid approach of discriminating both images and patches is key to bypassing the semantic domain gap between images and labels. Additionally, using the synthetic image as a guide to a patch content loss promotes stronger alignment without undermining the photorealism of generated images. We believe that the proposed task can offer a pragmatic setting for training generative models and encourage future works to explore how to use synthetic data to train generative models for stronger generalization. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The research leading to these results is funded by the German Federal Ministry for Economic Affairs and Energy within the project \"AI Delta Learning.\" The authors would like to thank the consortium for the successful cooperation.\nCode is available at https://github.com/GeorgeEskandar/ Towards-Pragmatic-Semantic-Image-Synthesis-for-Urban-Scenes i=1" } ]
The need for large amounts of training and validation data is a huge concern in scaling AI algorithms for autonomous driving. Semantic Image Synthesis (SIS), or label-toimage translation, promises to address this issue by translating semantic layouts to images, providing a controllable generation of photorealistic data. However, they require a large amount of paired data, incurring extra costs. In this work, we present a new task: given a dataset with synthetic images and labels and a dataset with unlabeled real images, our goal is to learn a model that can generate images with the content of the input mask and the appearance of real images. This new task reframes the well-known unsupervised SIS task in a more practical setting, where we leverage cheaply available synthetic data from a driving simulator to learn how to generate photorealistic images of urban scenes. This stands in contrast to previous works, which assume that labels and images come from the same domain but are unpaired during training. We find that previous unsupervised works underperform on this task, as they do not handle distribution shifts between two different domains. To bypass these problems, we propose a novel framework with two main contributions. First, we leverage the synthetic image as a guide to the content of the generated image by penalizing the difference between their highlevel features on a patch level. Second, in contrast to previous works which employ one discriminator that overfits the target domain semantic distribution, we employ a discriminator for the whole image and multiscale discriminators on the image patches. Extensive comparisons on the benchmarks GTA-V → Cityscapes and GTA-V → Mapillary show the superior performance of the proposed model against state-of-the-art on this task.
Towards Pragmatic Semantic Image Synthesis for Urban Scenes
[ { "figure_caption": "Fig. 3 :3Fig. 3: An overview of the proposed unpaired framework. Left: We use a wavelet-based whole image discriminator and a discriminator ensemble in the high-level feature space to evaluate the realism of patches. Right: We use the synthetic image as a guide to promote better alignment with the semantic layout on a patch-level.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: We show qualitative results for our model and other baselines on GTA-V → Mapillary, GTA-V → Cityscapes, Mapillary, Cityscapes", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Comparison between different SIS and I2I paradigms: the proposed task combines several practical advantages compared to the previous tasks.", "figure_data": "", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Comparison against SOTA methods on two benchmarks.Best results are denoted in red, while second best are denoted in blue.", "figure_data": "Config Generated Output during trainingAlignment methodFID ↓ mIoU↑ KID ↓AWhole ImageLPIPS on image level60.5 21.8 0.0265BWhole ImageLPIPS on patch-level (4) 40.4 27.1 0.0120CWhole ImageLPIPS on patch-level (16) 55.1 24.9 0.0181DIndividual PatchesLPIPS on patch level102.4 19.4 0.0641", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Comparison between different alignment strategies.", "figure_data": "", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Comparison between different image discrimination strategies.", "figure_data": "MethodGTA-V → Cityscapes Cityscapes Val. Split FID ↓ mIoU↑ KID ↓ FID ↓ mIoU↑Proposed Method 40.4 27.1 0.012 67.340.6Two-Stage method 81.1 40.4 0.044 113.845.2TABLE V: How to use the synthetic image as an intermediaryfor generating a photorealistic image? Comparison of ourmethod with a simple alternative.Which discrimination strategy to use? In Table", "figure_id": "tab_5", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "Performance of different methods when trained in ideal conditions vs. when trained in a true unpaired setting. We show the strong performance of our model, trained on Cross Domain. The best results are marked in red; the second best are marked in blue.", "figure_data": "", "figure_id": "tab_7", "figure_label": "VI", "figure_type": "table" } ]
George Eskandar; Diandian Guo; Karim Guirguis; Bin Yang
[ { "authors": "Éloi Zablocki", "journal": "International Journal of Computer Vision", "ref_id": "b0", "title": "Explainability of deep vision-based autonomous driving systems: Review and challenges", "year": "2022" }, { "authors": "Guillaume Le; Moing ", "journal": "", "ref_id": "b1", "title": "Semantic Palette: Guiding Scene Generation with Class Proportions", "year": "2021" }, { "authors": "Samaneh Azadi", "journal": "", "ref_id": "b2", "title": "Semantic bottleneck scene generation", "year": "2019" }, { "authors": "Anna Volokitin; Ender Konukoglu; Luc Van Gool", "journal": "", "ref_id": "b3", "title": "Decomposing image generation into layout prediction and conditional synthesis", "year": "2020" }, { "authors": "Jun-Yan Zhu", "journal": "", "ref_id": "b4", "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "year": "2017" }, { "authors": "Xun Huang", "journal": "", "ref_id": "b5", "title": "Multimodal unsupervised image-to-image translation", "year": "2018" }, { "authors": "Hsin-Ying Lee", "journal": "", "ref_id": "b6", "title": "Diverse Image-to-Image Translation via Disentangled Representation", "year": "2018" }, { "authors": "Matthew Amodio; Smita Krishnaswamy", "journal": "", "ref_id": "b7", "title": "Travelgan: Image-to-image translation by transformation vector learning", "year": "2019" }, { "authors": "Sagie Benaim; Lior Wolf", "journal": "", "ref_id": "b8", "title": "One-sided unsupervised domain mapping", "year": "2017" }, { "authors": "Huan Fu", "journal": "", "ref_id": "b9", "title": "Geometry-consistent generative adversarial networks for one-sided unsupervised domain mapping", "year": "2019" }, { "authors": "Taesung Park", "journal": "", "ref_id": "b10", "title": "Contrastive Learning for Unpaired Imageto-Image Translation", "year": "2020" }, { "authors": "Zhiwei Jia", "journal": "", "ref_id": "b11", "title": "Semantically robust unpaired image translation for data with unmatched semantics statistics", "year": "2021" }, { "authors": "Chuanxia Zheng; Tat-Jen Cham; Jianfei Cai", "journal": "", "ref_id": "b12", "title": "The spatially-correlative loss for various image translation tasks", "year": "2021" }, { "authors": "Chanyong Jung; Gihyun Kwon; Jong Chul; Ye ", "journal": "", "ref_id": "b13", "title": "Exploring Patch-wise Semantic Relation for Contrastive Learning in Image-to-Image Translation Tasks", "year": "2022" }, { "authors": "George Eskandar", "journal": "", "ref_id": "b14", "title": "Wavelet-Based Unsupervised Labelto-Image Translation", "year": "2022" }, { "authors": "George Eskandar", "journal": "Computers & Graphics", "ref_id": "b15", "title": "USIS: Unsupervised Semantic Image Synthesis", "year": "2023" }, { "authors": "Alexey Dosovitskiy", "journal": "PMLR", "ref_id": "b16", "title": "CARLA: An open urban driving simulator", "year": "2017" }, { "authors": "Stephan R Richter", "journal": "Springer International Publishing", "ref_id": "b17", "title": "Playing for Data: Ground Truth from Computer Games", "year": "2016" }, { "authors": "Ting-Chun Wang", "journal": "", "ref_id": "b18", "title": "High-resolution image synthesis and semantic manipulation with conditional GANs", "year": "2018" }, { "authors": "Phillip Isola", "journal": "", "ref_id": "b19", "title": "Image-to-image translation with conditional adversarial networks", "year": "2017" }, { "authors": "Taesung Park", "journal": "", "ref_id": "b20", "title": "Semantic image synthesis with spatiallyadaptive normalization", "year": "2019" }, { "authors": "Zhentao Tan", 
"journal": "", "ref_id": "b21", "title": "Rethinking Spatially-Adaptive Normalization", "year": "2020" }, { "authors": "Peihao Zhu", "journal": "", "ref_id": "b22", "title": "SEAN: Image Synthesis With Semantic Region-Adaptive Normalization", "year": "2020" }, { "authors": "Xihui Liu", "journal": "NeurIPS", "ref_id": "b23", "title": "Learning to predict layout-to-image conditional convolutions for semantic image synthesis", "year": "2019" }, { "authors": "Edgar Schönfeld", "journal": "", "ref_id": "b24", "title": "You Only Need Adversarial Supervision for Semantic Image Synthesis", "year": "2021" }, { "authors": "Hao Tang", "journal": "", "ref_id": "b25", "title": "Edge guided GANs with semantic preserving for semantic image synthesis", "year": "2020" }, { "authors": "Hassan Stephan R Richter; Vladlen Abu Alhaija; Koltun", "journal": "IEEE Transactions on Pattern Analysis & Machine Intelligence", "ref_id": "b26", "title": "Enhancing photorealism enhancement", "year": "2023" }, { "authors": "Marius Cordts", "journal": "", "ref_id": "b27", "title": "The cityscapes dataset for semantic urban scene understanding", "year": "2016" }, { "authors": "Rinon Gal", "journal": "", "ref_id": "b28", "title": "SWAGAN: A Style-based Wavelet-driven Generative Model", "year": "2021" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "", "ref_id": "b29", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "S Tero Karras; Timo Laine; Aila", "journal": "", "ref_id": "b30", "title": "A Style-Based Generator Architecture for Generative Adversarial Networks", "year": "2019" }, { "authors": "Axel Sauer", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Projected gans converge faster", "year": "2021" }, { "authors": "Satoshi Iizuka; Edgar Simo-Serra; Hiroshi Ishikawa", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b32", "title": "Globally and locally consistent image completion", "year": "2017" }, { "authors": "O Ronneberger; P Fischer; T Brox", "journal": "", "ref_id": "b33", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation", "year": "2015" }, { "authors": "L A Gatys; A S Ecker; M Bethge", "journal": "", "ref_id": "b34", "title": "Image Style Transfer Using Convolutional Neural Networks", "year": "2016" }, { "authors": "Alexey Dosovitskiy; Thomas Brox", "journal": "", "ref_id": "b35", "title": "Generating Images with Perceptual Similarity Metrics based on Deep Networks", "year": "2016" }, { "authors": "Gerhard Neuhold", "journal": "", "ref_id": "b36", "title": "The mapillary vistas dataset for semantic understanding of street scenes", "year": "2017" }, { "authors": "Fisher Yu; Vladlen Koltun; Thomas Funkhouser", "journal": "", "ref_id": "b37", "title": "Dilated residual networks", "year": "2017" }, { "authors": "Justin Theiss", "journal": "Springer", "ref_id": "b38", "title": "Unpaired Image Translation via Vector Symbolic Architectures", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 473.67, 707.25, 86.54, 13.14 ], "formula_id": "formula_0", "formula_text": "D S = {(x S i , m S i )} N S" }, { "formula_coordinates": [ 3, 131.23, 257.67, 71.67, 13.33 ], "formula_id": "formula_1", "formula_text": "D R = {x R j } N R j=1 ." }, { "formula_coordinates": [ 4, 48.96, 406.79, 258.97, 42.26 ], "formula_id": "formula_2", "formula_text": "L adv D l = -E x R log(D l (φ l (x R ))) -E m S log(1 -D l (φ l (x R ))) , L adv D 0 = -E x R log(Du(x R )) -E m S log(1 -Du(x R )) .(1)" }, { "formula_coordinates": [ 4, 60.86, 473.5, 239.16, 27.03 ], "formula_id": "formula_3", "formula_text": "L adv G = -E m S log(Du(x R )) + L l=1 log(D l (φ l (x R ))) .(2)" }, { "formula_coordinates": [ 4, 348.51, 247.77, 210.66, 24.51 ], "formula_id": "formula_4", "formula_text": "L LPIPS = l φ l (P(x S )) -φ l (P(x R )) 2 , (3" }, { "formula_coordinates": [ 4, 559.16, 252.46, 3.87, 8.64 ], "formula_id": "formula_5", "formula_text": ")" } ]
2023-05-16
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b11", "b12", "b17", "b18", "b19", "b20", "b26" ], "table_ref": [], "text": "Text detection and recognition systems [11] and geometric layout analysis techniques [12,13] have long been developed separately as independent tasks. Research on text detection and recognition [14,15,16,17] has mainly focused on the domain of natural images and aimed at single level text spotting (mostly, wordlevel). Conversely, research on geometric layout analysis [12,13,18,19], which is targeted at parsing text paragraphs and forming text clusters, has assumed document images as input and taken OCR results as fixed and given by independent systems. The synergy between the two tasks remains largely under-explored.\nRecently, the Unified Detector work by Long et al. [20] shows that the unification of line-level detection of text and geometric layout analysis benefits both tasks significantly. StructuralLM [21] and LayoutLMv3 [27] show that text line grouping signals are beneficial to the downstream task of document understanding and are superior to word-level bounding box signals. These initial studies demonstrate that the unification of OCR and layout analysis, which we term as Hierarchical Text Detection and Recognition (HTDR), can be mutually beneficial to OCR, layout analysis, and downstream tasks.\nGiven the promising potential benefits, we propose the ICDAR 2023 Competition on Hierarchical Text Detection and Recognition. In this competition, candidate systems are expected to perform the unified task of text detection and recognition and geometric layout analysis. Specifically, we define the unified task as producing a hierarchical text representation, including word-level bounding boxes and text transcriptions, as well as line-level and paragraph-level clustering of these word-level text entities. We defer the rigorous definitions of word / line / paragraph later to the dataset section. Fig. 1 illustrates our notion of the unified task. We believe this competition will have profound and long-term impact on the whole image-based text understanding field by unifying the efforts of text detection and recognition and geometric layout analysis, and furthering providing new signals for downstream tasks.\nThe competition started on January 2nd 2023, received more than 50 submissions in 2 tasks in total, and closed on April 1st 2023. This report provides details into the motivation, preparation, and results of the competition. We believe the success of this competition greatly promotes the development of this research field. Furthermore, the dataset except the test set annotation and evaluation script are made publicly available. The competition website1 remains open to submission and provides evaluation on the test set." }, { "figure_ref": [], "heading": "Competition Protocols", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Dataset", "publication_ref": [ "b19", "b27", "b19", "b28", "b29" ], "table_ref": [], "text": "The competition is based on the HierText dataset [20]. Images in HierText are collected from the Open Images v6 dataset [28], by first applying the Google Cloud Platform (GCP) Text Detection API 2 and then filtering out inappropriate images, for example those with too few text or non-English text. In total, 11639 images are obtained. 
In this competition, we follow the original split of 8281/1724/1634 for train, validation, test sets. Images and annotations of the train and validation set are released publicly. The test set annotation is kept private and will remain so even after the end of the competition.\nAs noted in the original paper [20], we check the cross-dataset overlap rates with the two other OCR datasets that are based on Open Images. We find that 1.5% of the 11639 images we have are also in TextOCR [29] and 3.6% in Intel OCR [30]. Our splits ensure that our training images are not in the validation or test set of Text OCR and Intel OCR, and vice versa. The images are annotated in a hierarchical way of word -to-line-to-paragraph, as shown in Fig. 2. Words are defined as a sequence of textual characters not interrupted by spaces. Lines are then defined as space-separated clusters of words that are logically connected and aligned in spatial proximity. Finally, paragraphs are composed of lines that belong to the same semantic topic and are geometrically coherent. Fig. 3 illustrates some annotated samples. Words are annotated with polygons, with 4 vertices for straight text and more for curved text depending on the shape. Then, words are transcribed regardless of the scripts and languages, as long as they are legible. Note that we do not limit the character sets, so the annotation could contain case-sensitive characters, digits, punctuation, as well as non-Latin characters such as Cyrillic and Greek. After word-level annotation, we group words into lines and then group lines into paragraphs. In this way, we obtain a hierarchical annotation that resembles a forest structure of the text in an image. " }, { "figure_ref": [ "fig_0", "fig_4", "fig_5" ], "heading": "Tasks", "publication_ref": [ "b30", "b19", "b28" ], "table_ref": [], "text": "Our challenge consists of 2 competition tracks, Hierarchical Text Detection and Word-Level End-to-End Text Detection and Recognition. In the future, we plan to merge them into a single unified Hierarchical Text Spotting task that requires participants to give a unified representation of text with layout. In this task, participants are provided with images and expected to produce the hierarchical text detection results. Specifically, the results are composed of word-level bounding polygons and line and paragraph clusters on top of words. The clusters are represented as forests, as in Fig. 1, where each paragraph is a tree and words are leaves. For this task, participants do not need to provide text recognition results. As illustrated in Fig. 4, we evaluate this task as 3 instance segmentation sub-tasks for word, line, and paragraph respectively. For word level, each word is one instance. For line level, we take the union of each line's children words as one instance. For paragraph level, we aggregate each paragraph's children lines, and take that as one instance. With this formulation, all the 3 sub-tasks will be evaluated with the PQ metric [31] designed for instance segmentation, as specified in [20]:\nP Q = (p,g)∈T P IoU (p, g) |T P | + 1 2 |F P | + 1 2 |F N |(1)\nwhere T P, F P, F N represent true positives, false positives, and false negatives respectively. We use an IoU threshold of 0.5 to count true positives. Note that the PQ metric is mathematically equal to the product of the Tightness score, which is defined as the average IoU scores of all TP pairs, and the F1, score which is commonly used in previous OCR benchmarks. 
Previous OCR evaluation protocols only report F1 scores which do not fully reflect the detection quality. We argue that tightness is very important in evaluating hierarchical detection. It gives an accurate measurement of how well detections match ground-truths. For words, a detection needs to enclose all its characters and not overlap with other words, so that the recognition can be correct. The tightness score can penalize missing characters and oversized boxes. For lines and paragraphs, they are represented as clusters of words, and are evaluated as unions of masks. Wrong clustering of words can also be reflected in the IoU scores for lines and paragraphs. In this way, using the PQ score is an ideal way to accurately evaluate the hierarchical detection task. Each submission has 3 PQ scores for word, line, and paragraph respectively. There are 3 rankings for these 3 sub-tasks respectively. For the final ranking of the whole task, we compute the final score as a harmonic mean of the 3 PQ scores (dubbed H-PQ) and rank accordingly.\nTask 2: Word-Level End-to-End Text Detection and Recognition For this task, images are provided and participants are expected to produce wordlevel text detection and recognition results, i.e. a set of word bounding polygons and transcriptions for each image. Line and paragraph clustering is not required. This is a challenging task, as the dataset has the most dense images, with more than 100 words per image on average, 3 times as many as the second dense dataset TextOCR [29]. It also features a large number of recognizable characters. In the training set alone, there are more than 960 different character classes, as shown in Fig. 5, while most previous OCR benchmarks limit the tasks to recognize only digits and case-insensitive English characters. These factors make this task challenging.\nFor evaluation, we use the F1 measure, which is a harmonic mean of wordlevel prediction and recall. A word result is considered true positive if the IoU with ground-truth polygon is greater or equal to 0.5 and the transcription is the same as the ground-truth. The transcription comparison considers all characters and will be case-sensitive. Note that, some words in the dataset are marked as illegible words. Detection with high overlap with these words (IoU larger than 0.5) will be removed in the evaluation process, and ground-truths marked as illegible do not count as false negative even if they are not matched. " }, { "figure_ref": [], "heading": "Evaluation and Competition Website", "publication_ref": [], "table_ref": [], "text": "We host the competition on the widely recognized Robust Reading Competition (RRC) website3 and set up our own competition page. The RRC website has been the hub of scene text and document understanding research for a long time and hosted numerous prestigious competitions. It provides easy-to-use infrastructure to set up competition, tasks, and carry out evaluation. It also supports running the competition continuously, making it an ideal candidate." }, { "figure_ref": [], "heading": "Competition Schedule", "publication_ref": [], "table_ref": [], "text": "We propose and execute the following competition schedule, in accordance with the conference timeline:\n-January 2nd, 2023: Start of the competition; submissions of results were enabled on the website. -April 1st, 2023: Deadline for competition submissions.\n-April 15th, 2023: Announcement of results." 
}, { "figure_ref": [], "heading": "Other Competition Rules", "publication_ref": [ "b21", "b22", "b23", "b24", "b25" ], "table_ref": [], "text": "In addition to the aforementioned competition specifications, we also apply the following rules:\n-Regarding the usage of other publicly available datasets: HierText is the only allowed annotated OCR dataset. However, participants are also allowed to do self-labeling on other public OCR datasets as long as they don't use their ground-truth labels. In other words, they can use the images of other public datasets, but not their labels. They can also use non-OCR datasets, whether labeled or not, to pretrain their models. We believe they are important techniques that can benefit this field. -Usage of synthetic datasets Synthetic data has been an important part of OCR recently [22,23,24,25,26]. Participants can use any synthetic datasets, whether they are public or private, but are expected to reveal how they are synthesized and some basic statistics of the synthetic datasets if they are private. -Participants should not use the validation split in training their models.\n-Participants can make as many submissions as desired before the deadline, but we only archive the latest one submission of each participant in the final competition ranking." }, { "figure_ref": [], "heading": "Organizer Profiles", "publication_ref": [], "table_ref": [], "text": "Authors are all members of the OCR team at Google Research. In addition to academic publications, authors have years of experience in building industrial OCR systems that are accurate and efficient for a diversity of image types and computation platforms." }, { "figure_ref": [ "fig_6" ], "heading": "Competition Results", "publication_ref": [], "table_ref": [], "text": "In total, the competition received 30 submissions in Task 1 and 20 submissions in Task 2. Note that, we encourage participants to submit multiple entries using different methods, for example, to understand the effect of applying different techniques such as pretraining and synthetic data. To produce the final leaderboard in compliance with the ICDAR competition protocols, we only keep the latest 1 submission from each participants. The final deduplicated competition results are summarized in Tab. 1 / Fig. 6 and Tab. 2 / Fig. 7. In total, the competition received 11 unique submissions in Task 1 and 7 in Task 2. " }, { "figure_ref": [], "heading": "Submission Validation", "publication_ref": [], "table_ref": [], "text": "In the final leaderboard, each participant is only allowed to have one submission. We validate each submission and examine the number of submissions from each team. If a team has more than one submission, we keep the latest one and remove the rest from the leaderboard. Note that these removed submissions will remain on the RRC portal for reference, since they also provide important aspects into this research field. We adopt the following rules to determine the authorship of each submission:\n-user_id: If two submissions have the same user_id field, it means they are submitted by the same RRC user account and thus should be from the same team. -method description: Participants are asked to provide descriptive information of their submissions, including authors, method details, etc. If two submissions have strictly almost identical author list and method description, we consider them to be from the same team." 
}, { "figure_ref": [], "heading": "Task 1 Methodology", "publication_ref": [ "b19", "b26", "b19", "b7", "b9", "b24", "b6", "b5", "b4", "b9" ], "table_ref": [], "text": "Task 1 in our competition, i.e. Hierarchical Text Detection, is a novel task in the research field. There are no existing methods that participants can refer to. Even the previous work Unified Detector [20] can only produce line and paragraph outputs but no word-level results. Among the 8 submissions in Task 1 which have disclosed their methods, we observed that 5 of them develop 'multi-head plus postprocessing' systems. These methods treat words, lines, and paragraphs as generic objects, and train detection or segmentation models to localize these three levels of text entities in parallel with separate prediction branches for each level. In the post-processing, they use IoU-based rules to build the hierarchy in the post-processing step, i.e. assigning words to lines and lines to paragraphs. The most of the top ranking solutions belong to this type of methods. One submission (from the SCUT-HUAWEI team) adopts a cascade pipeline, by first detecting words and then applying LayoutLMv3 [27] to cluster words into lines and paragraphs. The Hierarchical Transformers for Text Detection method develops a unified detector similar to [20] for line detection and paragraph grouping and also a line-to-word detection model that produces bounding boxes for words.\nHere we briefly introduce the top 2 methods in this task: Upstage KR team ranks 1st place in Task 1, achieving an H-PQ metric of 76.85%. It beats the second place by almost 6% in the H-PQ metric. They implemented a two-step approach to address hierarchical text detection. First, they performed multi-class semantic segmentation where classes were word, line, and paragraph regions. Then, they used the predicted probability map to extract and organize these entities hierarchically. Specifically, an ensemble of UNets with ImageNet-pretrained EfficientNetB7[9] / MitB4 [8] backbones was utilized to extract class masks. Connected components were identified in the predicted mask to separate words from each other, same for lines and paragraphs. Then, a word was assigned as a child of a line if the line had the highest IoU with the word compared to all other lines. This process was similarly applied to lines and paragraphs. For training, they eroded target entities and dilated predicted entities. Also, they ensured that target entities maintained a gap between them. They used symmetric Lovasz loss [10] and pre-trained their models on the SynthText dataset [25]. DeepSE X Upstage HK team ranks 2nd in the leaderboard. They fundamentally used DBNet [7] as the scene text detector, and leveraged the oCLIP [6] pretrained Swin Transformer-Base [5] model as the backbone to make direct predictions at three different levels. Following DBNet, they employed Balanced Cross-Entropy for binary map and L1 loss for threshold map. The authors also further fine-tuned the model with lovasz loss [10] for finer localization." }, { "figure_ref": [], "heading": "Task 2 Methodology", "publication_ref": [ "b15", "b1", "b0", "b3", "b25", "b0" ], "table_ref": [], "text": "Task 2, i.e. Word-Level End-to-End Text Detection and Recognition, is a more widely studied task. Recent research [16,2] focuses on building end-to-end trainable OCR models, as opposed to separately trained detection and recognition models. It's widely believed that end-to-end models enjoy shared feature extraction which leads to better accuracy. 
{ "figure_ref": [], "heading": "Task 2 Methodology", "publication_ref": [ "b15", "b1", "b0", "b3", "b25", "b0" ], "table_ref": [], "text": "Task 2, i.e. Word-Level End-to-End Text Detection and Recognition, is a more widely studied task. Recent research [16,2] focuses on building end-to-end trainable OCR models, as opposed to separately trained detection and recognition models. It is widely believed that end-to-end models enjoy shared feature extraction, which leads to better accuracy. However, the results of our competition say otherwise. The top 2 methods, by the Upstage KR team and the DeepSE End-to-End Text Detection and Recognition Model team, both use separately trained detection and recognition models. There are two end-to-end submissions; the unified_model team applies a text recognizer with a deformable attention decoder and ranks 3rd place. Here we briefly introduce the top 2 methods in this task: Upstage KR team uses the same Task 1 method for detecting words. For word-level text recognition, they use the ParSeq [1] model but replace the visual feature extractor with SwinV2 [4]. The text recognizer is pretrained with synthetic data before being fine-tuned on the HierText dataset. They use an in-house synthetic data generator derived from the open-source SynthTiger [26] to generate word images from English and Korean corpora. Notably, they generate 5M English/Korean word images with vertical layout, in addition to 10M English/Korean word images with horizontal layout. For the final submission, they use an ensemble of three text recognizers for strong and stable performance. DeepSE End-to-End Text Detection and Recognition Model team also uses the ParSeq [1] model as their recognizer. They point out that, in order to make the data domain consistent between the training and inference stages, they run their detector on the training data and then crop words using the detected boxes. This step is important in adapting the training domain to the inference domain, and the trick noticeably improves their model's performance." }, { "figure_ref": [ "fig_7" ], "heading": "Discussion", "publication_ref": [ "b19" ], "table_ref": [], "text": "In the Hierarchical Text Detection task, the original Unified Detector [20] only achieves PQ scores of 48.21%, 62.23%, and 53.60% on words, lines, and paragraphs respectively. The H-PQ score for Unified Detector is only 54.08%, which would rank 10th place if placed on the competition leaderboard. The winning solution exceeds Unified Detector by more than 20%. These submissions greatly push the envelope of state-of-the-art hierarchical text detection methods. However, current methods are still not satisfactory. As shown in Fig. 6, we can easily notice that for all methods, word PQ scores are much higher than line PQ scores, and line PQ scores are in turn much higher than paragraph PQ scores. This indicates that line- and paragraph-level detection is still more difficult than word detection. Additionally, Fig. 8 shows that layout analysis performance is only marginally correlated with word detection performance, especially when outliers are ignored.\nWe believe there are still hidden challenges and opportunities for improvement in layout analysis. Furthermore, the winning solutions in our competition rely on post-processing, which can be complicated and error-prone. It is also important to improve end-to-end methods. Task 2 of our challenge is a standard yet unique end-to-end detection and recognition task. While it inherits the basic setting of an end-to-end task, it is based on a diverse set of images with high word density, and it has an unlimited character set. For this task, we see that most of the submissions are two-stage methods, where the detection and recognition models are trained separately and there is no feature sharing. These two-stage methods achieve much better performance than the end-to-end submissions. This contrasts with the trend in research papers that favor end-to-end trainable approaches with feature sharing between the two stages. 
Therefore, we believe the HierText dataset can be a very useful benchmark for end-to-end OCR research. Another interesting observation for Task 2 is that, while most submissions achieve a tightness score of around 80%, the correlation between tightness scores and F1 scores is very low, with a correlation coefficient of 0.06. This could indicate that recognition is less sensitive to the accuracy of bounding boxes once it surpasses a certain threshold, which would mean that the mainstream training objective of maximizing bounding box IoU might not be the optimal target. For example, a slightly oversized bounding box is better than a small one that might miss some characters. With that said, a precise bounding box is still useful in itself, as it indicates accurate localization. Another potential reason is that bounding box annotation is not always accurate: boxes tend to be oversized because text is not strictly rectangular." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper summarizes the organization and results of the ICDAR 2023 Competition on Hierarchical Text Detection and Recognition. We share details of the competition motivation, dataset collection, competition organization, and result analysis. In total, we received 18 valid and unique competition entries, showing great interest from both the research community and industry. We keep the competition submission site open to promote research into this field. We also plan to extend and improve this competition, for example by adding multilingual data." } ]
We organize a competition on hierarchical text detection and recognition. The competition is aimed to promote research into deep learning models and systems that can jointly perform text detection and recognition and geometric layout analysis. We present details of the proposed competition organization, including tasks, datasets, evaluations, and schedule. During the competition period (from January 2nd 2023 to April 1st 2023), at least 50 submissions from more than 20 teams were made in the 2 proposed tasks. Considering the number of teams and submissions, we conclude that the HierText competition has been successfully held. In this report, we will also present the competition results and insights from them.
ICDAR 2023 Competition on Hierarchical Text Detection and Recognition
[ { "figure_caption": "Fig. 1 .1Fig. 1. Illustration for the proposed unified task: Hierarchical Text Detection and Recognition (HTDR). Given an input image, the unified model is expected to produce a hierarchical text representation, which resembles the form of a forest. Each tree in the forest represents one paragraph and has three layers, representing the clustering of words into lines and then paragraphs.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Example of hierarchical annotation format of the dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Illustration for the hierarchical annotation of text in images. From left to right: word, line, paragraph level annotations. Words (blue) are annotated with polygons. Lines (green) and paragraphs (yellow) are annotated as hierarchical clusters and visualized as polygons. Images are taken from the train split.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Task 1 :1Hierarchical Text Detection This task itself is formulated as a combination of 3 tasks: word detection, text line detection, and paragraph detection, where lines and paragraphs are represented as clusters of words hierarchically.", "figure_data": "", "figure_id": "fig_3", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Illustration of how hierarchical text detection can be evaluated as 3 instance segmentation sub-tasks. The coloring of each column indicates the instance segmentation for each sub-task.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Character set in the training split.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Figure for the results of task 2.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Correlation between text levels. Each dot is a submission in the Task 1. Left: Correlation between word PQ and line PQ. 
Right: Correlation between word PQ and paragraph PQ.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Results", "figure_data": "UserMethodTask 1 metric Rank H-PQ PQ FWord PRTPQ FLine PRTPQ FParagraph P RTYunSu KimUpstage KR176.85 79.80 91.88 94.73 89.20 86.85 76.40 88.34 91.32 85.56 86.48 74.54 86.15 87.40 84.94 86.52DeepSE x UpstageDeepSE hierarchical detection model270.96 75.30 88.49 93.50 83.99 85.10 69.43 82.43 82.65 82.21 84.23 68.51 81.39 81.69 81.10 84.17zhmhiertext_submit_0401 curve_199_v2370.31 76.71 88.18 92.71 84.08 86.99 71.43 83.32 89.32 78.07 85.73 63.97 74.83 81.25 69.35 85.48Mike RanzingerNVTextSpotter468.82 73.69 87.07 95.10 80.29 84.63 67.76 80.42 93.87 70.35 84.25 65.51 78.04 81.82 74.60 83.94Ensemble of threessmtask-specific568.72 71.54 92.03 93.82 90.31 77.74 69.64 89.04 91.75 86.49 78.21 65.29 83.70 84.17 83.23 78.01Clova DEER detectionGlobal and local instancexswlsegmentations for668.62 76.16 90.72 93.45 88.16 83.95 68.50 82.22 80.24 84.31 83.31 62.55 75.11 74.00 76.25 83.28hierarchical text detectionAsaf GendlerHierarchical Transformers for Text Detection767.59 70.44 86.09 88.47 83.83 81.82 69.30 85.23 87.83 82.78 81.31 63.46 78.40 77.84 78.97 80.94JiangQingSCUT-HUAWEI862.68 70.08 89.58 89.79 89.37 78.23 67.70 86.20 90.46 82.33 78.53 53.14 69.06 74.03 64.72 76.96Jiawei WangDQ-DETR927.81 61.01 77.27 80.64 74.17 78.96 26.96 35.91 26.81 54.39 75.07 18.38 24.72 15.99 54.41 74.36ZiqianShaotest1021.94 27.45 41.75 51.82 34.95 65.76 25.61 39.04 51.50 31.43 65.59 16.32 24.52 35.61 18.70 66.57Yichuan Chenga110.00 0.00 0.00 0.24 0.00 53.62 0.01 0.01 0.25 0.01 51.29 0.01 0.02 0.21 0.01 50.89", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results for Task 2. F/P/R/T/PQ stand for F1-score, Precision, Recall, Tightness, and Panoptic Quality respectively. The submissions are ranked by the F1 score. We omit the % for all these numbers for simplicity.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Shangbang Long; Siyang Qin; Dmitry Panteleev; Alessandro Bissacco; Yasuhisa Fujii; Michalis Raptis
[ { "authors": "Darwin Bautista; Rowel Atienza", "journal": "Springer Nature Switzerland", "ref_id": "b0", "title": "Scene text recognition with permuted autoregressive sequence models", "year": "2022" }, { "authors": "Maoyuan Ye", "journal": "", "ref_id": "b1", "title": "DeepSolo: Let Transformer Decoder with Explicit Points Solo for Text Spotting", "year": "2022" }, { "authors": "Seonghyeon Kim", "journal": "", "ref_id": "b2", "title": "DEER: Detection-agnostic End-to-End Recognizer for Scene Text Spotting", "year": "2022" }, { "authors": "Ze Liu", "journal": "", "ref_id": "b3", "title": "Swin transformer v2: Scaling up capacity and resolution", "year": "2022" }, { "authors": "Ze Liu", "journal": "", "ref_id": "b4", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Chuhui Xue", "journal": "Springer Nature Switzerland", "ref_id": "b5", "title": "Language matters: A weakly supervised vision-language pretraining approach for scene text detection and spotting", "year": "2022" }, { "authors": "Minghui Liao", "journal": "", "ref_id": "b6", "title": "Real-time scene text detection with differentiable binarization", "year": "2020" }, { "authors": "Enze Xie", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "SegFormer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Mingxing Tan; Quoc Le", "journal": "PMLR", "ref_id": "b8", "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "year": "2019" }, { "authors": "Maxim Berman; Amal Rannen Triki; Matthew B Blaschko", "journal": "", "ref_id": "b9", "title": "The lovászsoftmax loss: A tractable surrogate for the optimization of the intersection-overunion measure in neural networks", "year": "2018" }, { "authors": "Shangbang Long; Xin He; Cong Yao", "journal": "International Journal of Computer Vision", "ref_id": "b10", "title": "Scene text detection and recognition: The deep learning era", "year": "2021" }, { "authors": "Joonho Lee", "journal": "IEEE", "ref_id": "b11", "title": "Page segmentation using a convolutional neural network with trainable co-occurrence features", "year": "2019" }, { "authors": "Xiao Yang", "journal": "", "ref_id": "b12", "title": "Learning to extract semantic structure from documents using multimodal fully convolutional neural networks", "year": "2017" }, { "authors": "Roi Ronen", "journal": "", "ref_id": "b13", "title": "GLASS: Global to Local Attention for Scene-Text Spotting", "year": "2022" }, { "authors": "Shangbang Long", "journal": "", "ref_id": "b14", "title": "Textsnake: A flexible representation for detecting text of arbitrary shapes", "year": "2018" }, { "authors": "Siyang Qin", "journal": "", "ref_id": "b15", "title": "Towards unconstrained end-to-end text spotting", "year": "2019" }, { "authors": "Yair Kittenplon", "journal": "", "ref_id": "b16", "title": "Towards Weakly-Supervised Text Spotting using a Multi-Task Transformer", "year": "2022" }, { "authors": "Shuang Liu", "journal": "Springer", "ref_id": "b17", "title": "Unified Line and Paragraph Detection by Graph Convolutional Networks", "year": "2022" }, { "authors": "Renshen Wang; Yasuhisa Fujii; Ashok C Popat", "journal": "", "ref_id": "b18", "title": "Post-ocr paragraph recognition by graph convolutional networks", "year": "2022" }, { "authors": "Shangbang Long", "journal": "", "ref_id": "b19", "title": "Towards End-to-End Unified Scene Text Detection and Layout 
Analysis", "year": "2022" }, { "authors": "Chenliang Li", "journal": "", "ref_id": "b20", "title": "StructuralLM: Structural Pre-training for Form Understanding", "year": "2021" }, { "authors": "Shangbang Long; Cong Yao", "journal": "", "ref_id": "b21", "title": "Unrealtext: Synthesizing realistic scene text images from the unreal world", "year": "2020" }, { "authors": "Minghui Liao", "journal": "Science China Information Sciences", "ref_id": "b22", "title": "SynthText3D: synthesizing scene text images from 3D virtual worlds", "year": "2020" }, { "authors": "Max Jaderberg", "journal": "", "ref_id": "b23", "title": "Synthetic data and artificial neural networks for natural scene text recognition", "year": "2014" }, { "authors": "Ankush Gupta; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b24", "title": "Synthetic data for text localisation in natural images", "year": "2016" }, { "authors": "Moonbin Yim", "journal": "Springer International Publishing", "ref_id": "b25", "title": "SynthTIGER: synthetic text image GEneratoR towards better text recognition models", "year": "2021" }, { "authors": "Yupan Huang", "journal": "", "ref_id": "b26", "title": "LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking", "year": "2022" }, { "authors": "Alina Kuznetsova", "journal": "International Journal of Computer Vision", "ref_id": "b27", "title": "The open images dataset v4", "year": "2020" }, { "authors": "Amanpreet Singh", "journal": "", "ref_id": "b28", "title": "TextOCR: Towards large-scale end-to-end reasoning for arbitrary-shaped scene text", "year": "2021" }, { "authors": "Ilya Krylov; Sergei Nosov; Vladislav Sovrasov", "journal": "PMLR", "ref_id": "b29", "title": "Open images v5 text annotation and yet another mask text spotter", "year": "2021" }, { "authors": "Alexander Kirillov", "journal": "", "ref_id": "b30", "title": "Panoptic segmentation", "year": "2019" } ]
[ { "formula_coordinates": [ 6, 241.45, 211.15, 239.14, 27.53 ], "formula_id": "formula_0", "formula_text": "P Q = (p,g)∈T P IoU (p, g) |T P | + 1 2 |F P | + 1 2 |F N |(1)" } ]
10.1145/1143844.1143874
2023-05-16
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b30", "b0", "b11", "b17", "b36", "b22", "b29", "b21", "b19", "b31", "b1", "b33", "b37", "b27", "b32", "b5" ], "table_ref": [], "text": "With improvement in healthcare technologies, electronic health records (EHRs) are being used to monitor intensive care units (ICUs) in hospitals. Since it is crucial to schedule appropriate treatments for patients in ICUs, there are many prognostic models that use EHRs to address related tasks, such as inhospital mortality prediction. EHRs consist of three types of data; structured, semi-structured, and unstructured. Clinical notes, which are unstructured data, contain valuable comments or summary of the patient's condition written by medical professionals (doctors, nurses, etc.). However, compared to structured data, clinical notes have been underutilized in previous studies due to the difficult-to-understand contents and the complex hierarchies (Figure 1(a)).\nTransformer-based (Vaswani et al., 2017) methods like ClinicalBERT (Alsentzer et al., 2019;Huang et al., 2019aHuang et al., , 2020) ) have been proposed to pretrain on large-scale corpus from similar domains, and fine-tune on the clinical notes through transfer learning. While Transformer-based methods can effectively detect distant words compared to other sequence-based methods like convolutional neural networks (Kim, 2014;Zhang et al., 2015) and recurrent neural networks (Mikolov et al., 2010;Tai et al., 2015;Liu et al., 2016), there are still limitations of increasing computational complexity for long clinical notes (Figure 2). Recently, with the remarkable success of the graph neural networks (GNNs) (Kipf and Welling, 2017;Veličković et al., 2018;Brody et al., 2021), graph-based document classification methods have been proposed (Yao et al., 2019;Huang et al., 2019b) that can capture long range word dependencies and can be adapted to documents with different and irregular lengths. Some methods build word co-occurrence graphs by sliding fixed-size windows to model pairwise interactions between words (Zhang et al., 2020;Piao et al., 2022;Wang et al., 2022). However, the density of the graph increases as the document becomes longer. Besides, there are also some methods apply hypergraph for document classification (Ding et al., 2020;Zhang et al., 2022a), which can alleviate the high density of the document graphs and extract high-order structural information of the documents.\nAdopting hypergraphs can reduce burden for managing long documents with irregular lengths, but additional issues remain when dealing with clinical notes: (1) Neutral words deteriorate clinical semantic information. In long clinical notes, there are many frequently written neutral words (e.g. \"rhythm\") that do not directly represent the patient's condition. Most of the previous methods treat all words equally at the learning stage, which may result in dominance of frequent neutral words, and negligence of rare keywords that are directly related to the patient's condition. Meanwhile, the neutral word can occasionally augment information of rare keywords, depending on the intra-taxonomy context. Taxonomy represents the category of the clinical notes, where implicit semantic meaning of the words can differ. For example, \"rhythm\" occurred with \"fibrillation\" in ECG taxonomy can represent serious cardiac disorder of a patient, but when \"rhythm\" is written with \"benadryl\" in Nursing taxonomy, it can hardly represent the serious condition. 
Therefore, assembling intra-taxonomy related words can leverage "useful" neutral words with rare keywords to jointly augment the clinical semantic information, which implies the necessity of introducing taxonomy-level hyperedges.\n(2) Imbalanced distribution of multi-level hyperedges. There are a small number of taxonomies compared to notes for each patient. As a result, when taxonomy-level and note-level information are learned simultaneously, note-level information can obscure taxonomy-level information. To learn more balanced multi-level information from the clinical notes, an effective way of learning multi-level hypergraphs with an imbalanced hyperedge distribution is required." }, { "figure_ref": [], "heading": "Multi-level Hypergraph Construction", "publication_ref": [ "b2", "b28", "b16", "b4", "b0", "b6", "b25", "b11" ], "table_ref": [], "text": "Figure 2: Advantages of the proposed model, compared to sequence, graph and hypergraph based models. N and E denote the number of nodes and edges respectively. We address issues of complexity and different lengths by adopting the hypergraph to represent each patient. Our model retains semantic information by constructing a multi-level hypergraph (Section 3.2), and hierarchical message passing layers (Section 3.3) are proposed for balancing multi-level knowledge for patient representation learning. [Figure 2 also tabulates per-model complexity: Sequence O(N²), Graph O(E), Hypergraph O(N), Ours O(N).]\nTo address the above issues, we propose TM-HGNN (Taxonomy-aware Multi-level HyperGraph Neural Networks), which can effectively and efficiently utilize multi-level high-order semantic information for patient representation learning. Specifically, we adopt patient-level hypergraphs to manage highly unstructured and long clinical notes and define multi-level hyperedges, i.e., note-level and taxonomy-level hyperedges. Moreover, we conduct hierarchical message passing from note-level to taxonomy-level hyperedges using edge-masking. To hierarchically learn word embeddings without mixing information between the note and taxonomy levels, note and taxonomy hyperedges are disconnected. Note-level word embeddings are learned only with intra-note local information. The following taxonomy-level propagation introduces clinical semantic information by assembling intra-taxonomy words and separating inter-taxonomy words for better patient-level representation learning. The contributions of this article can be summarized as follows (Figure 2):\n• To address issue 1, we construct multi-level hypergraphs for patient-level representation learning, which can assemble "useful" neutral words with rare keywords via note- and taxonomy-level hyperedges to retain the clinical semantic information. (Choi et al., 2016;Shang et al., 2019) or utilize clinical notes combined with time-series data (Khadanga et al., 2019;Deznabi et al., 2021). Recently, approaches have focused on clinical notes, adopting pre-trained models such as BERT-based (Alsentzer et al., 2019;Huang et al., 2019a;Golmaei and Luo, 2021;Naik et al., 2022) and XLNet-based (Huang et al., 2020) models, or utilizing contextualized phenotypic features extracted from clinical notes (Zhang et al., 2022b)."
}, { "figure_ref": [], "heading": "Graph Neural Networks for Document Classification", "publication_ref": [ "b19", "b31", "b1", "b33", "b33", "b37", "b33", "b19", "b32", "b27", "b5", "b30" ], "table_ref": [], "text": "Graph neural networks (Kipf and Welling, 2017;Veličković et al., 2018;Brody et al., 2021) have achieved remarkable success in various deep learning tasks, including text classification. Initially, transductive graphs have been applied to documents, such as TextGCN (Yao et al., 2019). Transductive models have to be retrained for every renewal of the data, which is inefficient and hard to generalize (Yao et al., 2019;Huang et al., 2019b).\nFor inductive document graph learning, word cooccurrence graphs initialize nodes with word embeddings and exploit pairwise interactions between words. TextING (Zhang et al., 2020) employs the gated graph neural networks for documentlevel graph learning. Following TextGCN (Yao et al., 2019) which applies graph convolutional networks (GCNs) (Kipf and Welling, 2017) in transductive level corpus graph, InducT-GCN (Wang et al., 2022) applies GCNs in inductive level where unseen documents are allowed to use. TextSSL (Piao et al., 2022) captures both local and global structural information within graphs.\nHowever, the density of word co-occurrence graph increases as the document becomes longer, since the fixed-sized sliding windows are used to capture local pairwise edges. In case of hypergraph neural networks, hyperedges connect multiple number of nodes instead of connecting words to words by edges, which alleviates the high density of the text graphs. HyperGAT (Ding et al., 2020) proposes document-level hypergraphs with hyperedges containing sequential and semantic information. HEGEL (Zhang et al., 2022a) applies Transformer-like (Vaswani et al., 2017) multi-head attention to capture high-order cross-sentence relations for effective summarization of long documents. According to the reduced computational complexity for long documents (Figure 2), we adopt hypergraphs to represent patient-level EHRs with clinical notes. Considering issues of existing hypergraph-based methods (Figure 2), we construct multi-level hypergraphs at note-level and taxonomy-level for each patient. The constructed graphs are fed into hierarchical message passing layers to capture rich hierarchical information of the clinical notes, which can augment semantic information for patient representation learning." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Definition", "publication_ref": [], "table_ref": [], "text": "Our task is to predict in-hospital-mortality for each patient using a set of clinical notes. Given a patient p ∈ P with in-hospital-mortality label y ∈ Y, patient p owns a list of clinical notes N p = [n t 1 1 , ..., n t k j , ...], and each clinical note n t ∈ N p with taxonomy t ∈ T p contains a sequence of words W n t = [w n t 1 , ..., w n t i , ...], where j, k and i denote the index of clinical note n, taxonomy t and word w of patient p. The set of taxonomies can be represented by T = {t 1 , t 2 , ..., t k , ...}.\nOur goal is to construct individual multi-level hypergraphs G p for each patient p and learn patientlevel representation G p with the multi-level knowledge by hierarchical message passing layers for in-hospital-mortality prediction tasks. Since our model is trained by inductive learning, patient p is omitted throughout the paper. 
" }, { "figure_ref": [], "heading": "Note-level Hyperedges", "publication_ref": [], "table_ref": [], "text": "Taxonomy-level Hyperedges \n•••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• 𝒉 𝒆𝒕 (𝟎) 𝒉 𝒆𝒏(" }, { "figure_ref": [], "heading": "Multi-Level Hypergraph Construction", "publication_ref": [], "table_ref": [], "text": "We construct multi-level hypergraphs for patientlevel representation learning, which can address the issues that are mentioned in introduction 1. A hypergraph G * = (V, E) consists of a set of nodes V and hyperedges E where multiple nodes can be connected to single hyperedge e ∈ E. A multi-level hypergraph G = {V, {E N ∪ E T }} is constructed from patient's clinical notes, where E N and E T denote note-level and taxonomy-level hyperedges, respectively. A word node v exists in note n with the taxonomy of t can be represented by {v ∈ n, n ∈ t}. A note-level hyperedge is denoted as e n , and a taxonomy-level hyperedge is denoted as e t ." }, { "figure_ref": [], "heading": "Multi-level Positional Encoding", "publication_ref": [ "b22", "b7", "b22" ], "table_ref": [], "text": "There are three types of entries in the multi-level hypergraph G, such as word nodes V, note-level hyperedges E N and taxonomy-level hyperedges E T . To distinguish these entries, we propose multi-level positional encoding to introduce more domain-specific meta-information to the hypergraph G. The function of multi-level positional encoding MPE(•) can be defined as:\nMPE(x) = [τ (x), IW (x), IN (x), IT (x)](1)\nwhere entry x ∈ {V, E N , E T }, and function τ : x → {0, 1, 2} maps entry x to a single type among nodes, note-level and taxonomy-level hy-peredges. Functions I W (•), I N (•), and I T (•) maps entry x to positions in the word, note and taxonomylevel, respectively. To initialize embedding of node v, we concatenate embedding MPE(v) from multilevel position encoding and word2vec (Mikolov et al., 2010) pre-trained embedding z v . Since shallow word embeddings are widely used to initialize node embeddings in graph-based document representation (Grohe, 2020), we use word2vec (Mikolov et al., 2010) embedding. A word node embedding h (0) v is constructed as follows:\nh (0) v = MPE(v) ⊕ zv,(2)\nwhere ⊕ denotes concatenation function." }, { "figure_ref": [], "heading": "Hyperedge Construction", "publication_ref": [], "table_ref": [], "text": "To extract multi-level information of patient-level representation using clinical notes, we construct patient hypergraphs with two types of hyperedges, one at the note-level hyperedge E N and the other at the taxonomy-level hyperedge E T . A word node v in note n with taxonomy t is assigned to one note-level hyperedge e n and one taxonomy-level hyperedge e t , which can be defined as:\nE(v) = {en, et|v ∈ n, n ∈ t} (3)\nNote-level Hyperedges We adopt linear embedding function f n and obtain the index embedding using I N (n). To preserve time-dependent sequential information of clinical note n, we simply add time information t(n) to the embedding. Then initial embedding of note-level hyperedge h (0)\nen with MPE(•) can be defined as:\nh (0) en = MPE(n) ⊕ f θ n IN (n), t(n) ,(4)\nwhere θ ∈ R d×d denotes the parameter matrix of function f n . 
Notably, we set the value of word index I W (n) as -1 since the note n represents higher level information than word v.\nTaxonomy-level Hyperedges Taxonomy-level hyperedges e t are constructed by taxonomy index I T (t) through linear layers f t concatenated with MPE(•) function, which can be defined as:\nh (0) e t = MPE(t) ⊕ f θ t IT (t) ,(5)\nwhere θ ∈ R d×d denotes the parameter matrix of function f t . Like note-level hyperedge, we set I W (t) and I N (t) as -1 since the level of taxonomy t is higher than the levels of note and word." }, { "figure_ref": [], "heading": "Hierarchical Message Passing", "publication_ref": [], "table_ref": [], "text": "To leverage the characteristics of two types of hyperedges, we propose a hierarchical hypergraph convolutional networks, composed of three layers that allow message passing from different types of hyperedges. In general, we define message passing functions for nodes and hyperedges as follows:\nFW (h, E, θ) = σ θ u∈E(v) 1 dv du hu ,(6)\nFτ (h, V τ , θ) = σ θ z∈V τ (e) 1 de dz hz ,(7)\nwhere F W denotes message passing function for word nodes and F τ denotes message passing function for hyperedges with type τ ∈ {1, 2}, i.e., note-level hyperedges and taxonomy-level hyperedges, respectively. Function F W updates word node embedding h v by aggregating embeddings of connected hyperedges E(v) . Function F τ updates hyperedge embedding h e by aggregating embeddings of connected word nodes V τ (e). σ is the nonlinear activation function such as ReLU, θ ∈ R d×d is the weight matrix with dimension d which can be differently assinged and learned at multiple levels.\nThen we can leverage these defined functions to conduct hierarchical message passing learning at the note level and at the taxonomy level. Initialization Layer Due to the complex structure of the clinical notes, the initial multi-level hypergraph constructed for each patient has a large variance. To prevent falling into local optima in advance, we first use an initialization layer to pre-train the entries of hypergraphs by learning the entire patient graph structure. In this layer, message passing functions are applied to all word nodes v ∈ V and hyperedges e ∈ E I = {E N ∪ E T }. Thus, embeddings of node v, hyperedges e n and e t at both levels can be defined as:\nhI (v) = FW h (0) v , EI(v), θI ,(8)\nhI (en) = Fτ h (0) en , V τ (en), θI , τ = 1 (9)\nhI (et) = Fτ h (0) e t , V τ (et), θI , τ = 2 (10)\nNote-level Message Passing Layer Then we apply note-level message passing layer on hypergraphs with only word nodes v ∈ V and note-level hyperedges e n ∈ E N , and the taxonomy-level hyperedges are masked during message passing. In this layer, the word nodes can only interact with note-level hyperedges, which can learn the intranote local information.\nhN (v) = FW hI (v), EN (v), θN ,(11)\nhN (en) = Fτ hI (en), V τ (en), θN , τ = 1,(12)\nhN (et) = hI (et)(13)\nTaxonomy-level Message Passing Layer The last layer is the taxonomy-level message passing layer, where all word nodes v ∈ V and taxonomylevel hyperedges e t ∈ E T can be updated. In this layer, we block the hyperedges at the note level. 
{ "figure_ref": [], "heading": "Hierarchical Message Passing", "publication_ref": [], "table_ref": [], "text": "To leverage the characteristics of the two types of hyperedges, we propose a hierarchical hypergraph convolutional network composed of three layers that allow message passing from different types of hyperedges. In general, we define the message passing functions for nodes and hyperedges as follows:\n$\mathcal{F}_W(h, \mathcal{E}, \theta) = \sigma\Big(\theta \sum_{u \in \mathcal{E}(v)} \frac{1}{\sqrt{d_v}\sqrt{d_u}} h_u\Big)$ (6)\n$\mathcal{F}_\tau(h, \mathcal{V}_\tau, \theta) = \sigma\Big(\theta \sum_{z \in \mathcal{V}_\tau(e)} \frac{1}{\sqrt{d_e}\sqrt{d_z}} h_z\Big)$ (7)\nwhere $\mathcal{F}_W$ denotes the message passing function for word nodes and $\mathcal{F}_\tau$ denotes the message passing function for hyperedges of type $\tau \in \{1, 2\}$, i.e., note-level and taxonomy-level hyperedges, respectively. The function $\mathcal{F}_W$ updates the word node embedding $h_v$ by aggregating the embeddings of connected hyperedges $\mathcal{E}(v)$. The function $\mathcal{F}_\tau$ updates the hyperedge embedding $h_e$ by aggregating the embeddings of connected word nodes $\mathcal{V}_\tau(e)$. $\sigma$ is a non-linear activation function such as ReLU, and $\theta \in \mathbb{R}^{d \times d}$ is a weight matrix with dimension $d$, which can be separately assigned and learned at each level.\nWe can then leverage these functions to conduct hierarchical message passing at the note level and at the taxonomy level. Initialization Layer Due to the complex structure of the clinical notes, the initial multi-level hypergraph constructed for each patient has a large variance. To prevent falling into local optima in advance, we first use an initialization layer to pre-train the entries of the hypergraph by learning the entire patient graph structure. In this layer, the message passing functions are applied to all word nodes $v \in \mathcal{V}$ and hyperedges $e \in \mathcal{E}_I = \{\mathcal{E}_N \cup \mathcal{E}_T\}$. Thus, the embeddings of node $v$ and hyperedges $e_n$ and $e_t$ at both levels can be defined as:\n$h_I(v) = \mathcal{F}_W\big(h_v^{(0)}, \mathcal{E}_I(v), \theta_I\big)$ (8)\n$h_I(e_n) = \mathcal{F}_\tau\big(h_{e_n}^{(0)}, \mathcal{V}_\tau(e_n), \theta_I\big), \ \tau = 1$ (9)\n$h_I(e_t) = \mathcal{F}_\tau\big(h_{e_t}^{(0)}, \mathcal{V}_\tau(e_t), \theta_I\big), \ \tau = 2$ (10)\nNote-level Message Passing Layer We then apply the note-level message passing layer on hypergraphs with only word nodes $v \in \mathcal{V}$ and note-level hyperedges $e_n \in \mathcal{E}_N$; the taxonomy-level hyperedges are masked during message passing. In this layer, the word nodes can only interact with note-level hyperedges, which learns the intra-note local information.\n$h_N(v) = \mathcal{F}_W\big(h_I(v), \mathcal{E}_N(v), \theta_N\big)$ (11)\n$h_N(e_n) = \mathcal{F}_\tau\big(h_I(e_n), \mathcal{V}_\tau(e_n), \theta_N\big), \ \tau = 1$ (12)\n$h_N(e_t) = h_I(e_t)$ (13)\nTaxonomy-level Message Passing Layer The last layer is the taxonomy-level message passing layer, where all word nodes $v \in \mathcal{V}$ and taxonomy-level hyperedges $e_t \in \mathcal{E}_T$ can be updated. In this layer, we block the note-level hyperedges. The node representations with note-level information are fused with taxonomy information via taxonomy-level hyperedges, which can assemble the intra-taxonomy related words to augment semantic information.\n$h_T(v) = \mathcal{F}_W\big(h_N(v), \mathcal{E}_T(v), \theta_T\big)$ (14)\n$h_T(e_n) = h_N(e_n)$ (15)\n$h_T(e_t) = \mathcal{F}_\tau\big(h_N(e_t), \mathcal{V}_\tau(e_t), \theta_T\big), \ \tau = 2$ (16)" }, { "figure_ref": [], "heading": "Patient-Level Hypergraph Classification", "publication_ref": [], "table_ref": [], "text": "After all of the aforementioned hierarchical message passing layers, the node and hyperedge embeddings $h_T(v), h_T(e_n), h_T(e_t) \in \mathcal{H}_T$ are mean-pooled to obtain the patient-level embedding $z$, which is finally fed into a sigmoid operation as follows:\n$\hat{y} = \mathrm{sigmoid}(z)$ (17)\nwhere $\hat{y}$ denotes the predicted probability of in-hospital mortality for the patient. The loss function for patient-level classification is defined as the binary cross-entropy loss:\n$\mathcal{L} = -\big(y \times \log \hat{y} + (1 - y) \times \log(1 - \hat{y})\big)$ (18)\nwhere $y$ denotes the true label for in-hospital mortality. The proposed network, TM-HGNN, can be trained by minimizing this loss function." },
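To make the three-stage propagation and the prediction head concrete, the following is a compact PyTorch sketch of Eqs. (6)-(18). It represents the patient hypergraph with a dense node-hyperedge incidence matrix and realizes the note-level and taxonomy-level layers by masking incidence columns; the scalar scoring layer, tensor layout, and all names are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class TMHGNNSketch(nn.Module):
    """Illustrative three-stage hierarchical hypergraph message passing (Eqs. 6-16)
    with mean pooling and a sigmoid head (Eqs. 17-18)."""

    def __init__(self, dim: int):
        super().__init__()
        # Separate weights theta_I, theta_N, theta_T for the three layers.
        self.theta = nn.ModuleDict({k: nn.Linear(dim, dim, bias=False) for k in ("I", "N", "T")})
        self.act = nn.ReLU()
        self.score = nn.Linear(dim, 1)   # scalar head (an assumption; the paper writes y_hat = sigmoid(z))

    def propagate(self, h_v, h_e, incidence, theta):
        """F_W / F_tau with symmetric 1/sqrt(d) normalization over an incidence matrix
        of shape (num_nodes, num_hyperedges); masked hyperedges simply have all-zero columns."""
        d_v = incidence.sum(dim=1).clamp(min=1).sqrt()
        d_e = incidence.sum(dim=0).clamp(min=1).sqrt()
        norm = incidence / (d_v[:, None] * d_e[None, :])
        h_v_new = self.act(theta(norm @ h_e))        # Eq. (6): nodes aggregate connected hyperedges
        h_e_new = self.act(theta(norm.t() @ h_v))    # Eq. (7): hyperedges aggregate connected nodes
        return h_v_new, h_e_new

    def forward(self, h_v, h_e, incidence, is_note_edge):
        keep = is_note_edge[:, None].bool()
        # Initialization layer: all hyperedges participate (Eqs. 8-10).
        h_v, h_e = self.propagate(h_v, h_e, incidence, self.theta["I"])
        # Note-level layer: taxonomy hyperedges are masked and keep their embeddings (Eqs. 11-13).
        h_v, h_e_new = self.propagate(h_v, h_e, incidence * is_note_edge[None, :], self.theta["N"])
        h_e = torch.where(keep, h_e_new, h_e)
        # Taxonomy-level layer: note hyperedges are masked and keep their embeddings (Eqs. 14-16).
        h_v, h_e_new = self.propagate(h_v, h_e, incidence * (1 - is_note_edge)[None, :], self.theta["T"])
        h_e = torch.where(keep, h_e, h_e_new)
        # Patient-level readout: mean-pool all node and hyperedge embeddings, then sigmoid (Eq. 17).
        z = torch.cat([h_v, h_e], dim=0).mean(dim=0)
        return torch.sigmoid(self.score(z)).squeeze()

# Toy usage: 4 word nodes, 2 note-level + 1 taxonomy-level hyperedges, hidden size 8.
model = TMHGNNSketch(dim=8)
h_v, h_e = torch.randn(4, 8), torch.randn(3, 8)
incidence = torch.tensor([[1., 0., 1.], [1., 0., 1.], [0., 1., 1.], [0., 1., 1.]])
is_note_edge = torch.tensor([1., 1., 0.])
y_hat = model(h_v, h_e, incidence, is_note_edge)
loss = nn.functional.binary_cross_entropy(y_hat, torch.tensor(1.0))   # Eq. (18)
```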
{ "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b13" ], "table_ref": [ "tab_3", "tab_9" ], "text": "We use clinical notes from the Medical Information Mart for Intensive Care III (MIMIC-III) (Johnson et al., 2016) dataset that are written within 48 hours of ICU admission. For quantitative evaluation, we follow Harutyunyan et al.'s (2019) benchmark setup for data pre-processing and train/test splits, and randomly set aside 20% of the train set as a validation set. All patients without any notes are dropped during data preparation. To prevent overfitting to exceptionally long clinical notes for a single patient, we set the maximum number of notes per patient to 30 from the first admission. Table 1 shows the statistics of the preprocessed MIMIC-III clinical note dataset used in our experiments. We select the top six taxonomies for the experiments, since the number of notes assigned to each taxonomy varies over a wide range (Appendix B, Table 3). In addition, we select two chronic diseases, hypertension and diabetes, to compare prediction results for patients with each disease." }, { "figure_ref": [], "heading": "Compared Methods", "publication_ref": [ "b23", "b15", "b17", "b9", "b37", "b32", "b5" ], "table_ref": [], "text": "In our experiments, the compared baseline methods for end-to-end training are as follows:\n• Word-based methods: word2vec (Mikolov et al., 2013) with a multi-layer perceptron classifier, and FastText (Joulin et al., 2017).\n• Sequence-based methods: TextCNN (Kim, 2014), Bi-LSTM (Hochreiter and Schmidhuber, 1997), and Bi-LSTM with an additional attention layer (Zhou et al., 2016).\n• Graph-based methods: TextING (Zhang et al., 2020), InducT-GCN (Wang et al., 2022), and HyperGAT (Ding et al., 2020). In particular, HyperGAT represents the hypergraph-based methods, and the other graph-based methods employ word co-occurrence graphs." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b26" ], "table_ref": [], "text": "TM-HGNN is implemented in PyTorch (Paszke et al., 2019) and optimized with the Adam (Kingma and Ba, 2015) optimizer with a learning rate of 0.001 and a dropout rate of 0.3. We set the hidden dimension d of each layer to 64 and the batch size to 32 by parameter search. We train the models for 100 epochs with an early-stopping strategy, where epoch 30 gives the best results. All experiments are run on a single NVIDIA GeForce RTX 3080 GPU." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b3", "b5" ], "table_ref": [ "tab_3" ], "text": "Since the dataset has imbalanced class labels for in-hospital mortality, as shown in Table 1, we use AUPRC (Area Under the Precision-Recall Curve) and AUROC (Area Under the Receiver Operating Characteristic Curve) for precise evaluation, as AUPRC is suggested by Davis and Goadrich (2006) for imbalanced class problems. It is crucial to extract high-order relations within clinical notes. In particular, as TM-HGNN outperforms HyperGAT (Ding et al., 2020), exploiting taxonomy-level semantic information, which represents the medical context of the notes, aids precise prediction at the patient level. Another advantage of our model, capturing multi-level high-order relations from the note level and taxonomy level with hierarchy, can be verified by the results in Table 2, where TM-HGNN outperforms T-HGNN. T-HGNN denotes the variant of TM-HGNN which considers note-level and taxonomy-level hyperedges to be homogeneous. Likewise, results for the hypertension and diabetes patient groups show similar overall tendencies." }, { "figure_ref": [], "heading": "Classification Performance", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_6", "fig_2" ], "heading": "Robustness to Lengths", "publication_ref": [ "b37", "b5", "b5", "b37" ], "table_ref": [], "text": "To evaluate the dependence of performance on length, we divide patient-level clinical notes into three groups by length: short, medium, and long (Appendix B, Figure 8). There are 856 patients in each of the short, medium, and long groups, and the percentage of mortality is 6.98%, 10.72%, and 15.89% for the respective groups, which implies that patients in critical condition during ICU stays are more likely to have long clinical notes. Figure 4 shows performance comparisons on the three groups for TextING (Zhang et al., 2020), which utilizes a word co-occurrence graph, HyperGAT (Ding et al., 2020), an ordinary hypergraph-based approach, and our multi-level hypergraph approach (TM-HGNN).\nAll three models are more effective for longer clinical notes, which demonstrates that graph-based models are robust to long documents in general. Among the three models, our proposed TM-HGNN mostly performs the best, followed by HyperGAT (Ding et al., 2020) and then TextING (Zhang et al., 2020). The results demonstrate that our TM-HGNN, which exploits taxonomy-level semantic information, is the most effective for clinical notes regardless of length, compared to the other graph-based approaches." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "Effect of Multi-level Hypergraph In order to validate the effect of the multi-level hypergraph, we ignore taxonomy-level and note-level hyperedges respectively. w/o taxonomy, which ignores taxonomy-level hyperedges, deteriorates the performance most significantly. w/o note shows degraded performance as well. Thus, the effectiveness of multi-level hypergraph construction for patient representation learning is verified (Figure 5). 
" }, { "figure_ref": [], "heading": "Effect of Hierarchical", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [ "b14" ], "table_ref": [], "text": "Hierarchical Message Passing We visualize the learned node representations based on principal component analysis (PCA) (Jolliffe, 2002) results, as hierarchical message passing continues in TM-HGNN. In Figure 6(a), \"rhythm\" from ECG and Nursing/other taxonomy are mapped closely for initial word embeddings, since they are literally same words. As the patient-level hypergraphs are fed into a global-level, note-level, and taxonomylevel convolutional layers in order, words in the same taxonomies assemble, which can be found in Figure 6(b), (c), and (d). As a result, \"rhythm\" of ECG represents different semantic meanings from \"rhythm\" of Nursing/other, as it is learned considerably close to \"fibrillation\" from the same taxonomy.\nImportance of Taxonomy-level Semantic Information To investigate the importance of taxonomy-level semantic information extraction, we visualize PCA results of the learned node embeddings from the baseline method and the proposed TM-HGNN. We select patient with hospital admission id (HADM_ID) 147702 for case study since TM-HGNN successfully predicts the true label for in-hospital-mortality, which is pos-itive, but the other baseline methods show false negative predictions. As in Figure 7, HyperGAT learns \"rhythm\" without taxonomy-level semantic information, since it is not assembled with other words in the same taxonomy. But TM-HGNN separately learns \"rhythm\" from ECG and \"rhythm\" from Nursing/other based on different contexts, which results in same taxonomy words aligned adjacently, such as \"fibrillation\" of ECG and \"benadryl\" of Nursing/other. Therefore, in case of TM-HGNN, frequently used neutral word \"rhythm\" from ECG with a word \"fibrillation\" means an irregular \"rhythm\" of the heart and is closely related to mortality of the patient, but \"rhythm\" from Nursing/other with another nursing term remains more neutral. This phenomenon demonstrates that contextualizing taxonomy to frequent neutral words enables differentiation and reduces ambiguity of the frequent neutral words (e.g. \"rhythm\"), which is crucial to avoid false negative predictions on patient-level representation learning." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a taxonomy-aware multilevel hypergraph neural networks, TM-HGNN, a novel approach for patient-level clinical note representation learning. We employ hypergraph-based approach and introduce multi-level hyperedges (note and taxonomy-level) to address long and complex information of clinical notes. TM-HGNN aims to extract high-order semantic information from the multi-level patient hypergraphs in hierarchical order, note-level and then taxonomy-level. Clinical note representations can be effectively learned in an end-to-end manner with TM-HGNN, which is validated from extensive experiments." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Since our approach, TM-HGNN, aggregates every note during ICU stays for patient representation learning, it is inappropriate for time-series prediction tasks (e.g. vital signs). We look forward to further study that adopts and applies our approach to time-series prediction tasks." 
}, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "In " }, { "figure_ref": [ "fig_6" ], "heading": "A Detailed Statistics of MIMIC-III Clinical Notes", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "Table 3 shows the number of clinical notes assigned to 15 predefined taxonomies in MIMIC-III dataset.\nSince the number of notes varies in a wide range for each taxonomy, we select top six taxonomies for experiments: Radiology, ECG, Nursing/other, Echo, Nursing, and Physician.\nFigure 8 shows histogram for the number of words per patient-level clinical notes in train set. Since 682, 1,070, and 1,689 are the first, second, and third quantile of the train data, we select 600 and 1,600 as the boundaries to divide test set into 3 groups (short, medium, and long), which is used to validate proposed TM-HGNN's robustness to lengths. " }, { "figure_ref": [], "heading": "B Node Representations from Other Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C Explanation of the Medical Terms", "publication_ref": [], "table_ref": [], "text": "• Fibrillation : Fibrillation refers to rapid and irregular contractions of the muscle fibers, especially from the heart. It can lead to serious heart conditions.\n• Benadryl : Brand name for the drug Diphenhydramine, which is an antihistamine. Benadryl is one of the over-the-counter drugs, and generally used for alleviating the allergic symptoms. • Lvef : Abbreviation of left ventricular ejection fraction, which is the ratio of stroke volume to end-diastolic volume. Lvef is known as the central measure for the diagnosis and management of heart failure.\n• Obliteration : In Radiology, obliteration refers to the disappearance of the contour of an organ, due to the same x-ray absorption from the adjacent tissue." }, { "figure_ref": [], "heading": "D Additional Performance Comparison", "publication_ref": [ "b13", "b20", "b20" ], "table_ref": [ "tab_11" ], "text": "We conduct additional experiments using LSTM based on 17 code features selected by Johnson et al. (2016), andTransformer-based ClinicalXL-Net (Huang et al., 2020) without pre-training for in-hospital mortality prediction. TM-HGNN outperforms approaches using structured data and Transformer-based model without pre-training.\nIn addition, we train our model on acute kidney injury prediction task (MIMIC-AKI) following Li et al. (2023). Table 5 shows comparative results of our TM-HGNN to Clinical-Longformer (Li et al., 2023) that justify TM-HGNN can effectively utilize high-order semantics from long clinical notes, with much less computational burden compared to long sequence transformer models. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [NO.2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)] and the Bio & Medical Technology Development Program of the National Research Foundation (NRF) funded by the Ministry of Science & ICT (RS-2023-00257479), and the ICT at Seoul National University provides research facilities for this study." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "https://github.com/ny1031/TM-HGNN" } ]
Leveraging knowledge from electronic health records (EHRs) to predict a patient's condition is essential to the effective delivery of appropriate care. Clinical notes of patient EHRs contain valuable information from healthcare professionals, but have been underused due to their difficult contents and complex hierarchies. Recently, hypergraph-based methods have been proposed for document classifications. Directly adopting existing hypergraph methods on clinical notes cannot sufficiently utilize the hierarchy information of the patient, which can degrade clinical semantic information by (1) frequent neutral words and (2) hierarchies with imbalanced distribution. Thus, we propose a taxonomy-aware multi-level hypergraph neural network (TM-HGNN), where multi-level hypergraphs assemble useful neutral words with rare keywords via note and taxonomy level hyperedges to retain the clinical semantic information. The constructed patient hypergraphs are fed into hierarchical message passing layers for learning more balanced multi-level knowledge at the note and taxonomy levels. We validate the effectiveness of TM-HGNN by conducting extensive experiments with MIMIC-III dataset on benchmark in-hospital-mortality prediction. 1
Clinical Note Owns its Hierarchy: Multi-Level Hypergraph Neural Networks for Patient-Level Representation Learning
[ { "figure_caption": "Figure 1 :1Figure 1: (a) Examples of patient clinical notes with difficult contents (e.g. jargons and abbreviations) and complex structures. Patient p 1 owns notes of radiology taxonomy (pink) and nursing taxonomy (blue). (b) Differences between existing hypergraphs and our proposed multi-level hypergraphs.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Overview of the proposed TM-HGNN. Taxonomy-aware multi-level hypergraphs are fed into the model for hierarchical message passing. ŷ denotes the patient-level prediction.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Prediction results of TextING, HyperGAT, and TM-HGNN for three patient-level clinical note groups divided by length (short, medium, and long). AUPRC and AUROC are used for evaluation.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Performance results of ablation studies. The effectiveness of the multi-level hypergraph and hierarchical message passing in the proposed model TM-HGNN are validated respectively.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :Figure 7 :67Figure 6: PCA results of learned node representations from each layer of TM-HGNN, for patient case HADM_ID=147702. \"Rhythm\" and \"fibrillation\" from ECG, \"rhythm\" and \"benadryl\" from Nursing/other taxonomy are highlighted. (a) Input word node embeddings. (b) Initialized node embeddings from the first layer. (c) After second layer, note-level message passing. (d) Final node embeddings from TM-HGNN, after taxonomy-level message passing. Word node embeddings are aligned with the same taxonomy words.", "figure_data": "", "figure_id": "fig_4", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Figure 99Figure9shows PCA results of learned node representations from three different models. According to Figure9(a) and 9(b), word co-occurrence graphs (TextING) and homogeneous single-level hypergraphs (HyperGAT) show node representations ambiguous to discriminate by taxonomies, since every taxonomy has been shuffled. In Figure 9(c), node embeddings are aligned adjacently and arranged with similar pattern for the same taxonomies. This verifies the effectiveness of the proposed TM-HGNN which captures intra-and intertaxonomy semantic word relations for patient-level representation learning. Example words (voltage, lvef, benadryl, and obliteration) which are generally used in each taxonomy are shown in Figure9to emphasize that the keywords from each taxonomy are learned adjacently to words similar in context within taxonomies in case of TM-HGNN, but not for other methods.", "figure_data": "", "figure_id": "fig_5", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure8: Histogram for the length of patient-level clinical notes in train set. 600 and 1,600 are selected as boundaries to divide clinical notes into three groups (short, medium, and long).", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: PCA results of learned node representations for patient case HADM_ID=147702, compared with baseline methods. (a) Final node embeddings from TextING. (b) Final node embeddings from HyperGAT. 
(c) Final node embeddings from the proposed TM-HGNN.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Statistics of the MIMIC-III clinical notes. Averaged numbers are reported with standard deviation.", "figure_data": "Statistics", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "MLP 13.49 ± 1.68 56.65 ± 5.12 16.82 ± 1.78 53.56 ± 4.20 18.15 ± 1.42 51.94 ± 3.40 FastText 17.06 ± 0.08 62.37 ± 0.11 25.56 ± 0.28 62.39 ± 0.18 31.33 ± 0.33 67.59 ± 0.20 ± 4.19 58.75 ± 5.78 21.75 ± 5.25 57.39 ± 6.11 27.52 ± 7.57 61.86 ± 8.38 Bi-LSTM w/ Att. 17.96 ± 0.61 62.63 ± 1.31 26.05 ± 1.80 63.24 ± 1.57 33.01 ± 3.53 68.89 ± 1.58", "figure_data": "shows performance comparisons of TM-HGNN and baseline methods. Sequence-basedmethods outperform word-based methods, whichindicates capturing local dependencies betweenneighboring words benefits patient document clas-sification. Moreover, all graph-based methods out-perform sequence-based and word-based methods.This demonstrates ignoring sequential informationof words is not detrimental to clinical notes. Fur-thermore, hypergraphs are more effective than pre-vious word co-occurrence graphs, indicating that", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Classification performance comparison on patient-level clinical tasks, evaluated with AUPRC and AUROC in percentages. We report averaged results with standard deviation over 10 random seeds. Values in boldface denote the best results.", "figure_data": "", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "MIMIC-III dataset (Johnson et al., 2016), every patient is deidentified, according to Health Insurance Portability and Accountability Act (HIPAA) standards. The fields of data which can identify the patient, such as patient name and address, are completely removed based on the identifying data list provided in HIPAA. In addition, the dates for ICU stays are shifted for randomly selected patients, preserving the intervals within data collected from each patient. Therefore, the personal information for the patients used in this study is strictly kept private. More detailed information about deidentification of MIMIC-III can be found inJohnson et al. (2016).", "figure_data": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li,Hongwei Hao, and Bo Xu. 2016. Attention-basedbidirectional long short-term memory networks forrelation classification. In Proceedings of the 54thannual meeting of the association for computationallinguistics (volume 2: Short papers), pages 207-212.", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "). The number of clinical notes for 15 predefined taxonomies in MIMIC-III dataset.", "figure_data": "# of NotesRadiology17,466ECG16,410Nursing/other12,347Echo7,935Nursing3,562Physician3,545Respiratory2,024Nutrition1,270General1,135Discharge Summary 608Rehab Services594Social Work424Case Management162Consult19Pharmacy14", "figure_id": "tab_9", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Table 4 shows that Classification performance comparison on patient-level in-hospital-mortality prediction task, evaluated with AUPRC and AUROC in percentages. 
Values in boldface denote the best results.", "figure_data": "ModelsAUPRC AUROCLSTM (code features)39.8681.98ClinicalXLNet (w/o pretrain)16.7762.16TM-HGNN (Ours)48.7484.89ModelsAUROCF1Clinical-Longformer0.7620.484TM-HGNN (Ours)0.8470.462", "figure_id": "tab_10", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Classification performance comparison on patient-level acute kidney injury prediction task, evaluated with AUROC and F1 score. Values in boldface denote the best results.", "figure_data": "", "figure_id": "tab_11", "figure_label": "5", "figure_type": "table" } ]
Nayeon Kim; Yinhua Piao; Sun Kim
[ { "authors": "Emily Alsentzer; John R Murphy; Willie Boag; Wei-Hung Weng; Di Jin; Tristan Naumann; Matthew Ba Wa Redmond; Mcdermott", "journal": "", "ref_id": "b0", "title": "Publicly available clinical bert embeddings", "year": "2019" }, { "authors": "Shaked Brody; Uri Alon; Eran Yahav", "journal": "", "ref_id": "b1", "title": "How attentive are graph attention networks?", "year": "2021" }, { "authors": "Edward Choi; Mohammad Taha Bahadori; Jimeng Sun; Joshua Kulas; Andy Schuetz; Walter Stewart", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Retain: An interpretable predictive model for healthcare using reverse time attention mechanism", "year": "2016" }, { "authors": "Jesse Davis; Mark Goadrich", "journal": "Association for Computing Machinery", "ref_id": "b3", "title": "The relationship between precision-recall and roc curves", "year": "2006" }, { "authors": "Iman Deznabi; Mohit Iyyer; Madalina Fiterau", "journal": "", "ref_id": "b4", "title": "Predicting in-hospital mortality by combining clinical notes with time-series data", "year": "2021" }, { "authors": "Kaize Ding; Jianling Wang; Jundong Li; Dingcheng Li; Huan Liu", "journal": "", "ref_id": "b5", "title": "Be more with less: Hypergraph attention networks for inductive text classification", "year": "2020" }, { "authors": "Sara Nouri; Golmaei ; Xiao Luo", "journal": "", "ref_id": "b6", "title": "Deepnotegnn: predicting hospital readmission using clinical notes and patient network", "year": "2021" }, { "authors": "Martin Grohe", "journal": "Association for Computing Machinery", "ref_id": "b7", "title": "Word2vec, node2vec, graph2vec, x2vec: Towards a theory of vector embeddings of structured data", "year": "2020" }, { "authors": "Hrayr Harutyunyan; Hrant Khachatrian; Greg David C Kale; Aram Ver Steeg; Galstyan", "journal": "Scientific data", "ref_id": "b8", "title": "Multitask learning and benchmarking with clinical time series data", "year": "2019" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural computation", "ref_id": "b9", "title": "Long short-term memory", "year": "1997" }, { "authors": "Kexin Huang; Jaan Altosaar; Rajesh Ranganath", "journal": "", "ref_id": "b10", "title": "Clinicalbert: Modeling clinical notes and predicting hospital readmission", "year": "2019" }, { "authors": "Kexin Huang; Abhishek Singh; Sitong Chen; Edward Moseley; Chih-Ying Deng; Naomi George; Charolotta Lindvall", "journal": "", "ref_id": "b11", "title": "Clinical xlnet: Modeling sequential clinical notes and predicting prolonged mechanical ventilation", "year": "2020" }, { "authors": "Lianzhe Huang; Dehong Ma; Sujian Li; Xiaodong Zhang; Houfeng Wang", "journal": "", "ref_id": "b12", "title": "Text level graph neural network for text classification", "year": "2019" }, { "authors": "E W Alistair; Tom J Johnson; Lu Pollard; Shen; Mengling Liwei H Lehman; Mohammad Feng; Benjamin Ghassemi; Peter Moody; Leo Szolovits; Roger G Anthony Celi; Mark", "journal": "Scientific data", "ref_id": "b13", "title": "Mimic-iii, a freely accessible critical care database", "year": "2016" }, { "authors": "Ian T Jolliffe", "journal": "Wiley", "ref_id": "b14", "title": "Principal component analysis", "year": "2002" }, { "authors": "Armand Joulin; Édouard Grave; Piotr Bojanowski; Tomáš Mikolov", "journal": "", "ref_id": "b15", "title": "Bag of tricks for efficient text classification", "year": "2017" }, { "authors": "Swaraj Khadanga; Karan Aggarwal; Shafiq Joty; Jaideep Srivastava", "journal": "", "ref_id": 
"b16", "title": "Using clinical notes with time series data for icu management", "year": "2019" }, { "authors": "Yoon Kim", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Convolutional neural networks for sentence classification", "year": "2014" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b18", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Thomas N Kipf; Max Welling", "journal": "", "ref_id": "b19", "title": "Semisupervised classification with graph convolutional networks", "year": "2017" }, { "authors": "Yikuan Li; Ramsey M Wehbe; S Faraz; Hanyin Ahmad; Yuan Wang; Luo", "journal": "Journal of the American Medical Informatics Association", "ref_id": "b20", "title": "A comparative study of pretrained language models for long clinical text", "year": "2023" }, { "authors": "Pengfei Liu; Xipeng Qiu; Xuanjing Huang", "journal": "", "ref_id": "b21", "title": "Recurrent neural network for text classification with multi-task learning", "year": "2016" }, { "authors": "Tomáš Mikolov; Martin Karafiát; Lukáš Burget; Jan Černockỳ; Sanjeev Khudanpur", "journal": "", "ref_id": "b22", "title": "Recurrent neural network based language model", "year": "2010" }, { "authors": "Tomas Mikolov; Ilya Sutskever; Kai Chen; Greg S Corrado; Jeff Dean", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b23", "title": "Distributed representations of words and phrases and their compositionality", "year": "2013" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b24", "title": "", "year": "" }, { "authors": "Aakanksha Naik; Sravanthi Parasa; Sergey Feldman; Lucy Wang; Tom Hope", "journal": "", "ref_id": "b25", "title": "Literatureaugmented clinical outcome prediction", "year": "2022" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga", "journal": "Advances in neural information processing systems", "ref_id": "b26", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Yinhua Piao; Sangseon Lee; Dohoon Lee; Sun Kim", "journal": "", "ref_id": "b27", "title": "Sparse structure learning via graph neural networks for inductive document classification", "year": "2022" }, { "authors": "Junyuan Shang; Cao Xiao; Tengfei Ma; Hongyan Li; Jimeng Sun", "journal": "", "ref_id": "b28", "title": "Gamenet: Graph augmented memory networks for recommending medication combination", "year": "2019" }, { "authors": "Kai Sheng; Tai ; Richard Socher; Christopher D Manning", "journal": "", "ref_id": "b29", "title": "Improved semantic representations from tree-structured long short-term memory networks", "year": "2015" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b30", "title": "Attention is all you need", "year": "2017" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Liò; Yoshua Bengio", "journal": "", "ref_id": "b31", "title": "Graph attention networks", "year": "2018" }, { "authors": "Kunze Wang; Soyeon ; Caren Han; Josiah Poon", "journal": "", "ref_id": "b32", "title": "Induct-gcn: Inductive graph convolutional networks for text classification", "year": "2022" }, { "authors": "Liang Yao; Chengsheng Mao; Yuan Luo", 
"journal": "", "ref_id": "b33", "title": "Graph convolutional networks for text classification", "year": "2019" }, { "authors": "Haopeng Zhang; Xiao Liu; Jiawei Zhang", "journal": "", "ref_id": "b34", "title": "Hegel: Hypergraph transformer for long document summarization", "year": "2022" }, { "authors": "Jingqing Zhang; Luis Daniel Bolanos; Ashwani Trujillo; Julia Tanwar; Vibhor Ive; Yike Gupta; Guo", "journal": "BMJ Health & Care Informatics", "ref_id": "b35", "title": "Clinical utility of automatic phenotype annotation in unstructured clinical notes: intensive care unit use", "year": "2022" }, { "authors": "Xiang Zhang; Junbo Zhao; Yann Lecun", "journal": "Advances in neural information processing systems", "ref_id": "b36", "title": "Character-level convolutional networks for text classification", "year": "2015" }, { "authors": "Yufeng Zhang; Xueli Yu; Zeyu Cui; Shu Wu; Zhongzhen Wen; Liang Wang", "journal": "", "ref_id": "b37", "title": "Every document owns its structure: Inductive text classification via graph neural networks", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 308.79, 97.1, 208.56, 108.56 ], "formula_id": "formula_0", "formula_text": "𝑶(𝑵²) X X X Graph 𝑶(𝑬) √ X X Hypergraph 𝑶(𝑵) √ X X Ours 𝑶(𝑵) √ √ √ Section 3.3. Hierarchical Message Passing 𝓃 ! 𝓉! 𝓃 # 𝓉! 𝓃 $ 𝓉\" 𝓃 ! 𝓉! 𝓃 # 𝓉! 𝓃 $ 𝓉\" Taxonomy 𝓃 ! 𝓉! 𝓃 # 𝓉! 𝓃 $ 𝓉\" 𝓃 ! 𝓉! 𝓃 # 𝓉! 𝓃 $ 𝓉\" Section 3.2." }, { "formula_coordinates": [ 4, 399.75, 231.49, 59.55, 63.29 ], "formula_id": "formula_1", "formula_text": "•••• •••• •••• •••• •••• •••• •••• •••• •••• •••• •••• 𝒉 𝒆𝒕 (𝟎) 𝒉 𝒆𝒏(" }, { "formula_coordinates": [ 4, 100.89, 714.71, 188.24, 8.09 ], "formula_id": "formula_2", "formula_text": "MPE(x) = [τ (x), IW (x), IN (x), IT (x)](1)" }, { "formula_coordinates": [ 4, 371.62, 537.77, 152.79, 11.13 ], "formula_id": "formula_3", "formula_text": "h (0) v = MPE(v) ⊕ zv,(2)" }, { "formula_coordinates": [ 4, 360.95, 726.95, 163.46, 8.06 ], "formula_id": "formula_4", "formula_text": "E(v) = {en, et|v ∈ n, n ∈ t} (3)" }, { "formula_coordinates": [ 5, 107.65, 149.46, 181.48, 11.13 ], "formula_id": "formula_5", "formula_text": "h (0) en = MPE(n) ⊕ f θ n IN (n), t(n) ,(4)" }, { "formula_coordinates": [ 5, 120.89, 290.11, 168.24, 11.94 ], "formula_id": "formula_6", "formula_text": "h (0) e t = MPE(t) ⊕ f θ t IT (t) ,(5)" }, { "formula_coordinates": [ 5, 92.91, 485.99, 196.22, 23.99 ], "formula_id": "formula_7", "formula_text": "FW (h, E, θ) = σ θ u∈E(v) 1 dv du hu ,(6)" }, { "formula_coordinates": [ 5, 85.55, 531.76, 203.58, 24.16 ], "formula_id": "formula_8", "formula_text": "Fτ (h, V τ , θ) = σ θ z∈V τ (e) 1 de dz hz ,(7)" }, { "formula_coordinates": [ 5, 357.71, 413.59, 166.7, 11.13 ], "formula_id": "formula_9", "formula_text": "hI (v) = FW h (0) v , EI(v), θI ,(8)" }, { "formula_coordinates": [ 5, 345.15, 468.35, 179.26, 11.94 ], "formula_id": "formula_10", "formula_text": "hI (et) = Fτ h (0) e t , V τ (et), θI , τ = 2 (10)" }, { "formula_coordinates": [ 5, 351.48, 607.41, 172.93, 8.06 ], "formula_id": "formula_11", "formula_text": "hN (v) = FW hI (v), EN (v), θN ,(11)" }, { "formula_coordinates": [ 5, 326.52, 632.53, 197.89, 10.33 ], "formula_id": "formula_12", "formula_text": "hN (en) = Fτ hI (en), V τ (en), θN , τ = 1,(12)" }, { "formula_coordinates": [ 5, 383.44, 662.17, 140.97, 8.06 ], "formula_id": "formula_13", "formula_text": "hN (et) = hI (et)(13)" }, { "formula_coordinates": [ 6, 116.48, 122.72, 172.66, 8.06 ], "formula_id": "formula_14", "formula_text": "hT (v) = FW hN (v), ET (v), θT ,(14)" }, { "formula_coordinates": [ 6, 102.36, 170.38, 186.78, 10.33 ], "formula_id": "formula_15", "formula_text": "hT (et) = Fτ hN (et), V τ (et), θT , τ = 2 (16)" }, { "formula_coordinates": [ 6, 150.76, 310.31, 138.38, 8.06 ], "formula_id": "formula_16", "formula_text": "ŷ = sigmoid(z)(17)" }, { "formula_coordinates": [ 6, 92.95, 397.54, 196.18, 8.06 ], "formula_id": "formula_17", "formula_text": "L = -(y × log ŷ + (1 -y) × log(1 -ŷ))(18)" } ]
2023-05-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b0", "b7" ], "table_ref": [], "text": "Imbalanced data classification is a problem in data mining domains where the proportion of data class of a dataset differs relatively by a substantial margin. In this situation, one class contains a few numbers of samples (known as the minor class), whereas the other class contains the majority of the samples [1,2]. Such an imbalanced ratio produces biased results towards the minor class (minority classes). The issue of imbalanced data is a prevalent problem in many realworld scenarios, such as detecting fraudulent financial transactions, identifying rare medical conditions, or predicting equipment failures in manufacturing [3,4]. Several approaches have been introduced over the years, and among them, the most popular methods used for handling imbalanced data are neighborhood cleaning rule, cost-sensitive, and neural network algorithms. There are three major ways to handle Class Imbalanced Problems (CIP) [5,6] :\n• Data level solutions (i.e., random undersampling, random oversampling, one-sided selection) arXiv:2305.09777v1 [cs.LG] 16 May 2023\n• Cost-sensitive (i.e., cost-sensitive resampling, cost-sensitive ensembles) • Ensemble algorithms (i.e., boosting and bagging, random Forest) Among different data-level solutions, oversampling techniques are the most widely used, and the Synthetic Minority Oversampling Technique (SMOTE) is the most often adopted by researchers and practitioners to handle CIP. Chawla et al. (2002) initially proposed SMOTE-based solutions, and they became popular due to their capability to produce synthetic samples, ultimately leading to the opportunity to reduce the biases of the ML models [7]. However, the existing SMOTE has two potential drawbacks [1]:\n1. The synthetic instances generated by the SMOTE often are in the same direction. As an effect, for some of the ML classifiers, it is hard to create a decision boundary between the major and minor classes. 2. SMOTE tends to create a large number of noisy data, which often overlaps with major class (as shown in Figure 1).\nFigure 1: The Oversampling effect of SMOTE often creates noisy samples and, therefore, major and minor samples overlap. Here 0 indicates the initial major samples and 1 indicates minor samples after oversampling.\nTo overcome the noise generated by the SMOTE, several expansion of SMOTE has been proposed, such as Support Vector Machine (SVM)-SMOTE, Safe-Level SMOTE, and Borderline-SMOTE. However, SVM-SMOTE is known for its sensitivity issues with multiclass data samples, while borderline SMOTE can only focus on the minor samples that are close between the boundaries and major class [8].\nTherefore, both SVM-SMOTE and Borderline-SMOTE have limitations in creating diverse and normally distributed data with less marginalization after data expansion. Considering these challenges, in this paper, we propose a hybrid method of oversampling that exploits the diverse sets of samples, which will be helpful for the ML-based model to differentiate between major and minor classes. Our hybrid approach combines two popular oversampling techniques: Borderline-SMOTE and GAN. First, we propose combining two CNN architectures-generator and discriminator-with Borderline-SMOTE into a single architecture that is trained end to end. Second, we provide the final prediction by averaging all the predictions. 
Our proposed approach is tested on four highly imbalanced benchmark datasets.\nOur main contributions can be summarized as follows:\n• We modified and designed the generator and discriminator networks and proposed an improved GAN model that can train with small data set to a large dataset for the binary classification. • We implement and test the performance of Borderline-SMOTE, GAN, and BSGAN on four highly imbalanced datasets-Ecoli, Yeast, Winequality, and Abalone. Later, the performance of those three algorithms is compared with the dataset without oversampling in terms of accuracy, precision, recall, and F1-score. • Finally, We compare our proposed BSGAN model performance with some of the reference literature. The preliminary findings revealed that our proposed approach outperformed many of the existing GAN-based oversampling approaches and can handle sensitive data issues. Our proposed model also creates a more diverse dataset that incorporates Gaussian distributions instead of creating extreme outliers as often produced by many existing methods.\nThe motivation of this study is to further improve the performance of data oversampling techniques by proposing a new approach that combines the advantages of Borderline-SMOTE and GAN. By exploring new ways to balance imbalanced datasets, this study seeks to provide valuable insights into improving the accuracy and effectiveness of ML models in a range of fields where imbalanced data is a common challenge.\nThe rest of the paper is organized as follows: Section 2 covers some previously published research that focused on different approaches to handling CIP. In Section 3, we provide a brief description of SMOTE, Borderline-SMOTE, GAN, and the architecture of the proposed BSGAN technique. In Section 4 performance of the various oversampling techniques is evaluated by considering various statistical measurements. An overall discussion and comparison with the current work have been summarized in Section 5, wherein Section 6 concludes the paper's contributions with potential remarks." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b8", "b9", "b10", "b18", "b19", "b20", "b20" ], "table_ref": [], "text": "CIPs are one of the existing and ongoing research in data science domains. As the imbalanced ratio potentially affects the models' prediction, several approaches have been proposed to balance the dataset in a way that can be used to develop an unbiased prediction model [9,10]. Among them, oversampling approaches are most widely used as they provide data-level solutions with less complexity and computational issues [11]. Therefore, we have focused mainly on popular oversampling methods such as SMOTE, Borderline-SMOTE, and SVM-SMOTE and their modified, adopted versions that have been proposed during the last few years. 2022) present a novel approach for generating synthetic data that balances the trade-off between accuracy and fairness through their proposed method, TabFairGAN. Their approach specifically focuses on complex tabular data and has been empirically evaluated on various benchmark datasets, including UCI Adult, Bank Marketing, COMPAS, Law School, and the DTC dataset. The results of the experiments reveal that TabFairGAN demonstrates promising performance, achieving an average accuracy of 78.3 ± 0.001% and an F1-score of 0.544 ± 0.002 [19]. Engelmann and Lessmann (2021) proposed the cWGAN approach for generating tabular datasets containing both numerical and categorical data. 
The effectiveness of this approach was evaluated on several highly imbalanced benchmark datasets, including the German credit card, HomeEquity, Kaggle, P2P, PAKDD, Taiwan, and Thomas datasets. The results showed that the cWGAN approach achieved an overall rank of 4.1 for Logistic Regression [20]. Jo and Kim (2022) presented the Outlier-robust (OBGAN) method for generating data from the minority region close to the border. The performance of the OBGAN method was evaluated on several UCI imbalanced datasets. The results indicated that the OBGAN method achieved the highest recall and F1-score of 0.54 and 0.65, respectively [21].\nHowever, most of the existing GAN-based approaches are computationally expensive and often hard to train due to their instability.\nConsidering this opportunity into account, in this work, we propose a novel hybrid approach by combing borderline SMOTE and GAN and named it BSGAN. The BSGAN is tested along with borderline SMOTE, GAN, and without oversampling on four highly imbalanced datasets-Ecoli, Wine quality, Yeast, and Abalone. The empirical, experimen-tal results demonstrate that BSGAN outperformed most of the existing tested techniques regarding various statistical measures on most of the datasets used in this study. " }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss in detail the algorithms such as SMOTE, Borderline-SMOTE, GAN, and our proposed approach, BSGAN." }, { "figure_ref": [], "heading": "SMOTE", "publication_ref": [ "b6" ], "table_ref": [], "text": "SMOTE is one of the most widely used oversampling techniques in ML domains, proposed by Chawla [7]. The SMOTE algorithm has the following input parameters that can be controlled and changed: K as the number of nearest neighbors (default value, k = 5), and oversampling percentage parameters (default value 100%).\nIn SMOTE, a random sample is initially drawn from the minor class. Then k-nearest neighbors are identified to observe the random samples. After that, one of the neighbors is taken to identify the vector between the instant data point and the selected neighbors. The newly found vector is multiplied by the random number between 0 to 1 to generate new instances from the initial minor instance on the line. Then SMOTE continues the same process with other minor samples until it reaches the percentage value assigned by the user. Algorithm 1 displays the pseudocode of SMOTE, where the appropriate function is introduced for each step of SMOTE process. From the algorithm, it can be observed that it takes as input the number of instances in the minority class (P), the percentage of synthetic samples to be generated (S), and the number of nearest neighbors to consider (K). Using a randomly generated gap value, the algorithm generates synthetic samples by interpolating between a selected instance and one of its nearest neighbors. The number of synthetic samples to be generated equals P times S/100. To achieve this, SMOTE first finds the K nearest neighbors for each instance in the minority class and saves their indices in an array. The algorithm then repeats this process until the desired number of synthetic samples has been generated. 
By creating synthetic samples, SMOTE can improve the accuracy of machine learning models in predicting the minority class, thereby making them more effective in real-world applications.\nAlgorithm As mentioned earlier, SMOTE generates randomly new samples on the datasets, which increases the noise in the major class area, or within the safe minor region far from the borderline area and overfitting it, therefore not efficiently increasing the classification accuracy in order to classify the minor samples. As an effect, SMOTE has several derivatives, such as Borderline-SMOTE, SMOTEBOOST, Safe-level-SMOTE, and others, which were introduced to limit or reduce these problems. This research primarily focuses on utilizing and modifying the Borderline-SMOTE to overcome the existing limitations mentioned in section 2." }, { "figure_ref": [], "heading": "Borderline-SMOTE", "publication_ref": [ "b7", "b7", "b21" ], "table_ref": [], "text": "Borderline-SMOTE is a popular extension of the SMOTE that is designed to handle imbalanced datasets in ML domains. Borderline-SMOTE was proposed to address some of the limitations of SMOTE for imbalanced dataset classification. Unlike SMOTE, which randomly interpolates between minority samples, Borderline-SMOTE specifically focuses on synthesizing new samples along the borderline between the minority and majority classes. This approach helps to improve the class balance in the dataset and prevent the model from overfitting to the majority class [8]. The Borderline-SMOTE algorithm extends the traditional SMOTE by differentiating between minority samples by utilizing the M' number of majority instances within the M-Nearest Neighbors (MNN) of a given minority instance P i . The default value of M is set to 5. The minority instance is considered safe if the number of majority instances within its MNN is within the range of 0 to M/2. On the other hand, if all of the MNN of a minority instance consist of majority instances, with M = M , the instance is considered to be noise and is eliminated from the computation function to reduce oversampling near the border. Finally, a minority instance is considered a danger instance P' if the number of majority instances within its MNN falls within the range of M/2 to M. After that, Borderline-SMOTE measures KNN between borderline instance and minor instances and generates a new instance using the following equations [8,22]:\nNew instance = P i + gap * (distance(P i , P j ))\nWhere P i is the borderline minor instance, P j is the randomly chosen KNN minor instance, and a gap is a random number between 0 and 1. Algorithm 2 displays the pseudocode of B-SMOTE. One of the potential drawbacks of B-SMOTE is that it focuses on the borderline region; therefore, widening the region might confuse the classifier. " }, { "figure_ref": [], "heading": "GAN", "publication_ref": [ "b22", "b23", "b24", "b25", "b26" ], "table_ref": [], "text": "GAN is a class of ML frameworks that contains two Neural Networks (NN). The goal of this framework is to train both networks simultaneously and improve their performance while reducing their loss function as well. Following true data distribution, a new sample is generated with the same statistics as the training set [23]. 
The pseudocode for the GAN algorithm is presented in Algorithm 3, where Stochastic Gradient Descent (SGD) and weights are defined functions that determine mini-batch gradient or any other variant such as Adaptive Momentum (ADAM) or Root Mean Square Propagation (RMSprop) and update the weights respectively [24][25][26]. Once the algorithm terminates, 'good' fake samples are collected with accumulateFakeEx based on classification accuracy. GAN typically contains two NN: generator (G) and discriminator (D). The goal of the G is to create fake samples that look almost real. A random noise between 0 and 1 is used initially to create fake samples. On the other hand, D is trained with the real sample from the dataset. A random sample created by G is then passed to D so that D can distinguish between the real and the fake samples. The goal of the G is to fool the D by creating fake samples which look like reals. Conversely, the goal of the D is not to get fooled by G. During this process, both D and G optimize their learning process. The loss function for D can be calculated as follows [27]:\nmax D E x [logD(x)] + E z [log(1 -D(G(z)))](2)\nWhere the notation D(x) represents the probability distribution obtained from a real data sample x, while D(G(z)) refers to the probability distribution produced by a generated sample z.\nThe loss function of G can be calculated as follows: min\nG -E z [logD(G(z))](3)" }, { "figure_ref": [ "fig_1" ], "heading": "Proposed BSGAN", "publication_ref": [], "table_ref": [], "text": "Our proposed approach combined borderline SMOTE and naïve GAN to handle class imbalance problems. The borderline SMOTE starts by classifying the minor class observations. If all the neighbors are close to the major class, it Algorithm 3: Pseudocode for GAN // Input: training data set examples x and noise samples z from appropriate random number generator. An optional parameter can be the size nfake of fake sample needed. // initialize parameters // mi is the minibatch indices for i th index and T is the total iterations. GAN (x, z, n f ake ) for t=1:T do //Generally, step size S is 1 5:\n// subscript d and g refer to discriminator and generator entity respectively for s = 1 : S do\ng d ← SGD(-log D(x) -log(1 -D(G(z)), W d , m i ) W d ← weights(g d , W d ) 10:\nW g ← weights(g g , W g ) end for end for x ← accumulateFakeEx (M odel d (W d , x, z), M odel g (W g , x, z), n f ake ) return x classifies any minor samples as a noise point. Further, it classifies a few points as border points with major and minor classes close to the neighborhood and resamples from them. In our proposed BSGAN, we modified the loss function of GAN and combined them with the borderline SMOTE algorithms. Here, instead of random noise for the G, we are passing a sample created by borderline SMOTE. The updated loss for the D can be expressed as follows:\nmax D E x * [logD(x * |x)] + E u [log(1 -D(G(u)))](4)\nThe updated loss for the G can be expressed as follows:\nmin G -E z [logD(G(u))](5)\nWhere, x * = training sample of minor class U = oversampled data generated by borderline SMOTE. Figure 2 demonstrates the overall flow diagram of the proposed BSGAN algorithms.\nThe pseudocode of the proposed BSGAN is described in Algorithm 4. As illustrated in Algorithm 4, there are two sections of BS-GAN. The first one replaces the random number sample from the sample generated by borderline-SMOTE. The second section continues with the process of GAN using the new samples from the B-SMOTE. 
Algorithm 4 also shows this whole procedure in two steps. In-Line (1) calls the BS-SMOTE function in Algorithm 2, and then Line (2) calls the modified GAN function given in Algorithm 3. However, this time the generated sample u is used instead of random noise z. " }, { "figure_ref": [], "heading": "Proposed Neural Network", "publication_ref": [], "table_ref": [ "tab_6", "tab_7" ], "text": "A neural network model is used to train and test the model on a different dataset. Parameters such as batch size, number of epochs, learning rate, and the hidden layer are tuned manually by trial and error process. Table 2 presents the details of the optimized parameters obtained throughout the experiment to achieve the best experimental outcomes for the discriminator, generator, and neural network. The number of epochs varies for each dataset as each dataset differs due to different features and sample sizes. We evaluate and compare our model on four distinct highly imbalanced datasets-Ecoli, Yeast, Wine quality, and Abalone-that feature class imbalance, as shown in Table 3. The datasets were primarily adopted from the UCI machine learning repository, which has been used by researchers and practitioners to evaluate the model performance for CIPs. Some datasets, such as Wine quality and Ecoli, are highly imbalanced and contain only 2.74% and 5.97% minority classes. " }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "An office-grade laptop with standard specifications (Windows 10, Intel Core I7-7500U, and 16 GB of RAM) is used to conduct the whole experiment. The empirical experiment was carried out five times, and the final results are presented by averaging all five outcomes. Initially, the dataset is split into the following ratios-trainset/test set: 80/20. The experimental evaluation results are presented in terms of accuracy, precision, recall, F1-score, and AUC-ROC score.\nAccuracy: The accuracy reflects the total number of instances successfully identified among all instances. The following formula can be used to calculate accuracy.\nAccuracy = T p + T N T p + T N + F p + F N(6)\nPrecision Precision is defined as the percentage of accurately anticipated positive observations to all expected positive observations.\nP recision = T p T p + F p(7)\nRecall: The recall is the percentage of total relevant results that the algorithm correctly detects.\nRecall = T p T n + F p(8)\nF1-score: The F1-score is the mean of accuracy and recall in a harmonic manner. The highest f score is 1, indicating perfect precision and recall score.\nF 1 -score = 2 × Precision × Recall Precision+Recall(9)\nArea under curve (AUC): The area under the curve illustrates how the models behave in various conditions. The AUC can be measured using the following formula:\nAU C = R i (I p ) -I p ((I p + 1)/2 I p + I n(10)\nWhere, l p and l n denotes positive and negative data samples and R i is the rating of the i th positive samples. \nTrue\nInterclass distance = µ 1 -µ 2 1 n1 + 1 n2(11)\nThis assumes that there are two classes with means µ 1 and µ 2 , and sample sizes of n 1 and n 2 , respectively." }, { "figure_ref": [ "fig_6" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_8", "tab_9" ], "text": "The overall performance for data with and without oversampling was measured using equations 6-9 and presented in Table 4. The best results are highlighted with bold fonts. 
From the table, it can be seen that the Proposed BSGAN outperformed all of the techniques across all measures in all datasets. However, on the Wine quality dataset, GAN and BSGAN both demonstrated similar performance on the train set by achieving an accuracy of 99.17%. The highest F1-score was achieved using BSGAN (0.9783) on the Yeast dataset. The lowest F1-score was achieved on the Abalone dataset when tested without oversampling techniques (0.9041). The highest recall score of 1.0 was achieved on the Winequality dataset using BSGAN. On the other hand, the lowest recall score of 0.9055 was achieved on the Abalone dataset when the dataset was tested without oversampling techniques. A maximum precision score of 0.9768 was achieved on the Ecoli dataset using BSGAN, while the lowest precision score of 0.9036 was observed on the Abalone dataset.\nThe confusion matrix was calculated on the test set to simplify the understanding of the performance of different oversampling techniques on different imbalanced datasets. Figure 3 displays the confusion matrix for different sampling To understand the data distribution after expanding the dataset using, different oversampling techniques have been measured using equation 11. The closer the inter-class distance between the dataset and the expanded data, the better the classification effect, ultimately demonstrating better Gaussian distributions. From Table 5, it can be observed that the interclass distance between the BSGAN and the dataset without oversampling is the closest compared to any other oversampling techniques used in this study. On the Abalone dataset, Borderline-SMOTE also demonstrates the closest inter-class distance with original datasets. Unfortunately, data expansion after applying GAN shows the worst performance on three out of four imbalanced datasets-Ecoli, Wine quality, and Abalone. " }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "As a means of comparing our results with those available in the literature, Table 6 contrasts the performance of our proposed methods on Yeast datasets in terms of accuracy, precision, recall, and F1-score. The table shows that BSGAN outperformed all of the referenced literature across all measures except the performance of accuracy. While Jadhav et al. (2020) achieved the highest accuracy (98.42%), their precision score is relatively deficient, and their F1-score is 0, which hinders a direct comparison of all reported performance measures.\nOn Ecloi datasets, our proposed BSGAN demonstrates consistent performance and outperformed all of the referenced literature in terms of accuracy by achieving an accuracy of 99.29%. Sharma et al. (2022) claimed 100% precision, recall, and F1-score while the accuracy is only 90.75%. Therefore, there is some discrepancy in the results reported by the authors. The model is 99% confident that the predicted Wine Quality is poor, and the variables with the most significant impact on the predicted wine quality are Sulfate, Sulfur dioxide, volatile acidity, and chloride. feature 'pH' is seen to have the most significant impact and plays a crucial role in the prediction by decreasing the prediction value. Conversely, the feature 'density' has a negative impact on the prediction outcome. 
" }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [ "tab_10", "tab_9" ], "text": "Our study proposed and assessed the performance of BSGAN approaches to handle the class imbalanced problems using four highly imbalanced datasets. We revealed that our proposed approach outperformed Borderline-SMOTE and GANbased oversampling techniques in various statistical measures. Additionally, the comparison between our state-of-the-art techniques using neural network approaches outperformed many of the existing proposed recent reference approaches, as highlighted in Tables 6789. The inter-class distance measurement ensures that the data distribution follows Gaussian distribution after data expansion using BSGAN, as referred to in Table 5. The findings of the proposed techniques should provide some insights to researchers and practitioners regarding the advantage of GAN-based approaches and help to understand how they can potentially minimize the marginalization and sensitivity issues of the existing oversampling techniques. Future works include but are not limited to applying BSGAN on other high imbalance and big datasets, experimenting with mixed data (numerical, categorical, and image data), changing the parameters of the proposed models, and testing it for multiclass classification. " }, { "figure_ref": [], "heading": "Author", "publication_ref": [ "b22", "b27", "b28", "b20", "b29" ], "table_ref": [], "text": "Techniques Accuracy Precision Recall F1-score [23] SMOTified-GAN 96.11% 0.91 0.83 0.873 [28] LMDL 56.87% .57 .57\n.55 [29] GenSample 70% 0.47 0.50 0.48 [21] OBGAN --0.6135 0.5556 [30] svmradial 98.42% 0.8 -0 Our study BSGAN 97.17% 0.9441 0.9465 0.9412 During the study, Local Interpretable Model-Agnostic Explanations (LIME) were employed to assess the black box behavior of our proposed models. LIME, a valuable tool for model interpretability, affords us an understanding of the rationales behind the predictions made by the model through analysis and visualization of the individual feature contributions. This is illustrated in Figure 8, which shows various features' contributions to the Wine quality prediction." } ]
Class imbalanced problems (CIP) are one of the potential challenges in developing unbiased Machine Learning (ML) models for predictions. CIP occurs when data samples are not equally distributed between two or more classes. Borderline-Synthetic Minority Oversampling Technique (SMOTE) is one of the approaches that has been used to balance imbalanced data by oversampling the minor (limited) samples. One potential drawback of existing Borderline-SMOTE is that it focuses on the data samples that lie at the border points and gives more attention to the extreme observations, ultimately limiting the creation of more diverse data after oversampling, a scenario shared by most borderline-SMOTE-based oversampling strategies. As an effect, marginalization occurs after oversampling. To address these issues, in this work, we propose a hybrid oversampling technique that combines the power of Borderline-SMOTE and a Generative Adversarial Network to generate more diverse data that follow Gaussian distributions. We named it BSGAN and tested it on four highly imbalanced datasets-Ecoli, Wine quality, Yeast, and Abalone. Our preliminary computational results reveal that BSGAN outperformed existing Borderline-SMOTE and GAN-based oversampling techniques and created a more diverse dataset that follows a normal distribution after oversampling.
BSGAN: A NOVEL OVERSAMPLING TECHNIQUE FOR IMBALANCED PATTERN RECOGNITIONS
[ { "figure_caption": "Algorithm 4 :4: Pseudocode for BSGAN Step 1 → Input: minor samples X * from the training data x of size N that requires Nn over-samples; Step 2 → User-defined parameter k for K-nearest neighbors Step 3 → Execute Borderline-SMOTE given in Algorithm 1 then GAN given in Algorithm 2 1 u ← call Algorithm 1 (x * , k)// generate over-sampled minor examples u. 2 u ← call Algorithm 2 (x * ,u,N -n).", "figure_data": "", "figure_id": "fig_0", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Flow diagram of Proposed Borderline-SMOTE and Generative Adversarial Networks (BSGAN) models.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Positive (T p )= Positive samples classified as Positive False Positive (F p )= Negative samples classified as Positive True Negative (T n )= Negative samples classified as Negative False Negative (F n )= Positive samples classified as Negative", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 44Figure 4 displays the confusion matrix for different sampling techniques on a Wine quality test dataset. The figure shows that the NN model performance on the Wine quality dataset without oversampling demonstrated the worst classification by misclassifying 13 out of 131 samples (9.9%). In comparison, BSGAN showed the best performance by misclassifying only 4 out of 131 samples (3.05%).", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 55Figure 5 displays the confusion matrix for different sampling techniques on a given Yeast test dataset. From the figure, it can be observed that NN model performance on the yeast dataset Borderline-SMOTE demonstrated the worst performance by misclassifying 8 out of 131 samples (7.77%), while BSGAN showed the best performance by misclassifying only four samples (3.88%).", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 66Figure 6 illustrates the confusion matrix for different sampling techniques on a given Abalone test dataset. 
From the figure, it can be observed that NN model performance on the Abalone dataset Borderline-SMOTE demonstrated the worst performance by misclassifying 122 out of 836 samples (14.59%), while BSGAN showed the best performance by misclassifying 73 samples (8.73%).", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance measurement of without and with oversampling techniques on Ecoli test dataset using confusion matrices.", "figure_data": "", "figure_id": "fig_6", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Performance measurement of without and with oversampling techniques on Winequality test dataset using confusion matrices.", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Performance measurement of without and with oversampling techniques on the Yeast test dataset using confusion matrices.", "figure_data": "", "figure_id": "fig_8", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Performance measurement of without and with oversampling techniques on Abalone test dataset.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: AUC-ROC scores for different sampling techniques on referenced imbalanced datasets used in this study.", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Interpreting the model using LIME on the Wine quality dataset.", "figure_data": "", "figure_id": "fig_11", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 1010Figure 10 presents a SHAP explanation for the second observation in the test data from the Wine quality dataset. The actual outcome reflects poor wine quality, which the model accurately predicted. The figure displays the average predicted score of the dataset, represented by E(f(x)), at the bottom and is equal to -0.194. The prediction score for the specific instance, represented by f(x), is shown at the top and equals 3.825. The waterfall plot sheds light on the contribution of each feature in the prediction process, leading to a change in the prediction from E(f(x)) to f(x). The", "figure_data": "", "figure_id": "fig_12", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Force Plot observation of the Wine quality data using SHAP.", "figure_data": "", "figure_id": "fig_13", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: A Waterfall plot example for the median predicted wine quality in the Wine quality dataset.", "figure_data": "", "figure_id": "fig_14", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 1111Figure 11 presents a SHAP explanation of the 15th observation in the test data from the Wine Quality dataset. The actual outcome depicts a good-quality wine, which the model correctly predicted. 
As seen in the figure, the expected value is near 1, indicating that factors such as pH and citric acid played a significant role in the model's determination of the wine as being of good quality.", "figure_data": "", "figure_id": "fig_15", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Model interpretation with expected value using SHAP on Wine quality dataset.", "figure_data": "", "figure_id": "fig_16", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "• Later, we propose a new oversampling technique by combining Borderline-SMOTE and GAN, namely BSGAN. • We propose a Neural Network (NN) model, which is later used to train and test datasets with and without oversampling.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "summarizes the literature that used GAN-based approaches to handle class imbalanced problems. It provides information on each study's contributions, algorithms, datasets, performance, misclassification evaluation, and algorithm complexity.", "figure_data": "", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Reference literature that considered GAN-based approaches to handle class imbalanced problems.", "figure_data": "ReferenceContributions AlgorithmsDatasetPerformanceMisclassification EvaluationAlgorithm ComplexityLi et al. (2022) [12]Hybrid methodCluster-Borderline SMOTERock GroutabilityImproved AUC and F1-ScoreConfusion Matrix, F1-Score ROC Curve, AUC,-Ning et al. (2021) [13]Hybrid methodSMOTE with Tomek LinksGlutarylation SitesEnhanced performance of the classifierConfusion Matrix, F1-Score ROC Curve, Precision, Recall,-Zhang et al. (2020) [14]Hybrid methodReliefF with Borderline-SMOTEIntrusion DetectionImproved performance of the classifierConfusion Matrix, F1-Score ROC Curve, Precision, Recall,-Sun et al. (2020) [15]Ensemble methodAdaboost-SVM with SMOTEChinese Listed Cos.Improved performance of the ClassifierConfusion Matrix, F1-Score ROC Curve, Precision, Recall,-Proposed modelsLiang et al. (2020) [16]Hybrid methodK-means with SVM-can samples without generate--considering outliersAli-Gombe et al. (2019) [17]GAN-based methodMFC-GANSynthetic DataImproved performance classificationConfusion Matrix, F1-Score Precision, Recall,HighKim et al. (2020) [18]GAN-based methodGAN-based approachAnomaly DetectionImproved accuracy detectionConfusion Matrix, Recall, F1-Score ROC Curve, Precision,HighPromisingRajabi et al. (2022) [19]GAN-based methodTabFairGANTabular Dataperformance on benchmark multipleConfusion Matrix, F1-Score ROC Curve, Accuracy,HighdatasetsEngelmann and Lessmann (2021) [20]GAN-based methodcWGANTabular DataImproved Logistic ranking RegressionConfusion Matrix, Recall, F1-Score ROC Curve, Precision,HighHighestJo et al. (2022) [21]GAN-based methodOBGANImbalanced DatasetsRecall and F1-Score tested among theConfusion Matrix, Recall, F1-Score ROC Curve, Precision,Hightechniques", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "1: SMOTE Input: P number of minor class sample; S% of synthetic to be generated; K Number of nearest neighbors Output: N s = (S/100) * P synthetic samples 1. Create function ComputKNN (i ← 1toP, P i , P j ) for i ← 1toP do Compute K nearest neighbors of each minor instance P i and other minor instance P j . Save the indices in the nnaray. Populate (N s , i, nnarray) to generate new instance. 
end for N", "figure_data": "end fornewindex = newindex + 1N s = N s -1end while4. Return ( * Endof P opulate. * )End of Pseudo-Code.", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Algorithm 2: Pseudocode for Borderline-SMOTE Input: P number of minor sample; s% of synthetic to generate; M number of nearest neighbors to create the borderline subset; k Number of nearest neighbors Output: (s/100) * P synthetic samples 1. Creating function MinDanger () for i ← 1toP do Compute M nearest neighbors of each minor instance and other instances from the dataset, Check the number of Major instance M' within the Mnn if M/2<M'<M then Add instance P to borderlines subset P' end if end for 2. ComputeKNN (i ← 1toP , P i , P j ) 3. N s = (S/100) * P while N s = 0 do 4. GenerateS(P i , P j ) N s = N s -1 end while 5. Return ( * End of Populate. * ) End of Pseudo-Code.", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Parameter settings used to develop discriminator, generator, and neural network.", "figure_data": "ParametersDiscriminatorGeneratorNeural NetworkNumber of hidden layer 433Number of neurons64,128,256,512512, 256,128256, 128,1Batch size323232Learning rate0.000010.000010.00001OptimizerAdamAdamAdamLoss functionBinary cross entropy Binary cross entropy Binary cross entropyActivation functionReLUReLUReLU & Sigmoid4 Performance Evaluation4.1 Datasets", "figure_id": "tab_6", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Characteristics of imbalanced dataset utilized for the experiment.", "figure_data": "Dataset# of sampleMinor sampleMajor sampleTotal featuresMinority class(%)DescriptionEcoli3352031575.97Protein localizationYeast5135146289.94Predicting protein localization cite.Winequality 65518637102.74Classify the wine qualityAbalone41778403337820.1Predict the age of abalone", "figure_id": "tab_7", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Performance evaluation of different Oversampling techniques used in this study on highly imbalanced benchmark datasets.", "figure_data": "DatasetOversampling StrategyTrain accuracy accuracy Precision Recall F1-score TestWithout-oversampling93.22%91.67%0.91670.9167 0.9095EcoliBorderline-SMOTE98.84%95.11%0.96610.9523 0.9572GAN98.33%97.61%0.97670.9761 0.9703BSGAN99.29%97.85%0.97860.9785 0.9783Without-oversampling92.61%90.72%0.90430.9072 0.9042YeastBorderline-SMOTE87.89%92.32%0.93470.9232 0.9274GAN97.11%94.18%0.93960.9418 0.9351BSGAN97.17%94.65%0.94410.9465 0.9412Without-oversampling98.37%93.90%0.93901.00.9685Wine qualityBorderline-SMOTE99.03%92.68%0.90680.9268 0.9150GAN99.17%93.84%0.93320.9932 0.9623BSGAN99.17%93.90%0.93901.00.9685Without-oversampling90.37%90.55%0.90360.9055 0.9041AbaloneBorderline-SMOTE87.17%84.21%0.89450.8421 0.8539GAN94.09%90.54%0.90320.9054 0.9037BSGAN94.18%90.64%0.90490.9064 0.9052techniques on a given Ecoli test dataset. On the Ecoli dataset, maximum misclassification occurred for the datasetwithout oversampling techniques, up to 7.46% (5 samples). 
On the other hand, minimum misclassification occurred forBSGAN, up to 1.49% (only one sample).", "figure_id": "tab_8", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The inter-class distance between the original datasets and the datasets after the expansion using different oversampling techniques.", "figure_data": "DatasetWSSGBOSSGEcoli0.1650 0.1352 0.0893 0.150Yeast0.0930.0790.0830.10Wine quality 0.1541 0.1531 .08710.158Abalone0.2633 0.250.1856 0.25", "figure_id": "tab_9", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison with previous studies on Yeast datasets.", "figure_data": "", "figure_id": "tab_10", "figure_label": "6", "figure_type": "table" } ]
Md Manjurul Ahsan; Shivakumar Raman; Zahed Siddique
[ { "authors": "Md Manjurul Ahsan; Md Shahin Ali; Zahed Siddique", "journal": "", "ref_id": "b0", "title": "Imbalanced class data performance evaluation and improvement using novel generative adversarial network-based approach: Ssg and gbo", "year": "2022" }, { "authors": "Rushi Longadge; Snehalata Dongre", "journal": "", "ref_id": "b1", "title": "Class imbalance problem in data mining review", "year": "2013" }, { "authors": "Aanchal Sahu; G M Harshvardhan; Mahendra Kumar; Gourisaria ", "journal": "IEEE", "ref_id": "b2", "title": "A dual approach for credit card fraud detection using neural network and data mining techniques", "year": "2020" }, { "authors": "Anahid Jalali; Clemens Heistracher; Alexander Schindler; Bernhard Haslhofer; Tanja Nemeth; Robert Glawar; Wilfried Sihn; Peter De Boer", "journal": "IEEE", "ref_id": "b3", "title": "Predicting time-to-failure of plasma etching equipment using machine learning", "year": "2019" }, { "authors": "Anjana Gosain; Saanchi Sardana", "journal": "IEEE", "ref_id": "b4", "title": "Handling class imbalance problem using oversampling techniques: A review", "year": "2017" }, { "authors": "Yue Geng; Xinyu Luo", "journal": "Intelligent Data Analysis", "ref_id": "b5", "title": "Cost-sensitive convolutional neural networks for imbalanced time series classification", "year": "2019" }, { "authors": "Kevin W Nitesh V Chawla; Lawrence O Bowyer; Philip Hall; Kegelmeyer", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b6", "title": "Smote: synthetic minority over-sampling technique", "year": "2002" }, { "authors": "Hui Han; Wen-Yuan Wang; Bing-Huan Mao", "journal": "Springer", "ref_id": "b7", "title": "Borderline-smote: a new over-sampling method in imbalanced data sets learning", "year": "2005" }, { "authors": "Mateusz Lango; Jerzy Stefanowski", "journal": "Expert Systems with Applications", "ref_id": "b8", "title": "What makes multi-class imbalanced problems difficult? 
an experimental study", "year": "2022" }, { "authors": "Soon Hui Fern; Amiza Amir; Saidatul Norlyana; Azemi ", "journal": "Springer", "ref_id": "b9", "title": "Multi-class imbalanced classification problems in network attack detections", "year": "2022" }, { "authors": "Joel Goodman; Sharham Sarkani; Thomas Mazzuchi", "journal": "ACM/IMS Transactions on Data Science (TDS)", "ref_id": "b10", "title": "Distance-based probabilistic data augmentation for synthetic minority oversampling", "year": "2022" }, { "authors": "Kai Li; Bingyu Ren; Tao Guan; Jiajun Wang; Jia Yu; Kexiang Wang; Jicun Huang", "journal": "Bulletin of Engineering Geology and the Environment", "ref_id": "b11", "title": "A hybrid cluster-borderline smote method for imbalanced data of rock groutability classification", "year": "2022" }, { "authors": "Qiao Ning; Xiaowei Zhao; Zhiqiang Ma", "journal": "IEEE/ACM Transactions on Computational Biology and Bioinformatics", "ref_id": "b12", "title": "A novel method for identification of glutarylation sites combining borderline-smote with tomek links technique in imbalanced data", "year": "2021" }, { "authors": "Jie Zhang; Yong Zhang; Kexin Li", "journal": "", "ref_id": "b13", "title": "A network intrusion detection model based on the combination of relieff and borderline-smote", "year": "2020" }, { "authors": "Jie Sun; Hui Li; Hamido Fujita; Binbin Fu; Wenguo Ai", "journal": "Information Fusion", "ref_id": "b14", "title": "Class-imbalanced dynamic financial distress prediction based on adaboost-svm ensemble combined with smote and time weighting", "year": "2020" }, { "authors": " Liang; T Jiang; Li; Yy Xue; Wang", "journal": "Knowledge-Based Systems", "ref_id": "b15", "title": "Lr-smote-an improved unbalanced data set oversampling based on k-means and svm", "year": "2020" }, { "authors": "Adamu Ali; -Gombe ; Eyad Elyan", "journal": "Neurocomputing", "ref_id": "b16", "title": "Mfc-gan: class-imbalanced dataset classification using multiple fake class generative adversarial network", "year": "2019" }, { "authors": "Junbong Kim; Kwanghee Jeong; Hyomin Choi; Kisung Seo", "journal": "Springer", "ref_id": "b17", "title": "Gan-based anomaly detection in imbalance problems", "year": "2020" }, { "authors": "Amirarsalan Rajabi; Ozlem Ozmen; Garibay ", "journal": "Machine Learning and Knowledge Extraction", "ref_id": "b18", "title": "Tabfairgan: Fair tabular data generation with generative adversarial networks", "year": "2022" }, { "authors": "Justin Engelmann; Stefan Lessmann", "journal": "Expert Systems with Applications", "ref_id": "b19", "title": "Conditional wasserstein gan-based oversampling of tabular data for imbalanced learning", "year": "2021" }, { "authors": "Wonkeun Jo; Dongil Kim", "journal": "Expert Systems with Applications", "ref_id": "b20", "title": "Obgan: Minority oversampling near borderline with generative adversarial networks", "year": "2022" }, { "authors": "Alberto Fernández; Salvador Garcia; Francisco Herrera; Nitesh V Chawla", "journal": "Journal of artificial intelligence research", "ref_id": "b21", "title": "Smote for learning from imbalanced data: progress and challenges, marking the 15-year anniversary", "year": "2018" }, { "authors": "Anuraganand Sharma; Prabhat Kumar Singh; Rohitash Chandra", "journal": "Ieee Access", "ref_id": "b22", "title": "Smotified-gan for class imbalanced pattern classification problems", "year": "2022" }, { "authors": "Budi Nugroho; Anny Yuniarti", "journal": "IEEE", "ref_id": "b23", "title": "Performance of root-mean-square propagation and 
adaptive gradient optimization algorithms on covid-19 pneumonia classification", "year": "2022" }, { "authors": "Alaa Ali Hameed; Bekir Karlik; Mohammad Shukri; Salman ", "journal": "Knowledge-Based Systems", "ref_id": "b24", "title": "Back-propagation algorithm with variable adaptive momentum", "year": "2016" }, { "authors": "Nikhil Ketkar; Nikhil Ketkar", "journal": "", "ref_id": "b25", "title": "Stochastic gradient descent. Deep learning with Python: A hands-on introduction", "year": "2017" }, { "authors": "Ian Goodfellow; Yoshua Bengio; Aaron Courville", "journal": "MIT press", "ref_id": "b26", "title": "Deep learning", "year": "2016" }, { "authors": "", "journal": "International Journal of Intelligent Engineering and Systems", "ref_id": "b27", "title": "Adaptive condensed nearest neighbor for imbalance data classification", "year": "2019" }, { "authors": "Vishwa Karia; Wenhao Zhang; Arash Naeim; Ramin Ramezani", "journal": "", "ref_id": "b28", "title": "Gensample: A genetic algorithm for oversampling in imbalanced datasets", "year": "2019" }, { "authors": "S Anil; Jadhav", "journal": "Expert systems with applications", "ref_id": "b29", "title": "A novel weighted tpr-tnr measure to assess performance of the classifiers", "year": "2020" }, { "authors": "Masurah Mohamad; Ali Selamat; Imam Much Subroto; Ondrej Krejcar", "journal": "Journal of King Saud University-Computer and Information Sciences", "ref_id": "b30", "title": "Improving the classification performance on imbalanced data sets via new hybrid parameterisation model", "year": "2021" } ]
[ { "formula_coordinates": [ 6, 217.13, 589, 322.87, 14.58 ], "formula_id": "formula_1", "formula_text": "max D E x [logD(x)] + E z [log(1 -D(G(z)))](2)" }, { "formula_coordinates": [ 6, 264.45, 650.41, 275.55, 14.58 ], "formula_id": "formula_2", "formula_text": "G -E z [logD(G(z))](3)" }, { "formula_coordinates": [ 7, 72.5, 189.58, 231.04, 41.47 ], "formula_id": "formula_3", "formula_text": "g d ← SGD(-log D(x) -log(1 -D(G(z)), W d , m i ) W d ← weights(g d , W d ) 10:" }, { "formula_coordinates": [ 7, 207.9, 363.63, 332.1, 16.65 ], "formula_id": "formula_4", "formula_text": "max D E x * [logD(x * |x)] + E u [log(1 -D(G(u)))](4)" }, { "formula_coordinates": [ 7, 258.93, 404.75, 281.07, 14.58 ], "formula_id": "formula_5", "formula_text": "min G -E z [logD(G(u))](5)" }, { "formula_coordinates": [ 9, 234.81, 172.33, 305.19, 23.23 ], "formula_id": "formula_6", "formula_text": "Accuracy = T p + T N T p + T N + F p + F N(6)" }, { "formula_coordinates": [ 9, 259.43, 222.8, 280.57, 23.23 ], "formula_id": "formula_7", "formula_text": "P recision = T p T p + F p(7)" }, { "formula_coordinates": [ 9, 266.85, 268.4, 273.15, 23.22 ], "formula_id": "formula_8", "formula_text": "Recall = T p T n + F p(8)" }, { "formula_coordinates": [ 9, 226.9, 333.24, 313.1, 22.53 ], "formula_id": "formula_9", "formula_text": "F 1 -score = 2 × Precision × Recall Precision+Recall(9)" }, { "formula_coordinates": [ 9, 232.23, 390.57, 307.77, 23.22 ], "formula_id": "formula_10", "formula_text": "AU C = R i (I p ) -I p ((I p + 1)/2 I p + I n(10)" }, { "formula_coordinates": [ 9, 239.23, 507.06, 300.77, 29.04 ], "formula_id": "formula_11", "formula_text": "Interclass distance = µ 1 -µ 2 1 n1 + 1 n2(11)" } ]
10.18653/v1/2021.gebnlp-1.4
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b30", "b5", "b97", "b11", "b38", "b4", "b96" ], "table_ref": [], "text": "Automated dialogue or 'conversational AI' systems are increasingly being introduced to the fabric of society, and quickly becoming ubiquitous. As the capabilities of such systems increase, so does the risk that their outputs are mistaken for humanproductions, and that they are anthropomorphised and personified by people (UNESCO, 2019).\nAssigning human characteristics to dialogue systems can have consequences ranging from the relatively benign, e.g. referring to automated systems by gender (Abercrombie et al., 2021), to the disastrous, e.g. people following the advice or instruc-Figure 1: An example of the response of a dialogue system to user input that retains anthropomorphic features, and a de-anthropomorphised version, as envisaged by the authors. tions of a system to do harm (Dinan et al., 2022). 1It is therefore important to consider how dialogue systems are designed and presented in order to mitigate risks associated with their introduction to society.\nRecognising such dangers, legislation has been passed to prohibit automated voice systems from presenting as humans (California State Legislature, 2018) and pre-existing legislation on deceptive trade practices may also apply (Atleson, 2023). Research has also called for wider regulation, e.g. requiring explicit (red) flagging of automated systems (Walsh, 2016) or clarification of the machine nature of manufactured items (Boden et al., 2017).\nWhile some developers seek to limit anthropomorphic cues in system outputs (e.g. Glaese et al., 2022), user engagement can be a strong motivation for creating humanlike systems (Araujo, 2018;Wagner et al., 2019). As a result, despite appearing to be controlled for such cues, the outputs of systems often retain many anthropomorphic linguistic features, as shown in Figure 1.\nIn this position paper, we make a normative argument against gratuitous anthropomorphic features, grounded in findings from psychology, linguistics, and human-computer interaction: We (i) outline the psychological mechanisms and (ii) linguistic factors that contribute to anthropomorphism and personification, e.g. self-referential personal pronoun use, or generating content which gives the appearance of systems having empathy; and (iii) discuss the consequences of anthropomorphism.\nWe conclude with recommendations that can aid in minimising anthropomorphism, thus providing a path for safer dialogue systems and avoiding the creation of mirages of humanity." }, { "figure_ref": [], "heading": "Anthropomorphism", "publication_ref": [ "b53", "b96", "b35", "b50", "b74", "b4", "b62", "b32", "b98" ], "table_ref": [], "text": "Anthropomorphism refers to attributing human characteristics or behaviour to non-human entities, e.g. animals or objects. Humans have a long history of anthropomorphising non-humans. For example, Aesop's fables depict animals reasoning, thinking and even talking like humans (Korhonen, 2019). While Aesop used personification to highlight the fictional character of animals, when applied to machines, anthropomorphism can increase user engagement (Wagner et al., 2019), reciprocity (Fogg and Nass, 1997), along with more pragmatic factors such as hedonic motivation, price value, and habit. For example, self-disclosure from a system, even when 'patently disingenuous', inspires reciprocity from the user (Kim and Sundar, 2012;Ravichander and Black, 2018). 
By encouraging such types of engagements, developers can foster greater connection between people and systems, which increases user satisfaction (Araujo, 2018), and plays an important role in systems becoming widely accepted and adopted. 2 This is why, automated evaluations often assess the 'human-likeness' of a response (Mehri et al., 2022). Thus, developers are incentivised to engage with anthropomorphism to stimulate people to create deeper emotional con-2 Neighbouring disciplines, e.g. social robotics, also argue that some degree of anthropomorphism can enable more natural and intuitive interaction with robots (Duffy, 2003). However, a counterpoint offered to this is the 'uncanny valley' effect, i.e. the positive effects of anthropomorphism can decline sharply when artificial entities fail to mimic realistic human behaviour and appearance (Wang et al., 2015).\nnections with systems that cannot reciprocate.\nIn the rest of this section, we discuss human and system factors that contribute towards placement of systems on the anthropomorphic continuum." }, { "figure_ref": [], "heading": "Human Factors", "publication_ref": [ "b50", "b33", "b33", "b33", "b85", "b59", "b63" ], "table_ref": [], "text": "Research has shown that the process of anthropomorphising is mostly mindless (Kim and Sundar, 2012): it does not reflect the user's thoughtful belief that a computer has human characteristics, but rather it is automatic and encouraged by cues in their interfaces. According to Epley et al. (2007) anthropomorphism may be a default behaviour, which is corrected as people acquire more knowledge about an object. They further argue that on a cognitive level, humans anchor their knowledge to their own experiences and indiscriminately apply it to inanimate objects-in order to make sense of a being or artefact, we map our own lived experiences onto it and assume they experience the world in the same way we do. That is, anthropocentric knowledge is easily accessible and applicable, but applications of it can be corrected with greater knowledge of the object. This may explain why the tendency to anthropomorphise is strongest in childhood, as adults have more knowledge about the world. This cognitive phenomenon is then compounded by two motivational determinants: effectance and sociality (Epley et al., 2007).\nEffectance refers to the need to interact efficiently with one's environment. By anthropomorphising systems we ascribe them (humanlike) intentionality which, in turn, reduces uncertainty and increases confidence in our ability to predict a system's behaviour. Sociality, on the other hand, refers to the need to establish connections with other humans, which can prime us to mentally construct systems as humanlike to fulfil a need for social connection. People suffering from chronic loneliness, a lack of social connection, or attachment issues may be more prone to anthropomorphising objects (Epley et al., 2007). For these reasons, dialogue systems have been proposed as a remedy for the loneliness epidemic (Stupple-Harris, 2021). For instance, commercial virtual companion developers such as Replika.ai saw rises in product uptake in 2020 due to social safety measures such as forced isolation (Liu, 2022;Metz, 2020).\nWhile these elements of the human psyche explain our inclination to personify systems, Epley et al.'s theory does not speak to the qualities of the artefacts themselves that make them anthropomorphic and more prone to be personified." 
}, { "figure_ref": [], "heading": "Agent Factors", "publication_ref": [ "b20", "b75", "b78" ], "table_ref": [], "text": "There is no necessary and sufficient condition for a system to be anthropomorphic, i.e. there exist no particular threshold that affords a binary classification of whether a system is anthropomorphic or not, instead anthropomorphism exists on a spectrum. At the most basic level, systems are anthropomorphic if they (i) are interactive, (ii) use language, and (iii) take on a role performed by a human (Chan et al., 2023;Reeves and Nass, 1996). While these characteristics are inherent to dialogue systems, not all systems are equally humanlike.\nWe can draw a parallel with humanness here. Rather than a single factor which makes humans human, Scruton (2017, p. 31) argues that humanity is emergent: each individual element does not make a human but collectively they make up the language of humanness. Scruton (2017) compares it to a portrait, in which an artist paints areas and lines to compose a face; when observing the canvas, in addition to those marks, we see a face:\nAnd the face is really there: someone who does not see it is not seeing correctly [...] as soon as the lines and blobs are there, so is the face.\nSimilarly, no single attribute or capability makes a system anthropomorphic. Rather, each contributes to the painting until 'the face' emerges. Modern dialogue systems display a plethora of other characteristics that make space for anthropomorphism, e.g. having personas, first names, and supposed preferences. The more of such elements a system has, the more humanlike it appears." }, { "figure_ref": [], "heading": "Linguistic Factors", "publication_ref": [ "b99", "b39" ], "table_ref": [], "text": "Prior research has attended to anthropomorphic design features of dialogue system, e.g. gendered names and avatars (West et al., 2019) and Chat-GPT's animated 'three dots' and word-by-word staggered outputs, which give an impression that the system is thinking (Venkatasubramonian in Goldman, 2023). Here, we outline the linguistic factors that engender personification that have been given less consideration, e.g. voice qualities and speech, content, or style of outputs.3 " }, { "figure_ref": [], "heading": "Voice", "publication_ref": [ "b34", "b99", "b54", "b84", "b81", "b68", "b6" ], "table_ref": [], "text": "While not all dialogue systems are equipped with a voice, merely having one can be interpreted as an expression of personhood (Faber, 2020). Indeed, West et al. (2019) argue that the increased realism of voice is a primary factor contributing to anthropomorphism of dialogue assistants. For instance, based on voice, listeners may infer physical attributes, e.g. height, weight, and age (Krauss et al., 2002); personality traits, e.g. dominance, extroversion, and socio-sexuality (Stern et al., 2021); and human characteristics, e.g. gender stereotypes, personality (Shiramizu et al., 2022), and emotion learned from psychological and social behaviours in human-human communication (Nass and Brave, 2005). This means that humans have a proclivity to assert assumptions of speaker's embodiment, and human characteristics based on their voice alone. Thus, the absence of embodiment affords people to personify systems provided with synthetic voices (Aylett et al., 2019)-a point acknowledged by developers of commercial dialogue systems (Google Assistant)." 
}, { "figure_ref": [], "heading": "Prosody: Tone and Pitch", "publication_ref": [ "b100", "b37", "b7", "b23", "b25", "b83", "b82", "b51", "b101", "b31", "b99", "b56", "b57", "b26", "b91" ], "table_ref": [], "text": "There exist many vocal manipulation techniques that can influence which personality users attribute to a dialogue system. For example, Wilson and Moore (2017) found that a variety of fictional robot, alien, and cartoon voices had manipulated voice characteristics (e.g. breathiness, creakiness, echoes, reverberations) to better fit their desired character. However, they note that 'the voices of speech-enabled artefacts in the non-fictional world [...] invariably sound humanlike, despite the risk that users might be misled about the capabilities of the underlying technology' (Wilson and Moore, 2017, p.42).\nDisfluencies People rarely speak in the same manner with which they write: they are in general disfluent, that is, they insert elements that break the fluent flow of speech, such as interrupting themselves, repetitions, and hesitations ('um', 'uh') (Fraundorf et al., 2018). Such disfluencies are perceived by the listeners as communicative signals, regardless of the speaker's intent (see Barr and Seyfeddinipur, 2010;Clark and Fox Tree, 2002;Corley et al., 2007;Smith and Clark, 1993).\nResearch has therefore sought to integrate disfluencies into text-to-speech (TTS) systems, where they have proven to be a useful strategy for buying time (Skantze et al., 2015), i.e. to allow the system to determine the next step. A person's perception of confidence towards the system's response may decrease due to disfluency (Kirkland et al., 2022;Wollermann et al., 2013), and they may therefore be a useful mitigation strategy to tone down assertions made by a system. However, there are anthropomorphic implications in the (over)integration of disfluencies (Dinkar et al., 2023). For example, West et al. (2019) highlight Google's Duplex, a system for generating real world phone conversations (Leviathan and Matias, 2018). The inclusion of disfluencies in the generated responses mimicked the naturalness of a human response, which in turn led users to believe that they were communicating with another human (Lieu, 2018).\nAccent Accentual pronunciation features, as with those of dialect, provide clues to a human speaker's socio-linguistic identity and background, and geographical origin (Crystal, 1980). While it has been suggested that incorporation of specific accents in the design of synthetic voices can exploit people's tendency to place trust in in-group members (Torre and Maguer, 2020), potentially causing transparency issues, in practice, most are designed to mimic the local standard, reinforcing societal norms of acceptability and prestige." }, { "figure_ref": [], "heading": "Content", "publication_ref": [ "b41", "b14", "b34", "b0", "b38", "b107", "b47", "b49", "b47", "b89", "b77", "b3", "b89", "b94", "b66", "b61", "b38", "b86", "b102", "b15", "b88", "b106", "b94", "b17", "b0", "b42", "b12", "b38", "b0", "b87", "b34", "b65", "b69", "b70", "b78", "b103" ], "table_ref": [], "text": "People's expectation is that animate thingssuch as human beings-and inanimate ones-like machines-have very different functions and capabilities, which reflects the reality. However, dialogue systems often produce responses that blur these lines, for example, by expressing preferences or opinions. 
To avoid confusing the two, the output from dialogue systems should differ from that of people in a range of areas that pertain to their nature and capabilities.\nResponses to Direct Probing Transparency, at the most basic level, requires dialogue systems to respond truthfully to the question 'are you a human or a machine?' This may even be a regulatory requirement, for example in California, it is 'unlawful for a bot to mislead people about its artificial identity for commercial transactions or to influence an election' (California State Legislature, 2018).\nTo test systems' responses to such questions, Gros et al. (2021) used a context free grammar, crowdsourcing, and pre-existing sources to create a dataset of variations on this query (e.g. 'I'm a man, what about you?'). They found that, the majority of the time, neither end-to-end neural researchoriented systems nor commercial voice assistants were able to answer these queries truthfully.\nThis issue can be further complicated when integrating such functionality into a real system due to the sequential nature of dialogue. For example, Casadio et al. (2023) demonstrate that detecting queries about a system's human status reliably and robustly is a challenge in noisy real-life environments. In addition, people may further question a system's status (e.g. 'Are you sure?', 'But you sound so real...', 'Seriously?', etc.), requiring it to accurately keep track of the dialogue context and respond in an appropriate manner. Thus, even if an initial query may be correctly answered, there are no guarantees that follow-ups will be.\nThought, Reason, and Sentience Citing Descartes' (1637) principle 'I think, therefore I am, ' Faber (2020) suggests that, if speech is a representation of thought, then the appearance of thought can signify existence. While computing systems do not have thoughts, the language that they output can give the appearance of thought by indicating that they hold opinions and morals or sentience. Using Coll Ardanuy et al.'s (2020) labelling scheme to assess the degree of sentience exhibited in commercial dialogue systems, Abercrombie et al. (2021) find that surveyed systems exhibit high degrees of perceived animacy. Seeking to mitigate such effects, Glaese et al. (2022) penalise their reinforcement learning system for the appearance of having 'preference, feelings, opinions, or religious beliefs.' This is framed as a safety measure, intended to restrict anthropomorphism in a system's output.\nWhile computing systems cannot have values or morals, there have been attempts to align the output of dialogue systems with expressed human moral values. 4 For example, Ziems et al. (2022) present a corpus of conflicting human judgements on moral issues, labelled according to 'rules of thumb' that they hope explain the acceptability, or lack thereof, of system outputs. Similarly, Jiang et al. (2022) 'teach morality' to a question answering (QA) system, DELPHI, that Kim et al. (2022) have embedded in an open-domain dialogue system. DELPHI, with its connotations of omniscient wisdom, is trained in a supervised manner on a dataset of human moral judgements from sources such as Reddit to predict the 'correct' judgement given a textual prompt. While Jiang et al. (2022) describe the system's outputs as descriptive reflections of the morality of an under-specified population, Talat et al. 
(2022) highlight that DELPHI's output consists of single judgements, phrased in the imperative, thus giving the impression of humanlike reasoning and absolute knowledge of morality. Sap et al. (2022) investigated models for theory of mind, i.e. the ability of an entity to infer other people's 'mental states [...] and to understand how mental states feature in [...] everyday explanations and predictions of people's behaviour' (Apperly, 2012). This idea entails shifting agency from humans to machines, furthering the anthropomorphisation of systems. A system's inability to perform the task, can therefore be understood as a limiting factor to the anthropomorphism of a system.\nAgency and Responsibility Dialogue systems are often referred to as conversational 'agents'. 5However, being an agent, i.e. having agency, requires intentionality and animacy. An entity without agency cannot be responsible for what it produces (Talat et al., 2022). Aside from the legal and ethical implications of suggesting otherwise (Véliz, 2021), systems acknowledging blame for errors or mistakes can add to anthropomorphic perceptions (Mirnig et al., 2017). Mahmood et al. (2022) found that increasing the apparent 'sincerity' with which a dialogue system accepts responsibility (on behalf of 'itself') causes users to perceive them to be more intelligent and likeable, potentially increasing anthropomorphism on several dimensions. Similarly, many dialogue systems have been criticised for 'expressing' controversial 'opinions' and generating toxic content. It is precisely due to their lack of agency and responsibility that developers have invested significant efforts to avoiding contentious topics (e.g. Glaese et al., 2022;Sun et al., 2022;Xu et al., 2021) leading to the creation of taboos for such systems, another particularly human phenomenon.\nEmpathy Recent work has sought for dialogue systems to produce empathetic responses to their users, motivated by improved user engagement and establishing 'rapport' or 'common ground' (e.g. Cassell et al., 2007;Svikhnushina et al., 2022;Zhu et al., 2022). However, dialogue systems are not capable of experiencing empathy, and are unable to correctly recognise emotions (Véliz, 2021). Consequently, they are prone to producing inappropriate emotional amplification (Cercas Curry and Cercas Curry, 2023). Inability aside, the production of pseudo-empathy and emotive language serves to further anthropomorphise dialogue systems.\nHumanlike Activities Beyond implying consciousness and sentience, and failing to deny humanness, Abercrombie et al. (2021) find that, in a quarter of the responses from dialogue systems, they can be prone to making claims of having uniquely human abilities or engaging in activities that are, by definition, restricted to animate entities, e.g. having family relationships, bodily functions, such as consuming food, crying, engaging in physical activity, or other pursuits that require embodiment that they do not possess. Similarly, Gros et al. (2022) find that crowd-workers rate 20 -30% of utterances produced by nine different systems as machine-impossible. They found that only one strictly task-based system (MultiWoz, Budzianowski et al., 2018) did not appear as anthropomorphic to participants. Glaese et al. (2022) propose to address this concern by using reinforcement learning to prohibit systems from generating claims of having (embodied) experiences.\nPronoun Use Prior work has viewed the use of third person pronouns (e.g. 
'he' and 'she') to describe dialogue systems as evidence of users personifying systems (Abercrombie et al., 2021;Sutton, 2020). The use of first person pronouns (e.g. 'me' or 'myself') in system output may be a contributing factor to this perception, as these can be read as signs of consciousness (Faber, 2020;Minsky, 2006). Indeed, it is widely believed that 'I' can only refer to people (Noonan, 2009;Olson, 2002). Scruton (2017) contends that such self-attribution and self-reference permits people to relate as subjects, not mere objects, and that self-definition as an individual is part of the human condition itself. First person pronoun use may therefore contribute to anthropomorphism, either by design or due to their human-produced training data, for symbolic and data driven dialogue systems, respectively.\nMoreover, while the above applies to English and many similar languages, such as those from the Indo-European family, others feature different sets and uses of pronouns, where distinctions for animate and inanimate things may vary (Yamamoto, 1999), and the self-referential production of these pronouns could further influence anthropomorphic perceptions." }, { "figure_ref": [], "heading": "Register and Style", "publication_ref": [ "b10", "b55", "b30", "b64" ], "table_ref": [], "text": "Humans are adept at using linguistic features to convey a variety of registers and styles for communication depending on the context (Biber and Conrad, 2009). In order to mitigate anthropomorphism, it may therefore be preferable for automated system outputs to be functional and avoid social stylistic features.\nPhatic Expressions Phrases such as pleasantries that are used to form and maintain social relations between humans but that do not impart any information can (unnecessarily) add to the sense of humanness conveyed when output by automated systems (Leong and Selinger, 2019). Dinan et al. (2022) describe an 'imposter effect' where people overestimate the factuality of generated output. However, Mielke et al. (2022) find that expressed confidence is poorly calibrated to the probabilities that general knowledge questions are correctly answered. They therefore train a dialogue system to reflect uncertainty in its outputs, altering the content from the purely factual to incorporate humanlike hedging phrases such as 'I'm not sure but . . . '. This bears similarity to the TTS research (see §3.1) which suggests that disfluencies can increase anthropomorphism. Thus, while overestimation can lead to an imposter effect, hedging can boost anthropomorphic signals." }, { "figure_ref": [], "heading": "Expressions of Confidence and Doubt", "publication_ref": [ "b104", "b38" ], "table_ref": [], "text": "Personas Many dialogue systems are developed with carefully designed personas (in the case of commercial systems) or personas induced via crowd-sourced datasets (Zhang et al., 2018). These are often based on human characters and although they are, in practice, merely lists of human attributes and behaviours (see §3.2),6 the notion of imbuing systems with human character-based personas is an effort towards anthropomorphism. Glaese et al. (2022) address this by including a rule against their system appearing to have a human identity." 
}, { "figure_ref": [], "heading": "Roles", "publication_ref": [ "b58", "b99", "b18", "b19", "b30", "b1", "b71", "b79", "b45", "b76", "b80", "b21", "b2", "b29", "b75", "b0", "b6", "b87", "b46", "b27", "b99", "b16", "b16", "b67", "b67", "b44", "b48", "b90", "b43", "b36" ], "table_ref": [], "text": "The roles that dialogue systems are unconsciously and consciously given by their designers and users can shift dialogue systems from the realm of tools towards one of humanlike roles.\nSubservience The majority of systems are conceived as being in the service of people in subservient, secretarial roles (Lingel and Crawford, 2020). This has led to users verbally abusing systems (West et al., 2019), going beyond mere expressions of frustration that one might have with a poorly functioning tool to frequently targeting them with gender-based slurs (Cercas Curry et al., 2021). In such circumstances systems have even been shown to respond subserviently to their abusers, potentially further encouraging the behaviour (Cercas Curry and Rieser, 2018).\nUnqualified Expertise Systems can come to present as having expertise without appropriate qualification (see §3.3), in large part due to their training data (Dinan et al., 2022). For example, commercial rule-based and end-to-end research systems provide high-risk diagnoses and treatment plans in response to medical queries (Abercrombie and Rieser, 2022;Omri et al., 2023).\nFurther, as conversational QA systems are increasingly positioned as replacements to browserbased search, users can be further led to believe that dialogue systems have the expertise to provide a singular correct response rather than a selection of ranked search results (Shah and Bender, 2022).\nTerminology There is increasing awareness that the anthropomorphic language and jargon used to describe technologies such as language models contributes to inaccurate perceptions of their capabilities, particularly among the general public (Hunger, 2023;Salles et al., 2020;Shanahan, 2023). While this is also an issue for research dissemination and journalism more widely, dialogue systems themselves are prone to output references to their own machinic and statistical processes with anthropomorphically loaded terms such as 'know ', 'think', 'train', 'learn', 'understand', 'hallucinate' and 'intelligence'. The anthropomorphism of dialogue systems can induce a number of adverse societal effects, e.g. they can generate unreliable information and reinforce social roles, language norms, and stereotypes.\nTrust and Deception When people are unaware that they are interacting with automated systems they may behave differently than if they know the true nature of their interlocutor. Chiesurin et al. (2023) show that system responses which excessively use natural-sounding linguistic phenomena can instil unjustified trust into the factual correctness of a system's answer. Thus the trust placed in systems grows as they exhibit anthropomorphic behaviour, whether or not the trust is warranted.\nThis may be even more problematic when users are members of vulnerable populations, such as the very young, the elderly, or people with illnesses or disabilities, or simply lack subject matter expertise. Although dialogue systems have been 'put forth' as a possible solution to loneliness, socially disconnected individuals can be particularly vulnerable to such trust issues. 
Children have also been shown to overestimate the intelligence of voice assistants such as Amazon Alexa, and to be unsure of whether they have emotions or feelings (Andries and Robertson, 2023). Given UNESCO's declaration that children have the right to participate in the design of the technological systems that affect them (Dignum et al., 2021), developers may be obliged to bear these considerations in mind.\nGendering Machines People may gender technologies in the face of even minimal gender markers (Reeves and Nass, 1996), as evident in commercial dialogue systems (Abercrombie et al., 2021). Even without any gender markers, people still apply binary gender to dialogue systems (Aylett et al., 2019;Sutton, 2020), as was the case for the 'genderless' voice assistant Q. While some companies now have begun to offer greater diversity of voices and have moved away from default female-gendered voices (Iyengar, 2021), nonbinary or gender-ambiguous dialogue systems such as SAM (Danielescu et al., 2023) are almost nonexistent, leaving people who identify as such without representation. Summarizing West et al. (2019), UNESCO (2019) argue that that encouraging or enabling users to predominantly gender systems as female reinforces gender stereotypes of women as inferior to men:\n[digital assistants] reflect, reinforce and spread gender bias; model acceptance and tolerance of sexual harassment and verbal abuse; send explicit and implicit messages about how women and girls should respond to requests and express themselves; make women the 'face' of glitches and errors that result from the limitations of hardware and software designed predominately by men; and force synthetic 'female' voices and personality to defer questions and commands to higher (and often male) authorities.\nThat is, by designing anthropomorphic systems or even simply leaving space for their (gendered) personification by users, developers risk enabling propagating stereotypes and associated harms.\nLanguage Variation and Whiteness Considering the narrative and fantasies around autonomous artificial intelligence, Cave and Dihal (2020) argue that autonomous systems are prescribed attributes such as autonomy, agency, and being powerfulattributes that are frequently ascribed to whiteness, and precluded from people of colour. In such, people of colour are removed, or erased, from the narrative and imagination around a society with autonomous systems (Cave and Dihal, 2020). Indeed, from a technical point of view, we see that, historically, NLP technologies have been developed to primarily capture the language use of voices of white demographics (Moran, 2021), in part due to their training data. In context of voiced dialogue systems, voices are similarly predominantly white (Moran, 2021). While there are many potential benefits to language technologies like dialogue systems, successful human-machine require that people conform their language use to what is recognised by the technologies. Given the proclivity of NLP to centre white, affluent American dialects (Hovy and Prabhumoye, 2021;Joshi et al., 2020), language variants that deviate from these socio-linguistic norms are less likely to be correctly processed (Tatman, 2017), resulting in errors and misrecognition, and forcing users to code switch to have successful engagements with dialogue systems (Harrington et al., 2022;Foster and Stuart-Smith, 2023). 
This can represent a form of language policing: People can either conform to the machine-recognisable language variant, or forego using it-and its potential benefitsaltogether. Consequently, as people conform to language variants that are recognised by dialogue systems, they also conform to whiteness and the continued erasure of marginalised communities.\nThe personification of such systems could exacerbate the erasure of marginalised communities, e.g. through limiting diverse language data. Furthermore, system outputs often suffer from standardisation, for instance prioritising specific accents that conform to western notions of acceptability and prestige (see §3). Thus, marginalised communities are forced to adopt their accent and (given the tendencies described in §2) personify 'white'-centred dialogue systems that are marketed as 'oracles of knowledge,' reifying hegemonic notions of expertise and knowledge." }, { "figure_ref": [], "heading": "Recommendations", "publication_ref": [ "b40", "b9", "b8", "b30", "b9", "b72", "b100", "b95", "b28", "b95" ], "table_ref": [], "text": "Dialogue systems are used for a wide variety of tasks, and fine-grained recommendations may only be narrowly applicable. We therefore make broad recommendations for consideration: designers should recognise people's tendency to personify, consider which, if any, anthropomorphic tools are appropriate, and reassess both their research goals and the language used to describe their systems.\nRecognise Tendencies to Personify Human languages distinguish between linguistic form (e.g. string prediction in language modelling) and meaning (i.e. the relationship between form and communicative intent) (Grice, 1988). Bender and Koller (2020) argue that humans reflexively derive meaning from signals, i.e. linguistic forms (within linguistic systems we have competence in), regardless of the presence of communicative intent.\nWhether or not it is a part of a dialogue system's deliberate design to use specific linguistic forms (e.g. the cues outlined in §3), listeners will invariably perceive communicative intent. This is particularly so given that, until recently, open domain dialogue was only possible between humans. Thus, unnecessary use of anthropomorphic linguistic cues can cause people to attribute humanlike cognitive abilities to systems-as was the case of Google Duplex, which excessively leveraged disfluencies. Creators of dialogue systems should remain cognisant of these tendencies and carefully consider which anthropomorphic cues people may pick up on, and avoid sending such signals, whether they occur by design or through a lack of consideration (e.g. stemming from datasets).\nConsider the Appropriateness of Anthropomorphic Tools Given our inherent nature to attribute meaning to signals, one must consider the appropriateness of the tool and use cases (Bender et al., 2021;Dinan et al., 2022) when designing dialogue systems, in order to avoid the (over-)integration of anthropomorphic cues. Indeed, it is only within a given context that one can make judgement on whether anthropomorphism is a concern. For instance, personifying one's vacuum cleaning robot (i.e. shouting at it in frustration for not cleaning properly), is of less concern than the anthropomorphism of a dialogue system marketed as 'social' or 'empathetic', or technology sold as a 'singular oracle of (all) knowledge'. 
We therefore argue that developers should move towards focusing on the appropriateness of anthropomorphic tools in order to limit the negative consequences of anthropomorphism which can lead to false impressions of a system's capabilities.\nReassess Research Goals Traditionally, the goal of Artificial Intelligence research has been to create systems that would exhibit intelligence indistinguishable from humans. TTS systems for instance, are evaluated on how natural and fluent the output sounds. Though intelligence and understanding should not be conflated with systems that exhibit humanlike behaviour (Bender and Koller, 2020), the human tendency to anthropomorphise convinces us of a machine's apparent intelligence (Proudfoot, 2011). It is in part due to this longstanding goal of anthropomorphic systems that there only exists a small body of work that does not seek anthropomorphism, despite growing awareness of its harms. Furthermore, these studies exist in isolation, and the taxonomy introduced in this paper highlights that we lack an approach that quantifies linguistic factors and relates them to possible harms and risks.\nThus, while it is infeasible to comprehensively map which linguistic cues to use or avoid, we discuss recommendations that arise from prior work. For example, Wilson and Moore (2017) recommend that developers produce synthesised voices that people recognise as non-human by calibrating mean pitch and pitch shimmer. In an analysis of reviews of commercial voice assistants, Völkel et al. (2020) find that the big five personality traits (De Raad, 2000) do not adequately describe user expectations of systems' 'personalities'. The only consistently desired trait was agreeable-ness, as users expect prompt and reliable responses to queries (Völkel et al., 2020). Thus, imbuing voice assistants and dialogue systems with humanlike personality traits does not ensure alignment with people's expectation of system behaviour. We therefore recommend that designers and developers reassess the utility of embedding humanlike personality traits in dialogue systems." }, { "figure_ref": [], "heading": "Avoid Anthropomorphic System Description", "publication_ref": [ "b2", "b45", "b52", "b45" ], "table_ref": [], "text": "Irrespective of any 'humanlike' qualities that dialogue systems might possess, there is widespread public confusion surrounding the nature and abilities of current language technologies. This confusion extends from children (Andries and Robertson, 2023) to adults (including some journalists, policymakers, and business people) who are convinced, on the one hand, of humanity's imminent enslavement to 'super-intelligent artificial agents' (to the neglect of actual harms already propagated by technological systems), or, on the other, that such systems provide super-human solutions to the world's problems (Hunger, 2023;Klein, 2023).\nWhile the content of systems' outputs can reinforce anthropomorphic perceptions, the language used to describe systems can be of greater influence. The tendency of people who do know how technologies are built to use anthropomorphic language represents, according to Salles et al. (2020, p. 93), 'a significant failure in scientific communication and engagement'. 
Although anthropomorphic terminology is deeply rooted in the argot of computer scientists, particularly those working in 'artificial intelligence', and while there exist significant motivations to continue to create hype around products and research (Hunger, 2023), practitioners should reflect on how the language they use affects people's understanding and behaviour." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Anthropomorphising dialogue systems can be attractive for researchers in order to drive user engagement. However, production of highly anthropomorphic systems can also lead to downstream harms such as (misplaced) trust in the output (mis-)information. Even if developers and designers attempt to avoid including any anthropomorphic signals, humans may still personify systems and perceive them as anthropomorphic entities. For this reason, we argue that it is particularly important to carefully consider the particular ways that systems might be perceived anthropomorphically, and choose the appropriate feature for a given situation. By carefully considering how a system may be anthropomorphised and deliberately selecting the attributes that are appropriate for each context, developers and designers can avoid falling into the trap of creating mirages of humanity." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b103" ], "table_ref": [], "text": "While we have attempted to enumerate the linguistic factors that can increase the likelihood that users will view dialogue systems as anthropomorphic, this list of features is not exhaustive. As we describe in section 2, anthropomorphism varies from person-to-person and people may react differently to different aspects of a system's design. This paper represents only a starting point for researchers and developers to consider the implications that their design choices may have.\nIn this paper, due to the backgrounds of the authors as speakers of Indo-European languages and the dominance of English in NLP research, we have focused primarily on English language dialogue systems. However, it should be noted that other languages have features such as grammatical ways of denoting animacy (Yamamoto, 1999) and gender that could influence users personification of systems, and which developers should consider if they wish to limit anthropomorphism." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "Although our manuscript outlines ways to create dialogue systems while minimising their potential anthropomorphism and personification, it could also be used as a guide to creating anthropomorphic systems. Our aim is to highlight the risks and provide researchers, developers, and designers with a path towards addressing the concerns that arise from anthropomorphisation in dialogue systems, an area that is particularly relevant at the time of writing due to the introduction of systems such as OpenAI's ChatGPT and Microsoft's Sydney, which have high surface form language generation performance." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We would like to thank Emily Bender and Canfer Akbulut for their feedback on the draft manuscript, and the reviewers for their helpful comments.\nGavin Abercrombie and Verena Rieser were supported by the EPSRC project 'Equally Safe Online' (EP/W025493/1). 
Gavin Abercrombie, Tanvi Dinkar and Verena Rieser were supported by the EPSRC project 'Gender Bias in Conversational AI' (EP/T023767/1). Tanvi Dinkar and Verena Rieser were supported by the EPSRC project 'AISEC: AI Secure and Explainable by Construction' (EP/T026952/1). Verena Rieser was also supported by a Leverhulme Trust Senior Research Fellowship (SRF/R1/201100). Amanda Cercas Curry was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 949944, INTEGRATOR)." } ]
Automated dialogue or conversational systems are anthropomorphised by developers and personified by users. While a degree of anthropomorphism may be inevitable due to the choice of medium, conscious and unconscious design choices can guide users to personify such systems to varying degrees. Encouraging users to relate to automated systems as if they were human can lead to high risk scenarios caused by over-reliance on their outputs. As a result, natural language processing researchers have investigated the factors that induce personification and develop resources to mitigate such effects. However, these efforts are fragmented, and many aspects of anthropomorphism have yet to be explored. In this paper, we discuss the linguistic factors that contribute to the anthropomorphism of dialogue systems and the harms that can arise, including reinforcing gender stereotypes and notions of acceptable language. We recommend that future efforts towards developing dialogue systems take particular care in their design, development, release, and description; and attend to the many linguistic cues that can elicit personification by users.
Mirages. On Anthropomorphism in Dialogue Systems
[]
Gavin Abercrombie; Amanda Cercas Curry; Tanvi Dinkar; Verena Rieser; Zeerak Talat; Mohamed Bin Zayed
[ { "authors": "Gavin Abercrombie; Amanda Cercas Curry; Mugdha Pandya; Verena Rieser", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Alexa, Google, Siri: What are your pronouns? gender and anthropomorphism in the design and perception of conversational assistants", "year": "2021" }, { "authors": "Gavin Abercrombie; Verena Rieser", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Riskgraded safety for handling medical queries in conversational AI", "year": "2022" }, { "authors": "Valentina Andries; Judy Robertson", "journal": "", "ref_id": "b2", "title": "Children's understanding of AI through interactions with smart speakers in their homes", "year": "2023" }, { "authors": "Ian A Apperly", "journal": "Quarterly Journal of Experimental Psychology", "ref_id": "b3", "title": "What is \"theory of mind\"? Concepts, cognitive processes and individual differences", "year": "2012" }, { "authors": "Theo Araujo", "journal": "Computers in Human Behavior", "ref_id": "b4", "title": "Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions", "year": "2018" }, { "authors": " Michael Atleson", "journal": "", "ref_id": "b5", "title": "Chatbots, deepfakes, and voice clones: AI deception for sale", "year": "2023-04-14" }, { "authors": "Matthew P Aylett; Selina Jeanne Sutton; Yolanda Vazquez-Alvarez", "journal": "Association for Computing Machinery", "ref_id": "b6", "title": "The right kind of unnatural: Designing a robot voice", "year": "2019" }, { "authors": "J Dale; Mandana Barr; Seyfeddinipur", "journal": "Language and Cognitive Processes", "ref_id": "b7", "title": "The role of fillers in listener attributions for speaker disfluency", "year": "2010" }, { "authors": "Emily M Bender; Timnit Gebru; Angelina Mcmillan-Major; Shmargaret Shmitchell", "journal": "Association for Computing Machinery", "ref_id": "b8", "title": "On the dangers of stochastic parrots: Can language models be too big?", "year": "2021" }, { "authors": "Emily M Bender; Alexander Koller", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Climbing towards NLU: On meaning, form, and understanding in the age of data", "year": "2020" }, { "authors": "Douglas Biber; Susan Conrad", "journal": "Cambridge University Press", "ref_id": "b10", "title": "Register, Genre, and Style", "year": "2009" }, { "authors": "Margaret Boden; Joanna Bryson; Darwin Caldwell; Kerstin Dautenhahn; Lilian Edwards; Sarah Kember; Paul Newman; Vivienne Parry; Geoff Pegman; Tom Rodden; Tom Sorrell; Mick Wallis; Blay Whitby; Alan Winfield", "journal": "Connection Science", "ref_id": "b11", "title": "Principles of robotics: regulating robots in the real world", "year": "2017" }, { "authors": "Paweł Budzianowski; Tsung-Hsien Wen; Bo-Hsiang Tseng; Iñigo Casanueva; Stefan Ultes; Milica Osman Ramadan; Gašić", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "MultiWOZ -a largescale multi-domain Wizard-of-Oz dataset for taskoriented dialogue modelling", "year": "2018" }, { "authors": "", "journal": "California State Legislature", "ref_id": "b13", "title": "California Senate Bill no", "year": "1001" }, { "authors": "Marco Casadio; Luca Arnaboldi; Matthew L Daggitt; Omri Isac; Tanvi Dinkar; Daniel Kienitz; Verena Rieser; Ekaterina Komendantskaya", "journal": "", "ref_id": "b14", "title": "Antonio: Towards a systematic method of generating NLP benchmarks 
for verification", "year": "2023" }, { "authors": "Justine Cassell; Alastair Gill; Paul Tepper", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Coordination in conversation and rapport", "year": "2007" }, { "authors": "Stephen Cave; Kanta Dihal", "journal": "Philosophy & Technology", "ref_id": "b16", "title": "The Whiteness of AI", "year": "2020" }, { "authors": "Cercas Alba; Amanda Cercas Curry; Curry", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Computer says \"no\": The case against empathetic conversational AI", "year": "2023" }, { "authors": "Amanda Cercas Curry; Gavin Abercrombie; Verena Rieser", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "ConvAbuse: Data, analysis, and benchmarks for nuanced abuse detection in conversational AI", "year": "2021" }, { "authors": "Amanda ; Cercas Curry; Verena Rieser", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "#MeToo Alexa: How conversational systems respond to sexual harassment", "year": "2018" }, { "authors": "Alan Chan; Rebecca Salganik; Alva Markelius; Chris Pang; Nitarshan Rajkumar; Dmitrii Krasheninnikov; Lauro Langosco; Zhonghao He; Yawen Duan; Micah Carroll; Michelle Lin; Alex Mayhew; Katherine Collins; Maryam Molamohammadi; John Burden; Wanru Zhao; Shalaleh Rismani; Konstantinos Voudouris; Umang Bhatt; Adrian Weller; David Krueger; Tegan Maharaj", "journal": "Association for Computing Machinery", "ref_id": "b20", "title": "Harms from increasingly agentic algorithmic systems", "year": "2023" }, { "authors": "Sabrina Chiesurin; Dimitris Dimakopoulos; Marco Antonio Sobrevilla; Arash Cabezudo; Ioannis Eshghi; Verena Papaioannou; Ioannis Rieser; Konstas", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "The dangers of trusting stochastic parrots: Faithfulness and trust in open-domain conversational question answering", "year": "2023" }, { "authors": "H Herbert; Kerstin Clark; Fischer", "journal": "Behavioral and Brain Sciences", "ref_id": "b22", "title": "Social robots as depictions of social agents", "year": "2023" }, { "authors": "H Herbert; Jean E Fox Clark; Tree", "journal": "Cognition", "ref_id": "b23", "title": "Using uh and um in spontaneous speaking", "year": "2002" }, { "authors": "Mariona Coll; Ardanuy ; Federico Nanni; Kaspar Beelen; Kasra Hosseini; Ruth Ahnert; Jon Lawrence; Katherine Mcdonough; Giorgia Tolfo; Barbara Daniel Cs Wilson; Mcgillivray", "journal": "International Committee on Computational Linguistics", "ref_id": "b24", "title": "Living machines: A study of atypical animacy", "year": "2020" }, { "authors": "Martin Corley; Lucy J Macgregor; David I Donaldson", "journal": "Cognition", "ref_id": "b25", "title": "It's the way that you, er, say it: Hesitations in speech affect language comprehension", "year": "2007" }, { "authors": "David Crystal", "journal": "Language library", "ref_id": "b26", "title": "A First Dictionary of Linguistics and Phonetics", "year": "1980" }, { "authors": "Andreea Danielescu; A Sharone; Alexandria Horowit-Hendler; Kenneth Pabst; Eric M Michael Stewart; Matthew Gallo; Aylett Peter", "journal": "Association for Computing Machinery", "ref_id": "b27", "title": "Creating inclusive voices for the 21st century: A non-binary text-to-speech for conversational assistants", "year": "2023" }, { "authors": "Boele De; Raad ", "journal": "René Descartes", "ref_id": "b28", "title": "The big five personality factors: The 
psycholexical approach to personality", "year": "1637" }, { "authors": "Virginia Dignum; Melanie Penagos; Klara Pigmans; Steven Vosloo", "journal": "UNICEF", "ref_id": "b29", "title": "Policy guidance on AI for children: Recommendations for building AI policies and systems that uphold child rights", "year": "2021" }, { "authors": "Emily Dinan; A Gavin Abercrombie; Shannon Bergman; Dirk Spruit; Y-Lan Hovy; Verena Boureau; Rieser", "journal": "", "ref_id": "b30", "title": "SafetyKit: First aid for measuring safety in open-domain conversational systems", "year": "2022" }, { "authors": "Tanvi Dinkar; Chloé Clavel; Ioana Vasilescu", "journal": "", "ref_id": "b31", "title": "Fillers in spoken language understanding: Computational and psycholinguistic perspectives", "year": "2023" }, { "authors": "Brian R Duffy", "journal": "Robotics and Autonomous Systems", "ref_id": "b32", "title": "Anthropomorphism and the social robot", "year": "2003" }, { "authors": "Nicholas Epley; Adam Waytz; John T Cacioppo", "journal": "Psychological Review", "ref_id": "b33", "title": "On seeing human: A three-factor theory of anthropomorphism", "year": "2007" }, { "authors": "Liz W Faber", "journal": "University of Minnesota Press", "ref_id": "b34", "title": "The Computer's Voice: From Star Trek to Siri", "year": "2020" }, { "authors": "B J Fogg; Clifford Nass", "journal": "Association for Computing Machinery", "ref_id": "b35", "title": "How users reciprocate to computers: An experiment that demonstrates behavior change", "year": "1997" }, { "authors": "Mary ; Ellen Foster; Jane Stuart-Smith", "journal": "Association for Computing Machinery", "ref_id": "b36", "title": "Social robotics meets sociolinguistics: Investigating accent bias and social context in HRI", "year": "2023" }, { "authors": "H Scott; Jennifer Fraundorf; Valerie J Arnold; Langlois", "journal": "Oxford University Press", "ref_id": "b37", "title": "Disfluency", "year": "2018" }, { "authors": "Amelia Glaese; Nat Mcaleese; Maja Trębacz; John Aslanides; Vlad Firoiu; Timo Ewalds; Maribeth Rauh; Laura Weidinger; Martin Chadwick; Phoebe Thacker; Lucy Campbell-Gillingham; Jonathan Uesato; Po-Sen Huang; Ramona Comanescu; Fan Yang; Abigail See; Sumanth Dathathri; Rory Greig; Charlie Chen; Doug Fritz; Jaume Sanchez Elias; Richard Green; Soňa Mokrá; Nicholas Fernando; Boxi Wu; Rachel Foley; Susannah Young; Iason Gabriel; William Isaac; John Mellor; Demis Hassabis; Koray Kavukcuoglu; Lisa Anne Hendricks; Geoffrey Irving", "journal": "", "ref_id": "b38", "title": "Improving alignment of dialogue agents via targeted human judgements", "year": "2022" }, { "authors": "Sharon Goldman; ; Sen", "journal": "", "ref_id": "b39", "title": "Murphy's tweets on ChatGPT spark backlash from former White House AI policy advisor", "year": "2023-04-04" }, { "authors": "H P Grice", "journal": "Springer Netherlands", "ref_id": "b40", "title": "Utterer's meaning, sentence-meaning, and word-meaning", "year": "1988" }, { "authors": "David Gros; Yu Li; Zhou Yu", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "The R-U-arobot dataset: Helping avoid chatbot deception by detecting user questions about human or non-human identity", "year": "2021" }, { "authors": "David Gros; Yu Li; Zhou Yu", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Robots-dontcry: Understanding falsely anthropomorphic utterances in dialog systems", "year": "2022" }, { "authors": "Christina N Harrington; Radhika Garg; Amanda Woodward; Dimitri Williams", 
"journal": "Association for Computing Machinery", "ref_id": "b43", "title": "It's kind of like code-switching\": Black older adults' experiences with a voice assistant for health information seeking", "year": "2022" }, { "authors": "Dirk Hovy; Shrimai Prabhumoye", "journal": "Language and Linguistics Compass", "ref_id": "b44", "title": "Five sources of bias in natural language processing", "year": "2021" }, { "authors": "Francis Hunger", "journal": "", "ref_id": "b45", "title": "Unhype artificial 'intelligence'! A proposal to replace the deceiving terminology of AI", "year": "2023" }, { "authors": "Rishi Iyengar", "journal": "", "ref_id": "b46", "title": "Apple will no longer make Siri's voice female by default", "year": "2021" }, { "authors": "Liwei Jiang; Jena D Hwang; Chandra Bhagavatula; Le Ronan; Jenny Bras; Jesse Liang; Keisuke Dodge; Maxwell Sakaguchi; Jon Forbes; Saadia Borchardt; Yulia Gabriel; Oren Tsvetkov; Maarten Etzioni; Regina Sap; Yejin Rini; Choi", "journal": "", "ref_id": "b47", "title": "Can machines learn morality? The Delphi experiment", "year": "2022" }, { "authors": "Pratik Joshi; Sebastin Santy; Amar Budhiraja; Kalika Bali; Monojit Choudhury", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "The state and fate of linguistic diversity and inclusion in the NLP world", "year": "2020" }, { "authors": "Hyunwoo Kim; Youngjae Yu; Liwei Jiang; Ximing Lu; Daniel Khashabi; Gunhee Kim; Yejin Choi; Maarten Sap", "journal": "Association for Computational Linguistics", "ref_id": "b49", "title": "ProsocialDialog: A prosocial backbone for conversational agents", "year": "2022" }, { "authors": "Youjeong Kim; S Shyam Sundar", "journal": "Computers in Human Behavior", "ref_id": "b50", "title": "Anthropomorphism of computers: Is it mindful or mindless?", "year": "2012" }, { "authors": "Ambika Kirkland; Harm Lameris; Eva Székely; Joakim Gustafson", "journal": "", "ref_id": "b51", "title": "Where's the uh, hesitation? The interplay between filled pause location, speech rate and fundamental frequency in perception of confidence", "year": "2022" }, { "authors": "Naomi Klein", "journal": "", "ref_id": "b52", "title": "AI machines aren't 'hallucinating", "year": "2023-05-11" }, { "authors": "Tua Korhonen", "journal": "", "ref_id": "b53", "title": "Anthropomorphism and the aesopic animal fables. 
Animals and their Relation to Gods, Humans and Things in the Ancient World", "year": "2019" }, { "authors": "Robert M Krauss; Robin J Freyberg; Ezequiel Morsella", "journal": "Journal of Experimental Social Psychology", "ref_id": "b54", "title": "Inferring speakers' physical attributes from their voices", "year": "2002" }, { "authors": "Brenda Leong; Evan Selinger", "journal": "Association for Computing Machinery", "ref_id": "b55", "title": "Robot eyes wide shut: Understanding dishonest anthropomorphism", "year": "2019" }, { "authors": "Yaniv Leviathan; Yossi Matias", "journal": "Google AI Blog", "ref_id": "b56", "title": "Google Duplex: An AI system for accomplishing real world tasks over the phone", "year": "2018" }, { "authors": "Johnny Lieu", "journal": "", "ref_id": "b57", "title": "Google's creepy AI phone call feature will disclose it's a robot", "year": "2018" }, { "authors": "Jessa Lingel; Kate Crawford", "journal": "Catalyst: Feminism, Theory, Technoscience", "ref_id": "b58", "title": "Alexa, tell me about your mother\": The history of the secretary and the end of secrecy", "year": "2020" }, { "authors": "Fanjue Liu", "journal": "Journal of Promotion Management", "ref_id": "b59", "title": "Hanging out with my pandemic pal: Contextualizing motivations of anthropomorphizing voice assistants during COVID-19", "year": "2022" }, { "authors": "Pierre-François Lovens", "journal": "", "ref_id": "b60", "title": "Sans ces conversations avec le chatbot Eliza, mon mari serait toujours là", "year": "2023-04-14" }, { "authors": "Amama Mahmood; Jeanie W Fung; Isabel Won; Chien-Ming Huang", "journal": "Association for Computing Machinery", "ref_id": "b61", "title": "Owning mistakes sincerely: Strategies for mitigating AI errors", "year": "2022" }, { "authors": "Shikib Mehri; Jinho Choi; Luis Fernando; D' Haro; Jan Deriu; Maxine Eskenazi; Milica Gasic; Kallirroi Georgila; Dilek Hakkani-Tur; Zekang Li; Verena Rieser", "journal": "", "ref_id": "b62", "title": "Report from the NSF future directions workshop on automatic evaluation of dialog: Research directions and challenges", "year": "2022" }, { "authors": "Cade Metz", "journal": "", "ref_id": "b63", "title": "Riding out quarantine with a chatbot friend: 'I feel very connected", "year": "2020" }, { "authors": "Sabrina J Mielke; Arthur Szlam; Emily Dinan; Y-Lan Boureau", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b64", "title": "Reducing conversational agents' overconfidence through linguistic calibration", "year": "2022" }, { "authors": "Marvin Minsky", "journal": "Simon and Schuster", "ref_id": "b65", "title": "The Emotion Machine: Commonsense Thinking", "year": "2006" }, { "authors": "Nicole Mirnig; Gerald Stollnberger; Markus Miksch; Susanne Stadler; Manuel Giuliani; Manfred Tscheligi", "journal": "Frontiers in Robotics and AI", "ref_id": "b66", "title": "To err is robot: How humans assess and act toward an erroneous social robot", "year": "2017" }, { "authors": "C Taylor; Moran", "journal": "Communication and Critical/Cultural Studies", "ref_id": "b67", "title": "Racial technological bias and the white, feminine voice of AI VAs", "year": "2021" }, { "authors": "Clifford Ivar; Nass ; Scott Brave", "journal": "MIT press Cambridge", "ref_id": "b68", "title": "Wired for speech: How voice activates and advances the human-computer relationship", "year": "2005" }, { "authors": "Harold W Noonan", "journal": "Analysis", "ref_id": "b69", "title": "The thinking animal problem and personal pronoun revisionism", 
"year": "2009" }, { "authors": "Eric T Olson", "journal": "Philosophical Topics", "ref_id": "b70", "title": "Thinking animals and the reference of 'I", "year": "2002" }, { "authors": "Sihem Omri; Manel Abdelkader; Mohamed Hamdi; Tai-Hoon Kim", "journal": "Springer Nature Singapore", "ref_id": "b71", "title": "Safety issues investigation in deep learning based chatbots answers to medical advice requests", "year": "2023" }, { "authors": "Diane Proudfoot", "journal": "Artificial Intelligence", "ref_id": "b72", "title": "Anthropomorphism and AI: Turing's much misunderstood imitation game", "year": "2011" }, { "authors": "S G Pulman", "journal": "", "ref_id": "b73", "title": "Conversational games, belief revision and Bayesian networks", "year": "1996" }, { "authors": "Abhilasha Ravichander; Alan W Black", "journal": "Association for Computational Linguistics", "ref_id": "b74", "title": "An Empirical Study of Self-Disclosure in Spoken Dialogue Systems", "year": "2018" }, { "authors": "Byron Reeves; Clifford Nass", "journal": "Cambridge university press", "ref_id": "b75", "title": "The Media Equation: How People Treat Computers, Television, and New Media like Real People", "year": "1996" }, { "authors": "Arleen Salles; Kathinka Evers; Michele Farisco", "journal": "AJOB Neuroscience", "ref_id": "b76", "title": "Anthropomorphism in AI", "year": "2020" }, { "authors": "Maarten Sap; Le Ronan; Daniel Bras; Yejin Fried; Choi", "journal": "Association for Computational Linguistics", "ref_id": "b77", "title": "Neural theory-of-mind? On the limits of social intelligence in large LMs", "year": "2022" }, { "authors": "Roger Scruton", "journal": "Princeton University Press", "ref_id": "b78", "title": "On human nature", "year": "2017" }, { "authors": "Chirag Shah; Emily M Bender", "journal": "Association for Computing Machinery", "ref_id": "b79", "title": "Situating search", "year": "2022" }, { "authors": "Murray Shanahan", "journal": "", "ref_id": "b80", "title": "Talking about large language models", "year": "2023" }, { "authors": "Victor Kenji; M Shiramizu; Anthony J Lee; Daria Altenburg; David R Feinberg; Benedict C Jones", "journal": "Scientific Reports", "ref_id": "b81", "title": "The role of valence, dominance, and pitch in perceptions of artificial intelligence (AI) conversational agents' voices", "year": "2022" }, { "authors": "Gabriel Skantze; Martin Johansson; Jonas Beskow", "journal": "", "ref_id": "b82", "title": "Exploring turn-taking cues in multi-party human-robot discussions about objects", "year": "2015" }, { "authors": "L Vicki; Herbert H Smith; Clark", "journal": "Journal of Memory and Language", "ref_id": "b83", "title": "On the course of answering questions", "year": "1993" }, { "authors": "Julia Stern; Christoph Schild; Benedict C Jones; Lisa M Debruine; Amanda Hahn; David A Puts; Ingo Zettler; Tobias L Kordsmeyer; David Feinberg; Dan Zamfir; Lars Penke; Ruben C Arslan", "journal": "Journal of Research in Personality", "ref_id": "b84", "title": "Do voices carry valid information about a speaker's personality", "year": "2021" }, { "authors": "Louis Stupple-Harris", "journal": "", "ref_id": "b85", "title": "Tech in the dock. 
Should AI chatbots be used to address the nation's loneliness problem?", "year": "2021" }, { "authors": "Guangxuan Hao Sun; Jiawen Xu; Jiale Deng; Chujie Cheng; Hao Zheng; Nanyun Zhou; Xiaoyan Peng; Minlie Zhu; Huang", "journal": "Association for Computational Linguistics", "ref_id": "b86", "title": "On the safety of conversational models: Taxonomy, dataset, and benchmark", "year": "2022" }, { "authors": "Selina Jeanne; Sutton ", "journal": "Association for Computing Machinery", "ref_id": "b87", "title": "Gender ambiguous, not genderless: Designing gender in voice user interfaces (VUIs) with sensitivity", "year": "2020" }, { "authors": "Ekaterina Svikhnushina; Iuliana Voinea; Anuradha Welivita; Pearl Pu", "journal": "Association for Computational Linguistics", "ref_id": "b88", "title": "A taxonomy of empathetic questions in social dialogs", "year": "2022" }, { "authors": "Zeerak Talat; Hagen Blix; Josef Valvoda; Maya Indira Ganesh; Ryan Cotterell; Adina Williams", "journal": "Association for Computational Linguistics", "ref_id": "b89", "title": "On the machine learning of ethical judgments from natural language", "year": "2022" }, { "authors": "Rachael Tatman", "journal": "Association for Computational Linguistics", "ref_id": "b90", "title": "Gender and dialect bias in YouTube's automatic captions", "year": "2017" }, { "authors": "Ilaria Torre; Sébastien Le Maguer", "journal": "", "ref_id": "b91", "title": "Should robots have accents?", "year": "2020" }, { "authors": "David R Traum; Staffan Larsson", "journal": "Springer", "ref_id": "b92", "title": "The Information State Approach to Dialogue Management", "year": "2003" }, { "authors": "", "journal": "UNESCO", "ref_id": "b93", "title": "Explore the gendering of AI voice assistants", "year": "2019" }, { "authors": "Carissa Véliz", "journal": "AI & Society", "ref_id": "b94", "title": "Moral zombies: why algorithms are not moral agents", "year": "2021" }, { "authors": "Sarah Theres Völkel; Ramona Schödel; Daniel Buschek; Clemens Stachl; Verena Winterhalter; Markus Bühner; Heinrich Hussmann", "journal": "Association for Computing Machinery", "ref_id": "b95", "title": "Developing a personality model for speech-based conversational agents using the psycholexical approach", "year": "2020" }, { "authors": "Katja Wagner; Frederic Nimmermann; Hanna Schramm-Klein", "journal": "", "ref_id": "b96", "title": "Is it human? The role of anthropomorphism as a driver for the successful acceptance of digital voice assistants", "year": "2019" }, { "authors": "Toby Walsh", "journal": "Communications of the ACM", "ref_id": "b97", "title": "Turing's red flag", "year": "2016" }, { "authors": "Shensheng Wang; Scott O Lilienfeld; Philippe Rochat", "journal": "Review of General Psychology", "ref_id": "b98", "title": "The uncanny valley: Existence and explanations", "year": "2015" }, { "authors": "Mark West; Rebecca Kraut; Han Ei Chew", "journal": "UNESCO", "ref_id": "b99", "title": "I'd Blush if I Could: Closing Gender Divides in Digital Skills through Education", "year": "2019" }, { "authors": "Sarah Wilson; Roger K Moore", "journal": "", "ref_id": "b100", "title": "Robot, alien and cartoon voices: Implications for speech-enabled systems. 
In 1st International Workshop on Vocal Interactivity in-and-between Humans, Animals and Robots (VIHAR)", "year": "2017" }, { "authors": "Charlotte Wollermann; Eva Lasarcyk; Ulrich Schade; Bernhard Schröder", "journal": "", "ref_id": "b101", "title": "Disfluencies and uncertainty perception-evidence from a humanmachine scenario", "year": "2013" }, { "authors": "Jing Xu; Da Ju; Margaret Li; Y-Lan Boureau; Jason Weston; Emily Dinan", "journal": "", "ref_id": "b102", "title": "Recipes for safety in open-domain chatbots", "year": "2021" }, { "authors": "Mutsumi Yamamoto", "journal": "J. Benjamins", "ref_id": "b103", "title": "Animacy and Reference: A Cognitive Approach to Corpus Linguistics", "year": "1999" }, { "authors": "Saizheng Zhang; Emily Dinan; Jack Urbanek; Arthur Szlam; Douwe Kiela; Jason Weston", "journal": "", "ref_id": "b104", "title": "Personalizing dialogue agents: I have a dog, do you have pets too?", "year": "2018" }, { "authors": "Ling ", "journal": "", "ref_id": "b105", "title": "", "year": "" }, { "authors": "Yu Zhu; Zhengkun Zhang; Jun Wang; Hongbin Wang; Haiying Wu; Zhenglu Yang", "journal": "Association for Computational Linguistics", "ref_id": "b106", "title": "Multiparty empathetic dialogue generation: A new task for dialog systems", "year": "2022" }, { "authors": "Caleb Ziems; Jane Yu; Yi-Chia Wang; Alon Halevy; Diyi Yang", "journal": "Association for Computational Linguistics", "ref_id": "b107", "title": "The moral integrity corpus: A benchmark for ethical dialogue systems", "year": "2022" } ]
[]
10.5281/zenodo.7347926
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b11", "b13" ], "table_ref": [], "text": "Plant phenotyping is used to find the connection between a plant's physical properties and the genetic information [1]. Modern high-throughput plant phenotyping uses Unmanned Aerial Vehicles (UAVs) equipped with multiple sensors to collect imagery [2]. The images collected by UAVs can be analyzed to estimate plant traits [3]. Sorghum (Sorghum bicolor (L.) Moench) is an important crop for food and biofuel production [4]. The Sorghum panicle is a cluster of grains at the top of the plants that is critical to plant growth and management [4]. Detecting sorghum panicles can help plant breeders estimate plant properties such as flowering time [5]. The deep neural network has shown successful results in general object detection tasks [6]. Recently, deep neural networks have also demonstrated the capability for detection tasks related to plant phenotyping [7]. However, a large amount images are needed for training the neural network. Ground truthing a large amount of RGB images captured by UAVs is a major bottleneck relative to the performance of the detection tasks. Semi-supervised classification approaches train the network with a small amount of labeled data and a large amount of unlabeled data to reduce the manual data labeling [8]. The use of pseudo-labels [9] is the key idea for semi-supervised approaches. The pseudo-labels are the data labels generated by the model pretrained on the small dataset. The pseudo-labels are combined with the real labels to expand the training dataset. A semi-supervised loss is also introduced for training on labeled and unlabeled data. Recent work focuses on regulating the loss function to maintain consistency during training. MixMatch [10] is an example of consistency regulation. It uses data augmentation, label guessing, and MixUp on both labeled and unlabeled images. FixMatch [11] is another consistency-based method for semi-supervised classification. It combines consistency regularization and pseudo-labeling to improve the performance of semi-supervised learning. Similar to semisupervised classification, pseudo-label-based approaches are used for semi-supervised object detection [12,13,14]. The approach consists of a teacher model and a student model. The teacher model is trained with a small amount of data at first. The teacher network will then generate the annotation from the unlabeled dataset to produce pseudo-labels. The pseudo-labeled data and labeled data are combined to train the student model. In [12], Sohn et al. introduce a framework, STAC, to generate highly confident pseudo labels and update the models by enforcing consistency. STAC generates pseudo-labels from unlabeled data using non-maximum suppression (NMS). The confidence-based method is used to filter pseudo-labels. Unbiased Teacher [14] is another framework to jointly train the student and teacher networks in a mutually-beneficial manner. In this paper, we present a method to train a sorghum panicle detection deep neural network on RGB UAV images with a small amount of training data using semi-supervised learning." }, { "figure_ref": [ "fig_0" ], "heading": "METHODS", "publication_ref": [ "b12", "b14", "b15" ], "table_ref": [], "text": "We investigate semi-supervised learning for two-stage and one-stage object detection methods. 
For two-stage object detection, we use the Soft Teacher [13] framework with Faster-RCNN. For one-stage object detection, we choose the Efficient Teacher [15] framework with YOLOv5. The selection of the detection network is based on the performance of general detection datasets such as COCO [16]. Theoretically, arXiv:2305.09810v1 [cs.CV] 16 May 2023 both semi-supervised methods are interchangeable with the other object detection method. However, the performance is degraded if we simply apply one method to another due to the structure difference between one-stage networks and two-stage networks. In this case, we choose semi-supervised methods that have the best fit for each type of detection network as a fair comparison. The block diagram of our semisupervised framework is shown in Figure 1. " }, { "figure_ref": [], "heading": "Labeled Images", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Unlabeled Images", "publication_ref": [], "table_ref": [], "text": "Teacher" }, { "figure_ref": [], "heading": "Soft Teacher Framework", "publication_ref": [ "b10" ], "table_ref": [], "text": "The Soft Teacher framework consists of a teacher model and a student model. The teacher model is trained using a small batch of labeled data and performs pseudo-labeling on the unlabeled images. The student model is trained on both labeled and pseudo-labeled images. During the training process, the teacher model is continuously updated by the student model through an exponential moving average (EMA) strategy. The loss function of the Soft Teacher is a combined loss function from the supervised and unsupervised loss:\nL = L s + αL u(1)\nL s = 1 N l (L cls (I i l ) + L reg (I i l ))(2)\nL u = 1 N u (L cls (I i u ) + L reg (I i u ))(3)\nwhere L is the weighted sum of supervised loss L s and unsupervised loss L s , α is the weight for unsupervised loss, L cls is the classification loss, L reg is box classification loss, I i u is the i-th unlabeled image, I i l is the i-th labeled image, N u is the number of unlabeled image and N l is the number of labeled image. During pseudo-label generation, the NMS and Fix-Match [11] strategy is used to remove duplicate bounding box candidates. The high threshold value is also used for pseudolabel generation to improve the quality of pseudo-labels. The process of pseudo-label generation will introduce error since some foreground box candidates will be assigned as negative. To compensate for this problem, the Soft Teacher introduces a loss function that uses more information from the teacher model. The Soft Teacher framework also uses a jittering box refinement technique to filter out duplicate boxes. The original method from Soft Teacher is training the teacher model and student model at the same time with random weights at the beginning. In practice, we found the training is not stable due to the limited amount of images in our dataset. We introduce another warm-up stage for the teacher model. During the warm-up stage, the teacher model will be trained only with labeled data. The trained weight will then be loaded into the co-training stage with the student model." }, { "figure_ref": [], "heading": "Efficient Teacher Framework", "publication_ref": [ "b16", "b5" ], "table_ref": [], "text": "One-stage object detection networks [17] generally have higher recall and faster training speed compared to two-stage object detection networks [6]. 
However, the semi-supervised learning approaches developed for two-stage detection networks face challenges when directly applied to a one-stage detection network. The multi-anchor strategy used in the one-stage network magnifies the label imbalance problem of semi-supervised learning in the two-stage network, resulting in low-quality pseudo-labels and poor training results. Efficient Teacher is a semi-supervised learning approach optimized for single-stage object detection networks. To address the label inconsistency problem, Efficient Teacher introduces a novel pseudo-label assigner to prevent interference from low-quality pseudo-labels. During training, each pseudo-label is assigned a pseudo-label score that represents the uncertainty of the label. Two threshold values of the score, τ1 and τ2, are used. If a pseudo-label has a score between τ1 and τ2, the pseudo-label is categorized as an uncertain label. The loss of uncertain labels is filtered out to improve performance. The Efficient Teacher framework also introduces an epoch adaptor mechanism to stabilize and accelerate the training process. The epoch adaptor combines domain adaptation and distribution adaptation techniques. Domain adaptation enables training on both unlabeled and labeled data during the first Burn-In phase to prevent overfitting on labeled data. The distribution adaptation technique dynamically updates the thresholds τ1 and τ2 at each epoch to reduce overfitting." }, { "figure_ref": [ "fig_1" ], "heading": "EXPERIMENTAL RESULTS", "publication_ref": [ "b2", "b5", "b17", "b16" ], "table_ref": [ "tab_0", "tab_1" ], "text": "The sorghum panicle dataset is from an RGB orthomosaic [3] captured by UAVs in a sorghum field. We select a small region of the orthomosaic and crop it into small images for data labeling and training purposes, as shown in Figure 2. We select the early sorghum growing stage for the experiments. Compared to the later stage, early-stage sorghum panicles have more variation in shapes and sizes, which makes detection more challenging. Moreover, the smaller number of panicles in each image further reduces the number of available labels for training. In total, we have 364 images for training, 90 images for validation, and 60 images for testing. Each image is resized to 640 × 640 resolution during training. These RGB images are used to form a supervised baseline to compare with semi-supervised learning. For semi-supervised learning, we randomly select 1%, 5%, and 10% of the training dataset to form a semi-supervised learning dataset. The labels of the rest of the training data are removed correspondingly to represent the unlabeled data. We have 3 labeled images for the 1% dataset, 18 labeled images for the 5% dataset, and 36 labeled images for the 10% dataset. Setting appropriate training parameters is very important for semi-supervised learning on the limited dataset. From our empirical experiments, we found that the learning rate and the NMS threshold for pseudo-labels have the most impact on training performance. In the supervised learning stage, a larger learning rate can be used for fast convergence. In the semi-supervised learning stage, the learning rate needs to be decreased because only a very small amount of labeled data is available. In practice, we found that a learning rate of 0.001 is appropriate for the supervised warm-up of the teacher model. The default semi-supervised learning rate from both methods is too large, resulting in unstable training.
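To make the two-stage schedule concrete, the following is a minimal sketch (illustrative only, not the training code used in these experiments) of a supervised warm-up followed by teacher-student co-training with the weighted loss of Eq. (1) and an EMA teacher update; the toy models, random tensors, EMA decay, and loss weight are placeholder assumptions rather than values from this work.

```python
# Minimal sketch of the two-stage schedule described above (illustrative only,
# not the code used in these experiments). Tiny linear models and random tensors
# stand in for the real detectors and UAV images; the EMA decay (0.999) and the
# unsupervised loss weight alpha_u (1.0) are assumed values, not from the paper.
import torch
import torch.nn as nn

teacher = nn.Linear(16, 2)            # placeholder for the detection network
student = nn.Linear(16, 2)

def detection_loss(model, images, targets):
    # Placeholder for the classification + regression loss of a real detector.
    return ((model(images) - targets) ** 2).mean()

labeled_x, labeled_y = torch.randn(8, 16), torch.randn(8, 2)
unlabeled_x = torch.randn(32, 16)

# Stage 1: supervised warm-up of the teacher with the larger learning rate (0.001).
opt_t = torch.optim.SGD(teacher.parameters(), lr=1e-3)
for _ in range(100):
    opt_t.zero_grad()
    detection_loss(teacher, labeled_x, labeled_y).backward()
    opt_t.step()

# Stage 2: joint semi-supervised training with a much smaller learning rate.
student.load_state_dict(teacher.state_dict())
opt_s = torch.optim.SGD(student.parameters(), lr=5e-5)
alpha_u, ema_decay = 1.0, 0.999
for _ in range(100):
    with torch.no_grad():                       # teacher generates pseudo-labels
        pseudo_y = teacher(unlabeled_x)
    loss = detection_loss(student, labeled_x, labeled_y) \
        + alpha_u * detection_loss(student, unlabeled_x, pseudo_y)
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
    with torch.no_grad():                       # EMA update of the teacher
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(ema_decay).add_(ps, alpha=1.0 - ema_decay)
```

In the actual frameworks, the teacher's pseudo-labels are additionally filtered by the confidence and score thresholds described above before they contribute to the unsupervised loss.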
We found that setting the learning rate to 0.00005 is suitable for the semi-supervised stage in both methods. For the NMS threshold in Efficient Teacher, we set the confidence threshold to 0.5 to reduce false positives and the IoU threshold to 0.1 to reduce duplicated bounding boxes.

We evaluate the semi-supervised learning approach using three different settings of the original training dataset: 1%, 5%, and 10% training data. In the warm-up stage, we first trained the network with only 1%, 5%, and 10% labeled data in a supervised manner to form a baseline. In the semi-supervised stage, the weights of the baseline model are loaded into the teacher model. The teacher model with pre-loaded weights is then trained together with the student model. For the Soft Teacher framework, we use Faster-RCNN [6] with a ResNet-50 [18] backbone. For the Efficient Teacher framework, we use the YOLOv5l [17] model. The results of the Soft Teacher method are shown in Table 1: semi-supervised learning increases the mAP by 5.6% with 1% labeled data, 2.5% with 5% labeled data, and 3.7% with 10% labeled data. The results of the Efficient Teacher method are shown in Table 2: semi-supervised learning increases the mAP by 3.1% with 1% labeled data, 1.7% with 5% labeled data, and 1.7% with 10% labeled data. The Efficient Teacher method achieves the highest mAP due to the stronger YOLOv5 model in the baseline. However, the Soft Teacher framework shows the largest mAP increases in all three scenarios. The training is done on a single NVIDIA RTX A40 GPU. The Soft Teacher took 7 hours to finish training while the Efficient Teacher took only one hour. Compared to supervised learning using fully labeled data (364 images), we can achieve comparable results with only 10% of the original amount (36 images)." }, { "figure_ref": [], "heading": "CONCLUSION AND DISCUSSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a method for reducing the amount of training data needed for sorghum panicle detection. We examine two different types of semi-supervised learning approaches for sorghum panicle detection. We demonstrate that semi-supervised learning methods can achieve performance similar to the supervised approach while using only 10% of the training data. Future work includes developing auto-tuning methods for the hyper-parameters and extending the methods to other plant traits." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "We thank Professor Ayman Habib and the Digital Photogrammetry Research Group (DPRG) from the School of Civil Engineering at Purdue University for providing the images used in this paper. The work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0001135. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. Address all correspondence to Edward J. Delp, [email protected]" } ]
The sorghum panicle is an important trait related to grain yield and plant development. Detecting and counting sorghum panicles can provide significant information for plant phenotyping. Current deep-learning-based object detection methods for panicles require a large amount of training data, and the data labeling is time-consuming and not feasible for real applications. In this paper, we present an approach to reduce the amount of training data for sorghum panicle detection via semi-supervised learning. Results show that we can achieve performance similar to supervised methods for sorghum panicle detection while using only 10% of the original training data.
SEMI-SUPERVISED OBJECT DETECTION FOR SORGHUM PANICLES IN UAV IMAGERY
[ { "figure_caption": "Fig. 1 :1Fig. 1: Block diagram of our semi-supervised learning framework.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Sample images from the training dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "5:.95] Baseline Soft Teacher Results from Soft Teacher Framework with Faster-RCNN. The baseline is supervised learning only with 1%, 5%, and 10% data accordingly.", "figure_data": "1%38.243.85%42.845.310%43.447.1100%50.2mAP@[.5:.95] Baseline Efficient Teacher1%38.141.25%45.947.610%47.449.1100%51.2", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results from Efficient Teacher Framework with YOLOv5. The baseline is supervised learning only with 1%, 5%, and 10% data accordingly.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Enyu Cai; Jiaqi Guo; Changye Yang; Edward J Delp
[ { "authors": "A Walter; F Liebisch; A Hund", "journal": "Plant Methods", "ref_id": "b0", "title": "Plant phenotyping: from bean weighing to image analysis", "year": "2015-03" }, { "authors": "S C Chapman; T Merz; A Chan; P Jackway; S Hrabar; M F Dreccer; E Holland; B Zheng; T J Ling; J Jimenez-Berni", "journal": "Agronomy", "ref_id": "b1", "title": "Pheno-copter: A lowaltitude, autonomous remote-sensing robotic helicopter for high-throughput field-based phenotyping", "year": "2014-06" }, { "authors": "A Habib; W Xiong; F He; H L Yang; M Crawford", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b2", "title": "Improving orthorectification of uav-based pushbroom scanner imagery using derived orthophotos from frame cameras", "year": "2017-01" }, { "authors": "S Mathur; A V Umakanth; V A Tonapi; R Sharma; M K Sharma", "journal": "Biotechnology for Biofuels", "ref_id": "b3", "title": "Sweet sorghum as biofuel feedstock: recent advances and available resources", "year": "2017-06" }, { "authors": "E Cai; S Baireddy; C Yang; E J Delp; M Crawford", "journal": "", "ref_id": "b4", "title": "Panicle counting in uav images for estimating flowering time in sorghum", "year": "2021-07" }, { "authors": "S Ren; K He; R Girshick; J Sun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b5", "title": "Faster R-CNN: Towards real-time object detection with region proposal networks", "year": "2016-06" }, { "authors": "Z Lin; W Guo", "journal": "Frontiers in Plant Science", "ref_id": "b6", "title": "Sorghum panicle detection and counting using unmanned aerial system images and deep learning", "year": "2020" }, { "authors": "Y Grandvalet; Y Bengio", "journal": "", "ref_id": "b7", "title": "Semi-supervised learning by entropy minimization", "year": "2004" }, { "authors": "D Lee", "journal": "", "ref_id": "b8", "title": "Pseudo-label : The simple and efficient semisupervised learning method for deep neural networks", "year": "2013" }, { "authors": "D Berthelot; N Carlini; I Goodfellow; A Oliver; N Papernot; C Raffel", "journal": "", "ref_id": "b9", "title": "Mixmatch: A holistic approach to semi-supervised learning", "year": "2019" }, { "authors": "K Sohn; D Berthelot; C Li; Z Zhang; N Carlini; E D Cubuk; A Kurakin; H Zhang; C Raffel", "journal": "", "ref_id": "b10", "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "year": "2020" }, { "authors": "K Sohn; Z Zhang; C Li; H Zhang; C Lee; T Pfister", "journal": "", "ref_id": "b11", "title": "A simple semi-supervised learning framework for object detection", "year": "2020" }, { "authors": "M Xu; Z Zhang; H Hu; J Wang; L Wang; F Wei; X Bai; Z Liu", "journal": "", "ref_id": "b12", "title": "End-to-end semi-supervised object detection with soft teacher", "year": "2021" }, { "authors": "Y Liu; C Ma; Z He; C Kuo; K Chen; P Zhang; B Wu; Z Kira; P Vajda", "journal": "", "ref_id": "b13", "title": "Unbiased teacher for semi-supervised object detection", "year": "2021" }, { "authors": "Bowen Xu; Mingtao Chen; Wenlong Guan; Lulu Hu", "journal": "", "ref_id": "b14", "title": "Efficient teacher: Semi-supervised object detection for yolov5", "year": "2023" }, { "authors": "T Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "", "ref_id": "b15", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "G Jocher", "journal": "", "ref_id": "b16", "title": "ultralytics/yolov5: v7.0 
-YOLOv5 SOTA Realtime Instance Segmentation", "year": "2022" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b17", "title": "Deep residual learning for image recognition", "year": "2016-06" } ]
[ { "formula_coordinates": [ 2, 145.34, 571.91, 152.87, 9.65 ], "formula_id": "formula_0", "formula_text": "L = L s + αL u(1)" }, { "formula_coordinates": [ 2, 112.97, 602.86, 185.24, 23.22 ], "formula_id": "formula_1", "formula_text": "L s = 1 N l (L cls (I i l ) + L reg (I i l ))(2)" }, { "formula_coordinates": [ 2, 110.31, 636.85, 187.9, 23.23 ], "formula_id": "formula_2", "formula_text": "L u = 1 N u (L cls (I i u ) + L reg (I i u ))(3)" } ]
2023-07-03
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b7", "b15", "b36", "b22", "b28", "b48", "b52", "b50", "b30", "b48", "b47", "b10", "b42", "b39", "b13", "b12" ], "table_ref": [], "text": "From the early history of AI and in particular of automated planning and scheduling, heuristic forward search has been a primary methodology for tacking challenging combinatorial problems. A rich variety of search algorithms have been proposed, including Dijkstra search (Dijkstra 1959), A * / WA * (Hart, Nilsson, and Raphael 1968), and Greedy Best First Search (Bonet and Geffner 2001, GBFS). They are divided into three categories: optimizing, which must guarantee the optimality of the output, satisficing, which may or may not attempt to minimize solution cost, and agile, which ignores solution cost and focuses on finding a solution quickly. This paper focuses on the agile setting.\nUnlike optimizing search, theoretical understanding of satisficing and agile search has been limited. Recent theoretical work on GBFS (Heusner, Keller, and Helmert 2017, * These authors contributed equally. 2018b,a; Kuroiwa and Beck 2022) refined the concept of search progress in agile search, but only based on a post hoc analysis that depends on oracular information, making their insights difficult to apply to practical search algorithm design, although it has been recently applied to a learningbased approach (Ferber et al. 2022a). More importantly, their analysis is incompatible with a wider range of randomized algorithms (Nakhost and Müller 2009;Imai and Kishimoto 2011;Kishimoto, Zhou, and Imai 2012;Valenzano et al. 2014;Xie, Nakhost, and Müller 2012;Xie, Müller, and Holte 2014;Xie et al. 2014;Xie, Müller, and Holte 2015;Asai and Fukunaga 2017;Kuroiwa and Beck 2022) that outperform the deterministic baseline with randomized explorations; as a result, their detailed theoretical properties are largely unknown except for probabilistic completeness (Valenzano et al. 2014). It is unsurprising that analyzing randomized algorithms requires a statistical perspective, which is also growing more important due to recent advances in learned heuristic functions (Toyer et al. 2018;Ferber, Helmert, and Hoffmann 2020;Shen, Trevizan, and Thiébaux 2020;Ferber et al. 2022b;Rivlin, Hazan, and Karpas 2019;Gehring et al. 2022;Garrett, Kaelbling, and Lozano-Pérez 2016).\nIn this paper, we tackle the problem of balancing exploration and exploitation in classical planning through a statistical lens and from the perspective of MABs. Previous work showed that traditional forward search algorithms (A*, GBFS) can be seen as a form of MCTS, but we refine and recast this paradigm as a repeated process of collecting a reward dataset and exploring the environment based on estimates obtained from this dataset. 
This perspective reveals a theoretical issue in THTS (Schulte and Keller 2014), a MCTS modified for classical planning that uses UCB1: the optimization objective of classical planning has no a priori known bound, and this violates the bounded reward assumption of UCB1.\nTo apply MAB to classical planning without THTS's theoretical issues, we propose UCB1-Normal2, a new Gaussian bandit, and GreedyUCT-Normal2, a new agile planning algorithm that combines MCTS with UCB1-Normal2, and show that GreedyUCT-Normal2 outperforms traditional agile algorithms (GBFS), existing MCTS-based algorithms (GreedyUCT, GreedyUCT*), and other MCTS-based algorithms combined with existing variance-aware bandits (UCB1-Normal and UCB-V).\nIn summary, our core contributions are as follows.\n• We identify theoretical issues that arise when applying UCB1 to planning tasks. • To address these issues, we present UCB1-Normal2, a new Gaussian bandit. We analyze its regret bound, which improves as the estimated variance is closer to the true variance, and is constant when they match. This makes it particularly powerful in a deterministic and finite state space such as classical planning. • We present GreedyUCT-Normal2, a new forward search algorithm that combines UCB1-Normal2 with MCTS and outperforms existing algorithms in agile classical planning.\n2 Background" }, { "figure_ref": [], "heading": "Classical Planning", "publication_ref": [], "table_ref": [], "text": "We define a propositional STRIPS Planning problem as a 4-tuple [P, A, I, G] where P is a set of propositional variables, A is a set of actions, I ⊆ P is the initial state, and G ⊆ P is a goal condition. Each action a ∈ A is a 4-tuple [PRE(a), ADD(a), DEL(a), C(a)] where C(a) ∈ Z 0+ is a cost, PRE(a) ⊆ P is a precondition and ADD(a), DEL(a) ⊆ P are the add-effects and delete-effects. A state s ⊆ P is a set of true propositions (all of P \\ s is false), an action a is applicable when s ⊇ PRE(a) (read: s satisfies PRE(a)), and applying action a to s yields a new successor state a(s) = (s \\ DEL(a)) ∪ ADD(a).\nThe task of classical planning is to find a sequence of actions called a plan (a 1 , • • • , a n ) where, for 1 ≤ t ≤ n, s 0 = I, s t ⊇ PRE(a t+1 ), s t+1 = a t+1 (s t ), and s n ⊇ G. A plan is optimal if there is no plan with lower cost t C(a t ). A plan is otherwise called satisficing. In this paper, we assume unit-cost: ∀a ∈ A; C(a) = 1.\nA domain-independent heuristic function h in classical planning is a function of a state s and the problem [P, A, I, G], but the notation h(s) usually omits the latter. It returns an estimate of the cumulative cost from s to one of the goal states (which satisfy G), typically through a symbolic, non-statistical means including problem relaxation and abstraction. Notable state-of-the-art functions that appear in this paper include h FF , h max , h add , and h GC (Hoffmann and Nebel 2001; Bonet and Geffner 2001;Fikes, Hart, and Nilsson 1972). Their implementation details are beyond the scope of this paper, and are included in the appendix Sec. S1." }, { "figure_ref": [], "heading": "Multi-Armed Bandit (MAB)", "publication_ref": [ "b46", "b40", "b5", "b31", "b27" ], "table_ref": [], "text": "MAB (Thompson 1933;Robbins 1952;Bush and Mosteller 1953) is a problem of finding the best strategy to choose from multiple unknown reward distributions. 
It is typically depicted by a row of K slot machines each with a lever or \"arm.\" Each time the player plays one of the machines and pulls an arm (a trial), the player receives a reward sampled from the distribution assigned to that arm. Through multiple trials, the player discovers the arms' distributions and selects arms to maximize the reward.\nThe most common optimization objective of MAB is Cumulative Regret (CR) minimization. Let r i (1 ≤ i ≤ K) be a random variable (RV) for the reward that we would receive when we pull arm i. We call p(r i ) an unknown reward distribution of i. Let t i be a RV of the number of trials performed on arm i and T = i t i be the total number of trials across all arms.\nDefinition 1. The cumulative regret ∆ is the gap between the optimal and the actual expected cumulative reward:\n∆ = T max i E[r i ] -i E[t i ]E[r i ].\nAlgorithms whose regret per trial ∆/T converges to 0 with T → ∞ are called zero-regret. Those with a logarithmically upper-bounded regret, O(log T ), are also called asymptotically optimal because this is the theoretical optimum achievable by any algorithm (Lai, Robbins et al. 1985).\nUpper Confidence Bound 1 (Auer, Cesa-Bianchi, and Fischer 2002, UCB1) is a logarithmic CR MAB for rewards r i ∈ [0, c] with known c. Let r i1 . . . r iti ∼ p(r i ) be t i i.i.d. samples obtained from arm i. Let μi = 1 ti ti j=1 r ij . To minimize CR, UCB1 selects i with the largest Upper Confidence Bound defined below.\nUCB1 i = μi + c 2 log T /t i LCB1 i = μi -c 2 log T /t i (1)\nFor reward (cost) minimization, LCB1 instead select i with the smallest Lower Confidence Bound defined above (e.g., in Kishimoto et al. (2022)), but we may use the terms U/LCB1 interchangeably. UCB1's second term is often called an exploration term. Generally, an LCB is obtained by flipping the sign of the exploration term in a UCB. U/LCB1 refers to a specific algorithm while U/LCB refers to general confidence bounds. c is sometimes set heuristically as a hyperparameter called the exploration rate. " }, { "figure_ref": [], "heading": "Forward Heuristic Best-First Search", "publication_ref": [ "b25", "b14", "b43", "b25" ], "table_ref": [], "text": "(n) = g(n) (g-value)\n, the minimum cost from the initial state I to the state s n found so far. A * uses f A * (n) = g(n) + h(s n ), the sum of g-value and the value returned by a heuristic function\nh (h-value). GBFS uses f GBFS (n) = h(s n ). Forward best- first search that uses h is called forward heuristic best-first search. Dijkstra search is a special case of A * with h(s) = 0.\nTypically, an open list is implemented as a priority queue ordered by NEC. Since the NEC can be stateful, e.g., g(s n ) can update its value, a priority queue-based open list assumes monotonic updates to the NEC because it has an unfavorable time complexity for removals. A * , Dijkstra, and GBFS satisfy this condition because g(n) decreases monotonically and h(s n ) is constant.\nMCTS is a class of forward heuristic best-first search that represents the open list as the leaves of a tree. We call the tree a tree-based open list. Our MCTS is based on the description in Keller and Helmert (2013) and Schulte and Keller (2014), whose implementation details are available in the appendix (Sec. S2). Overall, MCTS works in the same manner as other best-first search with a few key differences. 
(1) (selection) To select a node from the tree-based open list, it recursively selects an action on each branch of the tree, start from the root, using the NEC to select a successor node, descending until reaching a leaf node. (Sometimes the action selection rule is also called a tree policy.) At the leaf, it (2) (expansion) generates successor nodes, (3) (evaluation) evaluates the new successor nodes, (4) (queueing) attaches them to the leaf, and backpropagates (or backs-up) the information to the leaf's ancestors, all the way up to the root.\nThe evaluation obtains a heuristic value h(s n ) of a leaf node n. In adversarial games like Backgammon or Go, it is obtained either by (1) hand-crafted heuristics, (2) playouts (or rollouts) where the behaviors of both players are simulated by uniformly random actions (default policy) until the game terminates, or (3) a hybrid truncated simulation, which returns a hand-crafted heuristic after performing a short simulation (Gelly and Silver 2011). In recent work, the default policy is replaced by a learned policy (Silver et al. 2016).\nTrial-based Heuristic Tree Search (Keller and Helmert 2013;Schulte and Keller 2014, THTS), a MCTS for classical planning, is based on two key observations: (1) the rollout is unlikely to terminate in classical planning due to sparse goals, unlike adversarial games, like Go, which are guaranteed to finish in a limited number of steps with a clear outcome (win/loss); and (2) a tree-based open list can reorder nodes efficiently under non-monotonic updates to NEC, and thus is more flexible than a priority queuebased open list, and can readily implement standard search algorithms such as A * and GBFS without significant performance penalty. We no longer distinguish THTS and MCTS and imply that the former is included in the latter, because THTS is a special case of MCTS with an immediately truncated default policy simulation.\nFinally, Upper Confidence Bound applied to trees (Kocsis and Szepesvári 2006, UCT) is a MCTS that uses UCB1 for action selection and became widely popular in adversarial games. Schulte and Keller (2014) proposed several variants of UCT including GreedyUCT (GUCT), UCT*, and GreedyUCT* (GUCT*). We often abbreviate a set of algorithms to save space, e.g., [G]UCT[*] denotes {UCT, GUCT, UCT * , GUCT * }. In this paper, we mainly discuss GUCT[*] due to our focus on the agile satisficing setting that does not prioritize minimization of solution cost." }, { "figure_ref": [], "heading": "Theoretical Issues in Existing MCTS-based Classical Planning", "publication_ref": [], "table_ref": [], "text": "We revisit A * and GBFS implemented as MCTS from a statistical perspective. Let S(n) be the set of successors of a node n, L(n) be the set of leaf nodes in the subtree under n, and C(n, n ′ ) be the path cost between n and n ′ on the tree (equivalent to an action cost if n ′ is a successor of n). We define the NECs of A * and GBFS as\nf A * (n) = g(n) + h A * (n)\nand f GBFS (n) = h GBFS (n) which satisfy the following equations, shown by expanding h A * and h GBFS recursively and assuming\nh A * (n ′ ) = h GBFS (n ′ ) = h(s n ′ ) if n ′ is a leaf. h A * (n) = min n ′ ∈S(n) [C(n, n ′ ) + h A * (n ′ )] = min n ′ ∈L(n) [C(n, n ′ ) + h(s n ′ )] h GBFS (n) = min n ′ ∈S(n) [h GBFS (n ′ )] = min n ′ ∈L(n) [h(s n ′ )]\nObserve that these NECs estimate the minimum of the costto-go from the dataset/samples L(n). 
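To make these backed-up estimates concrete, the following toy sketch (ours, for illustration only; the tree, edge costs, and leaf heuristic values are hypothetical) computes both minimum-based NECs from the leaf samples of a subtree:

```python
# A toy illustration (not from any planner implementation) of the backed-up
# NECs shown above: both are minima over the leaf samples L(n) of a subtree.
children = {"n": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}   # subtree under n
h_leaf = {"a1": 3, "a2": 7, "b1": 5}                           # h(s) at the leaves
cost = {("n", "a"): 1, ("n", "b"): 2, ("a", "a1"): 1, ("a", "a2"): 1, ("b", "b1"): 1}

def leaves_with_cost(node, acc=0):
    """Yield (path cost C(n, n'), leaf n') pairs for every leaf below `node`."""
    if node in h_leaf:
        yield acc, node
    else:
        for child in children[node]:
            yield from leaves_with_cost(child, acc + cost[(node, child)])

def h_gbfs(node):   # min over leaf samples of h(s), as in the GBFS NEC
    return min(h_leaf[leaf] for _, leaf in leaves_with_cost(node))

def h_astar(node):  # min over leaf samples of C(n, n') + h(s), as in the A* NEC
    return min(c + h_leaf[leaf] for c, leaf in leaves_with_cost(node))

print(h_gbfs("n"), h_astar("n"))   # 3 and 5 for this toy tree
```

Replacing this minimum with an average over the same samples is precisely the change that [G]UCT makes, as discussed next.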
The minimum is also known as an order statistic; other order statistics include the top-k element, the q-quantile, and the median (0.5-quantile).\nIn contrast, [G]UCT computes the average (instead of minimum) over the dataset, and adds an exploration term to the average based on LCB1:\nh UCT (n) = 1 |L(n)| n ′ ∈S(n) |L(n ′ )|(C(n, n ′ ) + h UCT (n ′ )) = 1 |L(n)| n ′ ∈L(n) (C(n, n ′ ) + h(s n ′ )) h GUCT (n) = 1 |L(n)| n ′ ∈S(n) |L(n ′ )|h GUCT (n ′ ) = 1 |L(n)| n ′ ∈L(n) h(s n ′ ) f UCT (n) =g(n) + h UCT (n) -c (2 log |L(p)|)/|L(n)| f GUCT (n) = h GUCT (n) -c (2 log |L(p)|)/|L(n)|\nwhere p is a parent node of n and |L(p)| is the number of leaf nodes in the subtree of the parent. |L(p)| and |L(n)| respectively correspond to T and t i in Eq. 1. Note that the term \"monte-carlo estimate\" is commonly used in the context of estimating the integral/expectation/average, but less often in estimating the maximum/minimum, though we continue using the term MCTS.\nFrom the statistical estimation standpoint, existing MCTS-based planning algorithms have a number of theoretical issues. First, note that the samples of heuristic values collected from L(n) correspond to the rewards in the MAB algorithms, and that UCB1 assumes reward distributions with known bounds shared by all arms. However, such a priori known bounds do not exist for the heuristic values of classical planning, unlike adversarial games whose rewards are either +1/0 or +1/-1 representing a win/loss. Also, usually the range of heuristic values in each subtree of the search tree substantially differ from each other. Schulte and Keller (2014) claimed to have addressed this issue by modifying the UCB1, but their modification does not fully address the issue, as we discuss below.\nf GUCT-01 (n) = hGUCT(n)-m M -m -c (2 log |L(p)|)/|L(n)| (2) m + (M -m)f GUCT-01 (n) = h GUCT (n) -c(M -m) (2 log |L(p)|)/|L(n)| (3)\nLet us call their variant GUCT-01. GUCT-01 normalizes the first term of the NEC to [0, 1] by taking the minimum and maximum among n's siblings sharing the parent p.\nGiven M = max n ′ ∈S(p) h GUCT (n ′ ), m = min n ′ ∈S(p) h GUCT (n ′\n), and a hyperparameter c, GUCT-01 modifies f GUCT into f GUCT-01 (Eq. 2). However, the node ordering by NEC is maintained when all arms are shifted and scaled by the same amount, thus GUCT-01 is identical to the standard UCB1 with a reward range [0, c(Mm)] for all arms (Eq. 3); we additionally note that this version avoids a division-by-zero issue for Mm = 0.\nThere are two issues in GUCT-01: First, GUCT-01 does not address the fact that different subtrees have different ranges of heuristic values. Second, we would expect GUCT-01 to explore excessively, because the range [0, c(Mn)] obtained from the data of the entire subtree of the parent is always broader than that of each child, since the parent's data is a union of those from all children. We do note that Mm differs for each parent, and thus GUCT-01 adjusts its exploration rate in a different parts of the search tree. 
In other words, GUCT-01 is depth-aware, but is not breadthaware: it considers the reward range only for the parent, and not for each child.\nFurther, in an attempt to improve the performance of [G]UCT, Schulte and Keller (2014) noted that using the average is \"rather odd\" for planning, and proposed UCT* and GreedyUCT* (GUCT*) which combines h A * and h GBFS with LCB1 without statistical justification.\nFinally, these variants failed to improve over traditional algorithms (e.g., GBFS) unless combined with various other enhancements such as deferred heuristic evaluation (DE) and preferred operators (PO). The theoretical characteristics of these enhancements are not well understood, rendering their use ad hoc and the reason for GUCT-01's performance inconclusive, and motivating better theoretical analysis." }, { "figure_ref": [], "heading": "Bandit Algorithms with Unbounded Distributions with Different Scales", "publication_ref": [], "table_ref": [], "text": "To handle reward distributions with unknown support that differs across arms, we need a MAB that assumes an unbounded reward distribution spanning the real numbers. We use the Gaussian distribution here, although future work may consider other distributions. Formally, we assume each arm i has a reward distribution N (µ i , σ 2 i ) for some unknown µ i , σ 2 i . As σ 2 i differs across i, the reward uncertainty differs across the arms. By contrast, the reward uncertainty of each arm in UCB1 is expressed by the range [0, c], which is the same across the arms. We now discuss the shortcomings of MABs from previous work (Eq. 4-7), and present our new MAB (Eq. 8)." }, { "figure_ref": [], "heading": "UCB1-Normal", "publication_ref": [ "b1", "b45", "b23" ], "table_ref": [], "text": "i = μi + σi (16 log T )/t i (4) UCB1-Tuned 1 = (5) μi + c min(1/4, σ2 i + 2 log T /t i ) log T /t i UCB-V i = μi + σi (2 log T )/t i + (3c log T )/t i (6) Bayes-UCT2 i = μBayes i + σBayes i √ 2 log T (7) UCB1-Normal2 i = μi + σi √ 2 log T (8)\nThe UCB1-Normal MAB (Auer, Cesa-Bianchi, and Fischer 2002, Theorem 4), which was proposed along with UCB1 [idem, Theorem 1], is designed exactly for this scenario but is still unpopular. Given\nt i i.i.d. samples r i1 . . . r iti ∼ N (µ i , σ 2 i )\nfrom each arm i where T = i t i , it chooses i that maximizes the metric shown in Eq. 4. To apply this bandit to MCTS, substitute T = |L(p)| and t i = |L(n)|, and backpropagate the statistics μi , σ2 i (see Appendix Sec. S4). For minimization tasks such as classical planning, use the LCB. We refer to the GUCT variant using UCB1-Normal as GUCT-Normal. An advantage of UCB1-Normal is its logarithmic upper bound on regret (Auer, Cesa-Bianchi, and Fischer 2002, Appendix B). However, it did not perform well in our empirical evaluation, likely because its proof relies on two conjectures which are explicitly stated by the authors as not guaranteed to hold.\nTheorem 1 (From (Auer, Cesa-Bianchi, and Fischer 2002)). UCB1-Normal has a logarithmic regret-per-arm 256\nσ 2 i log T ∆ 2 i +1+ π 2 2 +8 log T if,\nfor a Student's t RV X with s degrees of freedom (DOF), ∀a ∈ [0, 2(s + 1)]; P (X ≥ a) ≤ e -a 2 /4 , and if, for a χ 2 RV X with s DOF, P (X ≥ 4s) ≤ e -(s+1)/2 .\nTo avoid relying on these two conjectures, we need an alternate MAB that similarly adjusts the exploration rate based on the variance. Candidates include UCB1-Tuned (Auer, Cesa-Bianchi, and Fischer 2002) in Eq. 5, UCB-V (Audibert, Munos, and Szepesvári 2009) in Eq. 6, and Bayes-UCT2 (Tesauro, Rajan, and Segal 2010) in Eq. 
7 (not to be confused with Bayes-UCB (Kaufmann, Cappé, and Garivier 2012)), but they all have various limitations. UCB1-Tuned assumes a bounded reward distribution and lacks a regret bound. UCB-V improves UCB1-Tuned with a regret proof but it also assumes a bounded reward distribution. Bayes-UCT2 lacks a regret bound, proves its convergence only for bounded reward distributions, lacks a thorough ablation study for its 3 modifications to UCB1-based MCTS, and lacks evaluation on diverse tasks as it is tested only on a synthetic tree (fixed depth, width, and rewards).\nWe present UCB1-Normal2 (Eq. 8), a new, conservative, trimmed-down version of Bayes-UCT2, and analyze its regret bound.\nTheorem 2 (Main Result). Let α ∈ [0, 1] be an unknown problem-dependent constant and χ 2 1-α,n be the critical value for the tail probability of a χ 2 distribution with significance α and DOF n that satisfies\nP (t i σ2 i /σ 2 i < χ 2 1-α,ti ) = α. UCB1-Normal2 has a worst-case polynomial, best-case constant regret-per-arm -4(log α)σ 2 i log T ∆ 2 i +1 + 2C + (1 -α)T (T + 1)(2T + 1) 3 α→1 ---→ 1 + 2C where C is a finite constant if each arm is pulled M = inf{n|8 < χ 2\n1-α,n } times in the beginning. Proof. (Sketch of appendix Sec. S3.2-S3.3.) We use Hoeffding's inequality for sub-Gaussian distributions as Gaussian distributions belong to sub-Gaussian distributions. Unlike in UCB1 where the rewards have a fixed known support [0, c], we do not know the true reward variance σ 2 i . Therefore, we use the fact that t i σi 2 /σ 2 i follows a χ 2 distribution and P (t i σi 2 /σ 2 i < χ 2 1-α,ti ) = α for some α. We use unionbound to address the correlation and further upper-bound the tail probability. We also use χ 2 1-α,ti ≥ χ 2 1-α,2 = -2 log α for t i ≥ 2. The resulting upper bound contains an infinite series C. Its convergence condition dictates the minimum pulls M that must be performed initially. □\nPolynomial regrets are generally worse than logarithmic regrets of UCB1-Normal. However, our regret bound improves over that of UCB1-Normal if T is small and α ≈ 1 (log α ≈ 0, 1-α ≈ 0). α represents the accuracy of the sample variance σ2 toward the true variance σ 2 . In deterministic, discrete, finite state-space search problems like classical planning, α tends to be close to (or sometimes even match) 1 because σ = σ is achievable. Several factors of classical planning contribute to this. Heuristic functions in classical planning are deterministic, unlike rollout-based heuristics in adversarial games. This means σ = σ = 0 when a subtree is linear due to the graph shape. Also, σ = σ when all reachable states from a node are exhaustively enumerated in its subtree. In statistical terms, this is because draws from heuristic samples are performed without replacements due to duplication checking.\nUnlike UCB-V and UCB1-Normal, our MCTS+UCB1-Normal2 algorithm does not need explicit initialization pulls because every node is evaluated once and its heuristic value is used as a single sample. This means we assume\nM = 1, thus α > ERF(2) > 0.995 because 8 < χ 2 1-α,1 ⇔ 1 -α < γ( 1 2 , 8 2 ) Γ( 1 2 ) = 1 -ERF(2).\nIn classical planning, this assumption is more realistic than the conjectures used by UCB1-Normal." }, { "figure_ref": [ "fig_0" ], "heading": "Experimental Evaluation", "publication_ref": [ "b16", "b33", "b3", "b37" ], "table_ref": [ "tab_1", "tab_1" ], "text": "We evaluated the efficiency of our algorithms in terms of the number of nodes evaluated before a goal is found. 
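For concreteness, the greedy selection rule being evaluated can be sketched as follows; this is a minimal illustration of the LCB form of Eq. 8 rather than the Pyperplan-based implementation used below, and the per-child statistics in the example are hypothetical.

```python
# Minimal sketch of UCB1-Normal2 (Eq. 8) written as the LCB used for cost
# minimization in GUCT-Normal2. The arm statistics (sample means and standard
# deviations of backed-up heuristic values) below are hypothetical.
import math

def lcb1_normal2(mean, std, total_trials):
    return mean - std * math.sqrt(2 * math.log(total_trials))

def select_child(children):
    # children: list of (sample_mean, sample_std, n_leaves) per successor node.
    T = sum(n for _, _, n in children)        # |L(p)|: leaves under the parent
    scores = [lcb1_normal2(m, s, T) for m, s, _ in children]
    return min(range(len(children)), key=scores.__getitem__)

# Example: a low-mean, zero-variance child vs. a higher-mean, high-variance one.
print(select_child([(4.0, 0.0, 10), (6.0, 3.0, 5)]))
```

With these numbers the high-variance child is preferred despite its worse mean backed-up value, which is the scale-adaptive exploration behavior analyzed above.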
We used a python-based implementation (Alkhazraji et al. 2020, Pyperplan) for convenient prototyping. It is slower than C++-based state-of-the-art systems (e.g. Fast Downward (Helmert 2006)), but our focus on evaluations makes this irrelevant and also improves reproducibility by avoiding the effect of hardware differences and low-level implementation details.\nWe evaluated the algorithms over a subset of the International Planning Competition benchmark domains,1 selected for compatibility with the set of PDDL extensions supported by Pyperplan. The program terminates either when it reaches 10,000 node evaluations or when it finds a goal. In order to limit the length of the experiment, we also had to terminate the program on problem instances whose grounding took more than 5 minutes. The grounding limit removed 113 instances from freecell, pipesworld-tankage, and logis-tics98. This resulted in 751 problem instances across 24 domains in total. We evaluated various algorithms with h FF , h add , h max , and h GC (goal count) heuristics (Fikes, Hart, and Nilsson 1972), and our analysis focuses on h FF . We included h GC because it can be used in environments without domain descriptions, e.g., in the planning-based approach (Lipovetzky, Ramírez, and Geffner 2015) to the Atari environment (Bellemare et al. 2015). We ran each configuration with 5 random seeds and report the average number of problem instances solved. To see the spread due to the seeds, see the cumulative histogram plots Fig. S1-S3 in the appendix.\nWe evaluated the following algorithms: GBFS is GBFS implemented on priority queue. GUCT is a GUCT based on the original UCB1. GUCT-01 is GUCT with ad hoc [0, 1] normalization of the mean (Schulte and Keller 2014). GUCT-Normal/-Normal2/-V are GUCT variants using UCB1-Normal/UCB1-Normal2/UCB-V respectively. The starred variants GUCT*/-01/-Normal/-Normal2 are using h GBFS backpropagation (Schulte and Keller 2014, called full-bellman backup). For GUCT and GUCT-01, we evaluated the hyperparameter c with the standard value c = 1.0 and c = 0.5. The choice of the latter is due to Schulte and Keller (2014), who claimed that GUCT [*]-01 performed the best when 0.6 < C = c √ 2 < 0.9, i.e., 0.4 < c < 0.63. Our aim of testing these hyperparameters is to compare them against automatic exploration rate adjustments performed by UCB1-Normal [2]. Schulte and Keller (2014) previously reported that two ad hoc enhancements to GBFS, PO and DE, also improve the performance of GUCT [*]-01. We implemented them in our code, and show the results. We do not report configurations unsupported by the base Pyperplan system: GBFS+PO, and PO with heuristics other than h FF .\nReproduction and a More Detailed Ablation of Previous Work We first reproduced the results in (Schulte and Keller 2014) and provides its more detailed ablation. Table 1 shows that GUCT [*][-01] is indeed significantly outperformed by the more traditional algorithm GBFS, indicating that UCB1-based exploration is not beneficial for planning. Although this result disagrees with the final conclusion of their paper, their conclusion relied on incorporating the DE and PO enhancements, and these confounding factors impede conclusive analysis.\nOur ablation includes the effect of min-/max-based mean normalization (Eq. 2), which was not previously evaluated. GUCT [*]-01 performs significantly worse than GUCT [*] which has no normalization. 
This implies that normalization in GUCT [*]-01 not only failed to address the theoretical issue of applying UCB1 to rewards with unknown and different supports, but also had an adverse effect on node evaluations due to the excessive exploration, as predicted by our analysis in Sec. 3.\nThe Effect of Scale Adaptability We compare the performance of various algorithms in terms of the number of problem instances solved. First, GUCT-Normal2 outperforms GBFS, making it the first instance of MCTS that performs better than traditional algorithms on its own (without various other enhancements). Overall, GUCT-Normal2 performed well with all 4 heuristics. GUCT-Normal2 also significantly outperformed GUCT/GUCT-01/-Normal/-V and their GUCT* variants. The dominance over GUCT-Normal is notable because this supports our analysis that in classical planning σ2 ≈ σ 2 , thus P (t i σ2 /σ 2 < χ 2 1-α,ti ) = α ≈ 1, overcoming the asymptotic deficit (the polynomial regret of GUCT-Normal2 vs. the logarithmic regret of GUCT-Normal).\nWhile the starred variants (GUCT*, etc.) can be significantly better than the non-starred variants (GUCT) at times, this trend was reversed for the better-performing algorithms, e.g., GUCT*-Normal2 tends to be worse than GUCT-Normal2. This supports our claim that the Full-Bellman backup proposed by Schulte and Keller (2014) is theoretically unfounded and thus does not consistently improve search algorithms. Further theoretical investigation of a similar maximum-based backup is an important avenue of future work.\nThe table also compares GUCT [*]-Normal[2], which do not require any hyperparameter, against GUCT [*][-01/-V] with different c values. Although c = 0.5 improves the performance of GUCT [*]-01 as reported by (Schulte and Keller 2014), it did not improve enough to catch up with the adaptive exploration rate adjustment of GUCT [*]-Normal2. We tested a larger variety of c-values and did not observe a significant change.\nPreferred Operators Some heuristic functions based on problem relaxation, notably h FF , compute a solution of the delete-relaxed problem, called a relaxed plan, and return its cost as the heuristic value (see appendix Sec. S1 for details). Actions included in a relaxed plan are called "helpful actions" (Hoffmann and Nebel 2001) or "preferred operators" (Richter and Helmert 2009) and are used by planners in a variety of ways (e.g., the initial incomplete search of the FF planner (Hoffmann and Nebel 2001) and the alternating open list in the LAMA planner (Richter, Westphal, and Helmert 2011)). Schulte and Keller (2014) used them in MCTS/THTS by limiting the action selection to the preferred operators, falling back to the original behavior if no successors qualify. In MCTS terminology (Sec. 2.3), this is a way to modify the tree policy by re-weighting with a mask. We reimplemented the same strategy in our code base. Our results show that it also improves GUCT [*][-Normal2], consistent with the improvement in GUCT [*]-01 previously reported. Table 1 shows the effect of deferred heuristic evaluation (DE) on search algorithms. In this experiment, DE is expected to degrade the number of instances solved compared to the algorithms with eager evaluation because deferred evaluation trades the number of calls to the heuristic for the number of nodes inserted into the tree, which is limited to 10,000. When CPU time is the limiting resource, DE is expected to improve the number of solved instances, assuming the implementation is optimized for speed (e.g. 
using C++). However, our experiment is not designed to measure this effect, since our implementation is in Python, which is typically 100-1,000 times slower than C++, and this low-level bottleneck could hide the effect of speed improvements." }, { "figure_ref": [], "heading": "Deferred Heuristic Evaluation", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "The only meaningful outcome of this experiment is therefore to measure whether DE+PO is better than DE, and whether GUCT [*]-Normal2 continues to dominate the other algorithms when DE is used. Table 1 answers both questions positively: DE+PO tends to perform better than DE alone, and the algorithmic efficiency of GUCT [*]-Normal2 is still superior to the other algorithms with DE and DE+PO.\nAn interesting result observed in our experiment is that the results of GUCT [*]-01 with DE, PO, and DE+PO are still massively inferior to GBFS. This indicates that the improvement of GUCT [*]-01 + DE+PO observed by Schulte and Keller is purely an artifact of low-level performance and not a fundamental improvement in search efficiency. Indeed, Schulte and Keller (2014) did not analyze node evaluations or the results of GUCT [*]-01 + PO (they only analyzed DE and DE+PO). Moreover, it means GUCT [*]-01 requires DE, an ad hoc and theoretically under-investigated technique, in order to outperform GBFS." }, { "figure_ref": [ "fig_0", "fig_5" ], "heading": "Solution Quality", "publication_ref": [], "table_ref": [], "text": "We discuss the quality (here defined as inverse cost) of the solutions returned by each algorithm using the h FF , h add , and h max heuristics. Fig. 1 shows that GUCT [*]-Normal2 returns consistently longer, thus worse, solutions than GBFS does. In contrast, the solution quality tends to be similar between GBFS and the other, unsuccessful MCTS algorithms. See appendix Fig. S5-S8 for more plots. As the saying goes, "haste makes waste," but in a positive sense: for agile search, we claim that a successful exploration must sacrifice solution quality for faster search.\nWhile Schulte and Keller (2014) claimed that exploration mechanisms could improve solution quality, this does not necessarily contradict our observations. First, their claim only applies to their evaluation of [G]UCT [*]-01. Our result comparing GUCT [*]-01 and GBFS agrees with their result (Schulte and Keller 2014, Table 2, 143.5 vs 143.57). Second, the IPC score difference in their paper is small (A * : 162.81 vs. UCT*: 166.8, i.e., about 4 instances of best-vs-worst solution gap) and could result from random tiebreaking." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b34", "b44", "b26", "b21", "b50", "b30", "b6", "b35" ], "table_ref": [], "text": "Due to its focus on adversarial games, the MCTS literature typically assumes a bounded reward setting (e.g., 0/1, -1/+1), making applications of UCB1-Normal scarce (e.g., Google Scholar returns 5900 vs. 60 results for the keywords "UCB1" and "UCB1-Normal", respectively), except for a few model-selection applications (McConachie and Berenson 2018). While Gaussian Process MAB (Srinivas et al. 2010) has been used with MCTS for sequential decision making in continuous-space search and robotics (Kim et al. 2020), its setting is significantly different from the discrete search spaces found in classical planning. 
Bayes-UCT2 (Tesauro, Rajan, and Segal 2010) was only evaluated on a synthetic tree and indeed was often outperformed by the base UCT (Imagawa and Kaneko 2016).\nMABs may provide a rigorous theoretical tool to analyze the behavior of a variety of existing randomized enhancements for agile/satisficing search that tackle the explorationexploitation dilemma. ϵ-greedy GBFS was indeed inspired by MABs (Valenzano et al. 2014, Sec. 2). GUCT-Normal2 encourages exploration in nodes further from the goal, which tend to be close to the initial state. This behavior is similar to that of Diverse Best First Search (Imai and Kishimoto 2011), which stochastically enters an \"exploration mode\" that expands a node with a smaller g value more often. This reverse ordering is unique from other diversified search algorithms, including ϵ-GBFS, Type-GBFS (Xie, Müller, and Holte 2015), and Softmin-Type-GBFS (Kuroiwa and Beck 2022), which selects g rather uniformly during the exploration.\nTheoretical guarantees of MABs require modifications in tree-based algorithms (e.g. MCTS) due to non-i.i.d. sampling from the subtrees (Coquelin and Munos 2007;Munos et al. 2014). Incorporating the methods developed in the MAB community to counter this bias in the subtree samples is an important direction for future work.\nMDP and Reinforcement Learning literature often use discounting to avoid the issue of divergent cumulative reward: when the upper bound of step-wise reward is known to be R, then the maximum cumulative reward goes to ∞ with infinite horizon, while the discounting with γ makes it below R 1-γ , allowing the application of UCB1. Although it addresses the numerical issue and UCB1's theoretical requirement, it no longer optimizes the cumulative objective." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b32", "b55", "b58" ], "table_ref": [], "text": "We examined the theoretical assumptions of existing banditbased exploration mechanisms for classical planning, and showed that ad hoc design decisions can invalidate theoretical guarantees and harm performance. We presented GUCT-Normal2, a classical planning algorithm combining MCTS and UCB1-Normal2, and analyzed it both theoretically and empirically. The theoretical analysis of its regret bound revealed that, despite its worst-case polynomial bound, in practice it outperforms logarithmically-bounded UCB1-Normal due to the unique aspect of the target application (classical planning). Most importantly, GUCT-Normal2 outperforms GBFS, making it the first bandit-based MCTS to outperform traditional algorithms. Future work includes combinations with other enhancements for agile search including novelty metric (Lipovetzky and Geffner 2017), as well as C++ re-implementation and the comparison with the state-of-the-art.\nOur study showcases the importance of considering theoretical assumptions when choosing the correct bandit algorithm for a given application. However, this does not imply that UCB1-Normal is the end of the story: for example, while the Gaussian assumption is sufficient for cost-to-go estimates in classical planning, it is not necessary for justifying its application to classical planning. The Gaussian assumption implies that rewards can be any value in [-∞, ∞], which is an under-specification for non-negative cost-to-go estimates. Future work will explore bandits that reflect the assumptions in classical planning with even greater fidelity. 
A domain-independent heuristic function h in classical planning is a function of a state s and the problem [P, A, I, G], but the notation h(s) usually omits the latter. In addition to what we discussed in the main article, this section also uses a notation h(s, G). It returns an estimate of the cumulative cost from s to one of the goal states (states that satisfy G), typically through a symbolic, non-statistical means including problem relaxation and abstraction. Notable state-of-the-art functions that appear in this paper includes h FF , h max , h add , h GC (Hoffmann and Nebel 2001;Bonet and Geffner 2001;Fikes, Hart, and Nilsson 1972).\nA significant class of heuristics is called delete relaxation heuristics, which solve a relaxed problem which does not contain delete effects, and then returns the cost of the solution of the relaxed problem as an output. The cost of the optimal solution of a delete relaxed planning problem from a state s is denoted by h + (s), but this is too expensive to compute in practice (NP-complete) (Bylander 1996). Therefore, practical heuristics typically try to obtain its further relaxations that can be computed in polynomial time.\nOne such admissible heuristic based on delete-relaxation is called h max (Bonet and Geffner 2001) that is recursively defined as follows: (1)\nh max (s, G) = max p∈G    0 if p ∈ s.\nIts inadmissible variant is called additive heuristics h add (Bonet and Geffner 2001) that is recursively defined as follows:\nh add (s, G) = p∈G    0 if p ∈ s. Otherwise, min {a∈A|p∈ADD(a)} C(a) + h add (s, PRE(a)) .\n(2)\nAnother inadmissible delete-relaxation heuristics called h FF (Hoffmann and Nebel 2001) is defined based on another heuristics h, such as h = h add , as a subprocedure. For each unachieved subgoal p ∈ G \\ s, the action a that adds p with the minimal [C(a) + h(s, PRE(a))] is conceptually \"the cheapest action that achieves a subgoal p for the first time under delete relaxation\", called the cheapest achiever / best supporter bs(p, s, h) of p. h FF is defined as the sum of actions in a relaxed plan Π + constructed as follows:\nh FF (s, G, h) = a∈Π + (s,G,h) C(a) (3) Π + (s, G, h) = p∈G ∅ if p ∈ s. Otherwise, {a} ∪ Π + (s, PRE(a))\nwhere a = bs(p, s, h). (5)\nGoal Count heuristics h GC is a simple heuristic proposed in (Fikes, Hart, and Nilsson 1972) that counts the number of propositions that are not satisfied yet. [condition] is a cronecker's delta / indicator function that returns 1 when the condition is satisfied.\nh GC (s, G) = p∈G p ̸ ∈ s .(6)\nS2 Detailed Explanation for the Base MCTS for Graph Search\nAlg. 1 shows the pseudocode of MCTS adjusted for graph search (Schulte and Keller 2014). Aside from what was described from the main section, it has a node-locking mechanism that avoids duplicate search effort.\nFollowing THTS, our MCTS has a hash table that implements a CLOSE list and a Transposition Table (TT). A CLOSE list stores the generated states and avoids instantiating nodes with duplicate states. A TT stores various information about the states such as the parent information and the action used at the parent. The close list is implemented by a lock mechanism.\nSince an efficient graph search algorithm must avoid visiting the same state multiple times, MCTS for graph search marks certain nodes as locked, and excludes them from the selection candidates. 
A node is locked either (1) when a node is a dead-end that will never reach a goal (detected by having no applicable actions, by a heuristic function, or other facilities), (2) when there is a node with the same state in the search tree with a smaller g-value, (3) when all of its children are locked, or (4) when a node is a goal (relevant in an anytime iterated search setting (Richter, Thayer, and Ruml 2010;Richter, Westphal, and Helmert 2011), but not in this paper). Thus, in the expansion step, when a generated node n has the same state as a node n ′ already in the search tree, MCTS discards n if g(n) > g(n ′ ), else moves the subtree of n ′ to n and marks n ′ as locked. It also implicitly detects a cycle, as this is identical to the duplicate detection in Dijkstra/A * /GBFS.\nThe queueing step backpropagates necessary information from the leaf to the root. Efficient backpropagation uses a priority queue ordered by descending g-value. The queue is initialized with the expanded node p; each newly generated node n that is not discarded is inserted into the queue, and if a node n ′ for the same state was already present in the tree it is also inserted into the queue. In each backpropagation iteration, (1) the enqueued node with the highest g-value is popped, (2) its information is updated by aggregating its children's information (including the lock status), (3) and its parent is queued." }, { "figure_ref": [], "heading": "S3 Proof of Bandit Algorithms", "publication_ref": [], "table_ref": [], "text": "To help understand the proof of UCB1-Normal2, we first describe the general procedure for proving the regret of bandit algorithms, demonstrate the proof of UCB1 using this scheme, then finally show the proof of UCB1-Normal2." }, { "figure_ref": [], "heading": "arXiv:2305.09840v2 [cs.AI] 3 Jul 2023", "publication_ref": [], "table_ref": [], "text": "The ingredients for proving an upper/lower confidence bound are as follows:\n• Ingredient 1: A specification of reward distributions.\nFor example, in the standard UCB1 (Auer, Cesa-Bianchi, and Fischer 2002), one assumes a reward distribution bounded in [0, b]. Different algorithms assume different reward distributions, and in general, more information about the distribution gives a tighter bound (and faster convergence). For example, one can assume an unbounded distribution with known variance, etc. • Ingredient 2: A concentration inequality. It is also called a tail probability bound. For example, in the standard UCB1, one uses Hoeffding's inequality. Different algorithms use different inequalities to prove the bound.\nExamples include the Chernoff bound, Chebishev's inequality, Bernstein's inequality, Bennett's inequality, etc. Note that the inequality may be two-sided or one-sided.\nThe general procedure for proving the bound is as follows.\n1. Write down the concentration inequality.\n•\nP (|X -E[X]| ≥ ϵ) ≤ F (ϵ). (two-sided) • P (X -E[X] ≥ ϵ) ≤ F (ϵ). (one-sided, upper) • P (E[X] -X ≥ ϵ) ≤ F (ϵ). (one-sided, lower)\nF is an inequality-specific form. This step may be sometimes missing, depending on which inequality you use." }, { "figure_ref": [], "heading": "Turn the inequality into a version for a sum of independent variables", "publication_ref": [], "table_ref": [], "text": "S n = n i=1 X i . P (|S n -E[S n ]| ≥ ϵ) ≤ G(ϵ).\nG is an inequality-specific form. 3. Divide the error by n and use δ = ϵ n . This makes the statement about the sum S n into one for the mean\nµ n = 1 n n i=1 X i . Note that E[µ n ] = E[X] if X i are i.i.d.. 
P (|µ n -E[µ n ]| ≥ ϵ n = δ) ≤ G(nδ)\n4. Simplify the inequality based on the assumptions made in the reward distribution, e.g., bounds, mean, variance." }, { "figure_ref": [], "heading": "Expand |µ", "publication_ref": [], "table_ref": [], "text": "n -E[µ n ]| ≥ δ into δ ≥ µ n -E[µ n ] ≥ -δ.\n6. Change the notations to model the bandit problem because each concentration inequality is a general statement about RVs. Before this step, the notation was:\n• n (number of samples)\n• µ n = 1 n n i=1 X i • E[µ n ] = E[X 1 ] = . . . = E[X n ] • 1 n n i=1 (X i -E[X i ]) 2 • Var[X 1 ] = . . . = Var[X n ]\nAfter the change, they correspond to:\n• n i (number of pulls of arm i).\n• μi (sample mean of arm i from n i pulls), • µ i (true mean of arm i),\n• σ2\ni (sample variance of arm i from n i pulls), • σ 2 i (true variance of arm i),\n7. Let i be a suboptimal arm, * be an optimal arm, UCB i = μi + δ, and LCB i = μiδ. Derive the relationship between δ and the gap ∆ i = µ iµ * so that the following conditions for the best arm holds:\n• UCB i ≤ UCB * (for maximization)\n• LCB i ≥ LCB * (for minimization) This results in 2δ ≤ ∆ i . 8. Replace the δ with a formula that becomes an exploration term. For example, in UCB1, δ = 2 log T ni . 9. Derive the lower bound L for n i from 2δ ≤ ∆ i . 10. Find the upper-bound of the probability of selecting a sub-optimal arm i. This is typically done by a unionbound argument. 11. Derive the upper bound of the expected number of pulls E[n i ] of a suboptimal arm i using a triple loop summation. This is typically the heaviest part that needs mathematical tricks. The tricks do not seem generally transferable between approaches. 12. Finally, derive an upper bound of the regret T µ * -\nK i=1 µ i E[n i ] by T µ * - K i=1 µ i E[n i ] = K i=1 (µ * -µ i )E[n i ] = K i=1 ∆ i E[n i ]." }, { "figure_ref": [], "heading": "S3.1 The Proof of UCB1", "publication_ref": [], "table_ref": [], "text": "1. UCB1 uses Hoeffding's inequality, which is already defined for a sum of RVs, thus the first step is skipped. 2. UCB1 assumes a reward distribution with a known bound. According to Hoeffding's inequality, given RVs X 1 . . . X n , where X i ∈ [l i , u i ], and their sum S n = n i=1 X i ,\nP (S n -E[S n ] ≥ ϵ) ≤ exp - 2ϵ 2 n i=1 (ui-li) 2 . P (E[S n ] -S n ≥ ϵ) ≤ exp - 2ϵ 2 n i=1 (ui-li) 2 .\nWe focus on P (S n -E[S n ] ≥ ϵ) to avoid repetition. 3. Using δ = ϵ n and µ n = Sn n ,\nP (µ n -E[µ n ] ≥ δ) ≤ exp - 2n 2 δ 2 n i=1 (u i -l i ) 2 . 4. UCB1 assumes X i are i.i.d. copies, thus ∀i; u i -l i = c. P (µ n -E[µ n ] ≥ δ) ≤ exp - 2n 2 δ 2 nc 2 = exp - 2nδ 2 c 2 . 5.\nExpanding the two-sided error:\nδ ≥ µ n -E[µ n ] ≥ -δ.\n6. Changing the notation:\nδ ≥ μi -µ i ≥ δ.\n7. Adding µ iδ to both sides,\nµ i ≥ μi -δ = LCB i (T, n i ) ≥ µ i -2δ.\nSubstituting i = * (optimal arm), the first inequality is\nµ * ≥ μ * -δ = LCB * (T, n * ). Assuming 2δ ≤ ∆ i = µ i -µ * , the second inequality is LCB i (T, n i ) ≥ µ i -2δ ≥ µ i -∆ i = µ * . Therefore LCB i (T, n i ) ≥ µ * ≥ LCB * (T, n * ). 8. Let δ = c 2 log T ni . Then P (µ ni -E[µ ni ] ≥ δ) ≤ exp - 2nc 2 2 log T ni c 2 = T -4 . 9. From 2δ ≤ ∆ i , considering n i is an integer, 2c 2 log T n i ≤ ∆ i ⇔ 4c 2 2 log T n i ≤ ∆ 2 i ⇔ 8c 2 log T ∆ 2 i ≤ 8c 2 log T ∆ 2 i = L ≤ n i .\n10. LCB i (T, n i ) ≥ µ * ≥ LCB * (T, n * ) does not hold when either inequality does not hold. LCB i (T, n i ) ≥ µ * does not hold with probability less than T -4 . µ * ≥ LCB i (T, n * ) does not hold with probability less than T -4 . 
Thus, by union-bound (probability of disjunctions),\nP (LCB i (T, n i ) ≤ LCB * (T, n * )) ≤ 2T -4 .\n11. Assume we followed the UCB1 strategy, i.e., we pulled the arm that minimizes the LCB. The expected number of pulls E[n i ] from a suboptimal arm i is as follows. Note that for K arms, every arm is at least pulled once." }, { "figure_ref": [], "heading": "E[n", "publication_ref": [], "table_ref": [], "text": "i ] = 1 + T t=K+1 P (i is pulled at time t) ≤L + T t=K+1 P (i is pulled at time t ∧ n i > L) =L + T t=K+1 P (∀j; LCB j (t, n j ) ≥ LCB i (t, n i )) ≤L + T t=K+1 P (LCB * (t, n * ) ≥ LCB i (t, n i )) ≤L + T t=K+1 P (∃u, v; LCB * (t, u) ≥ LCB i (t, v)) ≤L + T t=K+1 t-1 u=1 t-1 v=L P (LCB * (t, u) ≥ LCB i (t, v)) ≤L + T t=K+1 t-1 u=1 t-1 v=L 2t -4 ≤L + ∞ t=1 t u=1 t v=1 2t -4 = L + ∞ t=1 t 2 • 2t -4 =L + 2 ∞ t=1 t -2 = L + 2 • π 6 = L + π 3 ≤c 2 8 log T ∆ 2 i + 1 + π 3 ∵ ⌈x⌉ ≤ x + 1\n12. The regret is\nT µ * - K i=1 µ i E[n i ] = K i=1 (µ * -µ i )E[n i ] = K i=1 ∆ i E[n i ] ≤ K i=1 ∆ i c 2 8 log T ∆ 2 i + 1 + π 3 ≤ K i=1 c 2 8 log T ∆ i + 1 + π 3 ∆ i .\nS3.2 Preliminary for the Proof of UCB1-Normal2\nOur analysis begins with a definition of Sub-Gaussian distributions.\nDefinition 1. (Vershynin 2018, Proposition 2.5.2, (iv\n)) A distribution p(x) is sub-Gaussian when ∃t > 0; E[exp x 2 /t 2 ] < 2.\nTheorem 1. A Gaussian distribution with 0-mean N (0, σ 2 ) (without loss of generality) is sub-Gaussian.\nProof.\np(x) = N (0, σ 2 ) = 1 √ 2πσ 2 exp - x 2 2σ 2 . E[exp x 2 /t 2 ] = R exp x 2 t 2 1 √ 2πσ 2 exp - x 2 2σ 2 dx = 1 √ 2πσ 2 R exp -x 2 1 2σ 2 - 1 t 2 dx = 1 √ 2πσ 2 R exp - x 2 C 2 dx = 1 √ 2πσ 2 R exp -y 2 Cdy x C = y ⇔ dx = Cdy = C √ 2πσ 2 √ π = C √ 2σ 2 .\nWhere\n1 C 2 = 1 2σ 2 - 1 t 2 ⇔ C 2 = 2σ 2 t 2 t 2 -2σ 2 . To show E[exp x 2 /t 2 ] < 2, E[exp x 2 /t 2 ] = C √ 2σ 2 = t 2 t 2 -2σ 2 < 2, ⇔ t 2 < 4(t 2 -2σ 2 ), ⇔ 8 3 σ 2 < t 2 . □ Definition 2. For a sub-Gaussian RV x, ||x|| = inf t > 0 | E[exp x 2 /t 2 ] < 2 .\nCorollary 1. For p(x) = N (0, σ 2 ), ||x|| = 8 3 σ. Next, we review the general Hoeffding's inequality for sub-Gaussian distributions ().\nTheorem 2. For independent sub-Gaussian RVs x 1 , . . . , x n , let their sum be S n = n i=1 x i . Then, for any ϵ > 0,\nPr(|S n -E[S n ]| ≤ ϵ) ≥ 2 exp - ϵ 2 n i=1 ||x i || 2 , Pr(S n -E[S n ] ≤ ϵ) ≥ exp - ϵ 2 n i=1 ||x i || 2 , Pr(E[S n ] -S n ≤ ϵ) ≥ exp - ϵ 2 n i=1 ||x i || 2 .\n(Two-sided bounds and one-sided upper/lower bounds, respectively.)" }, { "figure_ref": [], "heading": "S3.3 The Proof of UCB1-Normal2", "publication_ref": [], "table_ref": [], "text": "1. Same as UCB1. 2. According to Hoeffding's inequality for sub-Gaussian RVs X 1 . . . X n and their sum S n = n i=1 X i ,\nP (S n -E[S n ] ≥ ϵ) ≤ exp - ϵ 2 n i=1 ||Xi|| 2 . 3. Using δ = ϵ n , P (µ n -E[µ n ] ≥ δ) ≤ exp - n 2 δ 2 n i=1 ||Xi|| 2 . 4. We assume X i = N (µ, σ 2 ), thus ||X i || 2 = 8 3 σ 2 . P (µ n -E[µ n ] ≥ δ) ≤ exp -3n 2 δ 2 8nσ 2 = exp -3nδ 28σ\n2 . 5. Same as UCB1. 6. Same as UCB1. 7. Same as UCB1. 8. Let δ = σ√ log T . Then\nP (A : µ ni -E[µ ni ] ≥ δ) ≤ exp - 3n i σ2 log T 8σ 2 = T -3n i σ2 8σ 2 .\nThe trick starts here. The formula above is problematic because we do not know the true variance σ 2 . However, if event B : ni σ2 σ 2 ≥ X holds for some X > 0, we have\nT -3n i σ2 8σ 2\n≤ T -3 8 X . One issue with this approach is that the two events A, B may be correlated. To address the issue, we further upperbound the probability by union-bound. Let P (B) = α which is close to 1. 
Then P (¬(A ∧ B)) = P (¬A ∨ ¬B) ≤ P (¬A) + P (¬B).\n1 -P (A ∧ B) ≤ 1 -P (A) + P (¬B).\nP (A) ≤ P (A ∧ B) + P (¬B). ∴ P (µ ni -E[µ ni ] ≥ δ) ≤ T -3 8 X + 1 -α.\nWe next obtain X that satisfies P (B) = α. We use the fact that ni σ2 σ 2 follows a Chi-Squared distribution χ 2 (n i ) with a degree of freedom n i . Then X = χ 2 1-α,ni , the upper-tail critical value of χ 2 distribution with degree of freedom n i and significance level α, because\nP (¬B) = P ( n i σ2 σ 2 < χ 2 1-α,ni ) = χ 2 ( n i σ2 σ 2 < χ 2 1-α,ni | n i ) = 1 -α. 9. From 2δ ≤ ∆ i , assuming n i is an integer and n i ≥ 2, ∆ 2 i ≥ 2σ 2 log T = 2n i σ2 log T n i ≥ 2σ 2 χ 2 1-α,ni log T n i ≥ 2σ 2 χ 2 1-α,2 log T n i = -4σ 2 log α log T n i . ∴ n i ≥ -4σ 2 log α log T ∆ 2 i = L ≥ -4σ 2 log α log T ∆ 2 i .\nNote that we used the fact that χ 2 1-α,n is monotonically increasing for n, therefore χ 2 1-α,n ≥ χ 2 1-α,2 (n i ≥ 2), and that χ 2 1-α,2 = -2 log α:\n1 -α = χ 2 (X < χ 2 1-α,n | n = 2) = γ( 2 2 , χ 2 1-α,2 2 ) Γ( 2 2 ) = 1 -e - χ 2 1-α,22\n.\nwhere γ and Γ are (incomplete) Gamma functions. 10. Using the same union-bound argument used in UCB1,\nP (LCB i (T, n i ) ≤ LCB * (T, n * )) ≤ 2(T -χ 2 1-α,n i + 1 -α).\n11. Assume we followed the UCB1-Normal2 strategy. We use the same argument as UCB1. Assume we pull each arm at least M times in the beginning and M ≤ L." }, { "figure_ref": [], "heading": "E[n", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "i ] ≤ L + T t=K+1 P (∃u, v; LCB * (t, u) ≥ LCB i (t, v)) ≤ L + T t=K+1 t-1 u=1 t-1 v=L 2(t -3 8 χ 2 1-α,v + 1 -α) ≤ L + T t=K+1 t-1 u=1 t-1 v=L 2(t -3 8 χ 2 1-α,M + 1 -α) ≤ L + T t=K+1 t u=1 t v=1 2(t -3 8 χ 2 1-α,M + 1 -α) = L + T t=K+1 2(t 2-3 8 χ 2 1-α,M + (1 -α)t 2 ) ≤ L + 2 ∞ t=1 t 2-3 8 χ 2 1-α,M + 2(1 -α) T t=1 t 2 = L + 2C + 2(1 -α) T (T + 1)(2T + 1) 6 ≤ -4σ 2 log α log T ∆ 2 i + 1 (∵ ⌈x⌉ ≤ x + 1) + 2C + (1 -α)T (T + 1)(2T + 1) 3 . C is a convergent series when 2 - 3 8 χ 2 1-α,M < -1 ⇔ 8 < χ 2 1-α,M\n. You can look up the value of M that guarantees this condition from a numerically computed, so-called χ 2 -table (Table S1). For example, with α = 0.99, 8 < χ 2 0.01,M , thus M ≥ 2, and with α = 0.9, 8 < χ 2 0.1,M , thus M ≥ 5. However, the value of α depends on the problem and is unknown prior to solving the problem. 12. Omitted. GUCT-V vs GBFS \nn ′ = s n then if g(n) > g(n ′ ) then continue Lock n ′ , S(n) ← S(n ′ ), Q ← Q ∪ {n, n ′ } else Compute h(s n ) # Evaluation Q ← Q ∪ {n} while n ← Q.POPMAX()" }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported through DTIC contract FA8075-18-D-0008, Task Order FA807520F0060, Task 4 -Autonomous Defensive Cyber Operations (DCO) Research & Development (R&D)." }, { "figure_ref": [], "heading": "Appendix S1 Domain-Independent Heuristics in", "publication_ref": [], "table_ref": [], "text": "Classical Planning" }, { "figure_ref": [], "heading": "S4 Statistics after Merging Datasets", "publication_ref": [], "table_ref": [], "text": "Backpropagation in MCTS requires computing the statistics of the samples in the leaf nodes in a subtree of a parent node.\nTo avoid iterating over all leaves of each parent, Backpropagation typically propagates the statistics from the immediate children. This can be seen as merging multiple datasets and compute the statistics of the merged dataset from the statistics of multiple datasets.\nIn variance-based MCTS algorithms, both the mean and variance are backpropagated. 
Given two sets of samples X 1 , X 2 , each with an empirical mean µ i and n i elements (i ∈ {1, 2}), the empirical mean µ 12 of X 1 ∪ X 2 is given by\nWe obtain NECs by iterating this process over a node's children, although there is a more efficient, incremental method for backpropagating a change in a single child (see appendix). For the variance, we similarly merge the samples. Given individual variances σ 2 1 and σ 2 2 , the variance σ 2 12 of X 1 ∪ X 2 (proof available in appendix) is:\n.\nBelow, we show the formulae and the proofs for this method. Theorem 3 (The empirical mean of merged datasets). Given two sets of samples X 1 , X 2 , each with an empirical mean µ i and n i elements (i ∈ {1, 2}), the empirical mean µ 12 of X 1 ∪ X 2 is given by\nAlso,\nProof.\n□ Theorem 4 (The empirical variance of merged datasets).\nGiven two sets of samples X 1 , X 2 , each with an empirical mean µ i , variance σ 2 i , and n i elements (i ∈ {1, 2}), and\nProof. " }, { "figure_ref": [], "heading": "S5 Statistics after Retracting a Dataset", "publication_ref": [], "table_ref": [], "text": "In the backpropagation step of MCTS, typically, only a few children of an intermediate node update their statistics (most often a single children). To compute the updated statistics efficiently, we could compute them by retracting the old data of the child(ren) from the merged data and merging the new data for the child(ren), rather than iterating over the children to merge everything from scratch. This can impact the performance when the number of children / the branching factor is high.\nTheorem 5 (The empirical mean after retracting a dataset). Assume samples X 1 , X 2 with empirical means µ i and number of elements n i (i ∈ {1, 2}). Let their union be X 12 = X 1 ∪ X 2 , its empirical means µ 12 , and its number of elements n 12 = n 1 + n 2 . µ 1 is given by\nTheorem 6 (The empirical variance after retracting a dataset). Assume samples X 1 , X 2 with empirical means µ i , empirical variance σ 2 i , and number of elements n i (i ∈ {1, 2}). Let their union be X 12 = X 1 ∪ X 2 , its empirical mean µ 12 , its empirical variance σ 2 12 , and its number of elements\n12 , and σ 2 2 as follows.\nProof. shows the cumulative histogram of the number of instances solved by a particular evaluation/expansion/runtime. Although the graphs for the expansion and the runtime are not entirely informative since it is confounded by the node evaluation limit, the general trend is the same between algorithms." }, { "figure_ref": [], "heading": "S6.2 Deferred Heuristic Evaluation", "publication_ref": [], "table_ref": [], "text": "Fig. S4 shows the cumulative histogram of the number of instances solved under a particular evaluation/expansion/runtime by h FF with/without DE, with/without PO. Although the graphs for the expansion and the runtime are not entirely informative since it is confounded by the node evaluation limit, the general trend is the same between algorithms." }, { "figure_ref": [], "heading": "S6.3 Solution Quality", "publication_ref": [], "table_ref": [], "text": "Fig. S5-S8 shows the complete plot for the solution quality." } ]
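To make the merge and retraction formulas of Sec. S4-S5 concrete, here is a small illustrative Python sketch (not the evaluated implementation). It assumes population (biased) empirical variances, summarizes each sample set as a (count, mean, variance) triple, and numerically checks that retracting a merged set recovers the original statistics; all names and example numbers are ours.

```python
def merge_stats(n1, mean1, var1, n2, mean2, var2):
    """Merge (count, mean, population variance) of two sample sets
    without touching the raw samples (Sec. S4)."""
    n = n1 + n2
    mean = (n1 * mean1 + n2 * mean2) / n
    # Second moment E[x^2] of the union, from E[x^2] = var + mean^2 of each part.
    second_moment = (n1 * (var1 + mean1 ** 2) + n2 * (var2 + mean2 ** 2)) / n
    return n, mean, second_moment - mean ** 2

def retract_stats(n12, mean12, var12, n2, mean2, var2):
    """Undo a merge: recover the statistics of X1 when X12 is the union
    of X1 and X2 (Sec. S5)."""
    n1 = n12 - n2
    mean1 = (n12 * mean12 - n2 * mean2) / n1
    second_moment1 = (n12 * (var12 + mean12 ** 2) - n2 * (var2 + mean2 ** 2)) / n1
    return n1, mean1, second_moment1 - mean1 ** 2

def stats(xs):
    """Baseline (count, mean, population variance) computed from raw samples."""
    m = sum(xs) / len(xs)
    return len(xs), m, sum((x - m) ** 2 for x in xs) / len(xs)

# Round-trip check on two small sample sets.
x1, x2 = [1.0, 2.0, 3.0], [10.0, 14.0]
merged = merge_stats(*stats(x1), *stats(x2))
print(merged)                               # statistics of the union of x1 and x2
print(retract_stats(*merged, *stats(x2)))   # recovers the statistics of x1
```

Backpropagation can thus update a parent's statistics from its children's summary triples alone, without revisiting the leaf samples, which is the point of Sec. S4-S5.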
Balancing exploration and exploitation has been an important problem in both adversarial games and automated planning. While it has been extensively analyzed in the Multi-Armed Bandit (MAB) literature, and the game community has achieved great success with MAB-based Monte Carlo Tree Search (MCTS) methods, the symbolic planning community has struggled to advance in this area. We describe how Upper Confidence Bound 1's (UCB1's) assumption of reward distributions with known bounded support shared among siblings (arms) is violated when MCTS/Trial-based Heuristic Tree Search (THTS) in previous work uses heuristic values of search nodes in classical planning problems as rewards. To address this issue, we propose a new Gaussian bandit, UCB1-Normal2, and analyze its regret bound. It is variance-aware like UCB1-Normal and UCB-V, but has a distinct advantage: it neither shares UCB-V's assumption of known bounded support nor relies on UCB1-Normal's unfounded conjectures on Student's t and χ 2 distributions. Our theoretical analysis predicts that UCB1-Normal2 will perform well when the estimated variance is accurate, which can be expected in deterministic, discrete, finite state-space search, as in classical planning. Our empirical evaluation confirms that MCTS combined with UCB1-Normal2 outperforms Greedy Best First Search (traditional baseline) as well as MCTS with other bandits.
Scale-Adaptive Balancing of Exploration and Exploitation in Classical Planning
[ { "figure_caption": "Figure 1 :1Figure 1: Comparing solution length of GUCT-based algorithms (x-axis) against GBFS (y-axis) using h FF .", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Otherwise, min {a∈A|p∈ADD(a)} C(a) + h add (s, PRE(a)) .", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "a) + h(s, PRE(a))] .", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure S1 :Figure S2 :Figure S3 :S1S2S3Figure S1: The cumulative histogram of the number of problem instances solved (y-axis) below a certain number of node evaluations (x-axis). Each line represents a random seed. In algorithms with an exploration coefficient hyperparameter, we use c = 1.0.", "figure_data": "", "figure_id": "fig_3", "figure_label": "S1S2S3", "figure_type": "figure" }, { "figure_caption": "Figure S5 :S5Figure S5: Comparing the length of solutions found by GUCT-based algorithms (x-axis) against those by the baseline GBFS (y-axis) using h FF .", "figure_data": "", "figure_id": "fig_5", "figure_label": "S5", "figure_type": "figure" }, { "figure_caption": "Figure S6 :S6Figure S6: Comparing the length of solutions found by GUCT-based algorithms (x-axis) against those by the baseline GBFS (y-axis) using h add .", "figure_data": "", "figure_id": "fig_6", "figure_label": "S6", "figure_type": "figure" }, { "figure_caption": "Figure S7 :S7Figure S7: Comparing the length of solutions found by GUCT-based algorithms (x-axis) against those by the baseline GBFS (y-axis) using h max .", "figure_data": "", "figure_id": "fig_7", "figure_label": "S7", "figure_type": "figure" }, { "figure_caption": "Figure S8 :S8Figure S8: Comparing the length of solutions found by GUCT-based algorithms (x-axis) against those by the baseline GBFS (y-axis) using h GC .", "figure_data": "", "figure_id": "fig_8", "figure_label": "S8", "figure_type": "figure" }, { "figure_caption": "NEC) f for selecting the best node in each iteration. Let us denote a node by n and the state represented by n as s n . As NEC, Dijkstra search uses f Dijkstra", "figure_data": "expansion) generate its successor nodes, (3) (evaluation) evaluate the successor nodes, and (4) (queueing) reinsert them into the open list. Termination typically occurs when a node is ex-panded that satisfies a goal condition, but a satisficing/agile algorithm can perform early goal detection, which immedi-ately checks whether any successor node generated in step (2) satisfies the goal condition. Since this paper focuses on agile search, we use early goal detection for all algorithms. Within forward search, forward best-first search defines a particular ordering in the open list by defining node evalua-tion criteria (", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The number of problem instances solved with less than 10,000 node evaluations; top two configurations in bold for each heuristic; each number represents an average over 5 trials. We show results for both c = 1.0 and c = 0.5 (\"best parameter\" according toSchulte and Keller (2014)) when the algorithm requires one. Algorithms in the bottom half have no hyperparameter. PO and DE stand for Preferred Operators and Deferred Evaluation. 
It does not contain PO for GBFS and heuristics other than h FF due to the lack of support in Pyperplan.", "figure_data": "510.510.510.510.510.510.51GUCT * -01 *-01 -V413.2 396.4 405.8 373.8 224.8 222.2 508.8 440.8 496.2 453.8 239.4 234.2 306.2 296 369.6 354.8 345.2 312.8 242.2 227.6 307 393.6 372 373 343.6 236.2 226.4 306.2 289.8 430.2 401.2 377.6 278 439.2 411.8 418.6 354.6 303 542.4 448 441.8 386.8 295.2 403.2 387 355.6 344.8 406.4 404.4 450 393.2 477 422 363 426.2 421.2 329.8 307.2 325 297.6 215 200 264.8 243.8 383.8 348.4 334.4 310 384.4 377.4-Normal *-Normal -Normal2 *-Normal2 GBFS-----278 311.6 563.8 551.2 522.4-----261.4 294.8 519.2 516.2 501.6-----209.2 212.2 301 258.2 221.4-----231.8 244 374.6 338.6 351.2-----331.6 338.2 596.4 593.8 ------269.2 285.2 496.8 490.6 474-----342.6 343.8 550.8 543.4 -", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Algorithm 1 High-level general MCTS. Input: Root node r, successor function S, NEC f , heuristic function h, priority queue Q sorted by g. Initialize ∀n; g(n) ← ∞.", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Stephen Wissow; Masataro Asai
[ { "authors": "Y Alkhazraji; M Frorath; M Grützner; M Helmert; T Liebetraut; R Mattmüller; M Ortlieb; J Seipp; T Springenberg; P Stahl; J Wülfing", "journal": "", "ref_id": "b0", "title": "Pyperplan", "year": "2020" }, { "authors": "J.-Y Audibert; R Munos; C Szepesvári", "journal": "Theoretical Computer Science", "ref_id": "b1", "title": "Exploration-Exploitation Tradeoff using Variance Estimates in Multi-Armed Bandits", "year": "2009" }, { "authors": "P Auer; N Cesa-Bianchi; P Fischer", "journal": "Machine Learning", "ref_id": "b2", "title": "Finite-Time Analysis of the Multiarmed Bandit Problem", "year": "2002" }, { "authors": "M G Bellemare; Y Naddaf; J Veness; M Bowling", "journal": "AAAI Press", "ref_id": "b3", "title": "The Arcade Learning Environment: An Evaluation Platform for General Agents (Extended Abstract)", "year": "2015" }, { "authors": "B Bonet; H Geffner", "journal": "Artificial Intelligence", "ref_id": "b4", "title": "Planning as Heuristic Search", "year": "2001" }, { "authors": "R R Bush; F Mosteller", "journal": "The Annals of Mathematical Statistics", "ref_id": "b5", "title": "A Stochastic Model with Applications to Learning", "year": "1953" }, { "authors": "P.-A Coquelin; R Munos", "journal": "", "ref_id": "b6", "title": "Bandit Algorithms for Tree Search", "year": "2007" }, { "authors": "E W Dijkstra", "journal": "Numerische mathematik", "ref_id": "b7", "title": "A Note on Two Problems in Connexion with Graphs", "year": "1959" }, { "authors": "P Ferber; L Cohen; J Seipp; T Keller", "journal": "", "ref_id": "b8", "title": "Learning and Exploiting Progress States in Greedy Best-First Search", "year": "2022" }, { "authors": "P Ferber; F Geißer; F Trevizan; M Helmert; J Hoffmann", "journal": "", "ref_id": "b9", "title": "Neural Network Heuristic Functions for Classical Planning: Bootstrapping and Comparison to Other Methods", "year": "2022" }, { "authors": "P Ferber; M Helmert; J Hoffmann", "journal": "", "ref_id": "b10", "title": "Neural Network Heuristics for Classical Planning: A Study of Hyperparameter Space", "year": "2020" }, { "authors": "R E Fikes; P E Hart; N J Nilsson", "journal": "Artificial Intelligence", "ref_id": "b11", "title": "Learning and Executing Generalized Robot Plans", "year": "1972" }, { "authors": "C R Garrett; L P Kaelbling; T Lozano-Pérez", "journal": "", "ref_id": "b12", "title": "Learning to Rank for Synthesizing Planning Heuristics", "year": "2016" }, { "authors": "C Gehring; M Asai; R Chitnis; T Silver; L P Kaelbling; S Sohrabi; M Katz", "journal": "", "ref_id": "b13", "title": "Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators", "year": "2022" }, { "authors": "S Gelly; D Silver", "journal": "Artificial Intelligence", "ref_id": "b14", "title": "Monte-Carlo Tree Search and Rapid Action Value Estimation in Computer Go", "year": "2011" }, { "authors": "P E Hart; N J Nilsson; B Raphael", "journal": "Systems Science and Cybernetics, IEEE Transactions on", "ref_id": "b15", "title": "A Formal Basis for the Heuristic Determination of Minimum Cost Paths", "year": "1968" }, { "authors": "M Helmert", "journal": "J. Artif. Intell. Res.(JAIR)", "ref_id": "b16", "title": "The Fast Downward Planning System", "year": "2006" }, { "authors": "M Heusner; T Keller; M Helmert", "journal": "", "ref_id": "b17", "title": "Understanding the Search Behaviour of Greedy Best-First Search", "year": "2017" }, { "authors": "M Heusner; T Keller; M Helmert", "journal": "", "ref_id": "b18", "title": "a. 
Best-Case and Worst-Case Behavior of Greedy Best-First Search", "year": "2018" }, { "authors": "M Heusner; T Keller; M Helmert", "journal": "", "ref_id": "b19", "title": "Search Progress and Potentially Expanded States in Greedy Best-First Search", "year": "2018" }, { "authors": "J Hoffmann; B Nebel", "journal": "J. Artif. Intell. Res.(JAIR)", "ref_id": "b20", "title": "The FF Planning System: Fast Plan Generation through Heuristic Search", "year": "2001" }, { "authors": "T Imagawa; T Kaneko", "journal": "Springer", "ref_id": "b21", "title": "Monte carlo tree search with robust exploration", "year": "2016" }, { "authors": "T Imai; A Kishimoto", "journal": "", "ref_id": "b22", "title": "A Novel Technique for Avoiding Plateaus of Greedy Best-First Search in Satisficing Planning", "year": "2011" }, { "authors": "E Kaufmann; O Cappé; A Garivier", "journal": "", "ref_id": "b23", "title": "On Bayesian Upper Confidence Bounds for Bandit Problems", "year": "2012" }, { "authors": " Pmlr", "journal": "", "ref_id": "b24", "title": "", "year": "" }, { "authors": "T Keller; M Helmert", "journal": "", "ref_id": "b25", "title": "Trial-Based Heuristic Tree Search for Finite Horizon MDPs", "year": "2013" }, { "authors": "B Kim; K Lee; S Lim; L Kaelbling; T Lozano-Pérez", "journal": "", "ref_id": "b26", "title": "Monte Carlo Tree Search in Continuous Spaces using Voronoi Optimistic Optimization with Regret Bounds", "year": "2020" }, { "authors": "A Kishimoto; D Bouneffouf; R Marinescu; P Ram; A Rawat; M Wistuba; P Palmes; A Botea", "journal": "", "ref_id": "b27", "title": "Bandit Limited Discrepancy Search and Application to Machine Learning Pipeline Optimization", "year": "2022" }, { "authors": "A Kishimoto; R Zhou; T Imai", "journal": "", "ref_id": "b28", "title": "Diverse Depth-First Search in Satisificing Planning", "year": "2012" }, { "authors": "L Kocsis; C Szepesvári", "journal": "Springer", "ref_id": "b29", "title": "Bandit Based Monte-Carlo Planning", "year": "2006" }, { "authors": "R Kuroiwa; J C Beck", "journal": "", "ref_id": "b30", "title": "Biased Exploration for Satisficing Heuristic Search", "year": "2022" }, { "authors": "T L Lai; H Robbins", "journal": "Advances in Applied Mathematics", "ref_id": "b31", "title": "Asymptotically Efficient Adaptive Allocation Rules", "year": "1985" }, { "authors": "N Lipovetzky; H Geffner", "journal": "", "ref_id": "b32", "title": "Best-First Width Search: Exploration and Exploitation in Classical Planning", "year": "2017" }, { "authors": "N Lipovetzky; M Ramírez; H Geffner", "journal": "", "ref_id": "b33", "title": "Classical Planning with Simulators: Results on the Atari Video Games", "year": "2015" }, { "authors": "D Mcconachie; D Berenson", "journal": "IEEE Transactions on Automation Science and Engineering", "ref_id": "b34", "title": "Estimating Model Utility for Deformable Object Manipulation using Multiarmed Bandit Methods", "year": "2018" }, { "authors": "R Munos", "journal": "Foundations and Trends® in Machine Learning", "ref_id": "b35", "title": "From Bandits to Monte-Carlo Tree Search: The Optimistic Principle Applied to Optimization and Planning", "year": "2014" }, { "authors": "H Nakhost; M Müller", "journal": "", "ref_id": "b36", "title": "Monte-Carlo Exploration for Deterministic Planning", "year": "2009" }, { "authors": "S Richter; M Helmert", "journal": "", "ref_id": "b37", "title": "Preferred operators and deferred evaluation in satisficing planning", "year": "2009" }, { "authors": "S Richter; M Westphal; M Helmert", "journal": "", 
"ref_id": "b38", "title": "LAMA 2008 and 2011", "year": "2011" }, { "authors": "O Rivlin; T Hazan; E Karpas", "journal": "", "ref_id": "b39", "title": "Generalized Planning With Deep Reinforcement Learning", "year": "2019" }, { "authors": "H Robbins", "journal": "Bulletin of the American Mathematical Society", "ref_id": "b40", "title": "Some Aspects of the Sequential Design of Experiments", "year": "1952" }, { "authors": "T Schulte; T Keller", "journal": "", "ref_id": "b41", "title": "Balancing Exploration and Exploitation in Classical Planning", "year": "2014" }, { "authors": "W Shen; F Trevizan; S Thiébaux", "journal": "", "ref_id": "b42", "title": "Learning Domain-Independent Planning Heuristics with Hypergraph Networks", "year": "2020" }, { "authors": "D Silver; A Huang; C J Maddison; A Guez; L Sifre; G Van Den Driessche; J Schrittwieser; I Antonoglou; V Panneershelvam; M Lanctot", "journal": "Nature", "ref_id": "b43", "title": "Mastering the Game of Go with Deep Neural Networks and Tree Search", "year": "2016" }, { "authors": "N Srinivas; A Krause; S M Kakade; M W Seeger", "journal": "", "ref_id": "b44", "title": "Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design", "year": "2010" }, { "authors": "G Tesauro; V Rajan; R Segal", "journal": "", "ref_id": "b45", "title": "Bayesian Inference in Monte-Carlo Tree Search", "year": "2010" }, { "authors": "W R Thompson", "journal": "Biometrika", "ref_id": "b46", "title": "On the Likelihood that One Unknown Probability Exceeds Another in View of the Evidence of Two Samples", "year": "1933" }, { "authors": "S Toyer; F Trevizan; S Thiébaux; L Xie", "journal": "", "ref_id": "b47", "title": "Action Schema Networks: Generalised Policies with Deep Learning", "year": "2018" }, { "authors": "R A Valenzano; J Schaeffer; N R Sturtevant; F Xie", "journal": "", "ref_id": "b48", "title": "A Comparison of Knowledge-Based GBFS Enhancements and Knowledge-Free Exploration", "year": "2014" }, { "authors": "F Xie; M Müller; R C Holte", "journal": "", "ref_id": "b49", "title": "Adding Local Exploration to Greedy Best-First Search in Satisficing Planning", "year": "2014" }, { "authors": "F Xie; M Müller; R C Holte", "journal": "", "ref_id": "b50", "title": "Understanding and Improving Local Exploration for GBFS", "year": "2015" }, { "authors": "F Xie; M Müller; R C Holte; T Imai", "journal": "", "ref_id": "b51", "title": "Type-Based Exploration with Multiple Search Queues for Satisficing Planning", "year": "2014" }, { "authors": "F Xie; H Nakhost; M Müller", "journal": "", "ref_id": "b52", "title": "Planning Via Random Walk-Driven Local Search", "year": "2012" }, { "authors": "P Auer; N Cesa-Bianchi; P Fischer", "journal": "Machine Learning", "ref_id": "b53", "title": "Finite-Time Analysis of the Multiarmed Bandit Problem", "year": "2002" }, { "authors": "B Bonet; H Geffner", "journal": "Artificial Intelligence", "ref_id": "b54", "title": "Planning as Heuristic Search", "year": "2001" }, { "authors": "T Bylander", "journal": "Artificial Intelligence", "ref_id": "b55", "title": "A Probabilistic Analysis of Prepositional STRIPS Planning", "year": "1996" }, { "authors": "R E Fikes; P E Hart; N J Nilsson", "journal": "Artificial Intelligence", "ref_id": "b56", "title": "Learning and Executing Generalized Robot Plans", "year": "1972" }, { "authors": "J Hoffmann; B Nebel", "journal": "J. Artif. Intell. 
Res.(JAIR)", "ref_id": "b57", "title": "The FF Planning System: Fast Plan Generation through Heuristic Search", "year": "2001" }, { "authors": "S Richter; J T Thayer; W Ruml", "journal": "", "ref_id": "b58", "title": "The Joy of Forgetting: Faster Anytime Search via Restarting", "year": "2010" }, { "authors": "S Richter; M Westphal; M Helmert", "journal": "", "ref_id": "b59", "title": "LAMA 2008 and 2011", "year": "2011" }, { "authors": "T Schulte; T Keller", "journal": "", "ref_id": "b60", "title": "Balancing Exploration and Exploitation in Classical Planning", "year": "2014" }, { "authors": "R Vershynin", "journal": "Cambridge University Press", "ref_id": "b61", "title": "High-Dimensional Probability: An Introduction with Applications in Data Science", "year": "2018" } ]
[ { "formula_coordinates": [ 2, 319.51, 148.41, 238.61, 28.01 ], "formula_id": "formula_0", "formula_text": "∆ = T max i E[r i ] -i E[t i ]E[r i ]." }, { "formula_coordinates": [ 2, 379.73, 303.69, 178.39, 35.37 ], "formula_id": "formula_1", "formula_text": "UCB1 i = μi + c 2 log T /t i LCB1 i = μi -c 2 log T /t i (1)" }, { "formula_coordinates": [ 2, 319.51, 661.78, 238.61, 21.24 ], "formula_id": "formula_2", "formula_text": "(n) = g(n) (g-value)" }, { "formula_coordinates": [ 3, 53.88, 54.6, 238.61, 34.16 ], "formula_id": "formula_3", "formula_text": "h (h-value). GBFS uses f GBFS (n) = h(s n ). Forward best- first search that uses h is called forward heuristic best-first search. Dijkstra search is a special case of A * with h(s) = 0." }, { "formula_coordinates": [ 3, 456.17, 161.98, 101.95, 11.68 ], "formula_id": "formula_4", "formula_text": "f A * (n) = g(n) + h A * (n)" }, { "formula_coordinates": [ 3, 319.51, 193.14, 238.61, 92.16 ], "formula_id": "formula_5", "formula_text": "h A * (n ′ ) = h GBFS (n ′ ) = h(s n ′ ) if n ′ is a leaf. h A * (n) = min n ′ ∈S(n) [C(n, n ′ ) + h A * (n ′ )] = min n ′ ∈L(n) [C(n, n ′ ) + h(s n ′ )] h GBFS (n) = min n ′ ∈S(n) [h GBFS (n ′ )] = min n ′ ∈L(n) [h(s n ′ )]" }, { "formula_coordinates": [ 3, 319.51, 384.57, 235.46, 122.82 ], "formula_id": "formula_6", "formula_text": "h UCT (n) = 1 |L(n)| n ′ ∈S(n) |L(n ′ )|(C(n, n ′ ) + h UCT (n ′ )) = 1 |L(n)| n ′ ∈L(n) (C(n, n ′ ) + h(s n ′ )) h GUCT (n) = 1 |L(n)| n ′ ∈S(n) |L(n ′ )|h GUCT (n ′ ) = 1 |L(n)| n ′ ∈L(n) h(s n ′ ) f UCT (n) =g(n) + h UCT (n) -c (2 log |L(p)|)/|L(n)| f GUCT (n) = h GUCT (n) -c (2 log |L(p)|)/|L(n)|" }, { "formula_coordinates": [ 4, 63.55, 105.63, 228.95, 66.91 ], "formula_id": "formula_7", "formula_text": "f GUCT-01 (n) = hGUCT(n)-m M -m -c (2 log |L(p)|)/|L(n)| (2) m + (M -m)f GUCT-01 (n) = h GUCT (n) -c(M -m) (2 log |L(p)|)/|L(n)| (3)" }, { "formula_coordinates": [ 4, 53.88, 203.05, 238.61, 29.68 ], "formula_id": "formula_8", "formula_text": "Given M = max n ′ ∈S(p) h GUCT (n ′ ), m = min n ′ ∈S(p) h GUCT (n ′" }, { "formula_coordinates": [ 4, 331.48, 96.34, 226.64, 97.8 ], "formula_id": "formula_9", "formula_text": "i = μi + σi (16 log T )/t i (4) UCB1-Tuned 1 = (5) μi + c min(1/4, σ2 i + 2 log T /t i ) log T /t i UCB-V i = μi + σi (2 log T )/t i + (3c log T )/t i (6) Bayes-UCT2 i = μBayes i + σBayes i √ 2 log T (7) UCB1-Normal2 i = μi + σi √ 2 log T (8)" }, { "formula_coordinates": [ 4, 319.51, 234.41, 238.61, 29.74 ], "formula_id": "formula_10", "formula_text": "t i i.i.d. samples r i1 . . . r iti ∼ N (µ i , σ 2 i )" }, { "formula_coordinates": [ 4, 335.65, 404.88, 112.33, 19.07 ], "formula_id": "formula_11", "formula_text": "σ 2 i log T ∆ 2 i +1+ π 2 2 +8 log T if," }, { "formula_coordinates": [ 5, 53.88, 65.92, 238.61, 106.37 ], "formula_id": "formula_12", "formula_text": "P (t i σ2 i /σ 2 i < χ 2 1-α,ti ) = α. UCB1-Normal2 has a worst-case polynomial, best-case constant regret-per-arm -4(log α)σ 2 i log T ∆ 2 i +1 + 2C + (1 -α)T (T + 1)(2T + 1) 3 α→1 ---→ 1 + 2C where C is a finite constant if each arm is pulled M = inf{n|8 < χ 2" }, { "formula_coordinates": [ 5, 53.88, 529.74, 238.61, 44.22 ], "formula_id": "formula_13", "formula_text": "M = 1, thus α > ERF(2) > 0.995 because 8 < χ 2 1-α,1 ⇔ 1 -α < γ( 1 2 , 8 2 ) Γ( 1 2 ) = 1 -ERF(2)." }, { "formula_coordinates": [ 6, 85.29, 56.51, 465.8, 26.08 ], "formula_id": "formula_14", "formula_text": "h = h FF h add h max h GC h FF +PO h FF +DE h FF +DE+PO c = 0." 
}, { "formula_coordinates": [ 10, 64.25, 367.51, 133.68, 37.1 ], "formula_id": "formula_15", "formula_text": "h max (s, G) = max p∈G    0 if p ∈ s." }, { "formula_coordinates": [ 10, 60.84, 476.15, 206.9, 41.78 ], "formula_id": "formula_16", "formula_text": "h add (s, G) = p∈G    0 if p ∈ s. Otherwise, min {a∈A|p∈ADD(a)} C(a) + h add (s, PRE(a)) ." }, { "formula_coordinates": [ 10, 62.26, 645.26, 230.23, 61.26 ], "formula_id": "formula_17", "formula_text": "h FF (s, G, h) = a∈Π + (s,G,h) C(a) (3) Π + (s, G, h) = p∈G ∅ if p ∈ s. Otherwise, {a} ∪ Π + (s, PRE(a))" }, { "formula_coordinates": [ 10, 385.77, 148.8, 172.35, 23 ], "formula_id": "formula_18", "formula_text": "h GC (s, G) = p∈G p ̸ ∈ s .(6)" }, { "formula_coordinates": [ 11, 68.33, 286, 195, 44.98 ], "formula_id": "formula_19", "formula_text": "P (|X -E[X]| ≥ ϵ) ≤ F (ϵ). (two-sided) • P (X -E[X] ≥ ϵ) ≤ F (ϵ). (one-sided, upper) • P (E[X] -X ≥ ϵ) ≤ F (ϵ). (one-sided, lower)" }, { "formula_coordinates": [ 11, 118.9, 362.08, 121.54, 38.04 ], "formula_id": "formula_20", "formula_text": "S n = n i=1 X i . P (|S n -E[S n ]| ≥ ϵ) ≤ G(ϵ)." }, { "formula_coordinates": [ 11, 68.04, 426.02, 224.46, 50.74 ], "formula_id": "formula_21", "formula_text": "µ n = 1 n n i=1 X i . Note that E[µ n ] = E[X] if X i are i.i.d.. P (|µ n -E[µ n ]| ≥ ϵ n = δ) ≤ G(nδ)" }, { "formula_coordinates": [ 11, 108.56, 506.73, 176.24, 18.78 ], "formula_id": "formula_22", "formula_text": "n -E[µ n ]| ≥ δ into δ ≥ µ n -E[µ n ] ≥ -δ." }, { "formula_coordinates": [ 11, 68.33, 569.71, 134.36, 52.79 ], "formula_id": "formula_23", "formula_text": "• µ n = 1 n n i=1 X i • E[µ n ] = E[X 1 ] = . . . = E[X n ] • 1 n n i=1 (X i -E[X i ]) 2 • Var[X 1 ] = . . . = Var[X n ]" }, { "formula_coordinates": [ 11, 342.99, 293.54, 185.24, 87.77 ], "formula_id": "formula_24", "formula_text": "K i=1 µ i E[n i ] by T µ * - K i=1 µ i E[n i ] = K i=1 (µ * -µ i )E[n i ] = K i=1 ∆ i E[n i ]." }, { "formula_coordinates": [ 11, 358.62, 479.4, 173.35, 40.05 ], "formula_id": "formula_25", "formula_text": "P (S n -E[S n ] ≥ ϵ) ≤ exp - 2ϵ 2 n i=1 (ui-li) 2 . P (E[S n ] -S n ≥ ϵ) ≤ exp - 2ϵ 2 n i=1 (ui-li) 2 ." }, { "formula_coordinates": [ 11, 320, 556.22, 235.91, 93.34 ], "formula_id": "formula_26", "formula_text": "P (µ n -E[µ n ] ≥ δ) ≤ exp - 2n 2 δ 2 n i=1 (u i -l i ) 2 . 4. UCB1 assumes X i are i.i.d. copies, thus ∀i; u i -l i = c. P (µ n -E[µ n ] ≥ δ) ≤ exp - 2n 2 δ 2 nc 2 = exp - 2nδ 2 c 2 . 5." }, { "formula_coordinates": [ 11, 398.6, 655.39, 93.38, 18.74 ], "formula_id": "formula_27", "formula_text": "δ ≥ µ n -E[µ n ] ≥ -δ." }, { "formula_coordinates": [ 11, 410.67, 694.42, 69.24, 17.29 ], "formula_id": "formula_28", "formula_text": "δ ≥ μi -µ i ≥ δ." }, { "formula_coordinates": [ 12, 99.18, 73.04, 160.96, 17.29 ], "formula_id": "formula_29", "formula_text": "µ i ≥ μi -δ = LCB i (T, n i ) ≥ µ i -2δ." }, { "formula_coordinates": [ 12, 54.38, 106.96, 235.97, 216.19 ], "formula_id": "formula_30", "formula_text": "µ * ≥ μ * -δ = LCB * (T, n * ). Assuming 2δ ≤ ∆ i = µ i -µ * , the second inequality is LCB i (T, n i ) ≥ µ i -2δ ≥ µ i -∆ i = µ * . Therefore LCB i (T, n i ) ≥ µ * ≥ LCB * (T, n * ). 8. Let δ = c 2 log T ni . Then P (µ ni -E[µ ni ] ≥ δ) ≤ exp - 2nc 2 2 log T ni c 2 = T -4 . 9. From 2δ ≤ ∆ i , considering n i is an integer, 2c 2 log T n i ≤ ∆ i ⇔ 4c 2 2 log T n i ≤ ∆ 2 i ⇔ 8c 2 log T ∆ 2 i ≤ 8c 2 log T ∆ 2 i = L ≤ n i ." }, { "formula_coordinates": [ 12, 92.57, 389.01, 174.2, 19.14 ], "formula_id": "formula_31", "formula_text": "P (LCB i (T, n i ) ≤ LCB * (T, n * )) ≤ 2T -4 ." 
}, { "formula_coordinates": [ 12, 82.08, 53.67, 442.58, 653.37 ], "formula_id": "formula_32", "formula_text": "i ] = 1 + T t=K+1 P (i is pulled at time t) ≤L + T t=K+1 P (i is pulled at time t ∧ n i > L) =L + T t=K+1 P (∀j; LCB j (t, n j ) ≥ LCB i (t, n i )) ≤L + T t=K+1 P (LCB * (t, n * ) ≥ LCB i (t, n i )) ≤L + T t=K+1 P (∃u, v; LCB * (t, u) ≥ LCB i (t, v)) ≤L + T t=K+1 t-1 u=1 t-1 v=L P (LCB * (t, u) ≥ LCB i (t, v)) ≤L + T t=K+1 t-1 u=1 t-1 v=L 2t -4 ≤L + ∞ t=1 t u=1 t v=1 2t -4 = L + ∞ t=1 t 2 • 2t -4 =L + 2 ∞ t=1 t -2 = L + 2 • π 6 = L + π 3 ≤c 2 8 log T ∆ 2 i + 1 + π 3 ∵ ⌈x⌉ ≤ x + 1" }, { "formula_coordinates": [ 12, 332.46, 174.25, 228.32, 101.13 ], "formula_id": "formula_33", "formula_text": "T µ * - K i=1 µ i E[n i ] = K i=1 (µ * -µ i )E[n i ] = K i=1 ∆ i E[n i ] ≤ K i=1 ∆ i c 2 8 log T ∆ 2 i + 1 + π 3 ≤ K i=1 c 2 8 log T ∆ i + 1 + π 3 ∆ i ." }, { "formula_coordinates": [ 12, 319.51, 326.39, 238.61, 49.72 ], "formula_id": "formula_34", "formula_text": ")) A distribution p(x) is sub-Gaussian when ∃t > 0; E[exp x 2 /t 2 ] < 2." }, { "formula_coordinates": [ 12, 328.24, 428.76, 214.84, 203.85 ], "formula_id": "formula_35", "formula_text": "p(x) = N (0, σ 2 ) = 1 √ 2πσ 2 exp - x 2 2σ 2 . E[exp x 2 /t 2 ] = R exp x 2 t 2 1 √ 2πσ 2 exp - x 2 2σ 2 dx = 1 √ 2πσ 2 R exp -x 2 1 2σ 2 - 1 t 2 dx = 1 √ 2πσ 2 R exp - x 2 C 2 dx = 1 √ 2πσ 2 R exp -y 2 Cdy x C = y ⇔ dx = Cdy = C √ 2πσ 2 √ π = C √ 2σ 2 ." }, { "formula_coordinates": [ 12, 399.18, 654.58, 79.26, 56.3 ], "formula_id": "formula_36", "formula_text": "1 C 2 = 1 2σ 2 - 1 t 2 ⇔ C 2 = 2σ 2 t 2 t 2 -2σ 2 . To show E[exp x 2 /t 2 ] < 2, E[exp x 2 /t 2 ] = C √ 2σ 2 = t 2 t 2 -2σ 2 < 2, ⇔ t 2 < 4(t 2 -2σ 2 ), ⇔ 8 3 σ 2 < t 2 . □ Definition 2. For a sub-Gaussian RV x, ||x|| = inf t > 0 | E[exp x 2 /t 2 ] < 2 ." }, { "formula_coordinates": [ 13, 78.87, 297.75, 188.64, 89.59 ], "formula_id": "formula_37", "formula_text": "Pr(|S n -E[S n ]| ≤ ϵ) ≥ 2 exp - ϵ 2 n i=1 ||x i || 2 , Pr(S n -E[S n ] ≤ ϵ) ≥ exp - ϵ 2 n i=1 ||x i || 2 , Pr(E[S n ] -S n ≤ ϵ) ≥ exp - ϵ 2 n i=1 ||x i || 2 ." }, { "formula_coordinates": [ 13, 54.38, 480.9, 221.57, 113.77 ], "formula_id": "formula_38", "formula_text": "P (S n -E[S n ] ≥ ϵ) ≤ exp - ϵ 2 n i=1 ||Xi|| 2 . 3. Using δ = ϵ n , P (µ n -E[µ n ] ≥ δ) ≤ exp - n 2 δ 2 n i=1 ||Xi|| 2 . 4. We assume X i = N (µ, σ 2 ), thus ||X i || 2 = 8 3 σ 2 . P (µ n -E[µ n ] ≥ δ) ≤ exp -3n 2 δ 2 8nσ 2 = exp -3nδ 28σ" }, { "formula_coordinates": [ 13, 83.92, 660.22, 188.91, 41.13 ], "formula_id": "formula_39", "formula_text": "P (A : µ ni -E[µ ni ] ≥ δ) ≤ exp - 3n i σ2 log T 8σ 2 = T -3n i σ2 8σ 2 ." }, { "formula_coordinates": [ 13, 406.16, 100.25, 33.21, 14.6 ], "formula_id": "formula_40", "formula_text": "T -3n i σ2 8σ 2" }, { "formula_coordinates": [ 13, 361.14, 201.07, 197.22, 34.58 ], "formula_id": "formula_41", "formula_text": "P (A) ≤ P (A ∧ B) + P (¬B). ∴ P (µ ni -E[µ ni ] ≥ δ) ≤ T -3 8 X + 1 -α." }, { "formula_coordinates": [ 13, 320, 304.79, 236.84, 156.01 ], "formula_id": "formula_42", "formula_text": "P (¬B) = P ( n i σ2 σ 2 < χ 2 1-α,ni ) = χ 2 ( n i σ2 σ 2 < χ 2 1-α,ni | n i ) = 1 -α. 9. From 2δ ≤ ∆ i , assuming n i is an integer and n i ≥ 2, ∆ 2 i ≥ 2σ 2 log T = 2n i σ2 log T n i ≥ 2σ 2 χ 2 1-α,ni log T n i ≥ 2σ 2 χ 2 1-α,2 log T n i = -4σ 2 log α log T n i . ∴ n i ≥ -4σ 2 log α log T ∆ 2 i = L ≥ -4σ 2 log α log T ∆ 2 i ." 
}, { "formula_coordinates": [ 13, 364.68, 511.84, 156.26, 49.26 ], "formula_id": "formula_43", "formula_text": "1 -α = χ 2 (X < χ 2 1-α,n | n = 2) = γ( 2 2 , χ 2 1-α,2 2 ) Γ( 2 2 ) = 1 -e - χ 2 1-α,22" }, { "formula_coordinates": [ 13, 359.92, 597.54, 170.75, 35.46 ], "formula_id": "formula_44", "formula_text": "P (LCB i (T, n i ) ≤ LCB * (T, n * )) ≤ 2(T -χ 2 1-α,n i + 1 -α)." }, { "formula_coordinates": [ 13, 347.86, 675.7, 211.86, 31.35 ], "formula_id": "formula_45", "formula_text": "i ] ≤ L + T t=K+1 P (∃u, v; LCB * (t, u) ≥ LCB i (t, v)) ≤ L + T t=K+1 t-1 u=1 t-1 v=L 2(t -3 8 χ 2 1-α,v + 1 -α) ≤ L + T t=K+1 t-1 u=1 t-1 v=L 2(t -3 8 χ 2 1-α,M + 1 -α) ≤ L + T t=K+1 t u=1 t v=1 2(t -3 8 χ 2 1-α,M + 1 -α) = L + T t=K+1 2(t 2-3 8 χ 2 1-α,M + (1 -α)t 2 ) ≤ L + 2 ∞ t=1 t 2-3 8 χ 2 1-α,M + 2(1 -α) T t=1 t 2 = L + 2C + 2(1 -α) T (T + 1)(2T + 1) 6 ≤ -4σ 2 log α log T ∆ 2 i + 1 (∵ ⌈x⌉ ≤ x + 1) + 2C + (1 -α)T (T + 1)(2T + 1) 3 . C is a convergent series when 2 - 3 8 χ 2 1-α,M < -1 ⇔ 8 < χ 2 1-α,M" }, { "formula_coordinates": [ 17, 73.82, 377.53, 216.2, 95.89 ], "formula_id": "formula_46", "formula_text": "n ′ = s n then if g(n) > g(n ′ ) then continue Lock n ′ , S(n) ← S(n ′ ), Q ← Q ∪ {n, n ′ } else Compute h(s n ) # Evaluation Q ← Q ∪ {n} while n ← Q.POPMAX()" } ]
10.1007/S10462-020-09855-0/TABLES/
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b96", "b72", "b75", "b70", "b204", "b123", "b115", "b31", "b195", "b45", "b7", "b107", "b47", "b77", "b26", "b109", "b57", "b104", "b32", "b2", "b149", "b127" ], "table_ref": [ "tab_0" ], "text": "Digital images are complex in nature and exhibit high-level information, such as objects, scenes, and patterns (Khan et al. 2021a). This information can be analyzed and interpreted by computer vision algorithms to extract meaningful insights about the image content, such as recognizing objects, tracking movements, extracting features, etc. Computer vision has been an active area of research due to its applications in various fields (Bhatt et al. 2021). However, extracting high-level information from image data can be challenging due to variations in brightness, pose, background clutter, etc.\nThe emergence of convolutional neural networks (CNNs) has brought about a revolutio nar y transformation within the realm of computer vision. These networks have been successfully applied to a diverse range of computer vision tasks (Liu et al. 2018;Khan et al. 2020Khan et al. , 2022Khan et al. , 2023;;Zahoor et al. 2022), especially image recognition (Sohail et al. 2021a;Zhang et al. 2023a), object detection (Rauf et al. 2023), and segmentation (Khan et al. 2021c). CNNs gained popularity due to their ability to automatically learn features and patterns from raw images (Simonyan and Zisserman 2014; Agbo-Ajala and Viriri 2021). Generally, local patterns, known as feature motifs are systematically distributed throughout the images. Different filters in the convolutional layers are specified to capture diverse feature motifs, while pooling layers in the CNNs are utilized for dimensionality reduction and to incorporate robustness against variations. This local-leve l processing of CNNs may result in a loss of spatial correlation, which can impact their performance when dealing with larger and more complex patterns.\nRecently in computer vision, there has been some shift toward transformers, after they were first introduced by Vaswani et al. in 2017 for text processing applications (Vaswani et al. 2017a).\nIn 2018, Parmer et al., exploited transformers for image recognition tasks, where they demonstrated outstanding results (Parmar et al. 2018). Since then, there has been a growing interest in applying transformers to various vision-related applications (Liu et al. 2021b). In 2020, Dosovitskiy et al., introduced a transformer architecture, Vision Transformer (ViT), specifica lly designed for image analysis, which showed competitive results (Dosovitskiy et al. 2020). ViT models work by dividing an input image into a certain number of patches, each patch is subsequently flattened and fed to a sequence of transformer layers. The transformer layers enable the model to learn the relationships between the patches and their corresponding features, allowing it to identify feature motifs on a global scale in the image. Unlike CNNs that have a local receptive field, ViTs utilize its self-attention module to model long-range relationships, which enables them to capture the global view of an image (Ye et al. 2019;Guo et al. 2021). The global receptive field of ViTs helps them retain the global relationship and thus identify complex visual patterns distributed across the image (Bi et al. 2021;Wu et al. 2023b). In this context, Maurício et al. 
have reported that ViTs may show promising results as compared to CNNs in various applications (Zhang et al. 2021a; Maurício et al. 2023). In addition to the difference in their design and the way they capture visual patterns (shown in Fig. 1), CNNs and ViTs also differ in their inductive biases. CNNs heavily rely on the correlation among neighboring pixels, whereas ViTs assume minimal prior knowledge, making them significantly dependent on large datasets (Han et al. 2023). While ViT models have produced outstanding results on object recognition, classification, semantic segmentation, and other computer vision tasks (Kirillov et al. 2023; Dehghani et al. 2023), they are not a one-size-fits-all solution. In the case of small training data, despite the large learning capacity of ViTs, they may show limited performance as compared to CNNs (Morra et al. 2020; Jamali et al. 2023). In addition, their large receptive field demands significantly more computation. Therefore, the concept of Hybrid Vision Transformers (HVTs), also known as CNN-Transformers, was introduced to combine the power of both CNNs and ViTs (Maaz et al. 2023). These hybrid models leverage the convolutional layers of CNNs to capture local features, which are then fed to ViTs to gain global context using the self-attention mechanism. HVTs have shown improved performance in many image recognition tasks.
Recently, several interesting surveys have been conducted to discuss architectural and implementational advancements in transformers (Liu et al. 2021b; Du et al. 2022; Islam 2022; Aleissaee et al. 2022; Ulhaq et al. 2022; Shamshad et al. 2023). Most of these survey articles either focus on specific computer vision applications or delve into discussions on transformer models specifically developed for Natural Language Processing (NLP) applications. In contrast, this survey paper emphasizes recent developments in HVTs (CNN-Transformers) that combine concepts from both CNNs and transformers. It provides a taxonomy and explores various applications of these hybrid models. Furthermore, this survey also presents a taxonomy for general ViTs and aims to thoroughly classify the emerging approaches based on their core architectural designs.
The paper begins with an introduction to the essential components of ViT networks and then discusses various recent ViT architectures. The reported ViT models are broadly classified into six categories based on their distinct features. Additionally, a detailed discussion on HVTs is included, highlighting their focus on leveraging the advantages of both convolutional operations and multi-attention mechanisms. The survey covers the recent architectures and applications of HVTs in various computer vision tasks. Moreover, a taxonomy is presented for HVTs, classifying them based on the way these architectures incorporate convolution operations in combination with self-attention mechanisms. This taxonomy divides HVTs into seven major groups, each of which reflects a different way of taking advantage of both the convolutional and multi-attention operations. Frequently used abbreviations are listed in Table 1.
The paper is structured as follows (illustrated in Fig. 2): Section 1 presents a systematic understanding of the ViT architecture, highlighting its dissimilarities with CNNs and the advent of HVT architectures. 
Moving on, section 2 covers the fundamental concepts used in different ViT variants, while section 3 and section 4 provide a taxonomy of recent ViT and HVT architectures, respectively. Section 5 focuses on the usage of HVTs, particularly in the area of computer vision, and section 6 presents current challenges and future directions. Finally, in section 7, the survey paper is concluded. " }, { "figure_ref": [], "heading": "Fundamental Concepts in ViTs", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Patch embedding", "publication_ref": [ "b31" ], "table_ref": [], "text": "Patch embedding is an important concept in the ViT architecture. It involves converting the image patches into vector representations, which enables ViT to process images as sequences of tokens using a transformer-based approach (Dosovitskiy et al. 2020). The input image is partitioned into fixed-size non-overlapping parts, which are flattened into one-dimensional vectors and projected to a higher-dimensional feature space using a linear layer with D embedding dimensions (Equation 1). This approach enables ViT to learn the long-range dependencies between different patches, allowing it to attain promising results on tasks that involve images.
X_patch^(N×D) = R(I_image^(A×B×C))    Eq. 1
The input image is I_image with size A × B × C, R(·) is the reshaping function that produces the N patches X_patch of size D, where N = A/P × B/P, D = P × P × C, P is the patch size, and C is the number of channels." }, { "figure_ref": [], "heading": "Positional embedding", "publication_ref": [], "table_ref": [], "text": "ViTs utilize positional encoding to add positional information into the input sequence and retain it throughout the network. The sequential information between patches is captured through position embeddings, which are incorporated within the patch embeddings. Since the development of ViTs, numerous position embedding techniques have been suggested for learning sequential data (Jiang et al. 2022). These techniques fall into three categories:" }, { "figure_ref": [], "heading": "Absolute Position Embedding (APE)", "publication_ref": [ "b10" ], "table_ref": [], "text": "The positional embeddings are integrated into the patch embeddings by using APE before the encoder blocks:
X = X_patch ⊕ X_pos    Eq. 2
where X_pos ∈ R^((N+1)×D) denotes the positional embeddings, N is the number of patches, and D represents the dimension of an embedding. It is possible to train X_pos as a single set or as two sets of learnable positional embeddings (Carion et al. 2020)." }, { "figure_ref": [], "heading": "Relative Position Embedding (RPE)", "publication_ref": [], "table_ref": [], "text": "The Relative Position Embedding (RPE) technique is primarily used to incorporate information related to relative position into the attention module (Wu et al. 2021b). This technique is based on the idea that the spatial relationships between patches carry more weight than their absolute positions. To compute the RPE value, a lookup table based on learnable parameters is used. The lookup process is determined by the relative distance between patches. Although the RPE technique is extendable to sequences of varying lengths, it may increase training and testing time (Chu et al. 2021b)." }, { "figure_ref": [], "heading": "Convolution Position Embedding (CPE)", "publication_ref": [], "table_ref": [], "text": "The Convolutional Position Embedding (CPE) method takes into account the 2D nature of the input sequences. 2D convolution is employed to gather position information, using zero-padding to take advantage of the 2D nature (Islam et al. 
2021). Convolutional Position Embeddings (CPE) can be used to incorporate positional data at different stages of the ViT. The CPE can be introduced specifically to the self-attention modules (Wu et al. 2021a), the Feed-Forward Network (FFN) (Li et al. 2021c; Wang et al. 2021b), or in between two encoder layers (Chu et al. 2021a)." }, { "figure_ref": [], "heading": "Attention Mechanism", "publication_ref": [], "table_ref": [], "text": "The core component of the ViT architecture is the self-attention mechanism, which plays a crucial role in explicitly representing the relationships between entities within a sequence. It calculates the significance of one item to the others by representing each entity in terms of the global contextual information and capturing the interactions between them (Vaswani et al. 2017b). The self-attention module transforms the input sequence into three different embedding spaces, namely query, key, and value. The sets of key-value pairs together with the query vectors are taken as inputs. The output vector is calculated as a weighted sum of the values, where the weights are obtained by applying the softmax operator to a scoring function over the queries and keys (Equation 3).
Attention(Q, K, V) = softmax(Q · K^T / √d_k) · V    Eq. 3
where Q, V, and K^T are the query, value, and transposed key matrices, respectively, 1/√d_k is the scaling factor, and d_k is the dimension of the key matrix." }, { "figure_ref": [], "heading": "Multi-Head Self-Attention (MSA)", "publication_ref": [], "table_ref": [], "text": "The limited capacity of a single-head self-attention module often leads to its focus on only a few positions, potentially overlooking other important positions. To address this limitation, MSA is employed. MSA utilizes parallel stacking of self-attention blocks to increase the effectiveness of the self-attention layer (Vaswani et al. 2017b). It captures a diverse range of complex interactions among the sequence elements by assigning various representation subspaces (query, key, and value) to the attention layers. The MSA consists of multiple self-attention blocks, each equipped with learnable weight matrices for the query, key, and value sub-spaces. The outputs of these blocks are then concatenated and projected to the output space using the learnable parameter W_O. This enables the MSA to focus on multiple portions of the sequence and to effectively capture the relationships in all areas. The mathematical representation of the attention process is given below:
MSA(Q, K, V) = Concat(head_1, head_2, ..., head_h) · W_O    Eq. 4
head_i = Attention(Q_i, K_i, V_i), with i = 1, 2, ..., h    Eq. 5
Self-attention's capability to dynamically compute filters for every input sequence is a significant advantage over convolutional processes. Unlike convolutional filters, which are often static, self-attention can adjust to the particular context of the input data. Self-attention is also robust to changes in the number of input points or their permutations, which makes it a good choice for handling irregular inputs. Traditional convolutional procedures, on the other hand, are less adaptable to handling inputs with variable objects and require a grid-like structure, like 2D images. Self-attention is a powerful tool for modeling sequential data and has been effective in various tasks including NLP (Khan et al. 2021b)." }, { "figure_ref": [], "heading": "Transformer layers", "publication_ref": [], "table_ref": [], "text": "A ViT encoder consists of several layers to process the input sequence. 
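Before describing the remaining encoder components, the attention operations in Equations 3-5 can be made concrete with the following minimal PyTorch sketch of scaled dot-product attention and a multi-head self-attention layer. The embedding dimension, number of heads, and token count used here are illustrative assumptions rather than the settings of any particular model.

import torch
import torch.nn as nn
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, heads, N, d_k); output: softmax(Q K^T / sqrt(d_k)) V  (Eq. 3)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ v

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, dim=768, num_heads=12):
        super().__init__()
        assert dim % num_heads == 0
        self.h, self.d_k = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)   # joint projection to query, key, value
        self.proj = nn.Linear(dim, dim)      # output projection, playing the role of W_O

    def forward(self, x):                    # x: (batch, N, dim)
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.h, self.d_k).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]     # each: (batch, heads, N, d_k)
        out = scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(b, n, d)   # concatenate the h heads (Eq. 4)
        return self.proj(out)

tokens = torch.randn(2, 197, 768)            # e.g. 196 patch tokens plus one class token
print(MultiHeadSelfAttention()(tokens).shape)  # torch.Size([2, 197, 768])

Here the joint qkv projection provides the per-head sub-spaces Q_i, K_i, V_i of Eq. 5, and the final proj layer corresponds to the output projection W_O in Eq. 4.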
These layers comprise the MSA mechanism, a feed-forward neural network (FFN), residual connections, and normalization layers. These components are arranged to create a unified block that is repeated several times to learn a complex representation of the input sequence." }, { "figure_ref": [], "heading": "Feed-forward network", "publication_ref": [ "b31" ], "table_ref": [], "text": "A transformer-specific feed-forward network (FFN) is employed to obtain more complex attributes from the input data. It contains multiple fully connected layers with a nonlinear activation function, such as GELU, in between the layers (Equation 6). The FFN is utilized in every encoder block after the self-attention module. The hidden layer of the FFN usually has a dimensionality of 2048. These FFN (MLP) layers are local and translationally equivariant, in contrast to the global self-attention layers (Dosovitskiy et al. 2020).
FFN(X) = b_2 + W_2 · σ(b_1 + W_1 · X)    Eq. 6
In Eq. 6, the non-linear activation function GELU is represented by σ. The weights of the network are represented as W_1 and W_2, whereas b_1 and b_2 correspond to the layer-specific biases." }, { "figure_ref": [], "heading": "Residual connection", "publication_ref": [], "table_ref": [], "text": "The sub-layers in the encoder/decoder block (MSA and FFN) utilize a residual link to improve performance and strengthen the information flow. The original input of each sub-layer is added to its output vector as additional information. The residual connection is then followed by a layer-normalization operation (Equation 7).
X_output = LayerNorm(X ⊕ O_SL)    Eq. 7
where X is the original input, O_SL is the output of each sub-layer, and ⊕ represents the residual connection." }, { "figure_ref": [], "heading": "Normalization layer", "publication_ref": [ "b76" ], "table_ref": [], "text": "There are various methods for layer normalization, such as pre-layer normalization (Pre-LN) (Kim et al. 2023), which is utilized frequently. In Pre-LN, the normalization layer is placed prior to the MSA or FFN and inside the residual connection. Other normalization procedures, including batch normalization, have been suggested to enhance the training of transformer models; however, they might not be as efficient due to changes in the feature values (Jiang et al. 2022)." }, { "figure_ref": [], "heading": "Hybrid Vision Transformers (CNN-Transformer Architectures)", "publication_ref": [ "b125", "b110", "b168", "b168", "b44", "b49", "b43", "b211" ], "table_ref": [], "text": "In the realm of computer vision, ViTs have gained popularity, but compared to CNNs, they still lack image-specific inductive bias, often referred to as prior knowledge (Seydi and Sadegh 2023). This inductive bias includes characteristics like translation and scale invariance due to the shared weights across different spatial locations (Moutik et al. 2023). In CNNs, the locality, translational equivariance, and two-dimensional neighborhood structure are ingrained in every layer throughout the whole model. Additionally, the kernel leverages the correlation between neighboring pixels, which facilitates the extraction of good features quickly (Woo et al. 2023). On the other hand, in ViTs, the image is split into linear patches (tokens) that are fed into encoder blocks through linear layers to model global relationships in the images. However, linear layers lack effectiveness in extracting local correlation (Woo et al. 
2023).\nMany HVT designs have focused on the efficiency of convolutions in capturing local features in images, especially at the start of the image processing workflow for patching and tokenizatio n (Guo et al. 2023). The Convolutional Vision Transformer (CvT), for instance, uses a convolutional projection to learn the spatial and low-level information in image patches. It also utilizes a hierarchical layout with a progressive decrease in token numbers and an increase in token width to mimic the spatial downsampling effect in CNNs (Wu et al. 2021a). Similar ly, Convolution-enhanced Image Transformers (CeiT) leverage convolutional operations to extract low-level features via an image-to-token module (Yuan et al. 2021a). A novel sequence pooling technique is presented by the Compact Convolutional Transformer (CCT), which also integrates conv-pool-reshape blocks to carry out tokenization (Hassani et al. 2021). It also showed an accuracy of about 95% on smaller datasets like CIFAR10 when trained from scratch, which is generally difficult for other traditional ViTs to achieve.\nSeveral recent studies have investigated ways to enhance the local feature modeling capabilities of ViTs. LocalViT employs depthwise convolutions to improve the ability to model local features (Li et al. 2021c). LeViT uses a CNN block with four layers at the beginning of the ViT architecture to gradually increase channels and improve efficiency at inference time (Graham et al. 2021).\nSimilar methods are employed by ResT, however, to manage fluctuating image sizes, depth-wise convolutions, and adaptive position encoding are used (Zhang and Yang 2021).\nWithout additional data, CoAtNets' unique architecture of depthwise convolutions and relative self-attention achieves outstanding ImageNet top-1 accuracy (Dai et al. 2021). In order to create stronger cross-patch connections, Shuffle Transformer provides a shuffle operation (Huang et al. 2021b) and CoaT is a hybrid approach that incorporates depthwise convolutions and crossattention to encode relationships between tokens at various scales (Xu et al. 2021a). Another method \"Twins\" builds upon PVT by incorporating separable depthwise convolutions and relative conditional position embedding (Chu et al. 2021a). Recently, MaxVit, hybrid architecture, introduced the idea of multi-axis attention. Their hybrid block consists of MBConv-based convolution followed by block-wise self-attention and grid-wise self-attention, and when repeated multiple times this block creates a hierarchical representation and is capable of tasks like image generation and segmentation (Tu et al. 2022b). The block-wise and grid-wise attention layers are capable of extracting local and global features respectively. Convolution and transformer model strengths are intended to be combined in these hybrid designs." }, { "figure_ref": [ "fig_4" ], "heading": "Architectural level modifications in ViTs", "publication_ref": [ "b221" ], "table_ref": [ "tab_2" ], "text": "In recent years, different modifications have been carried out in ViT architectures (Zhou et al. 2021). These modifications can be categorized based on their attention mechanism, positiona l encoding, pre-training strategies, architectural changes, scalability, etc. 
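As a concrete illustration of the convolutional tokenization strategy shared, in different forms, by CvT, CeiT, CCT, and LeViT discussed above, the following PyTorch sketch feeds the feature map of a small convolutional stem into a standard transformer encoder. The stem depth, channel widths, and encoder settings are illustrative assumptions rather than the published configurations of any of these models.

import torch
import torch.nn as nn

class ConvStemTokenizer(nn.Module):
    """Convolutions capture local structure; the resulting feature map is
    flattened into a token sequence for the self-attention layers."""
    def __init__(self, in_ch=3, embed_dim=256):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1),
        )

    def forward(self, x):                      # x: (B, 3, H, W)
        f = self.stem(x)                       # (B, D, H/8, W/8)
        return f.flatten(2).transpose(1, 2)    # (B, N, D) token sequence

class TinyHybridViT(nn.Module):
    def __init__(self, embed_dim=256, depth=4, heads=4, num_classes=10):
        super().__init__()
        self.tokenizer = ConvStemTokenizer(embed_dim=embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, heads, dim_feedforward=4 * embed_dim,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        tokens = self.encoder(self.tokenizer(x))
        return self.head(tokens.mean(dim=1))   # sequence pooling instead of a class token

logits = TinyHybridViT()(torch.randn(2, 3, 224, 224))
print(logits.shape)                            # torch.Size([2, 10])

The mean-pooling over tokens used here only stands in for the more elaborate pooling or class-token schemes of the individual models; the point of the sketch is the early-convolution, late-attention division of labor.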
ViT architectures can be broadly classified into five main classes based on the type of architectural modification, namely, (i) patch-based approaches, (ii) knowledge transfer-based approaches, (iii) shifted window-based approaches, (iv) attention-based approaches, and (v) multi-transformer-based approaches.
However, it is observed that introducing the inductive bias of CNNs into ViTs boosts their performance. In this regard, we also classify the HVTs into seven categories based on their structural design. The taxonomy of ViT architectures is shown in Fig. 4. In addition, a comprehensive overview of various online resources relevant to ViTs, including libraries, lecture series, datasets, and computing platforms, is provided in Table 2. " }, { "figure_ref": [], "heading": "Patch-based approaches", "publication_ref": [ "b31" ], "table_ref": [], "text": "In ViT, an image is first divided into a grid of patches, which are subsequently flattened to generate linear embeddings treated as a sequence of tokens (Dosovitskiy et al. 2020). In this regard, we discuss several architectures and their patching criteria." }, { "figure_ref": [], "heading": "Tokens-to-Token Vision Transformer (T2T-ViT)", "publication_ref": [], "table_ref": [], "text": "The Tokens-to-Token Vision Transformer (T2T-ViT) utilizes a fixed-size, iterative approach to generate patches (Yuan et al. 2021b). It applies the proposed Tokens-to-Token module iteratively to generate patches from the images. The generated patches are then fed to the T2T-ViT network to obtain final predictions." }, { "figure_ref": [], "heading": "Transformer in Transformer (TNT-ViT)", "publication_ref": [ "b48" ], "table_ref": [], "text": "Transformer in Transformer ViT (TNT-ViT) presented a multi-level patching mechanism to learn representations from objects with different sizes and locations (Han et al. 2021). It first divides the input image into patches; each patch is then further divided into sub-patches. The architecture subsequently utilizes different transformer blocks to model the relationships between the patches and sub-patches. Extensive experiments showed the efficiency of TNT-ViT in terms of image classification on the ImageNet dataset." }, { "figure_ref": [], "heading": "Deformable Patch-based Transformer (DPT)", "publication_ref": [], "table_ref": [], "text": "The Deformable Patch-based Transformer (DPT) presented an adaptive patch embedding module named DePatch (Chen et al. 2021e). Fixed-size patching in transformers results in a loss of semantic information, which affects the system's performance. In this regard, the proposed DePatch module in DPT splits the images in an adaptive way to obtain patches with variable sizes and strong semantic information." }, { "figure_ref": [], "heading": "CrowdFormer", "publication_ref": [], "table_ref": [], "text": "Yang and co-authors developed a ViT architecture, CrowdFormer, for crowd counting (Yang et al. 2022b). The proposed architecture utilizes its overlap patching transformer block to capture the crowd's global contextual information. To consider images at different scales and in a top-down manner, the overlap patching layer is exploited, where instead of fixed-size patches, a sliding window is used to extract overlapping patches. These overlapping patches tend to retain the relative contextual information for effective crowd counting."
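The patching strategies discussed in this subsection can be illustrated with a short PyTorch sketch that contrasts the fixed, non-overlapping patch embedding of the standard ViT (Equation 1) with an overlapping, sliding-window patching in the spirit of CrowdFormer's overlap patching layer. The patch size, stride, and embedding dimension used here are illustrative assumptions.

import torch
import torch.nn as nn

def fixed_patch_embed(img, patch=16, dim=768):
    # Non-overlapping patches: a convolution with kernel = stride = patch size is
    # equivalent to reshaping P x P x C patches and projecting them to D dimensions.
    proj = nn.Conv2d(img.size(1), dim, kernel_size=patch, stride=patch)
    return proj(img).flatten(2).transpose(1, 2)        # (B, N, D), N = (H/P)*(W/P)

def overlap_patch_embed(img, patch=16, stride=8, dim=768):
    # Overlapping patches: stride < patch size, so neighbouring tokens share pixels
    # and retain more relative context, at the cost of a longer token sequence.
    proj = nn.Conv2d(img.size(1), dim, kernel_size=patch, stride=stride,
                     padding=patch // 2)
    return proj(img).flatten(2).transpose(1, 2)

x = torch.randn(1, 3, 224, 224)
print(fixed_patch_embed(x).shape)     # torch.Size([1, 196, 768])
print(overlap_patch_embed(x).shape)   # more tokens because of the smaller stride

The projections are created inside the functions only to keep the shape comparison self-contained; in a real model they would be module parameters learned jointly with the rest of the network.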
}, { "figure_ref": [], "heading": "Knowledge transfer-based approaches", "publication_ref": [ "b66", "b46" ], "table_ref": [], "text": "This category enlists those ViT architectures that utilize a knowledge transfer (knowledge distillation) approach. It involves conveying knowledge from a larger network to a smaller network, much like a teacher imparting knowledge to a student (Kanwal et al. 2023;Habib et al. 2023). The teacher model is usually a complex model with ample learning capability, while the student model is simpler. The basic idea behind knowledge distillation is to facilitate the student model in acquiring and incorporating the distinctive features of the teacher model. This can be particularly useful for tasks where computational resources are limited, as the smaller ViT model can be deployed more efficiently than the larger one." }, { "figure_ref": [], "heading": "Data-efficient Image Transformers (DeiT)", "publication_ref": [ "b144" ], "table_ref": [], "text": "Deit is a smaller and more efficient version of ViT, which has shown competitive performance on various tasks (Touvron et al. 2020). It uses a pre-trained ViT model for the teacher and a smaller version for the student. Usually, supervised and unsupervised learning is used in combination, with the teacher network supervising the student network to produce the similar results. In addition to the fast inference time and limited computational resources of DeiT, it also has an improved generalization performance because the student model has learned to capture the most important features and patterns in the data, rather than just memorizing the training data." }, { "figure_ref": [], "heading": "Target-aware Transformer (TaT)", "publication_ref": [ "b94" ], "table_ref": [], "text": "Target-aware Transformer (TaT) (Lin et al. 2022) utilized one-to-many relation to exchange information from the teacher to the student network. The feature maps were first divided into a number of patches, then for each patch all the teacher's features were transferred to all the student features rather than employing correlation between all spatial regions. All the features inside a patch were then averaged into a single vector to make the knowledge transfer computationa lly efficient." }, { "figure_ref": [ "fig_6" ], "heading": "Tiny Vision Transformer (TinyViT)", "publication_ref": [], "table_ref": [], "text": "Wu et al. suggested a fast distillation methodology along with a novel architecture, known as TinyViT (Wu et al. 2022a). Their main concept was to convey the learned features of the large pre-trained models to the tiny ones during pre-training (Fig. 5). The output logits of the instructor models were reduced and stored in addition to the encoded data augmentations on the disc beforehand to save memory and computational resource. Student model then employs a decoder to re-construct the saved data augmentations and knowledge is transferred via the output logits with both the models trained independently. Results demonstrated TinyViT's effectiveness on large-scale test sets. " }, { "figure_ref": [], "heading": "Shifted window-based approaches", "publication_ref": [ "b18" ], "table_ref": [], "text": "Several ViT architectures have adopted the shifted window-based approach to enhance their performance. This approach was first introduced by Liu et al. in their Swin Transformer (Liu et al. 2021c). The Swin Transformer has a similar architecture to ViT but with a shifted windowing scheme, as shown in Song et al. 
proposed a novel ViT architecture for visual object tracking, named CSWinTT which utilizes cyclic shifting window-based attention at multi-scales (Song et al. 2022b). This approach enhances pixel attention to window attention, and enables cross-window multi-scale attention to aggregate attention at different scales. This ensures the integrity of the tracking object, and generates the best fine-scale match for the target object. Moreover, the cyclic shifting technique expands the window samples with positional information, which leads to greater accuracy and computational efficiency. By incorporating positional information into the attention mechanis m, the model is better equipped to handle changes in the object's position over time, and can track the object more effectively. Overall, the proposed architecture has shown promising results in improving the accuracy and efficiency of visual object tracking using ViT-based models." }, { "figure_ref": [], "heading": "Attention-based approaches", "publication_ref": [ "b31", "b62", "b18" ], "table_ref": [], "text": "Numerous ViT architectures have been proposed that modify the self-attention module to enhance their performance. Some of these models utilize dense global attention mechanisms (Vaswani et al. 2017a;Dosovitskiy et al. 2020), while other utilize sparse attention mechanisms (Jiang et al. 2021;Liu et al. 2021c;Dai et al. 2021) to capture global-level dependencies in the images with no spatial correlation. These type of attention mechanisms are known to be computationa lly expensive. A number of works have been done to improve the attention modules in terms of performance and computational complexity (Tu et al. 2022b)." }, { "figure_ref": [], "heading": "Class attention layer (CaiT)", "publication_ref": [ "b145" ], "table_ref": [], "text": "Touvron et al. introduced a new approach to improve the performance of the deep transfor mers (Touvron et al. 2021). Their architecture, named, CaiT contains a self-attention module, and a class attention module. The self-attention module is just like a normal ViT architecture, but the class token (class information) is not added in the initial layers. The class embeddings are added in the class attention module, later in the architecture. Their approach showed good results with a few numbers of parameters." }, { "figure_ref": [], "heading": "Deformable attention transformer (DAT)", "publication_ref": [ "b178" ], "table_ref": [], "text": "Xia and co-authors proposed a data-dependent attention mechanism to focus on the regions that are more reliable (Xia et al. 2022). Their architecture has a modular design with each stage have a local attention layer followed by a deformable attention layer in each stage. The proposed DAT architecture showed exemplary performance on benchmark datasets. " }, { "figure_ref": [], "heading": "Multi-transformer-based approaches", "publication_ref": [], "table_ref": [], "text": "Many approaches utilized multiple ViTs in their architecture to improve their performance on various tasks that require multi-scale features. This section discusses such type of multitransformer-based ViT architectures." }, { "figure_ref": [], "heading": "Cross Vision Transformer (CrossViT)", "publication_ref": [], "table_ref": [], "text": "Chen and co-authors proposed a ViT architecture having dual branches which they named as\nCrossViT (Chen et al. 2021a). 
The key innovation in the proposed model is the combination of image patches of different sizes, which enables CrossViT to generate highly domain-rele va nt features. The smaller and larger patch tokens are processed using two separate branches with varying computational complexities. The two branches are fused together multiple times using an efficient cross-attention module. This module enables the knowledge transfer between the branches by creating a non-patch token. The attention map generation is achieved linearly, rather than quadratically, through this process. This makes CrossViT more computationally efficient than other models that use quadratic attention." }, { "figure_ref": [], "heading": "Dual Vision Transformer (Dual-ViT)", "publication_ref": [], "table_ref": [], "text": "The Dual Vision Transformer (Dual-ViT) is a new ViT architecture that reduces the computatio na l cost of self-attention mechanisms (Yao et al.). This architecture utilizes two individual pathways to capture global and local level information. The semantic branch learns the coarse details, whereas the pixel pathway captures more fine details in the images. both of these branches are integrated and trained in parallel. The proposed dualViT showed good results on ImageNet dataset with fewer parameters as compared to other existing models." }, { "figure_ref": [], "heading": "Multiscale Multiview Vision Transformer (MMViT)", "publication_ref": [], "table_ref": [], "text": "Multiscale Multiview Vision Transformers (MMViT) incorporates multiscale feature maps and multiview encodings into transformer models. The MMViT model utilizes several feature extraction stages to process multiple views of the input at various resolutions in parallel. At each scale stage, a cross-attention block is exploited to merge data across various perspectives. This approach enables the MMViT model to obtain high-dimensional representations of the input at multiple resolutions, leading to complex and robust feature representations." }, { "figure_ref": [ "fig_10", "fig_11" ], "heading": "Multi-Path Vision Transformer (MPViT)", "publication_ref": [ "b51", "b130", "b134", "b165" ], "table_ref": [], "text": "MPViT utilize multi-scale patching technique and multi-path-based ViT architecture to learn feature representations at different scales (Lee et al. 2021b). Their proposed multi-scale patching technique utilize CNNs to create feature maps at different scales (Fig. 8). Later they utilize multip le in several image-related tasks (Li et al. 2022). Researchers have proposed a variety of architectures in this field by exploiting different approaches to merge CNNs and transformers (Heo et al. 2021;Si et al. 2022). These approaches include, but are not limited to, adding some CNN layers within transformer blocks (Liu et al. 2021a;He et al. 2023;Wei et al. 2023), introducing a multi-attentio n mechanism in CNNs (Zhang et al. 2021b;Ma et al. 2023b), or using CNNs to extract local features and transformers to capture long-range dependencies (Yuan et al. 2021a(Yuan et al. , 2023a;;Zhang et al. 2023c). In this regard, we define some subcategories based on the pattern of integration of convolution operation with ViT architectures. These include (1) early-layer integration, (2) laterallayer integration, (3) sequential integration, (4) parallel integration, (5) block integration, (6) hierarchical integration, ( 7) attention-based integration, and ( 8) channel boosting integratio n, depicted in Fig. 9. 
" }, { "figure_ref": [], "heading": "Early-layer integration", "publication_ref": [ "b114", "b116" ], "table_ref": [], "text": "Long-range dependencies in the images are well-captured by ViTs, but since there is no inductive bias, training them needs a lot of data. On the other hand, CNNs inherent image-related inductive bias and capture high-level correlation present in the images locally. Therefore, researchers are focusing on designing HVTs, to merge the benefits of both CNNs and transformers (Pan et al. 2022). A lot of work is done to find out the most optimal way to fuse the convolution and attention in the transformer architectures. CNNs can be utilized at different levels to incorporate the locality in the architectures. Various studies have suggested the idea that it is beneficial to first capture local patterns and then learn the long-range dependencies to have a more optimized local and global perspective of an image (Peng et al. 2023)." }, { "figure_ref": [], "heading": "Hybrid ViT", "publication_ref": [ "b31", "b31", "b77", "b175" ], "table_ref": [], "text": "The first ViT architecture was proposed by Dosovitskiy et al. in 2020(Dosovitskiy et al. 2020). In their work, they suggested the idea of considering image patches as sequences of tokens and feeding them into a transformer-based network to perform image recognition tasks. In their paper, they laid the foundation for HVTs by presenting a hybrid version of ViT. In the hybrid architecture, the input sequences were obtained from CNN feature maps instead of raw image patches (LeCun et al. 1989). The input sequence was created by flattening the feature maps spatially, and the patches were produced using a 1x1 filter. They utilized ResNet50 architecture to obtain the feature maps as input to ViT (Wu et al. 2019). In addition, they carried out extensive experiments to identify the optimal intermediate block for feature map extraction." }, { "figure_ref": [ "fig_12" ], "heading": "Detection Transformer (DETR)", "publication_ref": [ "b10" ], "table_ref": [], "text": "Carion et al. proposed a Detection Transformer (DETR) for performing object detection in natural images in 2020 (Carion et al. 2020). In their end-to-end proposed approach, they initially utilized a CNN to process the input before feeding it to the ViT architecture. The feature maps from the CNN backbone were combined with fixed-sized positional embeddings to create input for the ViT encoder. The outputs from the ViT decoder were then fed to a feed-forward network to make final predictions. DETR showed better performance when compared to other revolutionary detection models like Faster R-CNN. Their detailed idea is depicted in Fig. 10." }, { "figure_ref": [], "heading": "LeNet-based Vision Transformer (LeViT)", "publication_ref": [ "b43" ], "table_ref": [], "text": "Graham et al. proposed a hybrid ViT \"LeViT\" in 2021 (Graham et al. 2021). In their model, they utilized convolution layers initially for processing the input. The proposed architecture combined " }, { "figure_ref": [], "heading": "Lateral-layer integration", "publication_ref": [], "table_ref": [], "text": "Models that use a CNN layer or block at the end of the transformer network, such as in place of the last linear layer, or as a post-processing layer fall under this category." }, { "figure_ref": [], "heading": "Dense Prediction Transformer (DPT)", "publication_ref": [ "b121" ], "table_ref": [], "text": "Ranftl et al. 
proposed a dense prediction transformer \"DPT\" for segmentation in natural images.\nDPT has an encoder-decoder-based design, with a ViT as the encoder and a CNN as the decoder.\nIt captured the global perspective and long-range dependencies by the backbone architecture. The learned global representations were then decoded into image-based embeddings taken by utilizing a CNN. Outputs from the ViT-based encoder were decoded at different levels to carry out dense predictions (Ranftl et al. 2021)." }, { "figure_ref": [], "heading": "Local Vision Transformer (LocalViT)", "publication_ref": [], "table_ref": [], "text": "Li et al, in their research, also incorporated locality into ViT architecture for image classificatio n.\nThe architecture of LocalViT is just like a conventional ViT, with its MSA module specialized to capture global-level features of images. The feed-forward network in ViT encoders performs final predictions by taking input from the learned encodings from the attention module. LocalVit modifies its FFN to incorporate local information into its architecture by employing depth-wise convolution (Li et al. 2021c)." }, { "figure_ref": [], "heading": "Sequential integration", "publication_ref": [], "table_ref": [], "text": "This category describes some of the popular hybrid ViTs that leveraged the benefits of CNN in their ViT architectures by following some sequential integration (Wang et al. 2023c)." }, { "figure_ref": [], "heading": "Convolution and Attention Networks (CoAtNet)", "publication_ref": [], "table_ref": [], "text": "Dai et al. carried out an extensive study to find out the most optimal and efficient way of merging convolutions and attention mechanisms in a single architecture to increase its generalization and capacity (Dai et al. 2021). In this regard, they introduced CoAtNet, by vertically stacking several convolutional and transformer blocks. For the convolutional blocks, they employed MBConv blocks which are based on depth-wise convolutions. Their findings suggested that stacking two convolutional blocks followed by two transformers blocks, sequentially showed efficient results." }, { "figure_ref": [], "heading": "CNNs Meet Transformers (CMT)", "publication_ref": [], "table_ref": [], "text": "Despite The architecture of BoTNet is simply a sequential combination of ResNet blocks where the attention mechanism is incorporated in the last three blocks. ResNet block contains two 1x1 convolutions and a 3x3 convolution. The MSA is added in place of 3x3 convolution to capture long-term dependencies in addition to local features." }, { "figure_ref": [], "heading": "Parallel integration", "publication_ref": [], "table_ref": [], "text": "This category includes those HVT architectures that use both CNNs and transformer architectures in parallel and their predictions are then combined in the end (Wang et al. 2021a)." }, { "figure_ref": [ "fig_17" ], "heading": "Convolution-augmented Transformer (Conformer)", "publication_ref": [], "table_ref": [], "text": "In Concatenated output from both branches followed by a pooling layer was then fed to a two-layer classifier for final predictions. Fig. 13 shows their detailed architecture. block should be trained separately (Li et al. 2021a). In every layer within the HyTra search space, they utilized CNN and transformer blocks with various resolutions in parallel and freely selectable form. This broad search area includes conventional CNNs with progressively smaller spatial scales and pure transformers with fixed content lengths." 
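The parallel-integration pattern described above can be sketched as follows: a CNN branch extracts local features while a transformer branch processes patch tokens for global context, and the two pooled descriptors are concatenated and passed to a small classifier. This loosely follows the Conformer description rather than its published architecture; all layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class ParallelHybrid(nn.Module):
    def __init__(self, dim=256, num_classes=10, patch=16):
        super().__init__()
        # Local branch: a plain convolutional feature extractor.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Global branch: patch tokens processed by a transformer encoder.
        self.patch = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, dim_feedforward=4 * dim,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Fusion: concatenated branch descriptors feed a two-layer classifier.
        self.classifier = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                        nn.Linear(dim, num_classes))

    def forward(self, x):
        local = self.cnn(x).flatten(1)                        # (B, dim)
        tokens = self.patch(x).flatten(2).transpose(1, 2)     # (B, N, dim)
        global_feat = self.encoder(tokens).mean(dim=1)        # (B, dim)
        return self.classifier(torch.cat([local, global_feat], dim=1))

print(ParallelHybrid()(torch.randn(2, 3, 224, 224)).shape)    # torch.Size([2, 10])

Late concatenation is only one possible fusion point; models in this category also exchange information between the branches at intermediate stages, which the sketch omits for brevity.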
}, { "figure_ref": [], "heading": "Hierarchical integration", "publication_ref": [], "table_ref": [], "text": "Those HVT architectures that adopt a hierarchical design, similar to CNNs, fall under this category. Many of these models have designed a unified block for integrating CNN and ViT, which is then repeated throughout the architecture (Tu et al. 2022b)." }, { "figure_ref": [], "heading": "Multi-Axis Attention-based Vision Transformer (MaxViT)", "publication_ref": [], "table_ref": [], "text": "MaxViT is a variant of the ViT architecture that was introduced by " }, { "figure_ref": [], "heading": "Convolutional Vision Transformer (CvT)", "publication_ref": [], "table_ref": [], "text": "CvT was introduced by Wu et al. in 2021 (Wu et al. 2021a). The architecture of CvT contained several stages like CNNs to make up a hierarchical framework. They added convolution in their architecture in two ways. At first, they used a convolutional token embedding to extract token sequences, which not only incorporated locality in the network but also shortened the sequence length gradually. Secondly, they proposed a convolutional projection that used depth-wise separable convolution to replace the linear projection before each self-attention block in the encoder block. CvT outperformed other approaches for image recognition." }, { "figure_ref": [], "heading": "Vision-Friendly Transformer (Visformer)", "publication_ref": [], "table_ref": [], "text": "Visformer was introduced as a vision-friendly transformer in 2020 (Chen et al. 2021d) presenting a modular design for efficient performance. The architecture had several modifications to a conventional ViT network. In Visformer, global average pooling was employed in place of classification token and layer normalization was replaced with batch normalization. In addition, they utilized convolutional blocks inspired by ResNeXt (Xie et al.) instead of self-attention in each stage to efficiently capture both spatial and local features. However, to model the global dependencies they adopted self-attention in the last two stages. Another notable modification in Visformer's architecture was the addition of 3x3 convolutions in the MLP block." }, { "figure_ref": [ "fig_18" ], "heading": "Convolution-Transformer Network (ConTNet)", "publication_ref": [], "table_ref": [], "text": "A novel Convolution-Transformer Network (ConTNet) is proposed for computer vision tasks to address the challenges faced in this area. The ConTNet is implemented by stacking multiple ConT blocks (Yan et al.) (shown in Fig. 15). The ConT block treats the standard transformer encoder (STE) as an independent component similar to a convolutional layer. Specifically, a feature map is divided into several patches of equal size and each patch is flattened to a (super) pixel sequence, which is then input to the STE. After reshaping the patch embeddings, the resulting feature maps are then passed on to the next convolutional layer or to the STE module." }, { "figure_ref": [], "heading": "Attention-based integration", "publication_ref": [], "table_ref": [], "text": "This section discusses those HVT architectures, which have utilized CNNs in their attention mechanism to incorporate locality." }, { "figure_ref": [], "heading": "Evolving Attention with Residual Convolutions (EA-AA-ResNet)", "publication_ref": [], "table_ref": [], "text": "Due to the limited generalizability of independent self-attention layers in capturing underlying dependencies between tokens, Wang et al. 
extended the attention mechanism by adding convolutional modules (Wang et al.). Specifically, they adopted a convolutional unit with residual connections to generalize the attention maps in each layer by exploiting the knowledge inher ited from previous layers, named as Evolving Attention (EA). The proposed EA-AA-ResNet architecture extends attention mechanisms by bridging attention maps across different layers and learning general patterns of attention using convolutional modules." }, { "figure_ref": [], "heading": "ResNet Transformer (ResT)", "publication_ref": [ "b211" ], "table_ref": [], "text": "A hybrid architecture that integrates convolution operation in its attention mechanism, allowing it to capture both global and local features effectively (Zhang and Yang 2021). The authors utilized a new efficient transformer block in their architecture where they replaced the conventional MSA block with its efficient variant. In the proposed efficient multi-head self-attention, they employed depth-wise convolution to reduce the spatial dimensions of the input token map before computing the attention function." }, { "figure_ref": [], "heading": "Convolution-Enhanced Image Transformer (CeiT)", "publication_ref": [], "table_ref": [], "text": "CeiT was proposed by Yuan et al. in and Oxford-102) but is also computationally efficient as compared to ViT." }, { "figure_ref": [], "heading": "Channel boosting-based integration", "publication_ref": [], "table_ref": [], "text": "Channel boosting (CB) is an idea used in DL to increase the representation learning ability of CNN models. In CB, besides the original channels, boosted channels are generated using transfer " }, { "figure_ref": [], "heading": "Empirical comparison of different methods", "publication_ref": [], "table_ref": [ "tab_4", "tab_6" ], "text": "In this section, we present a brief yet comprehensive empirical comparison of several ViT and HVT architectures that have demonstrated exceptional performance across various computer vision tasks. To get insights into their strengths and weaknesses, we have provided a detailed overview in Table 3 andTable 4. In addition, we have also highlighted the primary modificatio ns made in each model, along with the underlying rationale, as per their taxonomy. The deformable attention may have increased computational cost.\nTo enable better modeling of local image structures and deformable objects." }, { "figure_ref": [], "heading": "ImageNet", "publication_ref": [ "b140" ], "table_ref": [], "text": "84.8% Top-1 Acc @ 384x384 SeT (Sun et al. 2022) Factorizes the spatial attention into pixelwise and patch-wise attention, which reduces the computational cost." }, { "figure_ref": [], "heading": "May have limited representation capacity compared", "publication_ref": [], "table_ref": [], "text": "to non-separable approaches.\nTo effectively capture both fine-grained and coarse-grained features." }, { "figure_ref": [], "heading": "ImageNet", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "83.3% Top-1 Acc @ 224x224", "publication_ref": [ "b121" ], "table_ref": [], "text": "Multi-Transformer-Based Approaches CrossViT (Chen et May require more memory than some of the best CNNs.\nTo combine the strengths of both CNNs and ViTs, while avoiding some of their weaknesses.\nImageNet 82.6% Top-1 Acc @ 384x384 CPVT (Chu et al. 
2021b) CODE\nUtilized a new scheme for conditional position encodings for boosted performance May have increased complexity and computational requirements.\nTo make positional embeddings translational invariant, they used depthwise convolutions\nImageNet 82.7% Top-1 Acc @ 224x224\nLateral-Layer Integration DPT (Ranftl et al. 2021) CODE" }, { "figure_ref": [], "heading": "Utilized a ViT-based and a CNN-based decoder", "publication_ref": [], "table_ref": [], "text": "Higher memory requirements compared to some traditional dense prediction models.\nTo demonstrate the suitability of transformers for dense prediction tasks by capturing long-range dependencies between pixels." }, { "figure_ref": [], "heading": "ADE20K", "publication_ref": [], "table_ref": [], "text": "49.02% IoU @ 520x520 LocalViT (Li et al. 2021c As compared to traditional CNNs, it may have complex to train and deploy.\nTo combine the strengths of both CNNs and selfattention mechanisms, while mitigating their weaknesses." }, { "figure_ref": [], "heading": "ImageNet", "publication_ref": [], "table_ref": [], "text": "84.1% Top-1 Acc @ 224x224 Mobile-Former (Chen et al. 2022e) CODE\nThe bridge between MobileNet and transformer enables bidirectional fusion of local and global features.\nThe light-weight cross attention in the bridge may not be able to fully capture the interactions between local and global features.\nTo provide parallel interaction of MobileNet and transformer, allowing the model to achieve a good balance between efficiency and representation power." }, { "figure_ref": [], "heading": "ImageNet", "publication_ref": [], "table_ref": [], "text": "79.3% Top-1 Acc @ 224x224 BossNAS (Li et al. 2021a) CODE\nCan effectively search for hybrid CNNtransformer architectures.\nCan be computationally expensive to train, especially for large search spaces.\nLarge and diverse search space of hybrid architectures makes it difficult for traditional NAS methods to be effective.\nImageNet 82.5% Top-1 Acc @ 512x512 Hierarchical Integration MaxViT (Tu et al. 2022b) CODE Introduces a number of novel ideas, including multi-axis attention, hierarchical stacking, and linearcomplexity global attention.\nCan be more difficult to train because of the complex attention mechanism and may require more data to achieve good results.\nTo enable local and global feature extraction through self-attention in linear time." }, { "figure_ref": [], "heading": "ImageNet", "publication_ref": [], "table_ref": [], "text": "86.70% Top-1 Acc @ 512x512 CvT (Wu et al. 2021a) CODE Combines convolutional and MSA blocks, striking a balance between efficiency and accuracy.\nPerformance may be influenced by the specific configuration of the CvT architecture." }, { "figure_ref": [], "heading": "Integrates convolutional and", "publication_ref": [ "b129" ], "table_ref": [], "text": "ViT elements for effective vision tasks.\nImageNet 87.7% Top-1 Acc @ 384x384 Visformer (Chen et al. 2021d) CODE Optimizes transformer architecture for vision tasks, considering image-specific challenges.\nMay require further architectural advancements to achieve state-of-theart performance.\nTo tailor the transformer architecture for visionspecific challenges. (Shi et al. 2023). Their proposed approach demonstrated good accuracy values for the road recognition task." 
}, { "figure_ref": [], "heading": "Image generation", "publication_ref": [ "b38", "b4", "b67", "b112", "b122", "b143", "b224", "b166", "b28", "b220" ], "table_ref": [], "text": "Image generation is an interesting task in computer vision and can serve as a baseline for many downstream tasks (Frolov et al. 2021). Generative adversarial networks (GANs) are widely used for image generation in various domains (Arjovsky et al. 2017;Karras et al. 2019). Additiona l ly, transformer-based GANs have shown promising performance in this task (Lee et al. 2021a;Naveen et al. 2021;Rao et al. 2022;Gao et al. 2022b). Recently, researchers have also utilized HVT-based GANs and demonstrated outstanding performance on various benchmark datasets (Tu et al. 2022a;Lyu et al. 2023). Torbunov, et al. reported UVCGAN, a hybrid GAN model, for image generation (Torbunov et al. 2022). The architecture of the UVCGAN model is based on the origina l CycleGAN model (Zhu et al. 2017) with some modifications. The generator of UVCGAN is a hybrid architecture based on a UNet (Weng and Zhu 2015) and a ViT bottleneck (Devlin et al. 2018). Experimental results demonstrated its superior performance compared to earlier best performing models while retaining a strong correlation between the original and generated images.\nIn another work, SwinGAN was introduced for MRI reconstruction by Zhao, et al (Zhao et al. 2023) Zheng et al. in their approach presented a HVT-based GAN network for medical image generation (Zheng et al. 2023). In their approach, named L-former they utilize transformers in the shallow layers and CNNs in the deeper layers. Their approach demonstrated outperformance as compared to conventional GAN architectures." }, { "figure_ref": [], "heading": "Image segmentation", "publication_ref": [ "b30", "b164", "b64", "b126" ], "table_ref": [], "text": "Although CNNs and ViT-based approaches have shown exceptional performance in complex image-related tasks such as image segmentation, there is currently an emphasis on combining the strengths of both approaches to achieve boosted performance (Dolz et al. 2019;Wang et al. 2020Wang et al. , 2022c, b;, b;Jing et al. 2023;Shafri et al. 2023;Yang et al. 2023b) " }, { "figure_ref": [], "heading": "Image Restoration", "publication_ref": [ "b134", "b197" ], "table_ref": [], "text": "A crucial task in computer vision is image restoration, which tends to restore the original image from its corrupted version. Image restoration-based systems have shifted from the use of CNNs to\nViT models (Song et al. 2023), and more recently to HVTs that combine the strengths of both CNNs and transformers (Gao et al. 2022a;Wu et al. 2023c). Yi et al. proposed an auto-encoder based hybrid method to carry out single infrared image blind deblurring (Yi et al. 2023). Their approach utilizes hybrid convolution-transformer blocks for extracting context-related informa tio n between the objects and their backgrounds. Their method was able to focus on global information and overcome the flaws of CNN-based methods. In addition, to retain the textural and spatial information a specialized loss function is designed." }, { "figure_ref": [], "heading": "Feature extraction", "publication_ref": [ "b138" ], "table_ref": [], "text": "Feature extraction is essential in computer vision to identify and extract relevant visual informa tio n from images. 
Initially CNNs were used for this purpose, but now transformers have gained attention due to their impressive results in image classification as well as other applications like pose estimation, and face recognition (Wang et al. 2023d;Zhu et al. 2023a;Su et al. 2023).\nLi and Li, in their work presented a hybrid approach, ConVit, to merge the advantages of both CNNs and transformers for effective feature extraction to identify crop disease (Li and Li 2022).\nThe experimental results of the developed approach showed good performance in plant disease identification task. A cascaded approach was proposed by Li et al. for recaptured scene image identification (Li et al. 2023b). In their approach they initially employed CNN layers to extract local features and later in the deeper layers they utilized transformer blocks to learn global level image representations. High accuracy value of their proposed approach demonstrated its effectiveness in identifying recaptured images. " }, { "figure_ref": [], "heading": "Medical image analysis", "publication_ref": [ "b203" ], "table_ref": [], "text": "CNN-based approaches have been frequently employed for analyzing medical images due to their capability to capture diverse and complex patterns (Zafar et al. 2021;Sohail et al. 2021b;Rauf et al. 2023) " }, { "figure_ref": [], "heading": "Object Detection", "publication_ref": [ "b33", "b10" ], "table_ref": [], "text": "Object detection is a crucial computer vision task with a wide range of real-world applicatio ns such as surveillance, robotics, crowd counting, and autonomous driving (Liu et al. 2023a). The progress of DL has significantly contributed to the advancements in object detection over the years (Er et al. 2023). ViTs have also shown impressive performance in object detection due to its selfattention mechanism that allows them to capture long-range dependencies between image pixels and identify complex object patterns across the entire image (Carion et al. 2020;Chen et " }, { "figure_ref": [], "heading": "Pose Estimation", "publication_ref": [ "b139", "b53", "b9", "b138", "b106" ], "table_ref": [], "text": "Human pose estimation tends to identify important points in various scenarios. Both CNNs and transformers have shown exemplary performance in pose estimation task (Sun et al. 2019;Huang et al. 2019;Cao et al. 2022). Currently researchers are focusing to combine CNNs and transfor mers in a unified method to incorporate both local and global level information for accurate pose estimation (Stoffl et al. 2021;Mao et al. 2021;Li et al. 2021d;Wu et al. 2022b). Zhao et al.\npresented a new Dual-Pipeline Integrated Transformer \"DPIT\" for human pose estimation (Zhao et al. 2022c). In Zhao's approach initially two CNN-based branches are employed to extract local features followed by the transformer encoder blocks to capture long range dependencies in the image (Wang et al. 2022a). In another technique Wang and coauthors used a CNN and a transformer branch to learn local and global image representations, which were then integrated to generate the final output. Their approach demonstrated significant improvement as compared to other existing approaches. Hampali and co-authors developed a hybrid pose estimation method, named as Keypoint Transformer (Hampali et al. 2021). In the proposed method they utilized both CNN and transformer-based modules to efficiently estimate human joints as 2D keypoints.\nExperimental results showed exemplary results of this approach on datasets includ ing InterHand2.6M." 
}, { "figure_ref": [], "heading": "Challenges", "publication_ref": [], "table_ref": [], "text": "HVTs have demonstrated exceptional performance not only in computer vision but also in various other domains. Nonetheless, effectively integrating convolutional operations into the transformer architecture poses several challenges for HVTs. Some of these challenges include:
• The MSA mechanism in transformers and the convolution operation in CNNs both rely on dense matrix multiplication to capture data dependencies. As a result, HVT architectures (CNN-Transformers) may face high computational complexity and memory overhead, and may encounter difficulties when modelling dense applications such as volumetric analysis and segmentation.
• Training HVTs requires powerful hardware resources such as GPUs due to their computational complexity. This can limit their deployment in real-world applications, especially on edge devices, because of hardware constraints and the associated costs.
• A major challenge faced by HVT architectures is the efficient merging of the features learned by the transformer and convolutional layers. While the transformer layers learn global features that are independent of spatial location, convolutional layers learn local features that are spatially correlated. In architectural terms, the efficient unification of MSA and CNN layers can potentially result in improved performance in various vision tasks.
• HVTs can process complex image data accurately due to their high learning capacity. However, this also means that they require large training datasets to effectively learn and generalize from the data. This poses a challenge, particularly in the medical imaging domain, where obtaining a large amount of annotated data is often difficult and time-consuming. The need for extensive labeled data can be a significant obstacle, consuming valuable resources and time, and impeding the development and application of HVTs in medical imaging." }, { "figure_ref": [], "heading": "Future directions", "publication_ref": [], "table_ref": [], "text": "HVTs are large models with billions of parameters, which necessitates lightweight architectures. Their high complexity may lead to latency at inference time and a significant overhead in energy consumption. There is a need to explore new and innovative design principles for efficient HVTs with high inference speed to enable their practical deployment in real-world applications, edge devices, and computationally limited systems such as satellites. Knowledge distillation emerges as a promising approach for generating data-efficient and compact models by transferring knowledge from high-capacity models to simpler ones; a minimal sketch of this idea is given below.
HVTs combine the strengths of CNNs and transformers, making significant advancements in image analysis and computer vision. However, to fully utilize their potential, it is important to explore suitable ways in which the convolution and self-attention mechanisms can be integrated for specific vision applications. This involves an in-depth analysis of integration methods based on their suitability for various contexts, such as early-layer integration, lateral-layer integration, sequential integration, parallel integration, hierarchical integration, attention-based integration, and channel boosting-based integration.
The local and global processing capabilities of HVTs make them quite promising for a wide range of vision applications, with potential benefits beyond vision-related tasks.
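As noted in the future-directions discussion above, knowledge distillation is one route towards compact, data-efficient HVTs. The snippet below is a minimal sketch of the standard response-based (soft-logit) distillation objective; the temperature, the loss weighting, and the restriction to logits only (no feature or attention distillation) are illustrative assumptions rather than a recipe taken from the surveyed works.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: the student mimics the teacher's softened class distribution.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard-target term: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Typical training step (large teacher HVT frozen, compact student trainable):
#   with torch.no_grad():
#       teacher_logits = teacher_hvt(images)
#   loss = distillation_loss(student(images), teacher_logits, labels)
#   loss.backward()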
To further enhance the performance of HVTs, it is important to gain a deeper understanding of image content and the associated operations, which can help in devising better hybrid and deep architectures. The investigation of hand-crafted operators in combination with the hybrid and dynamic feature extraction mechanisms of CNN-Transformer architectures may be particularly important in the near future. Developing new and effective blocks that use both convolution and self-attention mechanisms is also a promising area for research.
In summary, the future of HVTs looks bright, with immense potential for various applications in the fields of image analysis and computer vision. In our opinion, future work should also focus on possible integration methods that merge self-attention and convolution layers within HVT architectures for specific vision tasks. This focus should extend to understanding image content and operations, developing effective blocks that combine convolution and self-attention, and utilizing multimodality and multitasking in ViT and HVT architectures." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "The ViT has gained considerable attention in research due to its promising performance in specific image-related tasks. This success can be attributed to the MSA module integrated into ViT architectures, enabling the modeling of global interactions within images. To enhance their performance, various architectural improvements have been introduced. These improvements can be categorized as patch-based, knowledge distillation-based, attention-based, multi-transformer-based, and hybrid approaches. This paper not only examines the architectural taxonomy of ViTs but also explores the fundamental concepts underlying ViT architectures.
While ViTs have impressive learning capacities, they may suffer from limited generalization in some applications due to their lack of the inductive bias that captures local relations in images. To address this, researchers have developed HVTs, also known as CNN-Transformers, which leverage both self-attention and convolution mechanisms to learn both local and global information.
Several studies have proposed ways to integrate convolution-specific inductive bias into transformers to improve their generalization and capacity. Integration methodologies include early-layer integration, lateral-layer integration, sequential integration, parallel integration, hierarchical integration, and channel boosting-based integration. In addition to introducing a taxonomy for HVT architectures based on their integration methodology, we also provide an overview of how they are used in various real-world computer vision applications. Despite the current challenges, we believe that HVTs have enormous potential due to their capability to perform learning at both local and global levels." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work has been conducted at the Pattern Recognition Lab, Pakistan Institute of Engineering and Applied Sciences, Islamabad, Pakistan. We extend our sincere gratitude to Dr. Abdul Majid and Dr. Naeem Akhter of DCIS, PIEAS for their invaluable assistance in improving the manuscript.
Additionally, we acknowledge the Pakistan Institute of Engineering and Applied Sciences (PIEAS) for providing a healthy research environment, which led to the work presented in this article."
}, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Visual Genome: http://visualgenome.org/ Open images: https://ai.googleblog.com/2016/09/introducing-open-imagesdataset.html Places: https://places.csail.mit.edu/index.html Youtube-8M: https://research.google.com/youtube8m/index.html CelebA: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html Wiki Links: https://code.google.com/archive/p/wiki-links/downloads EXCITEMENT dataset: https://github.com/hltfbk/EOP-1.2.1/wiki/Data-Sets Ubuntu Dialogue Corpus: https://www.kaggle.com/datasets/rtatman/ubuntu-dialoguecorpus ConvAI3: https://github.com/DeepPavlovAdmin/convai Large Movie Review Dataset: https://ai.stanford.edu/~amaas/data/sentiment/ CIFAR10: https://www.cs.toronto.edu/~kriz/cifar.html Indoor Scene Recognition: http://web.mit.edu/torralba/www/indoor.html Computer Vision Datasets: https://computervisiononline.com/datasets MonuSeg: https://monuseg.grand-challenge.org/Data/ Oxford-IIIT Pets: https://www.robots.ox.ac.uk/~vgg/data/pets/ Fashion MNIST: https://research.zalando.com/welcome/mission/researchprojects/fashion-mnist Blogs/ Repositories High-quality, free online articles and blogs Github.io: http://jalammar.github.io/illustrated-transformer/ Github: https://github.com/huggingface/pytorch-image-models Viso Ai: https://viso.ai/deep-learning/vision-transformer-vit/ Github: https://github.com/google-research/vision_transformer HuggingFace: https://huggingface.co./docs/transformers/model_doc/vit" }, { "figure_ref": [], "heading": "Vision Transformer Advanced by Exploring intrinsic Inductive Bias (ViTAE)", "publication_ref": [], "table_ref": [], "text": "The authors suggested a novel ViT architecture called ViTAE, which combines two different basic cell types (shown in Fig. 14): reduction cells (RC) and normal cells (NC) (Xu et al. 2021b). RCs are used to downscale input images and embed them into enriched multi-scale contextual tokens, while NCs are used to model local and long-term dependencies concurrently within the token sequence. The underlying structure of these two types of cells is similar, consisting of parallel attention modules, convolutional layers, and an FFN. The RC injects contextual information into the tokens by utilizing several convolutions with different dilation rates in its pyramid reduction module; a rough sketch of the two cell types is given below.
The authors also presented a more optimized version, ViTAEv2, which showed better performance than the earlier method (Zhang et al. 2022d)." }, { "figure_ref": [], "heading": "Figure 14: Architectural diagram of ViTAE", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Competing interests", "publication_ref": [], "table_ref": [], "text": "The authors declare no competing financial and/or non-financial interests regarding the described work." } ]
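To make the ViTAE cell description above a little more concrete, the following is a rough, illustrative sketch of the two cell types: a reduction cell that embeds multi-scale context through parallel dilated convolutions, and a normal cell that runs an attention branch and a convolution branch in parallel before an FFN. The channel sizes, dilation rates, stride, and the simple additive fusion used here are assumptions made only for illustration; this is not the official ViTAE implementation.

import torch
import torch.nn as nn

class PyramidReductionCell(nn.Module):
    # Downscales the image and injects multi-scale context into the output tokens.
    def __init__(self, in_ch=3, dim=64, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, dim, kernel_size=3, stride=4, padding=d, dilation=d)
            for d in dilations
        ])

    def forward(self, x):                                  # x: (B, C, H, W)
        feat = sum(b(x) for b in self.branches)            # fuse the multi-scale branches
        return feat.flatten(2).transpose(1, 2)             # (B, N, dim) token sequence

class NormalCell(nn.Module):
    # Models long-range (attention) and local (convolution) dependencies in parallel.
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.ffn = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                 nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, t):                                  # t: (B, N, dim)
        a, _ = self.attn(t, t, t)                          # global (attention) branch
        c = self.conv(t.transpose(1, 2)).transpose(1, 2)   # local (conv) branch over the token axis
        t = t + a + c                                      # parallel, additive fusion
        return t + self.ffn(t)

tokens = PyramidReductionCell()(torch.randn(2, 3, 224, 224))   # (2, 56*56, 64)
out = NormalCell()(tokens)                                     # (2, 3136, 64)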
Vision transformers have become popular as a possible substitute for convolutional neural networks (CNNs) in a variety of computer vision applications. With their ability to focus on global relationships in images, these transformers offer large learning capacity. However, they may suffer from limited generalization, as they do not tend to model the local correlations in images. Recently, hybridization of the convolution operation and the self-attention mechanism has emerged in vision transformers to exploit both local and global image representations. These hybrid vision transformers, also referred to as CNN-Transformer architectures, have demonstrated remarkable results in vision applications. Given the rapidly growing number of hybrid vision transformers, it has become necessary to provide a taxonomy and explanation of these hybrid architectures. This survey presents a taxonomy of the recent vision transformer architectures and, more specifically, that of the hybrid vision transformers. Additionally, the key features of these architectures, such as the attention mechanisms, positional embeddings, multi-scale processing, and convolution, are discussed. In contrast to previous survey papers that primarily focus on individual vision transformer architectures or CNNs, this survey uniquely emphasizes the emerging trend of hybrid vision transformers. By showcasing the potential of hybrid vision transformers to deliver exceptional performance across a range of computer vision tasks, this survey sheds light on the future directions of this rapidly evolving architecture.
A survey of the Vision Transformers and their CNN-Transformer based Variants
[ { "figure_caption": "Figure 1 :1Figure 1: Depiction of the multi self-attention (MSA) mechanism and convolution operation. MSA tends to capture global relationships, whereas the convolution operation has a local receptive filed to model pixel neighborhood information in the images.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Layout of the different sections of the survey paper.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 33Figure 3 illustrates the fundamental architectural layout of a transformer. Initially, the input image", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Detail architecture of ViT. Input image is at first divided into patches, then their linearly transformed embeddings are combined with positional information and processed through multiple encoder/decoder blocks for downstream task.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Taxonomy of Vision ViT architectures.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": ". Positiona l embedding and class tokens are added to these embeddings, which are then given to the encoder for feature learning. Several studies exploited different ways of patch extraction mechanisms to improve the performance of ViTs. These mechanisms include fixed-size patching (Wang et al. 2021c), dynamic patching (Ren et al. 2022; Zhang et al. 2022c), and overlapping patching (Wang", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Detailed workflow of knowledge transfer-based approach (TinyViT).", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6scheme, as shown in Fig.6. It controls the self-attention computation by computing it within each", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Architectural diagram of Swin Transformer (shifted window-based approach).", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Architecture of patch-based Separable Transformer (SeT), which modified its MSA layer by introducing two diverse attention blocks. 3.4.3. 
Patch-based Separable Transformer (SeT)", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Architecture of Multi-transformer-based MPViT, which utilize multiple transformers in its architecture.", "figure_data": "", "figure_id": "fig_10", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Taxonomy of Hybrid ViTs.", "figure_data": "", "figure_id": "fig_11", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Architecture of DETR, with CNN integration as an initial stem block.", "figure_data": "", "figure_id": "fig_12", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Architecture of CPVT, which incorporated CNN in their PEG block.", "figure_data": "", "figure_id": "fig_13", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "their successful performance, ViTs face three main issues, a) inability to capture low-level features by considering correlation in the local neighborhood, b) expensive in terms of computation and memory consumption due to their MSA mechanism, c) and fixed sized input tokens, embedding. To overcome these issues, there comes the boom of hybridization of CNNs and ViTs after 2021. Guo et al. in 2021 also proposed a hybrid ViT, named CMT (CNNs Meet Transformers) (Guo et al. 2021). Inspired by CNNs (Tan and Le 2019), CMT also consists of an initial stem block followed by the sequential stacking of the CNN layer and CMT block. The designed CMT block was inspired by the ViT architecture, therefore contained a lightweight MSA block in place of conventional MSA, and the MLP layer was replaced with an inverted residual feed-forward network (IRFFN). In addition, a local perception unit (LPU) is added in the CMT block to increase the representation capacity of the network. The architecture is shown in Fig. 12.", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Architecture of CMT, with integration of CNN in sequential order Bottleneck Transformers (BoTNet)", "figure_data": "", "figure_id": "fig_15", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "2021, Peng et al. conducted a study to perform visual recognition in natural images. In this regard, they proposed an architecture named, Conformer (Peng et al. 2021). Due to the popularity of ViTs the architecture of Conformer was also based on ViTs. To improve the perception capacity of the network, they integrated the benefits of CNN and to multi-head self-attention mechanis m. Conformer, a hybrid ViT, contained two separate branches, a CNN branch to capture local perceptions and a transformer branch to capture global-level features. Subsequent connections were built from the CNN branch to the transformer branch to make each branch local-globa l context-aware. Final predictions were obtained from a CNN classifier and a transformer classifier. Cross-entropy loss function was used to train each classifier. Conformer showed better performance than other outperforming ViT architectures such as DeiT, and VIT. MobileNet-based Transformer (Mobile-Former) Chen et al. proposed a concurrent hybrid ViT architecture with two different pathways for a CNN and transformer (Chen et al. 2022e). 
Like other hybrid ViTs, Mobile-Former employed the CNN model to learn spatial correlation and used a transformer to capture long-term dependencies in the images, thus fusing both the local correlation and global representations. The CNN architecture was based on MobileNet, which uses inverted residual blocks with a reduced number of parameters. Information among both the branches was synchronized using connections, which kept the CNN pathway aware of global information and the transformer aware of local informatio n.", "figure_data": "", "figure_id": "fig_16", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Architecture of Mobile-former (CNN and transformer with parallel integration)", "figure_data": "", "figure_id": "fig_17", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Architecture of ConTNet, which integrated CNN and ViT in its ConT block to form a hierarchical architecture.", "figure_data": "", "figure_id": "fig_18", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "2021 in their paper \"Incorporating Convolution Designs into Visual Transformers\"(Yuan et al. 2021a). The proposed CeiT combined the benefits of CNNs and ViTs in extracting low level features, capturing locality, and learning long-range dependencies. In their CeiT, they made three main advancements in a conventional ViT architecture. They modified the patch extraction scheme, the MLP layer and added a last layer above the ViT architecture. For patch extraction they proposed an Image-to-Tokens (I2T) module in which they utilized CNNbased blocks to process the inputs. Instead of utilizing raw input images, they used low-level features learned from the initial convolutional blocks to extract patches. I2T contained convolutional, max pooling, and batch normalization layers in its architecture to fully leverage the benefits of CNNs in ViTs. They utilized a Locally-enhanced Feed-Forward (LeFF) layer in place of the conventional MLP layer in the ViT encoder, in which depth-wise convolutions were utilized to capture more spatial correlation. In addition, a last class token attention (LCA) layer was devised to systematically combine the outputs from different layers of ViT. CeiT not only showed promising results on several image and scene recognition datasets (including ImageNet, CIFAR,", "figure_data": "", "figure_id": "fig_19", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "learning-based auxiliary learners to capture diverse and complex patterns from images. CB-based CNNs (CB-CNN) have shown outstanding performance in various vision-related tasks. In a study by Ali et al., they proposed a CB-based HVT architecture (Ali et al. 2023). In CB-HVT they utilized CNNs and ViT-based auxiliary learners to generate boosted channels. The CNN-based channels captured local-level diversity in the image patterns, whereas Pyramid Vision Transfor mer (PVT)-based channels learned global-level contextual information. The authors evaluated CB-HVT on the lymphocyte assessment dataset, where it showed reasonable performance. Overview of their architecture is shown in Fig. 16.", "figure_data": "", "figure_id": "fig_20", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: Overview of CB-HVT, where PVT (a VIT) is combined within CNN architecture using channel boosting.", "figure_data": "", "figure_id": "fig_21", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": ". 
They utilized Swin Transformer U-Net-based generator network and CNN-based discriminator network. The generated MRI images by SwinGAN showed good reconstructio n quality due to its ability to capture more effective information. Tu et al. combined Swin transformer and CNN layers in their proposed SWCGAN (Tu et al. 2022a). In their architecture they utilized CNN layers initially to capture local level features and then in later layers util ized Residual Dense Swin Transformer Blocks \"RDST\" to capture global level features. The developed method showed good reconstruction performance compared to existing approaches in remote sensing images. Recently, Bao et al. proposed a spatial attention-guided CNN-Transformer aggregation network (SCTANet) to reconstruct facial images (Bao et al. 2023b). They utilized both CNN and transformer in their Hybrid Attention Aggregation (HAA) block for deep feature extraction. Their experimental results demonstrated better performance than other techniques.", "figure_data": "", "figure_id": "fig_22", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "To hasten the convergence of the training process and achieve superior image deblurring outcomes, the study also incorporated a multi-stage training technique and mixed error function. In another technique, Chen et al. developed an efficient image restoration architecture called Dual-former, which combines the local modeling ability of convolutions and the global modeling ability of self-attention modules (Chen et al. 2022d). The proposed architecture achieves superior performance on multiple image restoration tasks while consuming significantly fewer GFLOPs than previously presented methods. To address the issue of high computational complexity Fang et al. utilized a hybrid network, HNCT, for lightwe ight image super-resolution (Fang et al. 2022). HNCT leverages the advantages of both CNN and ViT and extract features that consider both local and non-local priors, resulting in a lightweight yet effective model for super-resolution. Experimental results demonstrate that HNCT's improved results as compared to existing approaches with fewer parameters. Zhao et al. developed a hybrid denoising model, called Transformer Encoder and Convolutional Decoder Network (TECDNet), for efficient and effective real image denoising (Zhao et al. 2022b). TECDNet attained outstanding denoising results while maintaining relatively low computational cost. Recently, Chen et al.presented an end-to-end HVT-based image fusion approach for infrared and visible image fusion(Chen et al. 2023b). 
The proposed technique consists of a CNN module with two branches to extract coarse features, and a ViT module to obtain global and spatial relationships in the image.", "figure_data": "", "figure_id": "fig_23", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Table of abbreviations.", "figure_data": "AbbreviationDefinitionCNNConvolutional Neural NetworkViTVision TransformerNLPNatural Language ProcessingHVTHybrid Vision TransformerDLDeep LearningMSAMulti-Head Self-AttentionFFNFeed Forward NetworkMLPMulti-Layer PerceptronAPEAbsolute Position EmbeddingRPERelative Position EmbeddingCPEConvolution Position EmbeddingPre-LNPre-Layer NormalizationGELUGaussian Error Linear UnitCBChannel BoostingCvTConvolutional Vision TransformerLeFFLocally-enhanced Feed ForwardCeiTConvolution Enhanced Image TransformerI2TImage To TransformerMoFFNMobile Feed Forward NetworkCCTCompact Convolutional TransformerLocal ViTLocal Vision TransformerLeViTLeNet-Based Vision TransformerPVTPyramid Vision TransformerMaxViTMulti-Axis Attention-based Vision TransformerMBConvMobile inverted bottleneck convolutionDPTDeformable Patch-based TransformerTNTTransformer iN TransformerDeiTData-efficient Image TransformerTaTTarget aware TransformerCaiTClass attention in image TransformerIRFFNInverted Residual Feed Forward NetworkLPULocal Perceptron UnitResNetResidual NetworkSTEStandard Transformer EnoderSE-CNNSqueeze and Excitation CNNFPNFeature Pyramid NetworkUAVUnmanned Aerial VehicleEAEvolving AttentionRCReduction CellsNCNormal CellsConTNetConvolution Transformer NetworkFCTFully Convolutional Transformer", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "𝑿 = 𝑿 𝑝𝑎𝑡𝑐ℎ + 𝑿 𝑝𝑜𝑠 Eq. 2 where, the transformer's input is represented by 𝑋 , 𝑋 𝑝𝑎𝑡𝑐ℎ represents patch embeddings, and 𝑋 𝑝𝑜𝑠 is the learnable position embeddings. Both 𝑋 𝑝𝑎𝑡𝑐ℎ & 𝑋 𝑝𝑜𝑠 have dimensions (𝑁 + 1) × 𝐷, where D", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Distinct online resources relevant to DL and ViT.", "figure_data": "CategoryDescriptionSourceOnline availabledatasets andcomputationalresourcesCloudComputingSolutions", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "mechanism as compared to previous architectures. A new hybrid block was introduced as a basic element, which consists of MBConv-based convolution and Multi-Axis based attention. The basic hybrid block was repeated over multiple stages to obtain a hierarchica l", "figure_data": "backbone, similar to CNN-based backbones that can be used for classification, object detection,segmentation, and generative modeling. MaxViT can see locally and globally across the wholenetwork, including the earlier stages.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Empirical comparison of various ViT architectures, based on their strengths, weaknesses, rationale and performance on benchmark datasets (For comparison, we havereported the results of the best performing variants of the mentioned architecture).", "figure_data": "StrengthArchitecture(ArchitecturalWeaknessRationaleMetricmodification)Patch-Based ApproachesT2T-ViT (Yuan et al. 
2021b) CODEUtilized Token-to-Token module for iterative patching.May have higher computational requirements due to increased tokenization.To improve the representation by focusing on tokens instead of patches.ImageNet 83.3% Top-1 Acc @ 384×384TNT-ViT (Han et al. 2021) CODEUtilizes multi-level patching, to capture objects with spatial and size variations.May require more parameters, leading to increased model size.To capture the attetnion inside the local patches.ImageNet 82.9% Top-1 Acc @ 224x224DPT (Chen et al. 2021e) CODEUsed DePatch, to have patches of variable sizes.Could be sensitive to the selection of deformable patches, impacting performance.For better handling of irregular-shaped objects in the image.ImageNet 81.9% Top-1 Acc @ 224x224Models global contextCrowdFormer (Yang et al. 2022b)by learning features at different scales, effective for crowd counting.Could be computationally expensive for real-world applications.The global context is incorporated to deal with the uneven distribution of crowds.NWPU 67.1 MAE, 301.6 MSE @ 512x512", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Empirical comparison of several HVT architectures, based on their strengths, weaknesses, rationale and performance on benchmark datasets (For comparison, we have reported the results of the best performing variants of the mentioned architecture)", "figure_data": "ArchitectureStrengthWeaknessRationaleMetricEarly-Layer Integration", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Huang et al. 2021a;GE et al. 2021; Yang et al. 2022a;Leong et al. 2022; Zhao et al. 2022a;Raghavendra et al. 2023; Zhu et al. 2023b). Xiong et al. proposed a hybrid multi-moda l approach based on ViT and CNN to enhance fine-grained 3D object recognition(Xiong and Kasaei 2022). Their approach encodes the global information of the object using the ViT network and the local representation of the object using a CNN network through both RGB and depth views of the object. Their technique outperforms both CNN-only and ViT-only baselines. In another technique Tiong et al. presented a novel hybrid attention vision transformer (HA-ViT) to carry out faceperiocular cross identification (Tiong et al. 2023). HA-ViT utilizes depth-wise convolution and convolution-based MSA concurrently in its hybrid attention module in parallel to integrate local and global features. The proposed methodology outperforms three benchmark datasets in terms of Face Periocular Cross Identification (FPCI) accuracy. Wang et al. proposed a novel approach for visual place recognition using an HVT-based architecture (Wang et al. 2022d). Their method aims to improve the robustness of the visual place recognition system by combining both CNN and ViT to capture local details, spatial context, and high-level semantic information. To recognize vehicles Shi et al. developed a fused network that used SE-CNN architecture for feature extraction followed by the ViT architecture to capture global contextual information", "figure_data": "ViTAE (Xu et al. 2021b) CODE al. 2021b;generalization. improving architecture, inductive bias-aware Introduces anvisual reasoning require fine-grained effective for tasks that May not be asadaptability. generalization and biases to enhance model To incorporate the inductive@ 384x384 Top-1 Acc 83.0% ImageNetConTNet (Yan et al.) CODEMore robust to changes in the input data than transformer-based models.Model complexity may increase due to the combination of transformers. 
convolution andTo obtain hierarchical features using both convolution and vision-related tasks. transformers for variousImageNet 81.8% Top-1 Acc @ 224x224Attention-Based IntegrationEA-AA-ResNet (Wang et al.) CODEEvolves attention mechanisms with residual convolutions, enhancing feature representation.May have higher computational cost compared to standard convolutional models.To improve feature representation through evolving attention with residual convolutions.ImageNet 79.63% Top-1 Acc @ 224x224The introduction ofResT (Zhang and Yang 2021) CODEMemory-Efficient MSA, Spatial Attention for positional encoding and Stack of conv. layers for patchMay be computationally more expensive than traditional CNN-based models.To provide novel and innovative techniques that make transformer models efficient and versatile for visual recognition tasks.ImageNet 83.6% Top-1 Acc @ 224x224embedding.CeiT (Yuan et al. 2021a) CODEEnhances ViTs with convolutional operations, improving efficiency and performance.Model complexity may increase with the addition of convolutional operations.To improve local features extraction of ViTs with convolutional components.ImageNet 83.3% Top-1 Acc @ 384x384Channel Boosting-Based IntegrationLYSTO88.0%CB-HVT (Ali et al. 2023)Employs channel boosting for better feature representation, improving model accuracy.Increased computational cost due to additional channel boosting computations.To enhance feature representation through channel boosting in a hybrid architecture.F-Score @ 256x256 NuClick 82.0%F-ScoreImageNet @ 256x25682.19%Top-1 Acc@ 224x224", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": ". In this regard,Wang et al. presented a new semantic segmentation method called DualSeg for grapes segmentation(Wang et al. 2023a). Their method combines Swin Transformer and CNN to leverage the advantages of both global and local features. In another work, Zhou and co-authors proposed a hybrid approach named SCDeepLab to segment tunnel cracks(Zhou et al. 2023b). Their approach outperformed other CNN-only and transformer-only-based models in segmenting cracks in tunnel lining.Feng et al. carried out segmentation recognition in metal couplers to detect fracture surfaces(Feng et al. 2023). To this end, they proposed an end-to-end HVT-based approach by utilizing a CNN for automatic feature extraction and a hybrid convolution and transformer (HCT) module for feature fusion and global modeling. Recently, Xia and Kim developed Mask2Former, an HVT approach, to address the limitations of ViT or CNN-based systems(Xia and Kim 2023). The developed approach achieved better results as compared to other techniques on both the ADE20K and Cityscapes datasets. Li et al. proposed an HVT-based method called MCAFNet for semantic segmentation of remote sensing images(Li et al. 2023d).", "figure_data": "", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Li and co-authors developed HVT architecture to detect defects in strip steel surfaces. Their approach utilized a CNN module, followed by a patch embedding block and two transformer blocks to extract high domain relevant features. Their experiments showed good classification performance as compared to existing methods. Recently,", "figure_data": "Rajani et al. in their approach, proposed an encoder-decoder approach for categorizing differe ntseafloor types. Their developed method is a ViT-based architecture with its MLP block replacedwith CNN-based feature extraction module. 
The modified architecture achieves outstandingresults while meeting real-time computational requirements.", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": ". However, due to the need for modeling global level image representations, researchers have been inspired to utilize Transformers in the medical image analysis domain(Obeid et al. 2022;Cao et al. 2023;Zou and Wu 2023; Li et al. 2023c;Zidan et al. 2023;Xiao et al. 2023).Recently, several studies have proposed integrating CNNs and transformers to capture both local and global image features in medical images, allowing for more comprehensive analysis(Tragakis et al.;Springenberg et al. 2022; Wu et al. 2022c;Jiang and Li 2022; Bao et al. 2023a;Dhamija et The developed approach showed outstanding performance on various medical challenge datasets as compared to other existing architectures. In another work,Heidari, et al. proposed HiFormer, an HVT to capture multi-sca le feature representations by utilizing a Swin Transformer module and a CNN-based encoder(Heidari et al. 2022). Experimental results demonstrated the effectiveness of HiFormer in segmenting medical images in various benchmark datasets. In their paper, Yang and colleagues presented a novel hybrid approach called TSEDeepLab, which combines convolutional operations with transformer blocks to analyze medical images(Yang et al. 2023a). Specifically, the approach utilizes convolutional layers in the early stages for learning local features, which are then processed by transformer blocks to extract global patterns. Their approach demonstrated exceptional segmentation accuracy and strong generalization performance on multiple medical image segmentation datasets.", "figure_data": "", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" }, { "figure_caption": "CNN object detector. ViT-FRCNN showed improved detection results with a better generaliza tio n ability(Beal et al. 2020). Chen et al. introduced a single-stage hybrid detector for detection in remote sensing images. their proposed approach, MDCT leveraged both the CNNs and transformers in its architecture and showed better performance as compared to other single-sta ge detectors(Chen et al. 2023c). Lu et al. developed an HVT-based approach for object detection in unmanned aerial vehicle (UAV) images(Lu et al. 2023b). The proposed approach utilized a transformer-based backbone to extract features with global level information, which were then fed to FPN for multi-scale feature learning. The proposed method demonstrated good performance as compared to earlier approaches. Yao and his colleagues proposed a fusion network that utilize individual transformer and CNN-based branches to learn global and local level features(Yao et al. ", "figure_data": "2023). Experimental results showed satisfactory performance of the developed method ascompared to other methods.", "figure_id": "tab_12", "figure_label": "", "figure_type": "table" } ]
Asifullah Khan; Zunaira Rauf; Anabia Sohail; Abdul Rehman Khan; Hifsa Asif; Aqsa Asif; Umair Farooq
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "years across a range of vision-based applications", "year": "2018" }, { "authors": " Yao", "journal": "Artif Intell Rev", "ref_id": "b1", "title": "Nevertheless, ViTs have revolutionized the field of computer vision by achieving outstanding performance on various challenging tasks, includ ing image and video recognition", "year": "2019" }, { "authors": "A A Aleissaee; A Kumar; R M Anwer", "journal": "", "ref_id": "b2", "title": "Transformers in Remote Sensing: A Survey", "year": "2022" }, { "authors": "M L Ali; Z Rauf; A Khan", "journal": "Sensors", "ref_id": "b3", "title": "CB-HVTNet: A channel-boosted hybrid vision transformer network for lymphocyte assessment in histopathological images An", "year": "2022" }, { "authors": "M Arjovsky; S Chintala; L ; ) Wasserstein Gan Bottou; H Bao; Y Zhu; Q Li", "journal": "Comput Biol Med", "ref_id": "b4", "title": "Hybrid-scale contextual fusion network for medical image segmentation", "year": "2017" }, { "authors": "Q Bao; Y Liu; B Gang", "journal": "IEEE Trans Multimed", "ref_id": "b5", "title": "SCTANet: A Spatial Attention-Guided CNN-Transformer Aggregation Network for Deep Face Image Super-Resolution", "year": "2023" }, { "authors": "J Beal; E Kim; E Tzeng", "journal": "Electron", "ref_id": "b6", "title": "CNN Variants for Computer Vision: History, Architecture, Application, Challenges and Future Scope", "year": "2020" }, { "authors": "J Bi; Z Zhu; Q Meng", "journal": "IEEE Int Conf Comput Sci Electron Inf Eng Intell Control Technol CEI", "ref_id": "b7", "title": "Transformer in Computer Vision", "year": "2021" }, { "authors": "H Cao; Y Wang; J Chen", "journal": "", "ref_id": "b8", "title": "Swin-Unet: Unet-Like Pure Transformer for Medical Image Segmentation", "year": "2023" }, { "authors": "X Cao; X Li; L Ma", "journal": "", "ref_id": "b9", "title": "AggPose: Deep Aggregation Vision Transformer for Infant Pose Estimation", "year": "2022" }, { "authors": "N Carion; F Massa; G Synnaeve", "journal": "LNCS", "ref_id": "b10", "title": "End-to-End Object Detection with Transformers", "year": "2020" }, { "authors": "C F Chen; Q Fan; R Panda", "journal": "Proc IEEE Int Conf Comput Vis", "ref_id": "b11", "title": "CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification", "year": "2021" }, { "authors": "H Chen; C Li; G Wang", "journal": "Pattern Recognit", "ref_id": "b12", "title": "GasHis-Transformer: A multi-scale visual transformer approach for gastric histopathological image detection", "year": "2022" }, { "authors": "J Chen; X Chen; S Chen", "journal": "Inf Fusion", "ref_id": "b13", "title": "Shape-Former: Bridging CNN and Transformer via ShapeConv for multimodal image matching", "year": "2023" }, { "authors": "J Chen; J Ding; Y Yu; W Gong", "journal": "Neurocomputing", "ref_id": "b14", "title": "THFuse: An infrared and visible image fusion network using transformer and hybrid feature extractor", "year": "2023" }, { "authors": "J Chen; C M Ho", "journal": "", "ref_id": "b15", "title": "MM-ViT: Multi-Modal Video Transformer for Compressed Video Action Recognition", "year": "1910" }, { "authors": "J Chen; H Hong; B Song", "journal": "Remote Sens", "ref_id": "b16", "title": "MDCT: Multi-Kernel Dilated Convolution and Transformer for One-Stage Object Detection of Remote Sensing Images", "year": "2023" }, { "authors": "J Chen; Y Lu; Q Yu", "journal": "Neural Networks", "ref_id": "b17", "title": "TransUNet: Transformers Make Strong Encoders for Medical Image 
Segmentation", "year": "2021" }, { "authors": "S Chen; C Ge; Z Tong", "journal": "", "ref_id": "b18", "title": "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition", "year": "2021" }, { "authors": "Y Chen; X Dai; D Chen", "journal": "Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit", "ref_id": "b19", "title": "Mobile-Former: Bridging MobileNet and Transformer", "year": "2022-06" }, { "authors": "Z Chen; L Xie; J Niu", "journal": "Proc IEEE Int Conf Comput Vis", "ref_id": "b20", "title": "Visformer: The Vision-friendly Transformer", "year": "2021" }, { "authors": "Z Chen; Y Zhu; C Zhao", "journal": "", "ref_id": "b21", "title": "DPT: Deformable Patch-based Transformer for Visual Recognition", "year": "2021" }, { "authors": "M Cheng; H Ma; Q Ma", "journal": "", "ref_id": "b22", "title": "Hybrid Transformer and CNN Attention Network for Stereo Image Super-resolution", "year": "2023" }, { "authors": "X Chu; Z Tian; Y Wang", "journal": "Adv Neural Inf Process Syst", "ref_id": "b23", "title": "a) Twins: Revisiting the Design of Spatial Attention in Vision Transformers", "year": "2021" }, { "authors": "X Chu; Z Tian; B Zhang", "journal": "Adv Neural Inf Process Syst", "ref_id": "b24", "title": "Conditional Positional Encodings for Vision Transformers Dai Z", "year": "2021" }, { "authors": "S Dehghani-Dehcheshmeh; M Akhoondzadeh; S Homayouni", "journal": "Mar Pollut Bull", "ref_id": "b25", "title": "Oil spills detection from SAR Earth observations based on a hybrid CNN transformer networks", "year": "2023" }, { "authors": "M Dehghani; B Mustafa; J Djolonga", "journal": "", "ref_id": "b26", "title": "Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution", "year": "2023" }, { "authors": "Y Deng; Y Meng; J Chen", "journal": "Remote Sens", "ref_id": "b27", "title": "TChange: A Hybrid Transformer-CNN Change Detection Network", "year": "2023" }, { "authors": "J Devlin; M W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b28", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2018" }, { "authors": "T Dhamija; A Gupta; S Gupta", "journal": "Appl Intell", "ref_id": "b29", "title": "Semantic segmentation in medical images through transfused convolution and transformer networks", "year": "2023" }, { "authors": "J Dolz; K Gopinath; J Yuan", "journal": "IEEE Trans Med Imaging", "ref_id": "b30", "title": "HyperDense-Net: A Hyper-Densely Connected CNN for Multi-Modal Image Segmentation", "year": "2019" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov", "journal": "", "ref_id": "b31", "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "year": "2020" }, { "authors": "Y Du; Z Liu; J Li; W X Zhao", "journal": "", "ref_id": "b32", "title": "A Survey of Vision-Language Pre-Trained Models", "year": "2022" }, { "authors": "M J Er; Y Zhang; J Chen; W Gao", "journal": "Artif Intell Rev", "ref_id": "b33", "title": "Ship detection with deep learning: a survey", "year": "2023" }, { "authors": "Y Fan; X Lu; D Li; Y Liu", "journal": "", "ref_id": "b34", "title": "Video-Based emotion recognition using CNN-RNN and C3D hybrid networks", "year": "2016" }, { "authors": "J Fang; H Lin; X Chen; K Zeng", "journal": "IEEE Comput Soc Conf Comput Vis Pattern Recognit Work", "ref_id": "b35", "title": "A Hybrid Network of CNN and Transformer for Lightweight Image Super-Resolution", "year": "2022-06" }, { "authors": "W Fang; F Zhang; V S Sheng; Y Ding", "journal": "Comput Mater Contin", 
"ref_id": "b36", "title": "A method for improving CNN-based image recognition using DCGAN", "year": "2018" }, { "authors": "Q Feng; F Li; H Li", "journal": "Eng Fail Anal", "ref_id": "b37", "title": "Hybrid convolution and transformer network for coupler fracture failure pattern segmentation recognition in heavy-haul trains", "year": "2023" }, { "authors": "S Frolov; T Hinz; F Raue", "journal": "Neural Networks", "ref_id": "b38", "title": "Adversarial text-to-image synthesis: A review", "year": "2021" }, { "authors": "G Gao; Z Xu; J Li", "journal": "", "ref_id": "b39", "title": "CTCNet: A CNN-Transformer Cooperation Network for Face Image Super-Resolution", "year": "2022" }, { "authors": "P Gao; X Yang; R Zhang", "journal": "Neural Networks", "ref_id": "b40", "title": "Generalised Image Outpainting with U-Transformer", "year": "2022" }, { "authors": "Y Gao; M Zhou; D N Metaxas", "journal": "Lect Notes Comput Sci", "ref_id": "b41", "title": "UTNet: A Hybrid Transformer Architecture for Medical Image Segmentation", "year": "2021" }, { "authors": "C Ge; Y Liang; Y Song", "journal": "Adv Neural Inf Process Syst", "ref_id": "b42", "title": "Revitalizing CNN Attention via Transformers in Self-Supervised Visual Representation Learning", "year": "2021" }, { "authors": "B Graham; A El-Nouby; H Touvron", "journal": "Proc IEEE Int Conf Comput Vis", "ref_id": "b43", "title": "LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference", "year": "2021" }, { "authors": "H Guo; M Song; Z Ding", "journal": "Sensors", "ref_id": "b44", "title": "Vision-Based Efficient Robotic Manipulation with a Dual-Streaming Compact Convolutional Transformer", "year": "2023" }, { "authors": "J Guo; K Han; H Wu", "journal": "Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit", "ref_id": "b45", "title": "CMT: Convolutional Neural Networks Meet Vision Transformers", "year": "2021" }, { "authors": "G Habib; T J Saleem; B ; Lall; S D Sarkar; Rad M Lepetit; V ", "journal": "Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit", "ref_id": "b46", "title": "Keypoint Transformer: Solving Joint Identification in Challenging Hands and Object Interactions for Accurate 3D Pose Estimation", "year": "2021" }, { "authors": "K Han; Y Wang; H Chen", "journal": "IEEE Trans Pattern Anal Mach Intell", "ref_id": "b47", "title": "A Survey on Vision Transformer", "year": "2023" }, { "authors": "K Han; A Xiao; E Wu", "journal": "Adv Neural Inf Process Syst", "ref_id": "b48", "title": "Transformer in Transformer", "year": "2021" }, { "authors": "A Hassani; S Walton; N Shah", "journal": "Comput Biol Med", "ref_id": "b49", "title": "HCTNet: A hybrid CNN-transformer network for breast ultrasound image segmentation", "year": "2021" }, { "authors": "M Heidari; A Kazerouni; M Soltany", "journal": "", "ref_id": "b50", "title": "HiFormer: Hierarchical Multi-scale Representations Using Transformers for Medical Image Segmentation", "year": "2022" }, { "authors": "B Heo; S Yun; D Han", "journal": "", "ref_id": "b51", "title": "Rethinking Spatial Dimensions of Vision Transformers", "year": "2021" }, { "authors": "Y J Heo; W H Yeo; B G Kim", "journal": "Appl Intell", "ref_id": "b52", "title": "DeepFake detection algorithm based on improved vision transformer", "year": "2023" }, { "authors": "J Huang; Z Zhu; G ; Huang; K Huang; M Wen; C Wang; L Ling", "journal": "", "ref_id": "b53", "title": "Multi-Stage HRNet: Multiple Stage High-Resolution Network for Human Pose Estimation", "year": "2019" }, { "authors": "Q Huang; C Huang; X Wang; F Jiang", 
"journal": "Inf Sci (Ny)", "ref_id": "b54", "title": "a) Facial expression recognition with gridwise attention and visual transformer", "year": "2021" }, { "authors": "X Huang; J Chen; M Chen", "journal": "Biocybern Biomed Eng", "ref_id": "b55", "title": "FRE-Net: Full-region enhanced network for nuclei segmentation in histopathology images", "year": "2023" }, { "authors": "Z Huang; Y Ben; G Luo", "journal": "", "ref_id": "b56", "title": ") Recent Advances in Vision Transformer: A Survey and Outlook of Recent Work Islam MA", "year": "2021" }, { "authors": "A Jamali; S K Roy; P Ghamisi", "journal": "Int J Appl Earth Obs Geoinf", "ref_id": "b57", "title": "WetMapFormer: A unified deep CNN and vision transformer for complex wetland mapping", "year": "2023" }, { "authors": "G P Ji; M Zhuge; D Gao", "journal": "Mach Intell Res", "ref_id": "b58", "title": "Masked Vision-language Transformer in Fashion", "year": "2023" }, { "authors": "A Jiang; N Yan; F Wang", "journal": "", "ref_id": "b59", "title": "Visible Image Recognition of Power Transformer Equipment Based on Mask R-CNN", "year": "2019" }, { "authors": "K Jiang; P Peng; Y Lian; W Xu", "journal": "J Vis Commun Image Represent", "ref_id": "b60", "title": "The encoding method of position embeddings in vision transformer", "year": "2022" }, { "authors": "S Jiang; J Li", "journal": "Comput Biol Med", "ref_id": "b61", "title": "TransCUNet: UNet cross fused transformer for medical image segmentation", "year": "2022" }, { "authors": "Y Jiang; S Chang; Z Wang", "journal": "Adv Neural Inf Process Syst", "ref_id": "b62", "title": "TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up", "year": "2021" }, { "authors": "Jin W Yu; H Luo; X ", "journal": "", "ref_id": "b63", "title": "CvT-ASSD: Convolutional vision-Transformer Based Attentive Single Shot MultiBox Detector", "year": "2021-11" }, { "authors": "T Jing; Q-H Meng; H-R Hou", "journal": "IEEE Trans Ind Informatics", "ref_id": "b64", "title": "SmokeSeger: A Transformer-CNN coupled model for urban scene smoke segmentation", "year": "2023" }, { "authors": "Y Jing; F Wang", "journal": "", "ref_id": "b65", "title": "TP-VIT: A TWO-PATHWAY VISION TRANSFORMER FOR VIDEO ACTION RECOGNITION", "year": "2022-05" }, { "authors": "N Kanwal; T Eftestøl; F Khoraminia", "journal": "", "ref_id": "b66", "title": "Vision Transformers for Small Histological Datasets Learned Through Knowledge Distillation", "year": "2023" }, { "authors": "T Karras; S Laine; M Aittala", "journal": "Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit", "ref_id": "b67", "title": "Analyzing and Improving the Image Quality of StyleGAN", "year": "2019" }, { "authors": "G Kaur; R Sinha; P K Tiwari", "journal": "Neurosci Informatics", "ref_id": "b68", "title": "Face mask recognition system using CNN model", "year": "2022" }, { "authors": "J Ke; Y Lu; Y Shen", "journal": "Med Image Anal", "ref_id": "b69", "title": "ClusterSeg: A crowd cluster pinpointed nucleus segmentation framework with cross-modality datasets", "year": "2023" }, { "authors": "A Khan; S H Khan; M Saif", "journal": "", "ref_id": "b70", "title": "A Survey of Deep Learning Techniques for the Analysis of COVID-19 and their usability for Detecting Omicron", "year": "2023" }, { "authors": "A Khan; A S Qureshi; N Wahab", "journal": "Comput Intell", "ref_id": "b71", "title": "A recent survey on the applications of genetic programming in image processing", "year": "2021" }, { "authors": "A Khan; A Sohail; U Zahoora; A S Qureshi", "journal": "Artif Intell 
Rev", "ref_id": "b72", "title": "A survey of the recent architectures of deep convolutional neural networks", "year": "2020" }, { "authors": "S Khan; M Naseer; M Hayat", "journal": "ACM Comput Surv", "ref_id": "b73", "title": "Transformers in Vision: A Survey", "year": "2021" }, { "authors": "S H Khan; A Khan; Y S Lee", "journal": "", "ref_id": "b74", "title": "Segmentation of Shoulder Muscle MRI Using a New Region and Edge based Deep Auto-Encoder", "year": "2021" }, { "authors": "S H Khan; N S Shah; R Nuzhat", "journal": "Microscopy", "ref_id": "b75", "title": "Malaria parasite classification framework using a novel channel squeezed and boosted CNN", "year": "2022" }, { "authors": "B J Kim; H Choi; H Jang", "journal": "Pattern Recognit", "ref_id": "b76", "title": "Improved robustness of vision transformers via prelayernorm in patch embedding", "year": "2023" }, { "authors": "A Kirillov; E Mintun; N Ravi", "journal": "Neural Comput", "ref_id": "b77", "title": "Backpropagation Applied to Handwritten Zip Code Recognition", "year": "1989" }, { "authors": "K Lee; H Chang; L Jiang", "journal": "Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit", "ref_id": "b78", "title": "MPViT: Multi-Path Vision Transformer for Dense Prediction", "year": "2021" }, { "authors": "M C Leong; H Zhang; H L Tan", "journal": "Proc IEEE Int Conf Comput Vis", "ref_id": "b79", "title": "BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search", "year": "2021" }, { "authors": "G Li; R Chen; J Zhang", "journal": "Biomed Signal Process Control", "ref_id": "b80", "title": "Fusing enhanced Transformer and large kernel CNN for malignant thyroid nodule segmentation", "year": "2023" }, { "authors": "G Li; H Yao; Y Le; C Qin", "journal": "J Vis Commun Image Represent", "ref_id": "b81", "title": "Recaptured screen image identification based on vision transformer", "year": "2023" }, { "authors": "J Li; J Chen; Y Tang", "journal": "Med Image Anal", "ref_id": "b82", "title": "Transforming medical imaging with Transformers? 
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b2", "b3", "b4", "b5", "b6", "b7", "b8" ], "table_ref": [], "text": "R ECENTLY, the unsupervised domain adaptation (UDA) semantic segmentation of remote sensing images (RSIs) has attracted more and more attention [1]- [3]. Aiming to the knowledge transfer across RSIs domains, the UDA technologies can effectively improve the segmentation accuracy of unlabeled target RSIs, and greatly reduces the tedious workload of labeling RSIs. Benefiting from the development of deep learning, various UDA methods have been proposed, constantly improving the performance of RSIs domain adaptation [4], [5]. The existing methods can be roughly divided into This work was supported in part by the National Natural Science Foundation of China under Grant 42130112, 42101458 and 41801388.\nKuiliang Gao, Anzhu Yu, Xiong You, Wenyue Guo, Ke Li, Ningbo Huang, are with the PLA Strategic Support Force Information Engineering University, Zhengzhou,450001, China. (e-mail: [email protected]) three categories: RSIs style transfer, adversarial learning and self-supervised learning. The style transfer methods take the lead in exploring RSIs domain adaptation, which mainly align source and target RSIs in the input space [6]. Subsequently, with the continuous improvement of generative adversarial networks (GANs), it has become a popular solution to extract the domain-invariant features using adversarial learning [7]. More recently, the self-supervised learning methods have been gradually applied with the stable and efficient training process and superior performance, further improving the segmentation accuracy of target RSIs [8].\nAlthough there have been extensive researches on UDA semantic segmentation of RSIs, the existing methods generally follow an ideal assumption that labeled source domains and unlabeled target domains have exactly the same classes [9], which can be referred to as class symmetry. As shown in Fig. 1 (a), the source class set is equal to the target class set. However, in practical application, it is often difficult and timeconsuming to find a source RSI whose class set is completely consistent with that of target RSI. More commonly, the class set of available source RSIs is different from that of target RSIs, and the two have inclusion or intersection relationship. Fig. 1 (b) shows the case where the target class set includes the source class set, and the latter is actually a subset of the former. Fig. 1 (c) illustrates the intersection relationship between the source class set and target class set, where each domain has a unique class that the other does not. The above two relationships can be collectively referred to as class asymmetry, which is characterized by the fact that the target RSIs contain classes that do not appear in the source RSIs. Considering the practical application situations of RSIs domain adaptation, the class asymmetry cases are obviously more common, but also more challenging. However, the existing UDA methods following the class symmetry assumption cannot deal with the class asymmetry cases because of the inconsistency between source and target class sets.\nNo matter in the inclusion relationship of Fig. 1 (b) or the intersection relationship of Fig. 1 (c), the knowledge learned from source RSIs could not be well adapted and generalized to target RSIs because of the class asymmetry problem. 
Indeed, the necessary condition for implementing RSIs domain adaptation is that the source class set includes the target class set. In other words, any target class can be found in the source domains, to ensure that the knowledge required by the target RSIs can be learned from the source RSIs. Therefore, a natural idea for class asymmetry domain adaptation is to collect more source RSIs, thus ensuring that the class union of multiple source domains is exactly equal to the target class set, or, under a laxer condition, includes the target class set. Fig. 1 (d) illustrates an example of the former, in which each source class set is a subset of the target class set, and the source class union has an equality relationship with the target class set. Fig. 1 (e) illustrates an example of the latter, in which each source class set intersects the target class set, and the source class union includes the target class set. In both cases, the class spaces of different source RSIs are also different, which is close to practical scenarios. Undeniably, the introduction of more sources can provide the necessary information for multi-source RSIs domain adaptation in the case of class asymmetry. However, in this novel and challenging experimental setup, there are still two key challenges that need to be addressed.
Challenge 1: In addition to the distribution discrepancy, the class space of each source-target pair is also different, creating greater difficulties for knowledge transfer. Under this more complex condition, how to achieve the adaptation and alignment between each single source RSI and the target RSI is the first key challenge.
Challenge 2: There are distribution and class discrepancies among different source RSIs simultaneously. Therefore, how to integrate the strengths and characteristics of multiple sources, complement each other, and efficiently transfer knowledge to target RSIs is the second key challenge.
Obviously, existing UDA methods for RSIs struggle to address the above challenges, since they require a completely symmetric class relationship between source and target RSIs. To this end, a novel Class Asymmetry Domain Adaptation method with Multiple Sources (MS-CADA) is proposed in this paper, which can not only integrate the diverse knowledge from multiple source RSIs to achieve better domain adaptation performance, but also greatly relax the strict restrictions of existing UDA methods. For challenge 1, the proposed method utilizes a novel cross-domain mixing strategy to supplement class information for each source branch, and adapts each source domain to the target domain through collaborative learning between different source RSIs. For challenge 2, on the one hand, a multi-source pseudo-label generation strategy is proposed to provide self-supervised information for domain adaptation; on the other hand, a multiview-enhanced knowledge integration module based on the hypergraph convolutional network (HGCN) is developed for high-level relation learning between different domains, to fully fuse the different source and target knowledge and achieve better multi-source domain adaptation performance. To sum up, the main contributions include the following four points.
1) A novel multi-source UDA method is proposed, which can effectively improve the performance of RSIs domain adaptation in the case of class asymmetry. To our knowledge, this is the first exploration of class asymmetry domain adaptation segmentation of RSIs. 
2) A collaborative learning method based on the crossdomain mixing strategy is proposed, to achieve the domain adaptation between each source-target pair through supplementing class information for each source branch. 3) A pseudo-label generation strategy is proposed to deal with two different scenarios where the source class union is equal to or includes the target class set. A multiviewenhanced knowledge integration module is developed for efficient multi-domain knowledge routing and transfer by fully fusing the advantages of different branches. 4) Extensive experiments are conducted on airborne and spaceborne RSIs, and the results of three different scenarios, including two-source union equality, three-source union equality and two-source union inclusion, show that the proposed method can effectively perform the class asymmetry domain adaptation of RSIs with multiple sources, and significantly improves the segmentation accuracy of target RSIs." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. RSIs domain adaptation segmentation", "publication_ref": [ "b9", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25" ], "table_ref": [], "text": "Accurately labeling RSIs is a very complicated and timeconsuming work [10]- [12]. To improve the generalization ability of deep segmentation models, more and more attention has been paid to the researches of RSIs domain adaptation. From an intuitive visual perspective, the differences between RSIs mainly exist in color and other style attributes. Therefore, the initial exploration mainly focuses on RSIs translation, aiming to reduce the discrepancies between the source and target domains by unifying the styles of different RSIs. Tasar et al. present a color mapping GAN, which can generate fake images that are semantically exactly the same as real training RSIs [13]. Sokolov et al. focus on the semantic consistency and per-pixel quality [14], while Zhao et al. introduce the depth information to improve the quality of synthetic RSIs [15]. Only the implementation of style transfer in the input space often produces unstable domain adaptation performance. Therefore, obtaining the domain-invariant deep features through adversarial learning has gradually become the mainstream in RSIs domain adaptation segmentation. Cai et al. develop a novel multitask network based on the GAN structure, which possesses the better segmentation ability for low-resolution RSIs and small objects [16]. Zhu et al. embed an invariant feature memory module into the conventional adversarial learning framework, which can effectively store and memorize the domain-level context information in the training sample flow [17]. Zheng et al. improve the highresolution network (HRNet) according to the RSIs characteristics, making it more suitable for RSIs domain adaptation. In addition, the attention mechanism [18], [19], contrastive learning [20], graph network [21] and consistency and diversity metric [22] are also integrated into the adversarial learning framework, further improving the segmentation accuracy of target RSIs. Recently, the self-supervised learning based on the mean teacher framework [23] has been gradually applied in RSIs domain adaptation due to its excellent knowledge transfer effect and stable training process. Yan et al. 
design a cross teacher-student network, and improve the domain adaptation performance on target RSIs through the cross consistency constraint loss [24]. Wang et al. focus on the problem of spatial resolution inconsistency in self-supervised adaptation, effectively improving the effect of knowledge transfer from airborne to spaceborne RSIs [25]. In addition, combining the above different types of methods is a common idea to further improve the performance of RSIs domain adaptation [26].\nAlthough various UDA methods for RSI semantic segmentation have sprung up, they all follow a common ideal assumption of class symmetry. Under the condition that the source and the target class set are different, the existing methods often fail to achieve the satisfactory performance." }, { "figure_ref": [], "heading": "B. RSIs multi-source domain adaptation", "publication_ref": [ "b26", "b29", "b30", "b32", "b33", "b35", "b35", "b35" ], "table_ref": [], "text": "The effective utilization of multiple RSIs sources can provide more abundant and diverse knowledge for improving RSIs domain adaptation performance. However, most of the existing UDA methods of RSIs focus on the knowledge transfer from a single source to a single target domain, and there are relatively few researches tailored for multi-source RSIs domain adaptation. The existing methods can be divided into two categories: constructing a combined source and aligning each source-target pair separately. The former usually combines several different RSIs into a single source domain and performs cross-domain knowledge transfer according to the conventional UDA mode [27]- [30]. The obvious shortcoming is that the RSIs sources with different distribution in the combined domain will interfere with each other during domain adaptation process, thus weakening the effect of knowledge transfer [31], [32]. In contrast, the latter typically aligns each source and target RSI separately, and merges the results from different sources as the final prediction of target RSIs [33]- [35]. The multiple feature spaces adaptation network proposed by Wang et al. is the closest to our work, which explores the performance of multi-source UDA in crop mapping by separately aligning each source-target RSI pair [35]. However, all the above work, including literature [35], require that each source and target RSI share exactly the same class space. Obviously, meeting this requirement is even more laborious and cumbersome than the conventional UDA methods in Section II-A.\nOur work focuses on the problem of RSIs class asymmetry domain adaptation. Compared with existing multi-source UDA method, the proposed method can deal with the scenario closer to practical situation, that is, performing knowledge transfer using multiple RSIs sources with different class spaces." }, { "figure_ref": [], "heading": "C. Incomplete and partial domain adaptation", "publication_ref": [ "b36", "b37", "b38", "b39", "b40", "b41", "b42", "b44", "b46" ], "table_ref": [], "text": "In the field of computer vision, there are two research directions related to our work, namely incomplete domain adaptation (IDA) and partial domain adaptation (PDA). The problem setting of IDA is to utilize multiple sources with incomplete class for domain adaptation, and the existing few researches mainly focus on the image classification task [36], [37]. Lu et al. and Gong et al. introduce IDA into the remote sensing field and preliminarily explore the performance of RSIs cross-domain scene classification [38], [39], and Ngo et al. 
further deepen the research on this issue [40]. In addition, Li et al. propose to conduct the class-incomplete model adaptation without accessing source information, and design a deep model for street scene semantic segmentation [41]. However, the performance of this method on target domain is dissatisfactory, since it abandons the source knowledge in the domain adaptation process. Different from the IDA setting, PDA is a single-source to single-target domain adaptation task. However, in the PDA setting, the number of source classes is required to be greater than the number of target classes. In the field of computer vision, various PDA methods have been developed for the image classification task [42]- [44]. Meanwhile, the performance of PDA methods on RSIs scene classification has begun to be studied, and the typical methods include the weight aware domain adversarial network proposed by Zheng et al. [45].\nThe differences between the existing IDA and PDA methods and our work can be summarized into the two aspects. First, most of the existing methods are designed for the classification task of natural images, and the performance of these methods directly applied to the RSIs domain adaptation segmentation is greatly degraded. Secondly, our work focuses on the class asymmetry domain adaptation of RSIs with multiple sources, and can simultaneously cover the two scenarios where the source class union is equal to or includes the target class set. On " }, { "figure_ref": [], "heading": "B. Workflow", "publication_ref": [], "table_ref": [], "text": "This paper proposes to integrate multiple sources knowledge for RSIs domain adaptation segmentation in the case of class asymmetry. To clearly articulate the proposed MS-CADA method, the two-source scenario is used as an example to describe its workflow, as shown in Fig. 2. The whole deep model 𝑀 consists of a feature extraction network 𝐺 and a multibranch segmentation head. The former is shared by multiple domains for common feature extraction existing in different RSIs, while the latter includes two expert networks 𝐸 1 and 𝐸 2 , two classifiers 𝐹 1 and 𝐹 2 , and a knowledge integration module 𝐻. The expert networks are used to learn the source-specific deep features, and the module 𝐻 is responsible for integrating knowledge from different domains and inferring target RSIs. Overall, the proposed method follows the self-supervised adaptation mode, thus in each iteration, the teacher model 𝑀 is established based on the exponential moving average (EMA) algorithm. Specifically in each iteration, the proposed method performs three different learning tasks simultaneously, including supervised learning, collaborative learning, multidomain knowledge transfer.\nFirstly, the source RSIs and their corresponding true labels are used for model supervised training. Due to the discrepancies in class space and data distribution, different RSIs are respectively fed into the corresponding expert networks and classifiers for loss calculation after passing through the shared 𝐺. This step provides the basic supervision information for model optimization. Secondly, to supplement the class information that each source branch does not have, the mixing of true labels and target pseudo-labels is performed between each source pair. Correspondingly, the source and target RSIs are also mixed, and collaborative learning on multiple source branches is carried out according to the mixing results. 
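To make the architecture just described more concrete, the following is a minimal PyTorch-style sketch of a two-source student model with a shared backbone 𝐺, per-source expert heads 𝐸 and classifiers 𝐹, together with the exponential-moving-average update used to maintain the teacher. It is an illustrative sketch only: the layer sizes, class count and the omission of the knowledge integration module 𝐻 are simplifying assumptions, not the authors' released implementation.

```python
import copy
import torch
import torch.nn as nn

class TwoSourceSegModel(nn.Module):
    """Shared backbone G, one expert head E_i + classifier F_i per source."""
    def __init__(self, n_classes: int, n_sources: int = 2, feat_dim: int = 64):
        super().__init__()
        # G: shared feature extractor (a ResNet-101 in the paper; a tiny conv
        # stack here so that the sketch stays self-contained and runnable).
        self.G = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 256, 3, padding=1), nn.ReLU(),
        )
        # E_i: source-specific expert networks (ASPP-like modules in the paper).
        self.E = nn.ModuleList(
            [nn.Conv2d(256, feat_dim, 3, padding=1) for _ in range(n_sources)])
        # F_i: per-source 1x1 classification layers over all source classes.
        self.F = nn.ModuleList(
            [nn.Conv2d(feat_dim, n_classes, 1) for _ in range(n_sources)])

    def forward(self, x: torch.Tensor, branch: int) -> torch.Tensor:
        return self.F[branch](self.E[branch](self.G(x)))  # F_i(E_i(G(x)))

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, alpha: float = 0.99) -> None:
    """Exponential moving average of the student weights into the teacher."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)

if __name__ == "__main__":
    student = TwoSourceSegModel(n_classes=6)
    teacher = copy.deepcopy(student)         # the teacher starts as a copy
    x = torch.randn(2, 3, 64, 64)            # a toy batch of RGB patches
    print(student(x, branch=0).shape)        # torch.Size([2, 6, 64, 64])
    ema_update(teacher, student)             # teacher tracks the student
```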
The second step builds on the problem setting in Section III-A, where each source most likely contains class information that the other does not. Thirdly, the predictions from the different source branches in the teacher model are used to generate the final target pseudo-labels. In the student model, the module 𝐻 fuses the deep features from the different branches and performs multi-domain knowledge transfer based on the final target pseudo-labels. In the following sections, the above learning process will be described in detail.
Fig. 2: Workflow of the proposed method. Different source RSIs are fed into their respective branches for supervised learning. For each source branch, the true labels are mixed with the target pseudo-labels of other branches, and the mixed results are used for collaborative learning and domain adaptation of each source-target pair. The final target pseudo-labels are generated from the predictions of different source branches, and the multi-domain knowledge is integrated for target inference. The dashed lines represent the data flows through the EMA model." }, { "figure_ref": [], "heading": "C. Multi-source supervised learning", "publication_ref": [ "b38", "b40" ], "table_ref": [], "text": "Acquiring rich and robust source knowledge is a prerequisite for realizing class asymmetry multi-source domain adaptation. Related research has shown that simply combining different RSIs into one single source will lead to a suboptimal domain adaptation effect due to the domain discrepancies [38]- [40]. Therefore, in each iteration, the RSIs from different sources will be fed into the corresponding expert branches, and supervised training will be conducted on the basis of learning common and source-specific features, which can be expressed as:
$$\mathcal{L}_{sup}^{S_i} = -\mathbb{E}_{(x_j^{S_i},\, y_j^{S_i}) \sim D_{S_i}} \sum y_j^{S_i} \log\big(F_i(E_i(G(x_j^{S_i})))\big), \quad (1)$$
where 𝑖 and 𝑗 index different source domains and RSI samples respectively in a training batch. Utilizing different expert branches for supervised learning can effectively avoid the interference of domain discrepancies on model training, so as to provide support for the multi-source collaborative learning and knowledge integration in the next steps." }, { "figure_ref": [], "heading": "D. Collaborative learning with cross-domain mixing", "publication_ref": [ "b47", "b48" ], "table_ref": [], "text": "Different source RSIs have different class sets. Therefore, multi-source supervised learning can only enable each expert branch to learn the knowledge within its corresponding class space. In this case, domain adaptation cannot be achieved due to the class asymmetry problem between each source-target pair. To this end, a novel cross-domain mixing strategy is proposed to supplement each expert branch with the class information that it does not possess.
Specifically, the information supplementation from source 2 to source 1 is used as an example for detailed explanation. As shown in Fig. 2, the class set of source 1 contains only Building, Low vegetation, and Impervious surface, thus during the initial phase of model training, the source 1 branch can only accurately recognize these three classes in the target RSIs. In contrast, the class set of source 2 contains the Tree class that source 1 does not have, and thus the target segmentation results of the source 2 branch will contain high-quality pseudo-labels of the Tree class. 
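The supervised term in Equation 1 is an ordinary pixel-wise cross-entropy in which each source batch passes only through its own expert branch, and the per-branch terms are summed. A minimal sketch is given below; the stand-in branches, toy shapes and the use of 255 as the ignore index are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def multi_source_supervised_loss(branch_forward, source_batches):
    """branch_forward(x, i) returns the logits F_i(E_i(G(x))) of branch i;
    source_batches is a list of (images, labels) pairs, one per source domain.
    Pixels labelled 255 are ignored, matching the pseudo-label filtering used later."""
    loss = torch.zeros(())
    for i, (x_s, y_s) in enumerate(source_batches):
        logits = branch_forward(x_s, i)                      # per-branch prediction
        loss = loss + F.cross_entropy(logits, y_s, ignore_index=255)
    return loss

if __name__ == "__main__":
    # Stand-in branches: one tiny conv per source (illustrative only).
    heads = nn.ModuleList([nn.Conv2d(3, 6, 1) for _ in range(2)])
    batches = [(torch.randn(2, 3, 32, 32), torch.randint(0, 6, (2, 32, 32)))
               for _ in range(2)]
    loss = multi_source_supervised_loss(lambda x, i: heads[i](x), batches)
    print(loss.item())
```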
Based on the above observations about the two source branches, the proposed mixing strategy pastes part of the true label of source 1 onto the target pseudo-label generated by the source 2 branch, and the source 1 RSIs and the target RSIs are also mixed according to the same mask, which actually introduces the unseen class information into the source 1 branch. Obviously, the mixing techniques play a key role in the proposed strategy. In our work, the coarse region-level mixing [46] and the fine class-level mixing [47] are adopted simultaneously. Both techniques belong to the local replacement approaches, so they can be uniformly formalized as:
$$x_{mix}^{S_1} = \mathbf{M} \odot x^{S_1} + (1 - \mathbf{M}) \odot x^{T}, \qquad y_{mix}^{S_1} = \mathbf{M} \odot y^{S_1} + (1 - \mathbf{M}) \odot y_{S_2}^{T}, \quad (2)$$
where $y_{S_2}^{T}$ denotes the target pseudo-labels of the source 2 branch of the EMA teacher model, the symbol $\odot$ represents the element-wise multiplication, and $\mathbf{M}$ is a binary mask determining which region of pixels is cut and pasted. When the class-level mixing is performed, the mask $\mathbf{M}$ is generated by randomly selecting part of the classes in the true source labels, which delivers the inherent properties of a certain class of objects completely. When the region-level mixing is performed, the mask $\mathbf{M}$ is obtained by randomly cutting a region patch from the true source labels, which can retain the local structure and context information. Therefore, the application of the two mixing techniques combines their different advantages at the fine class level and the coarse region level, effectively improving the performance of the information supplementation between different sources. In addition, the diversity of mixed samples is further enhanced, which helps to improve the robustness of the trained model.
After the mixed sample-label pairs $(x_{mix}^{S_1}, y_{mix}^{S_1})$ are obtained, the source 1 branch carries out weighted self-supervised learning. Specifically, the confidence-based weight map is first generated as follows:
$$w^{S_1} = \mathbf{M} \odot \mathbf{1} + (1 - \mathbf{M}) \odot w_t, \qquad w_t = \frac{\sum_{l=1}^{h \cdot w} \big[\max_c F_1(E_1(G(x^{T})))^{(l,c)} > \tau\big]}{h \cdot w}, \quad (3)$$
where $h$ and $w$ denote the height and width of RSI samples, the operation $[\cdot]$ is the Iverson bracket, and $w_t$ represents the percentage of pixels in which the maximum softmax probability of the predicted class exceeds the threshold $\tau$. Then, the self-supervised loss of the source 1 branch can be calculated as:
$$\mathcal{L}_{ssl}^{S_1} = -\mathbb{E}_{(x_{mix}^{S_1},\, y_{mix}^{S_1})} \sum w^{S_1} y_{mix}^{S_1} \log\big(F_1(E_1(G(x_{mix}^{S_1})))\big), \quad (4)$$
which actually achieves the domain adaptation from source 1 to the target domain, since the input sample contains partial target RSIs.
Fig. 3: Target pseudo-labels of different source experts.
At this point, the process of class information supplementation from source 2 to source 1 has been completed. Conversely, the information supplementation from source 1 to source 2 is similar to the above process. To intuitively observe the effectiveness of the proposed cross-domain mixing strategy, the target pseudo-labels generated by different source experts are visualized, as shown in Fig. 3. Obviously, at the initial training stage, each source expert can only segment the target RSIs within its own class space. For example, in the first column of Fig. 3, the source 1 expert could not accurately identify the Car class, while the source 2 expert wrongly classifies the objects of impervious surface and building into the Clutter and Low vegetation classes. With continuous training, however, the class information among the multiple source domains is mutually supplemented, and all the expert branches can identify all source classes more accurately.
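The class-level variant of the mixing in Equation 2 can be sketched as follows: half of the classes present in the source-1 label form the binary mask M, and the masked pixels of the source image and label are pasted onto the target image and onto the target pseudo-label produced by the source-2 branch. The tensor shapes, the random class choice and the toy class counts are illustrative assumptions.

```python
import torch

def class_mix(x_s, y_s, x_t, pseudo_t):
    """x_s, x_t: (3, H, W) images; y_s: (H, W) source labels;
    pseudo_t: (H, W) target pseudo-labels from the other source branch."""
    classes = torch.unique(y_s)
    keep = classes[torch.randperm(len(classes))[: max(1, len(classes) // 2)]]
    mask = torch.isin(y_s, keep).float()              # binary mask M in Equation 2
    x_mix = mask * x_s + (1.0 - mask) * x_t           # mask broadcasts over channels
    y_mix = (mask * y_s + (1.0 - mask) * pseudo_t).long()
    return x_mix, y_mix, mask

if __name__ == "__main__":
    H = W = 32
    x_s, x_t = torch.rand(3, H, W), torch.rand(3, H, W)
    y_s = torch.randint(0, 3, (H, W))                 # source-1 classes only
    pseudo_t = torch.randint(0, 6, (H, W))            # prediction of the source-2 branch
    x_mix, y_mix, mask = class_mix(x_s, y_s, x_t, pseudo_t)
    print(x_mix.shape, y_mix.shape, mask.mean().item())
```

The region-level variant differs only in how M is built: a rectangular patch of the source label is selected instead of a set of classes, so the same pasting code applies.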
When more source RSIs are adopted, Equation 2 is extended to more forms, including the mixed results of multiple pairwise source combinations. In summary, in the proposed mixing strategy each source can provide additional class information for the other sources, which is actually a collaborative learning process between different experts. As a result, each expert can learn all the class knowledge contained in the source union, thus effectively solving the class asymmetry problem during the domain adaptation process of each source-target pair." }, { "figure_ref": [ "fig_1" ], "heading": "E. Knowledge integration for multi-domain transfer", "publication_ref": [ "b49", "b51", "b52", "b53" ], "table_ref": [], "text": "As stated in challenge 2, the discrepancies between source RSI domains give different expert branches the strength of focusing on different feature knowledge. After each source domain is adapted to the target domain separately, the advantages of the different source experts should be combined to further improve the performance of multi-domain transfer. Therefore, a multi-source pseudo-label generation strategy is proposed, which can deal with both cases where the source class union is equal to or includes the target class set. In addition, a multiview-enhanced knowledge integration module is developed for target inference through learning the high-level relations between different domains.
1) Multi-source pseudo-label generation: As shown in Fig. 2, in each iteration the target RSIs are fed into the different expert branches of the EMA teacher model, which output different predictions according to their respective feature learning abilities. Therefore, the final target pseudo-labels $y^{T}$ can be obtained as:
$$\hat{y}^{T} = \max\big(F_1(E_1(G(x^{T}))),\; F_2(E_2(G(x^{T})))\big), \qquad y^{T} = M_c(\hat{y}^{T}), \quad (5)$$
where the max operation selects the class corresponding to the maximum softmax probability over the results of the two source branches, and $M_c$ denotes the class filter operation, which is determined according to the relationship between the predicted class and the target class set:
$$M_c = \begin{cases} \hat{y}^{T}, & \text{if } \hat{y}^{T} \in \mathcal{C}_T; \\ 255, & \text{if } \hat{y}^{T} \notin \mathcal{C}_T. \end{cases} \quad (6)$$
In short, the proposed multi-source pseudo-label generation strategy has two effects. (1) Equation 5 can flexibly select the prediction of the expert that is better at a certain class by comparing the confidence probabilities, which actually combines the advantages of the different source branches. (2) When the source class union contains outlier classes that do not exist in the target RSIs, Equation 6 discards these classes by assigning them the value of 255 (pixels with the value of 255 are not used for loss calculation). Actually, considering that the target RSIs do not contain objects belonging to the outlier classes, only a few target pixels are classified into these classes, and they are often located in the boundary regions between objects in the target RSIs. Therefore, removing these pixels from the final predictions plays a positive role in selectively preserving the high-quality pseudo-labels. Fig. 4 shows an example to visually explain the proposed strategy.
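The fusion and filtering in Equations 5-6 can be sketched as follows: for every pixel, the prediction of the more confident branch is kept, and any class outside the target class set C_T is mapped to the ignore value 255. The toy class counts and the use of softmax maps as inputs are illustrative assumptions.

```python
import torch

def fuse_pseudo_labels(prob_s1, prob_s2, target_classes):
    """prob_s1, prob_s2: (B, C, H, W) softmax outputs of the two source branches;
    target_classes: iterable of class indices belonging to the target class set C_T."""
    conf1, pred1 = prob_s1.max(dim=1)
    conf2, pred2 = prob_s2.max(dim=1)
    fused = torch.where(conf1 >= conf2, pred1, pred2)            # Equation 5
    allowed = torch.zeros(prob_s1.shape[1], dtype=torch.bool)
    allowed[list(target_classes)] = True
    fused = torch.where(allowed[fused], fused,                   # Equation 6
                        torch.full_like(fused, 255))
    return fused

if __name__ == "__main__":
    B, C, H, W = 1, 7, 16, 16                 # 7 source classes, 6 target classes
    p1 = torch.softmax(torch.randn(B, C, H, W), dim=1)
    p2 = torch.softmax(torch.randn(B, C, H, W), dim=1)
    y_t = fuse_pseudo_labels(p1, p2, target_classes=range(6))    # class 6 is an outlier
    print(torch.unique(y_t))                                     # outliers mapped to 255
```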
2) Multiview-enhanced high-level relation learning: As shown in Fig. 2, for the input target RSIs after strong transformation, the features with source 1 bias learned by 𝐸 1, the features with source 2 bias learned by 𝐸 2 and the features learned by 𝐺 are fed into the module 𝐻 simultaneously, to realize the multi-domain knowledge transfer. Theoretically, any network structure can play the role of 𝐻. Inspired by the recent development of HGCN in the computer vision field [48]- [50], a multiview-enhanced knowledge integration module based on HGCN is developed, as shown in Fig. 5.
Fig. 5: Multiview-enhanced knowledge integration module based on hypergraph network for high-level relation learning.
In the hypergraph structure $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{v_1, ..., v_n\}$ is the vertex set and $\mathcal{E} = \{e_1, ..., e_m\}$ is the hyperedge set, a hyperedge can connect an arbitrary number of vertices simultaneously. Therefore, superior to the abstraction of GCN on pairwise connections or the modeling of CNN on local features, the HGCN can better describe the high-level relations [51], [52], which provides an effective solution for expressing the knowledge routing from the different domains to the target predictions.
Firstly, the features learned by 𝐺 are compressed by a network with the same structure as 𝐸. Next, the formal description of the above learning process and loss calculation is given. For each vertex $v$ in the matrix $\mathbf{X}_1$ or $\mathbf{X}_2$, the $K$ nearest neighbor vertices are selected to build the hyperedge $e$, which can be denoted as:
$$e_i = \{ v_i, \forall v_j \in \mathcal{N}_K(v_i) \}, \quad (7)$$
where $\mathcal{N}_K$ is the neighbor set containing $K$ vertices. Specifically, the k-nearest-neighbor algorithm based on the Euclidean distance is used for generating the set $\mathcal{N}_K$. After all possible hyperedges are built, the incidence matrix $\mathbf{H}$ can be obtained:
$$h(v, e) = \begin{cases} 1, & \text{if } v \in e; \\ 0, & \text{if } v \notin e, \end{cases} \quad (8)$$
where each entry describes the connection between vertices and hyperedges, and thus the matrix $\mathbf{H}$ can represent the whole topological structure of the hypergraph. Then, the hypergraph convolution operation is performed on the matrices $\mathbf{X}$ and $\mathbf{H}$:
$$\mathbf{Y} = \sigma\big(\mathbf{D}_v^{-1/2} \mathbf{H} \mathbf{W} \mathbf{D}_e^{-1} \mathbf{H}^{\top} \mathbf{D}_v^{-1/2} \mathbf{X} \Theta\big), \quad (9)$$
where $\mathbf{Y}$ is the output of the hypergraph convolution, $\sigma$ is the ReLU activation function, $\mathbf{W}$ is the hyperedge weight matrix, and $\Theta$ is the trainable parameter matrix. In addition, $\mathbf{D}_v$ and $\mathbf{D}_e$ denote the diagonal matrices of the vertex degrees and the edge degrees respectively, with each vertex degree calculated as $d(v) = \sum_{e \in \mathcal{E}} w(e) h(v, e)$ and each edge degree calculated as $\delta(e) = \sum_{v \in \mathcal{V}} h(v, e)$.
After the high-level learning by hypergraph convolution, the outputs of the different hypergraph branches are reshaped into feature maps with the spatial size of $N_h \times N_w$, and the channel concatenation and classification operations are performed to produce the prediction results $\bar{y}^{T}$ of the target RSIs. Therefore, the self-supervised loss for multi-domain knowledge transfer can be calculated as:
$$\mathcal{L}_{ssl}^{M} = -\mathbb{E}_{(\bar{y}^{T},\, y^{T})}\, \mathrm{trans}(y^{T}) \log(\bar{y}^{T}), \quad (10)$$
where the operation $\mathrm{trans}(\cdot)$ represents the same spatial transformation as in the strong transformation of the input target RSIs." }, { "figure_ref": [], "heading": "F. Optimization objective", "publication_ref": [], "table_ref": [], "text": "By combining the losses of multi-source supervised learning, collaborative learning and multi-domain knowledge transfer, the final optimization objective can be obtained:
$$\mathcal{L} = \mathcal{L}_{sup} + \alpha \mathcal{L}_{ssl} + \beta \mathcal{L}_{ssl}^{M}, \quad (11)$$
where $\mathcal{L}_{sup}$ and $\mathcal{L}_{ssl}$ both contain the sum of the losses of the different source branches, and $\alpha$ and $\beta$ denote the weight coefficients. 
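The k-NN hypergraph construction (Equations 7-8) and a single hypergraph convolution layer (Equation 9) used in the knowledge integration module can be sketched as below, with the hyperedge weight matrix W taken as the identity. The vertex count, K and layer width are illustrative assumptions rather than the configuration used in the paper.

```python
import torch

def knn_incidence(x: torch.Tensor, k: int) -> torch.Tensor:
    """x: (N, D) vertex features. Returns the (N, N) incidence matrix H in which
    hyperedge j connects vertex j and its K nearest neighbours (Euclidean distance)."""
    dist = torch.cdist(x, x)                         # (N, N) pairwise distances
    knn = dist.topk(k + 1, largest=False).indices    # each vertex plus K neighbours
    H = torch.zeros(x.shape[0], x.shape[0])
    H.scatter_(0, knn.t(), 1.0)                      # column j marks the members of e_j
    return H

def hypergraph_conv(x, H, theta):
    """One layer of Equation 9: Y = ReLU(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta), W = I."""
    dv = H.sum(dim=1).clamp(min=1e-6)                # vertex degrees d(v)
    de = H.sum(dim=0).clamp(min=1e-6)                # hyperedge degrees delta(e)
    Dv_inv_sqrt = torch.diag(dv.pow(-0.5))
    De_inv = torch.diag(de.pow(-1.0))
    return torch.relu(Dv_inv_sqrt @ H @ De_inv @ H.t() @ Dv_inv_sqrt @ x @ theta)

if __name__ == "__main__":
    x = torch.randn(64, 32)                          # 64 vertices with 32-dim features
    H = knn_incidence(x, k=8)
    theta = torch.randn(32, 16)                      # trainable weights in practice
    print(hypergraph_conv(x, H, theta).shape)        # torch.Size([64, 16])
```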
In addition, Algorithm 1 summarizes the pseudo code of the proposed MS-CADA method, to clearly show the entire workflow of class asymmetry RSIs domain adaptation in the two-source scenario. In each training iteration, the source and target batches are sampled, the supervised losses of the two source branches are calculated with Equation 1, the mixed samples and labels are generated with Equation 2, the collaborative self-supervised losses are calculated with Equation 4, the target pseudo-labels are generated with Equation 5, the multi-domain transfer loss and the total loss are calculated with Equations 10 and 11, and finally the student model 𝑀 and the EMA teacher model are updated." }, { "figure_ref": [], "heading": "IV. EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Datasets description and experimental setup", "publication_ref": [ "b54", "b55", "b20", "b56", "b55" ], "table_ref": [ "tab_2" ], "text": "The four public RSIs datasets, including ISPRS Potsdam (PD), ISPRS Vaihingen (VH), LoveDA [53] and BLU [54], are used for experiments, and the details are listed in Table I.
The VH and PD datasets contain the same six classes: Impervious surface, Building, Low vegetation, Tree, Car and Clutter. Referring to relevant research, the target VH dataset contains 398 samples for domain adaptation training and 344 samples for evaluation, while the source PD dataset contains 3456 samples for training [21], [55]. The space size of each sample is 512 × 512. In the two-source union equality scenario of airborne RSIs, the whole PD training set is divided into two groups, i.e., the PD1 subset with 1728 RGB samples and the PD2 subset with 1728 IRRG samples. As listed in Table II, the first two settings are obtained by discarding part of the classes in PD1 and PD2 respectively. In addition, the class symmetry two-source setting is also used for experiments.
The class space merge is first performed on the LoveDA dataset by referring to [54], so that the LoveDA and BLU datasets have the same six classes: Background, Building, Vegetation, Water, Agricultural and Road. The Urban dataset used as the target domain contains 677 training samples and 1156 testing samples, and the first three tiles in the BLU dataset are used as different source domains, each of which contains 196 samples for domain adaptation training. Each sample is cropped to the size of 1024 × 1024. Similar to the scenario of airborne RSIs, the two three-source union equality scenarios are established, as listed in Table III. In addition, the two-source union inclusion scenario is established on the airborne RSIs, as shown in Table IV. Through discarding the RSIs containing the Clutter class of the VH dataset, the partial VH subset is obtained, where the number of training and testing samples is reduced to 350 and 319 respectively." }, { "figure_ref": [], "heading": "B. Environment and hyperparameters", "publication_ref": [ "b57" ], "table_ref": [], "text": "All algorithms are developed based on Python 3.8 and relevant machine learning libraries. A computer equipped with an Intel Xeon Gold 6152 CPU and an Nvidia A100 PCIE GPU provides hardware support for running the programs.
Referring to most relevant research, the ResNet-101 network pretrained on the ImageNet dataset is used as the shared backbone 𝐺. Each expert branch actually contains an improved atrous spatial pyramid pooling (ASPP) module as 𝐸 and a 1 × 1 convolution classification layer as 𝐹. The number of output channels of 𝐺, 𝐸 and 𝐹 is 2048, 64 and 𝑁 C 𝑆 respectively, where 𝑁 C 𝑆 is the number of all source classes. The HGCN in the knowledge integration module 𝐻 contains two hypergraph convolution layers. In the view of space, the number of neighbor vertices used for building the hypergraph is 64, and the number of channels of the two layers is 64 × 𝑘 and 64 respectively, while in the view of feature, the above values are changed to 8, 𝑁 ℎ × 𝑁 𝑤 × 𝑘 and 𝑁 ℎ × 𝑁 𝑤 respectively.
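As a rough illustration of this multi-branch layout (shared ResNet-101 backbone 𝐺, one ASPP-based expert 𝐸 𝑖 and one 1 × 1 classifier 𝐹 𝑖 per source), a simplified PyTorch sketch is shown below. The ASPP block is deliberately reduced and the backbone striding is left at its default, so it does not reproduce the exact improved ASPP or dilation settings used in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models


class ASPPExpert(nn.Module):
    """Expert branch E_i: a simplified ASPP block mapping 2048 -> 64 channels."""

    def __init__(self, in_ch=2048, out_ch=64, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        x = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.project(x)


class MultiBranchSegNet(nn.Module):
    """Shared backbone G plus one expert E_i and one 1x1 classifier F_i per source."""

    def __init__(self, num_sources=2, num_classes=6):
        super().__init__()
        resnet = models.resnet101()  # ImageNet weights would be loaded in practice
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # G, 2048 channels
        self.experts = nn.ModuleList(ASPPExpert() for _ in range(num_sources))
        self.classifiers = nn.ModuleList(
            nn.Conv2d(64, num_classes, 1) for _ in range(num_sources)
        )

    def forward(self, x):
        feats = self.backbone(x)
        # one (B, num_classes, h, w) logit map per source expert; in practice the
        # maps are upsampled back to the input resolution before the loss.
        return [f(e(feats)) for e, f in zip(self.experts, self.classifiers)]
```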
In the multi-source union equality scenario, the liner classifier in 𝐻 outputs the 𝑁 C 𝑆 classes, while in the multi-source union inclusion scenario, it outputs the 𝑁 C 𝑇 classes.\nDuring the domain adaptation training, the batchsize is set to 4, which means that there are 4×𝑘 source samples and 4 target samples in a training batch. For each source domain, half of the samples are mixed using the coarse region-level strategy, and the other half are mixed using the fine class-level strategy. More specifically, in each source labeled sample, half of the classes or 40% of the region is pasted to the target sample. In the process of multi-domain transfer, the strong transformation of target RSIs containing flipping, rotation, cropping, color jitter and gaussian blur, and only the pixels with the prediction probability larger than 0.968 are used for loss calculation. In addition, the Adam algorithm is used for model optimization, the training iteration is set to 40,000, and the learning rates of the backbone and multi-branch segmentation head are set to 6× 10 -5 and 6 × 10 -4 respectively. The weight coefficients 𝛼 and 𝛽 are both set to 1. Other related hyperparameter optimization techniques are consistent with [56]." }, { "figure_ref": [], "heading": "C. Methods and measures for comparison", "publication_ref": [ "b58", "b57", "b59", "b56", "b41", "b36", "b32", "b40" ], "table_ref": [], "text": "To verify the effectiveness of the proposed MS-CADA method, four conventional single-source UDA methods including Li's [57], DAFormer [56], HRDA [58] and PCEL [55], and four multi-source UDA methods including UMMA [41], DCTN [36], He's [32] and MECKA [40], are used for performance comparison.\nThe UDA method presented by Li et al. introduces the two strategies of gradual class weights and local dynamic quality into the process of self-supervised learning, which can achieve better performance than most adversarial learning methods. DAFormer is a simple and efficient UDA method, which can improve the target segmentation accuracy to a certain extent, by improving the training strategies of class sampling, feature utilization and learning rate. HRDA is an improvement method over DAFormer, which can effectively combine the advantages of high-resolution fine segmentation and low-resolution longrange context learning. PCEL is an advanced UDA method for RSIs segmentation, which can significantly improve the domain adaptation performance through the enhancement of prototype and context. The above methods take the simple combination of different RSIs domains as the singe source, which can be referred to as \"Combined source\" for short.\nConsidering that there is no existing method that can exactly fit the problem setting of class asymmetry domain adaptation segmentation of RSIs, several multi-source UDA methods designed for related problems are modified appropriately and used for comparison. UMMA is an advanced model adaptation method for street scene segmentation, which can achieve the adaptation to target domains using multiple models trained on different sources. DCTN is an adversarial-based UDA method for natural images classification. It can utilize different discriminators to generate the weights of different sources and combine different classifiers to achieve multisource adaptation. The method proposed by He et al. 
is actually a multi-source UDA method for street scene segmentation, and MECKA is a multi-source UDA method for RSIs scene classification based on the consistency learning between different domains. Among them, only the methods UMMA and He's are capable of dealing with the multi-source union equality scenario of RSIs segmentation. Therefore, appropriate modifications are imposed on the methods DCTN and MECKA. Specifically, the classifiers of DCTN is replaced by the ASPP module, and the complete predictions are obtained through casting the results of different classifiers over the target class space and calculating the weighted results. And the cross-domain mixing strategy proposed in Section III-D is integrated into different branches of MECKA. In addition, in the multi-source union inclusion scenario, for the above four methods, the outlier class that target RSIs do not contain will be discarded to re-form the multi-source union equality scenario, which is significantly different from the proposed MS-CADA method. Consequently, the above methods can utilize multiple separated sources for class asymmetry UDA of RSIs, which can be referred to as \"Separated multiple sources\" for short.\nAll the above methods utilize the ResNet-101 trained on the ImageNet dataset as the backbone network, and perform domain adaptation training with the random seed of 0. In addition, to fairly compare the performance of different methods, the three widely used measures including IoU per class, mIoU and mF1, are selected for quantitative comparisons." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "D. Results of the two-source union equality scenario of airborne RSIs", "publication_ref": [ "b0" ], "table_ref": [ "tab_3", "tab_8" ], "text": "Tables V-VII list the segmentation results of different methods in the two-source union equality scenario of airborne RSIs, which correspond to the three different class settings in Table II respectively. Observations and analysis can be obtained from the following four aspects.\nFirstly, simply combining different RSIs and implementing domain adaptation based on the single-source UDA methods will weaken the segmentation performance of target RSIs. According to the results of DAFormer and PCEL in Table VII, the mIoU based on the combined source decreases to different degrees, compared with the mIoU based on the best single source. This is not difficult to understand that, different RSIs with domain discrepancies interfere with each other, resulting in the suboptimal domain adaptation performance.\nSecondly, the performance of the four conventional singlesource UDA methods with the combined source is obviously inferior to that of the proposed method. Specifically, in the three different class settings, the mIoU of the proposed method is 2.76%, 3.36% and 3.23% higher than that of the conventional UDA method with the best performance, respectively. Correspondingly, the mF1 value of the proposed method is increased by 2.35%, 3.07% and 2.82% respectively. These observations directly verify the effectiveness of the knowledge integration and transfer with multiple separated sources, which effectively avoids the negative transfer problem in the combined source.\nThirdly, the results of four improved multi-source UDA methods and the proposed MS-CADA method are compared. Specifically, UMMA can only utilize the models pretrained on different sources for domain adaptation without feature alignment between source and target RSIs, resulting in the relatively poor results. 
DCTN can simply combine the advantages of different classifiers to some extent by weighted calculation, and its performance is better than UMMA. The two methods He's and MECKA can perform the consistency learning and knowledge supplement between different source domains, and thus their domain adaptation performance is further improved. The proposed method always performs better than the other four methods, wether in the class asymmetry case or the class symmetry case. Compared to the second place, the proposed method improved by 9.23%, 12.80% and 4.58% in mIoU and 8.94%, 11.43% and 4.23% in mF1, in the three different class settings. This demonstrates the superiority of the proposed methods in airborne RSIs domain adaptation segmentation in the two-source union equality scenario.\nFourthly, the performance of the proposed method in the class asymmetry scenario is still better than that of some advanced methods in the class symmetry scenario. For example, the proposed method can achieve the higher mIoU and mF1 in class setting 2 than the other eight methods do in class setting 3, which fully demonstrates the effectiveness of the proposed method in integrating and transferring multi-source knowledge in the case of class asymmetry.\nTo intuitively compare the segmentation results of different methods, Fig. 6 shows the segmentation maps of several examples in the target RSIs. Specifically, the results of the three representative methods including PCEL, MECKA and MS-CADA in class settings 2 and 3 are visualized, from which the three main points can be drawn. (1) The quality of the segmentation maps of the proposed method is superior to that of PCEL using the combined source. Specifically, the segmentation maps of the proposed method possess the more complete context structure and less noise. (2) Compared with MECKA, the proposed method can produce the more accurate segmentation maps, which is especially obvious for the minority class and small objects. For example, in line 3, the proposed method can recognize the Clutter class accurately, and in line 5, the proposed method can locate the objects of the Car class more precisely. (3) The segmentation maps in the setting 3 of class symmetry are superior to those in the setting 2 of class asymmetry, which can be clearly observed from Fig. 6 (d)-(g). Implementing RSIs domain adaptation with completely consistent class space can provide more class information and labeled samples for finely segmenting objects, so as to produce the segmentation maps with better quality." }, { "figure_ref": [ "fig_4" ], "heading": "E. Results of the three-source union equality scenario of spaceborne RSIs", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "The segmentation results of different methods in the threesource union equality scenario of spaceborne RSIs are given in Tables. VIII-IX. Compared with airborne RSIs, the domain adaptation of large-scale spaceborne RSIs is more difficult, especially knowledge integration and transfer using multiple class asymmetry sources. It can be seen that, there is a general decline in the segmentation performance of different methods, however, several observations similar to the airborne RSIs scenario can be obtained.\nFirstly, according to the results of PCEL in Table IX, the performance based on the combined source is still inferior to that based on the best single source. 
Although the expansion of training samples from three sources effectively reduces the decline range, the discrepancies between different RSIs sources still damage the segmentation performance in the target domain to some extent. Secondly, in the two different class settings, the proposed method performs better than the best UDA method PCEL with the combined source, with increases of 1.38% and 2.11% in mIoU, and 1.46% and 2.30% in mF1. Thirdly, the segmentation results of the proposed method are better than those of the other improved multi-source UDA methods. Specifically, the proposed method improves mIoU by at least 1.81% and 3.45%, and mF1 by at least 1.83% and 3.77%. It can be seen from the above statistics that the proposed method can achieve better results than existing advanced single-source and multi-source UDA methods, whether in the class symmetry case or the class asymmetry case. Fig. 7 shows the segmentation maps of PCEL, MECKA and MS-CADA. Compared with the other methods, the segmentation maps of the proposed method in class setting 2 have the best visual effect and are closest to the ground truth. Specifically, the recognition of the Building objects is more accurate and complete, and the edges of the segmentation results of linear objects are smoother. In addition, under the same class setting, the segmentation maps of the proposed method are better than those of MECKA, which is mainly reflected in less misclassification and noise. This verifies the advantages of the proposed method in the three-source union equality scenario of spaceborne RSIs from the perspective of visualization." }, { "figure_ref": [ "fig_5" ], "heading": "F. Results of the two-source union inclusion scenario of airborne RSIs", "publication_ref": [], "table_ref": [ "tab_5", "tab_12" ], "text": "In addition to the multi-source union equality scenario, this section verifies the effectiveness of the proposed method in the two-source union inclusion scenario, and the results are listed in Table X. It should be pointed out that, for the eight UDA methods used for comparison, the pixels corresponding to the Clutter class of the PD2 dataset in Table IV are discarded. In other words, the eight UDA methods actually still perform the adaptation in the multi-source union equality scenario. As listed in Table X, the proposed method achieves the highest mIoU and mF1, which increase by 1.63% and 1.75% respectively compared with PCEL, and 4.22% and 3.61% respectively compared with MECKA. It is believed that such improvement benefits from the proposed multi-source pseudo-label generation strategy, which can effectively filter out the low-quality pseudo-labels located at the boundaries between objects, and improves the performance of knowledge transfer in the self-supervised learning process. In addition to the statistics, the segmentation maps of different methods are visualized, as shown in Fig. 8. Compared with PCEL and MECKA, the proposed method can produce finer segmentation maps. For example, the segmentation results of small targets of the Car class and the edges of the Building objects are more accurate and detailed." }, { "figure_ref": [], "heading": "V. ANALYSIS AND DISCUSSION", "publication_ref": [], "table_ref": [ "tab_13", "tab_14", "tab_15" ], "text": "A. Ablation studies 1) The cross-domain mixing strategy: The proposed cross-domain mixing strategy can supplement the class information existing in other sources for each source branch, which is the key to achieving the collaborative learning among different branches and the adaptation from each source to the target RSIs.
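As a concrete illustration of this mixing operation, the sketch below shows the class-level variant of Equation 2 for one source-target pair. PyTorch tensors and an ignore index of 255 are assumptions for illustration; the region-level variant would simply replace the class-derived mask with a rectangular CutMix-style mask.

```python
import torch


def classmix_cross_domain(x_src, y_src, x_tgt, y_tgt_other, class_ratio=0.5):
    """Class-level cross-domain mixing in the spirit of Equation 2.

    A binary mask M selects a fraction of the classes appearing in the source
    label; the masked source pixels are pasted onto the target image, and the
    mixed label combines the source annotation (inside M) with the pseudo-label
    produced by the other source branch (outside M).

    x_src, x_tgt: (C, H, W) images;  y_src, y_tgt_other: (H, W) label maps.
    """
    classes = torch.unique(y_src)
    classes = classes[classes != 255]                     # drop the ignore index
    n_pick = max(1, int(round(class_ratio * classes.numel())))
    picked = classes[torch.randperm(classes.numel())[:n_pick]]

    mask = torch.zeros_like(y_src, dtype=torch.bool)
    for c in picked:
        mask |= y_src == c

    x_mix = torch.where(mask.unsqueeze(0), x_src, x_tgt)  # mixed image
    y_mix = torch.where(mask, y_src, y_tgt_other)         # mixed label
    return x_mix, y_mix
```

Applying this to half of each batch and the region-level variant to the other half reproduces the mixing schedule described in Section IV-B.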
The results of the ablation studies are presented in Table XI, where the Baseline method means that there is no mixing operation between different source branches. As can be seen, the performance of the Baseline method is far behind that of the other methods, since it can only implement multi-source domain adaptation using the pseudo-labels generated by results with severe class bias. In contrast, the performance brought by the class-level or region-level mixing strategy alone is significantly improved. It is worth noting that the region-level mixing can lead to higher mIoU in settings where class differences between sources are small, and the class-level mixing can lead to higher mIoU in settings where class differences between sources are large, such as class setting 2 in the two-source union equality scenario. The proposed strategy can absorb the different advantages of class-level and region-level mixing, and simultaneously enhances the learning of the fine inherent properties of objects and the coarse local context structure. Therefore, in the three different scenarios, the proposed strategy enables the MS-CADA method to achieve the best performance.
2) The multi-source pseudo-label generation strategy: How to generate the target pseudo-labels for multi-source adaptation based on the results of different expert branches is also a key link in the proposed method. The comparison of different pseudo-label generation strategies is shown in Table XII. The Best expert method means that only the results of the best expert branch are used for self-supervised training, without any integration operation on the different results of multiple experts. In the Summation method, the logit results of different experts are firstly summed at the element level, and then the pseudo-labels are obtained through the softmax activation. The Ensemble method comes from the research of Li et al. [41], and actually performs a selective average calculation on the different results. Through comparison, it is found that self-supervised training with only the results of one source expert produces suboptimal performance. Obviously, different experts focus on different feature representations when performing supervised learning on different sources. Therefore, integrating the different strengths of the source experts can effectively improve the performance of target tasks. Compared with the other methods, the proposed strategy can achieve higher mIoU, which verifies its effectiveness in combining the advantages of different source experts in the process of multi-source domain adaptation. In addition, it can deal with the scenario where the source class union includes the target class set, which the other methods do not handle well.
3) The multiview-enhanced knowledge integration module: As described in challenge 2 in Section I, it is very important to fully fuse the feature information existing in different domains and realize efficient knowledge transfer in the process of class asymmetry RSIs domain adaptation. This subsection explores the influence of different choices of the module 𝐻 on the domain adaptation performance, and the results are presented in Table XIII. The four typical deep models, CNN, Transformer, GCN and HGCN, are used for comparison. They all have two layers, and the channel dimension of each layer is consistent with that of the designed module in the spatial view. Except for the designed module, the other four models all perform feature learning based on the concatenation results of multi-domain features in the channel dimension. Compared with CNN, Transformer and GCN, the HGCN model can obtain higher mIoU in the target RSIs. This validates the greater representation ability of the HGCN model, which is obtained by modeling the global many-to-many context relations. The designed multiview-enhanced knowledge integration module can simultaneously perform the high-level relation learning in the views of space and feature, so it can realize better knowledge routing and transfer from different domains to target predictions.
According to the mIoU value, in the three different scenarios, the proposed multiview enhancement strategy brings at least 0.62% improvement." }, { "figure_ref": [ "fig_6" ], "heading": "B. Feature visualization", "publication_ref": [ "b60" ], "table_ref": [], "text": "To intuitively verify the effectiveness of the proposed method in the class asymmetry RSIs domain adaptation and better understand the process of class information supplement and multi-domain knowledge transfer, the visualization analysis is carried out for the deep features generated by different branches at different training iterations. Fig. 9 shows the distribution of features after the t-SNE dimension reduction [59]. Specifically, the same target RSI sample in the class setting 2 of the two-source union equality scenario is used as an example for analysis, and the features used for visualization are all derived from the stage before the classifiers in the source branches or in the module 𝐻. Next, the detailed comparison and analysis are given from three different domain adaptation stages.\n1) 1000 iterations: As shown in the first column, in the initial training stage, each source branch maps the same target RSI into its own class space. Therefore, the features produced by the source 1 expert are limited to the first four classes, while those produced by the source 2 expert are limited to the last four classes. Such a mapping is bound to contain a lot of errors, and the ability of the module 𝐻 learned from these results is also relatively poor. It can be seen that, there is a serious class bias problem in the feature distribution map of the module 𝐻.\n2) 10000 iterations: With the progress of domain adaptation training, the proposed cross-domain mixing strategy gradually comes into play. As we can see, each source branch is supplemented with the initially unavailable class information, and thus it can more accurately map the target RSI sample into the full class space. At the same time, the module 𝐻 can utilize the more accurate pseudo-labels for knowledge integration and transfer, and the class bias problem in the feature maps of different branches could be solved effectively.\n3) 20000 iterations: The whole model has been fully trained, and each source branch has a better ability to identify all classes. Moreover, compared with the first two columns, the separability and discriminability of the features generated by source branches are significantly improved. However, the mapping results of different source branches are still different, indicating that they have respective emphasis on different feature information. Therefore, the model 𝐻 can draw complementary advantages from different domains, making the features belonging to the same class more clustered and the features belonging to different classes more separated.\nFrom the variation of feature distribution with the process of domain adaptation training, it can be seen that the proposed cross-domain mixing strategy can complete the class information supplement, and achieve the domain adaptation of each source-target pair. Moreoever, the designed module 𝐻 can effectively realize the knowledge transfer and class asymmetry multi-source adaptation based on the full utilization of different source features." }, { "figure_ref": [ "fig_7", "fig_8" ], "heading": "C. 
Hyperparameters discussion", "publication_ref": [], "table_ref": [], "text": "In this section, the influence of three important hyperparameters on segmentation results is analyzed, to explore the sensitivity of the proposed method. Firstly, the optimal combination of class mixing ratio and region mixing ratio in the proposed cross-domain mixing strategy is explored, and the results are shown in Fig. 10. As we can see, when one ratio is too small, increasing the other ratio can lead to a significant improvement. However, when one ratio is large enough, endlessly increasing the other ratio results in a slight decline in performance. The purpose of constructing the mixed samples and labels is to supplement the feature information of other sources to a certain source branch and realize the domain adaptation of each source-target pair. Therefore, the appropriate mixing ratios can better play the advantages of the proposed strategy. Obviously, the best combination is \"50% class + 40% region\".\nThe channel dimension of the intermediate features 𝑁 𝑓 determines the richness of information in the process of knowledge transfer from multiple domains to target predictions. Specifically, the value of 𝑁 𝑓 directly affects the structure and size of the constructed hypergraphes. Fig. 11 shows the influence of 𝑁 𝑓 on the segmentation results of target RSIs. On the whole, the curves corresponding to the three different scenarios show a trend of first rising and then stabilizing or even slightly declining. Therefore, considering the performance and cost simultaneously, setting 𝑁 𝑓 to 64 is a better choice for the experimental settings in this paper. Last but no least, the influence of the weight coefficients of different losses on the domain adaptation performance is analysed. By observing the performance when 𝛼 and 𝛽 are set to different values, the contribution of different components to the obtained results can be analyzed. When one of 𝛼 or 𝛽 is set to 0.5, the segmentation results of target RSIs are significantly worse, which indicates that the proposed collaborative learning method and the multi-domain knowledge integration module are very important for the proposed method to achieve excellent performance. When one of A or B is set to 2, the performance is suboptimal, while setting both A and B to 1 can lead to the optimal performance. This indicates that, it is better to treat L 𝑠𝑠𝑙 and L 𝑀 𝑠𝑠𝑙 equally, because they respectively achieve the adaptation of each source-target pair and the knowledge aggregation of multiple domains, which together contribute to the significant improvement in the performance of class asymmetry RSIs domain adaptation with multiple sources." }, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "Class symmetry is an ideal assumption that the existing UDA methods of RSIs generally follow, but it is actually difficult to be satisfied in practical situations. Therefore, this paper proposes a novel class asymmetry RSIs domain adaptation method with multiple sources, to further improve the segmentation performance of target RSIs. Firstly, a multi-branch segmentation network is built to conduct supervised learning of different source RSIs separately, which can effectively avoid the interference of domain discrepancies while learning the basic and rich source knowledge. 
Then, the labels and RSIs samples are mixed simultaneously between different branches to supplement each source with the class information it does not originally have, and the collaborative learning among multiple branches is used to further promote the domain adaptation performance of each source to the target RSIs. Next, the different advantages of source branches are combined for generating the final target pseudo-labels, which provides the self-supervised information for multi-source RSIs domain adaptation in the equality or inclusion scenario. Finally, the knowledge aggregation module performs the multi-domain knowledge routing and transfer simultaneously from the views of feature and space, to achieve the better performance of multi-source RSIs domain adaptation. The three scenarios and six class settings are established with the widely used airborne and spaceborne RSIs, where the experimental results show that the proposed method can achieve effective multi-source RSIs domain adaptation in the case of class asymmetry, and its segmentation performance in target RSIs is significantly better than the existing relevant methods.\nAlthough the proposed method preliminarily explores the problem of RSIs domain adaptation with multiple class asymmetry sources, it is still limited to the transfer and adaptation of one target RSI domain. In future work, the domain generalization, meta-learning and other related techniques will be introduced, to improve the adaptability of the proposed method in multiple target RSIs domains." } ]
In the existing unsupervised domain adaptation (UDA) methods for remote sensing images (RSIs) semantic segmentation, class symmetry is an widely followed ideal assumption, where the source and target RSIs have exactly the same class space. In practice, however, it is often very difficult to find a source RSI with exactly the same classes as the target RSI. More commonly, there are multiple source RSIs available. And there is always an intersection or inclusion relationship between the class spaces of each source-target pair, which can be referred to as class asymmetry. Obviously, implementing the domain adaptation learning of target RSIs by utilizing multiple sources with asymmetric classes can better meet the practical requirements and has more application value. To this end, a novel class asymmetry RSIs domain adaptation method with multiple sources is proposed in this paper, which consists of four key components. Firstly, a multibranch segmentation network is built to learn an expert for each source RSI. Secondly, a novel collaborative learning method with the cross-domain mixing strategy is proposed, to supplement the class information for each source while achieving the domain adaptation of each source-target pair. Thirdly, a pseudo-label generation strategy is proposed to effectively combine strengths of different experts, which can be flexibly applied to two cases where the source class union is equal to or includes the target class set. Fourthly, a multiview-enhanced knowledge integration module is developed for the high-level knowledge routing and transfer from multiple domains to target predictions. The experimental results of six different class settings on airborne and spaceborne RSIs show that, the proposed method can effectively perform the multi-source domain adaptation in the case of class asymmetry, and the obtained segmentation performance of target RSIs is significantly better than the existing relevant methods.
Integrating Multiple Sources Knowledge for Class Asymmetry Domain Adaptation Segmentation of Remote Sensing Images
[ { "figure_caption": "Fig. 1 :1Fig. 1: The existing UDA methods of RSIs designed with (a) the class symmetry assumption cannot be effectively extended to the cases of (b) inclusion and (c) intersection between the source and target RSIs because of the asymmetry between class sets. Our work focuses on implementing class asymmetry domain adaptation segmentation of RSIs with multiple sources, in which the source class union (d) is equal to, or (e) includes the target class set.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Multi-source pseudo-label generation strategy. The pixels contained in the red box that correspond to the lowquality pseudo-labels at the boundary region are discarded.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "1 and 𝐸 2 , to ensure that the features from different branches have the same dimension 𝑁 ℎ × 𝑁 𝑤 × 𝑁 𝑓 . Then, the features of different branches are directly concatenated in the channel dimension. Next, the highlevel relation learning based on HGCN is carried out from two different views of space and feature. The former refers to reshaping the obtained feature map into the (𝑁 ℎ ×𝑁 𝑤 )×(𝑁 𝑓 ×3) matrix X 1 , where 𝑁 ℎ ×𝑁 𝑤 corresponds to the number of pixels in the spatial dimension, and 𝑁 𝑓 × 3 represents the richness of feature channels. Consequently, the constructed hypergraph contains 𝑁 ℎ × 𝑁 𝑤 vertices, each of which has the feature vector of 𝑁 𝑓 × 3 dimension. In this case, the hyperedges mainly describe the global contextual relation in the spatial dimension of feature maps. And so on, based the matrix X 2 of 𝑁 𝑓 × (𝑁 ℎ × 𝑁 𝑤 × 3), the latter will produce a hypergraph with 𝑁 𝑓 nodes, each of which possessing the 𝑁 ℎ × 𝑁 𝑤 × 3 feature vectors. The hyperedges model the high-level relation between different feature channels. Finally, the outputs of hypergraphs with different views are concatenated and the target predictions are generated through a linear classification layer. Obviously, implementing the high-level relation learning from the views of space and feature can more fully model the knowledge routing relation between different branches and target predictions, and effectively improve the effect of multisource domain adaptation in the case of class asymmetry.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: Segmentation maps of different methods in the twosource union equality scenario of airborne RSIs. (a) RSIs. (b) Ground truth. (c) PCEL in class setting 3. (d) MECKA in class setting 2. (e) MECKA in class setting 3. (f) MS-CADA in class setting 2. (g) MS-CADA in class setting 3.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 :7Fig. 7: Segmentation maps of different methods in the threesource union equality scenario of spaceborne RSIs. (a) RSIs. (b) Ground truth. (c) PCEL in class setting 2. (d) MECKA in class setting 1. (e) MECKA in class setting 2. (f) MS-CADA in class setting 1. (g) MS-CADA in class setting 2.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 :8Fig. 8: Segmentation maps of different methods in the twosource union inclusion scenario of airborne RSIs. (a) RSIs. (b) Ground truth. (c) PCEL. (d) MECKA. 
(e) MS-CADA.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 :9Fig. 9: Visualization of the deep features generated by different branches during RSIs domain adaptation.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 :10Fig. 10: Influence of different combinations of mixing ratios on segmentation results. (a) class setting 2 in the two-source union equality scenario. (b) class setting 1 in the three-source union equality scenario. (c) class setting 1 in the two-source union inclusion scenario.", "figure_data": "", "figure_id": "fig_7", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Fig. 11 :11Fig. 11: Influence of the channel dimension of the intermediate features 𝑁 𝑓 on segmentation results.", "figure_data": "", "figure_id": "fig_8", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 MS-CADA for class asymmetry domain adaptation segmentation of RSIs with two sources Require: 𝑋 𝑆 1 , 𝑌 𝑆 1 , 𝑋 𝑆 2 , 𝑌 𝑆 2 : labeled source RSIs samples Require: 𝑋 𝑇 : unlabeled target RSIs samples Require: 𝑀, 𝑀 : student model and teacher model 1: randomly initialize 𝑀 and 𝑀 2: while not done do Sample batch (𝑥 𝑆 1 , 𝑦 𝑆 1 , 𝑥 𝑆 2 , 𝑦 𝑆 2 ) from (𝑋 𝑆 1 , 𝑌 𝑆 1 , 𝑋 𝑆 2 , 𝑌 𝑆 2 ) for (𝑥 𝑆 1 , 𝑦 𝑆 1 , 𝑥 𝑆 2 , 𝑦 𝑆 2 , 𝑥 𝑇 ) do", "figure_data": "3:4:Sample batch 𝑥 𝑇 from 𝑋 𝑇5:6:", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Details of different RSIs datasets.", "figure_data": "DatasetsSubsetsTypesCoverageResolution BandsISPRSVHAirborne1.38 𝑘𝑚 20.09 mIRRGRGBISPRSPDAirborne3.42 𝑘𝑚 20.05 mIRRGRGBIRLoveDARural UrbanSpaceborne 536.15 𝑘𝑚 20.3 mRGBBLUTile 1, Tile 2 Tile 3, Tile 4Spaceborne150 𝑘𝑚 20.8 mRGB", "figure_id": "tab_2", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Different class settings in the two-source union equality scenario of airborne RSIs (PD1 + PD2 → VH).", "figure_data": "ClassSetting 1 PD1 PD2 VH PD1 PD2 VH PD1 PD2 VH Setting 2 Setting 3Impervious surfaceBuildingLow vegetationTreeCarCluttercontains 398 samples for domain adaptation training and 344samples for evaluation, while the source PD dataset contains3456 samples for training", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Different class settings in the three-source union equality scenario of spaceborne RSIs (BLU1 + BLU2 + BLU3 → Urban).", "figure_data": "ClassSetting 1 BLU1 BLU2 BLU3 Urban BLU1 BLU2 BLU3 Urban Setting 2BackgroundBuildingVegetationWaterAgriculturalRoad", "figure_id": "tab_4", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Different class settings in the two-source union inclusion scenario of airborne RSIs (PD1 + PD2 → Partial VH). Each sample is cropped to the size of 1024 × 1024. Similar to the scenario of airborne RSIs, the two three-source union equality scenarios are established, as listed in Table. III.", "figure_data": "ClassSetup 1 PD1 PD2 Partial VHImpervious surfaceBuildingLow vegetationTreeCarClutterdifferent source domains, each of which contains 196 samplesfor domain adaptation training.", "figure_id": "tab_5", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "Segmentation results of the class setting 1 of the two-source union equality scenario. 
IoU per class is listed from column 3 to 8.", "figure_data": "TypeMethodImp. surf.Build.Low veg.Tree Car Clu. mIoU mF1Li's64.22 88.06 55.64 75.60 27.77 29.22 56.75 69.74CombinedDAFormer 67.75 82.15 50.40 61.03 60.00 15.73 56.18 69.33sourceHRDA 72.39 83.78 50.26 66.46 50.23 44.48 61.27 73.08PCEL71.12 87.62 54.89 66.62 47.07 45.78 62.18 73.91Separated multiple sourcesUMMA 55.63 59.80 46.41 48.38 40.30 8.83 43.23 56.89 DCTN 76.84 87.65 49.52 44.36 38.22 7.71 50.72 63.95 He's 72.45 79.15 43.50 62.94 44.35 10.41 52.13 65.05 MECKA 82.36 87.56 53.81 54.46 48.14 7.95 55.71 67.32MS-CADA 77.39 89.47 56.45 67.34 51.94 47.06 64.94 76.26", "figure_id": "tab_6", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "Segmentation results of the class setting 2 of the two-source union equality scenario. IoU per class is listed from column 3 to 8. .17 40.59 55.20 37.30 10.23 42.04 55.91 DCTN 79.42 87.54 50.28 44.96 38.57 7.66 51.41 64.54 He's 78.07 85.17 52.81 53.07 48.00 17.58 55.78 67.88 MECKA 81.54 86.27 53.28 56.20 52.00 15.88 57.53 69.96 MS-CADA 80.25 89.68 55.02 66.24 55.59 75.20 70.33 81.39", "figure_data": "TypeMethodImp. surf.Build.Low veg.Tree Car Clu. mIoU mF1Li's78.98 88.39 62.26 74.57 9.83 24.00 56.34 69.13CombinedDAFormer 69.28 84.88 51.16 62.50 59.97 40.75 61.42 75.20sourceHRDA 69.62 88.94 54.44 70.34 57.52 59.06 66.65 78.17PCEL72.98 88.64 53.82 71.91 54.06 60.41 66.97 78.32SeparatedUMMA 52.73 56multiplesources", "figure_id": "tab_7", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "Segmentation results of the class setting 3 of the two-source union equality scenario. IoU per class is listed from column 3 to 8. .75 55.77 65.38 58.99 40.34 65.20 77.83 DAFormer 79.04 89.71 51.79 61.83 65.94 39.00 64.55 77.18 HRDA 81.25 88.94 57.29 78.70 58.34 44.98 68.25 80.03 PCEL 79.03 89.80 59.23 66.84 58.88 60.04 68.97 80.54 Separated multiple sources UMMA 68.37 70.19 47.26 56.71 45.63 16.38 50.76 64.82 DCTN 79.31 88.12 53.85 65.29 62.85 15.05 60.75 73.26 He's 79.58 86.16 55.79 62.48 56.46 36.28 62.79 76.05 MECKA 79.51 88.65 54.72 67.81 64.75 50.26 67.62 79.13 MS-CADA 83.73 90.08 60.15 68.40 59.89 70.93 72.20 83.36", "figure_data": "TypeMethodImp. surf.Build.Low veg.Tree Car Clu. mIoU mF1DAFormer (PD1)64.13 87.36 27.63 30.48 64.53 1.52 45.94 57.14Single sourceDAFormer (PD2)82.56 89.81 55.83 65.16 64.45 47.34 67.53 79.71PCEL (PD1) 76.97 88.28 56.74 75.73 65.21 8.16 61.85 72.56PCEL (PD2) 81.51 89.55 59.40 72.32 57.96 59.45 70.03 81.07Li's81.95 88Combinedsource", "figure_id": "tab_8", "figure_label": "VII", "figure_type": "table" }, { "figure_caption": "Segmentation results of the class setting 1 of the three-source union equality scenario. IoU per class is listed from column 3 to 8. Type Method Back. Build. Veg. Water Agri. Road mIoU mF1 Combined source Li's 27.37 47.97 18.79 10.69 21.12 42.79 28.12 42.75 DAFormer 32.49 44.65 18.44 14.93 11.26 42.09 27.31 41.23 HRDA 27.16 50.24 18.90 44.20 20.04 45.40 34.32 49.75 PCEL 29.90 48.10 17.35 48.77 26.08 40.95 35.19 50.23 Separated multiple sources UMMA 19.55 30.98 13.13 13.76 2.86 20.51 16.80 29.37 DCTN 25.96 43.54 16.95 19.31 14.30 35.20 25.88 39.04 He's 24.30 47.75 12.31 46.67 16.60 34.26 30.32 44.69 MECKA 29.83 50.21 18.82 49.89 15.19 44.61 34.76 49.86 MS-CADA 30.82 52.76 23.48 43.29 22.36 46.70 36.57 51.69", "figure_data": "", "figure_id": "tab_9", "figure_label": "VIII", "figure_type": "table" }, { "figure_caption": "Segmentation results of the class setting 2 of the three-source union equality scenario. 
IoU per class is listed from column 3 to 8. DAFormer 31.20 46.20 18.88 42.80 18.10 49.79 34.50 49.93 HRDA 29.20 52.93 18.73 45.73 23.25 52.59 37.07 52.28 PCEL 30.17 46.30 20.06 54.18 29.75 46.52 37.83 53.06 Separated multiple sources UMMA 18.70 32.22 13.71 27.60 5.04 31.89 21.53 35.65 DCTN 26.52 44.68 18.38 27.91 18.66 42.22 29.73 34.92 He's 21.03 48.71 17.48 42.38 14.02 49.32 32.16 37.47 MECKA 23.40 50.40 18.88 54.68 21.55 50.04 36.49 51.59 MS-CADA 31.82 48.34 20.95 58.04 26.23 54.26 39.94 55.36", "figure_data": "", "figure_id": "tab_10", "figure_label": "IX", "figure_type": "table" }, { "figure_caption": "are discarded. In other words, the eight UDA methods actually still perform", "figure_data": "", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Segmentation results of the two-source union inclusion scenario. IoU per class is listed from column 3 to 7. .57 54.71 63.54 19.38 58.79 67.78 DAFormer 73.58 86.38 53.76 59.68 58.05 66.29 78.91 HRDA 75.52 90.40 60.66 76.30 49.42 70.46 81.85 PCEL 71.50 88.77 57.28 74.35 59.17 70.21 81.62 Separated multiple sources UMMA 63.26 68.12 27.02 56.91 22.92 47.65 60.13 DCTN 67.61 80.73 47.75 58.36 42.71 59.43 70.55 He's 79.97 85.91 50.03 44.04 57.94 63.58 75.84 MECKA 81.91 88.27 53.40 52.05 62.45 67.62 79.76 MS-CADA 82.25 90.90 57.54 65.67 62.85 71.84 83.37", "figure_data": "TypeMethodImp. surf.Build.Low veg.Tree Car mIoU mF1Li's68.75 87Combinedsource", "figure_id": "tab_12", "figure_label": "X", "figure_type": "table" }, { "figure_caption": "Ablation studies of the proposed cross-domain mixing strategy (mIoU, %).", "figure_data": "ScenarioClass settingBaseline Class-level Region-level OursTwo-source union equalitySetting 1 42.79 Setting 2 41.55 Setting 3 51.4963.85 69.80 71.3664.13 68.94 71.8764.94 70.33 72.20Three-sourceSetting 1 21.3835.5135.7936.57union equalitySetting 2 27.5639.0439.3339.94Two-source union inclusionSetting 1 49.8771.0670.9571.84", "figure_id": "tab_13", "figure_label": "XI", "figure_type": "table" }, { "figure_caption": "Ablation studies of the proposed multi-source pseudo-label generation strategy (mIoU, %).", "figure_data": "ScenarioClass settingBest expert Summation Ensemble OursTwo-source union equalitySetting 1 Setting 2 Setting 363.54 68.87 70.8364.06 69.73 71.7464.33 64.94 69.61 70.33 71.79 72.20Three-sourceSetting 135.0235.6335.86 36.57union equalitySetting 238.7539.2739.43 39.94", "figure_id": "tab_14", "figure_label": "XII", "figure_type": "table" }, { "figure_caption": "Ablation studies of the designed multiviewenhanced knowledge integration module (mIoU, %). summed at the element level, and then the pseudo-labels are obtained through the softmax activation. The Ensemble method comes from the research of Li et al.[41], and actually performs the selective average calculation on different results. Through comparison, it is found that the self-supervised training with only the results of one source expert will produce the suboptimal performance. 
Obviously, different experts will focus on different feature representation when performing supervised learning on different sources.", "figure_data": "ScenarioClass settingCNN Transformer GCN HGCN OursTwo-source union equalitySetting 1 62.36 Setting 2 68.14 Setting 3 69.9063.39 68.67 70.5563.16 64.10 64.94 68.72 69.26 70.33 70.64 71.58 72.20Three-sourceSetting 1 35.1835.8235.07 35.75 36.57union equalitySetting 2 37.5237.4938.03 38.96 39.94Two-source union inclusionSetting 1 69.8770.4770.32 71.05 71.84experts are firstly", "figure_id": "tab_15", "figure_label": "XIII", "figure_type": "table" }, { "figure_caption": "Influence of different combinations of loss weights on segmentation results (mIoU, %).", "figure_data": "ScenarioClass setting𝛼 = 1, 𝛽 = 1𝛼 = 0.5, 𝛽 = 1𝛼 = 1, 𝛽 = 0.5𝛼 = 2, 𝛽 = 1𝛼 = 1, 𝛽 = 2Two-source union equalitySetting 2 70.33 65.6665.00 68.76 69.08Three-source union equalitySetting 1 36.57 34.1933.96 35.73 36.04Two-source union inclusionSetting 1 71.84 66.8366.96 70.46 71.13", "figure_id": "tab_16", "figure_label": "XIV", "figure_type": "table" } ]
Kuiliang Gao; Anzhu Yu; Xiong You; Wenyue Guo; Ke Li; Ningbo Huang
[ { "authors": "P Dias; Y Tian; S Newsam; A Tsaris; J Hinkle; D Lunga", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b0", "title": "Model assumptions and data characteristics: Impacts on domain adaptation in building segmentation", "year": "2022" }, { "authors": "L Zhang; L Zhang", "journal": "IEEE Geoscience and Remote Sensing Magazine", "ref_id": "b1", "title": "Artificial intelligence for remote sensing data analysis: A review of challenges and opportunities", "year": "2022" }, { "authors": "R Qin; T Liu", "journal": "Remote Sensing", "ref_id": "b2", "title": "A review of landcover classification with very-high resolution remotely sensed optical images mdash; analysis unit, model scalability and transferability", "year": "2022" }, { "authors": "S Zhou; Y Feng; S Li; D Zheng; F Fang; Y Liu; B Wan", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b3", "title": "Dsmassisted unsupervised domain adaptive network for semantic segmentation of remote sensing imagery", "year": "2023" }, { "authors": "H Ni; Q Liu; H Guan; H Tang; J Chanussot", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b4", "title": "Category-level assignment for cross-domain semantic segmentation in remote sensing images", "year": "2023" }, { "authors": "M Luo; S Ji", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b5", "title": "Cross-spatiotemporal land-cover classification from vhr remote sensing images with deep learning based domain adaptation", "year": "2022" }, { "authors": "S Ji; D Wang; M Luo", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b6", "title": "Generative adversarial network-based full-space domain adaptation for land cover classification from multiplesource remote sensing images", "year": "2021" }, { "authors": "L Wu; M Lu; L Fang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b7", "title": "Deep covariance alignment for domain adaptive remote sensing image segmentation", "year": "2022" }, { "authors": "M Xu; M Wu; K Chen; C Zhang; J Guo", "journal": "Remote Sensing", "ref_id": "b8", "title": "The eyes of the gods: A survey of unsupervised domain adaptation methods based on remote sensing data", "year": "2022" }, { "authors": "C Liang; B Cheng; B Xiao; Y Dong; J Chen", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b9", "title": "Multilevel heterogeneous domain adaptation method for remote sensing image segmentation", "year": "2023" }, { "authors": "K Gao; B Liu; X Yu; A Yu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b10", "title": "Unsupervised meta learning with multiview constraints for hyperspectral image small sample set classification", "year": "2022" }, { "authors": "K Gao; B Liu; X Yu; J Qin; P Zhang; X Tan", "journal": "Remote Sensing", "ref_id": "b11", "title": "Deep relation network for hyperspectral image few-shot classification", "year": "2020-03" }, { "authors": "O Tasar; S L Happy; Y Tarabalka; P Alliez", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b12", "title": "Colormapgan: Unsupervised domain adaptation for semantic segmentation using color mapping generative adversarial networks", "year": "2020" }, { "authors": "M Sokolov; C Henry; J Storie; C Storie; V Alhassan; M Turgeon-Pelchat", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b13", "title": "High-resolution semantically 
consistent image-to-image translation", "year": "2023" }, { "authors": "Z Yang; P Guo; H Gao; X Chen", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b14", "title": "Depth-assisted residualgan for cross-domain aerial images semantic segmentation", "year": "2023" }, { "authors": "Y Cai; Y Yang; Y Shang; Z Shen; J Yin", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b15", "title": "Dasrsnet: Multitask domain adaptation for super-resolution-aided semantic segmentation of remote sensing images", "year": "2023" }, { "authors": "J Zhu; Y Guo; G Sun; L Yang; M Deng; J Chen", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b16", "title": "Unsupervised domain adaptation semantic segmentation of high-resolution remote sensing imagery with invariant domain-level prototype memory", "year": "2023" }, { "authors": "X Ma; X Zhang; Z Wang; M.-O Pun", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b17", "title": "Unsupervised domain adaptation augmented by mutually boosted attention for semantic segmentation of vhr remote sensing images", "year": "2023" }, { "authors": "J Chen; J Zhu; Y Guo; G Sun; Y Zhang; M Deng", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b18", "title": "Unsupervised domain adaptation for semantic segmentation of high-resolution remote sensing imagery driven by category-certainty attention", "year": "2022" }, { "authors": "L Bai; S Du; X Zhang; H Wang; B Liu; S Ouyang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b19", "title": "Domain adaptation for remote sensing image semantic segmentation: An integrated approach of contrastive learning and adversarial learning", "year": "2022" }, { "authors": "A Zheng; M Wang; C Li; J Tang; B Luo", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b20", "title": "Entropy guided adversarial domain adaptation for aerial image semantic segmentation", "year": "2022" }, { "authors": "A Ma; C Zheng; J Wang; Y Zhong", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b21", "title": "Domain adaptive landcover classification via local consistency and global diversity", "year": "2023" }, { "authors": "A Tarvainen; H Valpola", "journal": "", "ref_id": "b22", "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "year": "2017" }, { "authors": "L Yan; B Fan; S Xiang; C Pan", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b23", "title": "Cmt: Cross mean teacher unsupervised domain adaptation for vhr image semantic segmentation", "year": "2022" }, { "authors": "J Wang; A Ma; Y Zhong; Z Zheng; L Zhang", "journal": "Remote Sensing of Environment", "ref_id": "b24", "title": "Cross-sensor domain adaptation for high spatial resolution urban land-cover mapping: From airborne to spaceborne imagery", "year": "2022" }, { "authors": "J Chen; P He; J Zhu; Y Guo; G Sun; M Deng; H Li", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b25", "title": "Memorycontrastive unsupervised domain adaptation for building extraction of high-resolution remote sensing imagery", "year": "2023" }, { "authors": "T Lasloum; H Alhichri; Y Bazi; N Alajlan", "journal": "Remote Sensing", "ref_id": "b26", "title": "Ssdan: Multi-source semi-supervised domain adaptation network for remote sensing scene classification", "year": "2021-09" }, { "authors": "O 
Tasar; A Giros; Y Tarabalka; P Alliez; S Clerc", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b27", "title": "Daugnet: Unsupervised, multisource, multitarget, and life-long domain adaptation for semantic segmentation of satellite images", "year": "2021" }, { "authors": "A Elshamli; G W Taylor; S Areibi", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b28", "title": "Multisource domain adaptation for remote sensing using deep neural networks", "year": "2020" }, { "authors": "X Wang; Y Li; C Lin; Y Liu; S Geng", "journal": "Journal of Applied Remote Sensing", "ref_id": "b29", "title": "Building damage detection based on multi-source adversarial domain adaptation", "year": "2021" }, { "authors": "S Zhao; B Li; X Yue; Y Gu; P Xu; R Hu; H Chai; K Keutzer", "journal": "", "ref_id": "b30", "title": "Multi-source domain adaptation for semantic segmentation", "year": "2019" }, { "authors": "H Wallach; H Larochelle; A Beygelzimer", "journal": "", "ref_id": "b31", "title": "33rd Conference on Neural Information Processing Systems (NeurIPS)", "year": "2019" }, { "authors": "J He; X Jia; S Chen; J Liu", "journal": "ELECTR NETWORK", "ref_id": "b32", "title": "Multi-source domain adaptation with collaborative learning for semantic segmentation", "year": "2021" }, { "authors": "M M Al Rahhal; Y Bazi; T Abdullah; M L Mekhalfi; H Alhichri; M Zuair", "journal": "Remote Sensing", "ref_id": "b33", "title": "Learning a multi-branch neural network from multiple sources for knowledge adaptation in remote sensing imagery", "year": "2018" }, { "authors": "M M Al Rahhal; Y Bazi; H Al-Hwiti; H Alhichri; N Alajlan", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b34", "title": "Adversarial learning for knowledge adaptation from multiple remote sensing sources", "year": "2021" }, { "authors": "Y Wang; L Feng; W Sun; Z Zhang; H Zhang; G Yang; X Meng", "journal": "GISCIENCE & REMOTE SENSING", "ref_id": "b35", "title": "Exploring the potential of multi-source unsupervised domain adaptation in crop mapping using sentinel-2 images", "year": "2022" }, { "authors": "R Xu; Z Chen; W Zuo; J Yan; L Lin", "journal": "IEEE Comp Soc", "ref_id": "b36", "title": "Deep cocktail network: Multi-source unsupervised domain adaptation with category shift", "year": "2018" }, { "authors": "Z Ding; M Shao; Y Fu", "journal": "IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS", "ref_id": "b37", "title": "Incomplete multisource transfer learning", "year": "2018" }, { "authors": "X Lu; T Gong; X Zheng", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b38", "title": "Multisource compensation network for remote sensing cross-domain scene classification", "year": "2020" }, { "authors": "T Gong; X Zheng; X Lu", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b39", "title": "Cross-domain scene classification by integrating multiple incomplete sources", "year": "2021" }, { "authors": "B H Ngo; J H Kim; S J Park; S I Cho", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b40", "title": "Collaboration between multiple experts for knowledge adaptation on multiple remote sensing sources", "year": "2022" }, { "authors": "Z Li; R Togo; T Ogawa; M Haseyama", "journal": "", "ref_id": "b41", "title": "Union-set multi-source model adaptation for semantic segmentation", "year": "2022" }, { "authors": "Z Cao; M Long; J Wang; M I Jordan", "journal": "IEEE Comp Soc", "ref_id": "b42", "title": 
"Partial transfer learning with selective adversarial networks", "year": "2018" }, { "authors": "Z Cao; K You; Z Zhang; J Wang; M Long", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b43", "title": "From big to small: Adaptive learning to partial-set domains", "year": "2023" }, { "authors": "A Sahoo; R Panda; R Feris; K Saenko; A Das", "journal": "", "ref_id": "b44", "title": "Select, label, and mix: Learning discriminative invariant feature representations for partial domain adaptation", "year": "" }, { "authors": "; Grainger; Wacv", "journal": "", "ref_id": "b45", "title": "IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)", "year": "2023-01" }, { "authors": "J Zheng; Y Zhao; W Wu; M Chen; W Li; H Fu", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b46", "title": "Partial domain adaptation for scene classification from remote sensing imagery", "year": "2023" }, { "authors": "V Olsson; W Tranheden; J Pinto; L Svensson", "journal": "ELECTR NETWORK", "ref_id": "b47", "title": "Classmix: Segmentation-based data augmentation for semi-supervised learning", "year": "2021-09" }, { "authors": "S Yun; D Han; S Chun; S J Oh; Y Yoo; J Choe", "journal": "", "ref_id": "b48", "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "year": "2019-10-27" }, { "authors": "J Lim; S Yun; S Park; J Y Choi", "journal": "", "ref_id": "b49", "title": "Hypergraph-induced semantic tuplet loss for deep metric learning", "year": "2022" }, { "authors": "Q He; X Sun; W Diao; Z Yan; F Yao; K Fu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b50", "title": "Multimodal remote sensing image segmentation with intuition-inspired hypergraph modeling", "year": "2023" }, { "authors": "Z Ma; Z Jiang; H Zhang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b51", "title": "Hyperspectral image classification using feature fusion hypergraph convolution neural network", "year": "2022" }, { "authors": "S Bai; F Zhang; P H S Torr", "journal": "PATTERN RECOGNITION", "ref_id": "b52", "title": "Hypergraph convolution and hypergraph attention", "year": "2021" }, { "authors": "Y Gao; Y Feng; S Ji; R Ji", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b53", "title": "Hgnn+: General hypergraph neural networks", "year": "2023" }, { "authors": "J Wang; Z Zheng; A Ma; X Lu; Y Zhong", "journal": "CoRR", "ref_id": "b54", "title": "Loveda: A remote sensing land-cover dataset for domain adaptive semantic segmentation", "year": "2021" }, { "authors": "L Ding; D Lin; S Lin; J Zhang; X Cui; Y Wang; H Tang; L Bruzzone", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b55", "title": "Looking outside the window: Wide-context transformer for the semantic segmentation of high-resolution remote sensing images", "year": "2022" }, { "authors": "K Gao; A Yu; X You; C Qiu; B Liu", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b56", "title": "Prototype and contextenhanced learning for unsupervised domain adaptation semantic segmentation of remote sensing images", "year": "2023" }, { "authors": "L Hoyer; D Dai; L Van Gool", "journal": "", "ref_id": "b57", "title": "Daformer: Improving network architectures and training strategies for domain-adaptive semantic segmentation", "year": "2022" }, { "authors": "W Li; H Gao; Y Su; B M Momanyi", "journal": "Remote Sensing", "ref_id": "b58", 
"title": "Unsupervised domain adaptation for remote sensing semantic segmentation with transformer", "year": "2022-10" }, { "authors": "L Hoyer; D Dai; L Van Gool", "journal": "", "ref_id": "b59", "title": "Hrda: Context-aware highresolution domain-adaptive semantic segmentation", "year": "2022" }, { "authors": "L Van Der Maaten; G Hinton", "journal": "Journal of Machine Learning Research", "ref_id": "b60", "title": "Viualizing data using t-sne", "year": "2008" } ]
[ { "formula_coordinates": [ 4, 326.72, 464.62, 236.32, 17.67 ], "formula_id": "formula_0", "formula_text": "L 𝑆 𝑖 𝑠𝑢 𝑝 = -E ( 𝑥 𝑆 𝑖 𝑗 ,𝑦 𝑆 𝑖 𝑗 )∼𝐷 𝑆 𝑖 ∑︁ 𝑦 𝑆 𝑖 𝑗 𝑙𝑜𝑔(𝐹 𝑖 (𝐸 𝑖 (𝐺 (𝑥 𝑆 𝑖 𝑗 )))),(1)" }, { "formula_coordinates": [ 5, 107.54, 635.84, 192.48, 29.32 ], "formula_id": "formula_1", "formula_text": "𝑥 𝑆 1 𝑚𝑖 𝑥 = M 𝑥 𝑆 1 + (1 -M) 𝑥 𝑇 𝑦 𝑆 1 𝑚𝑖 𝑥 = M 𝑦 𝑆 1 + (1 -M) 𝑦 𝑇 𝑆 2 ,(2)" }, { "formula_coordinates": [ 5, 350.73, 657.2, 212.31, 40.67 ], "formula_id": "formula_2", "formula_text": "𝑤 𝑆 1 = M 1 + (1 -M) 𝑤 𝑡 𝑤 𝑡 = ℎ •𝑤 𝑙=1 [max 𝑐 𝐹 1 (𝐸 1 (𝐺 (𝑥 𝑇 ))) (𝑙,𝑐) > 𝜏] ℎ • 𝑤 ,(3)" }, { "formula_coordinates": [ 6, 59.73, 262.99, 236.43, 26.75 ], "formula_id": "formula_3", "formula_text": "L 𝑆 1 𝑠𝑠𝑙 = -E ( 𝑥 𝑆 1 𝑚𝑖 𝑥 ,𝑦 𝑆 1 𝑚𝑖 𝑥 ) ∑︁ 𝑤 𝑆 1 𝑦 𝑆 1 𝑚𝑖 𝑥 𝑙𝑜𝑔(𝐹 1 (𝐸 1 (𝐺 (𝑥 𝑆 1 𝑚𝑖 𝑥 )))),(4" }, { "formula_coordinates": [ 6, 348.93, 328.43, 214.11, 24.91 ], "formula_id": "formula_4", "formula_text": "ŷ𝑇 = max(𝐹 1 (𝐸 1 (𝐺 (𝑥 𝑇 ))), 𝐹 2 (𝐸 2 (𝐺 (𝑥 𝑇 )))) 𝑦 𝑇 = 𝑀 𝑐 ( ŷ𝑇 ),(5)" }, { "formula_coordinates": [ 6, 383.17, 428.58, 179.87, 23.56 ], "formula_id": "formula_5", "formula_text": "𝑀 𝑐 = ŷ𝑇 , if ŷ𝑇 ∈ C 𝑇 ; 255, if ŷ𝑇 ∉ C 𝑇 .(6)" }, { "formula_coordinates": [ 7, 387.38, 76.07, 175.65, 9.2 ], "formula_id": "formula_6", "formula_text": "𝑒 𝑖 = {𝑣 𝑖 , ∀𝑣 𝑗 ∈ N 𝐾 (𝑣 𝑖 )},(7)" }, { "formula_coordinates": [ 7, 387.5, 150.18, 175.54, 23.56 ], "formula_id": "formula_7", "formula_text": "ℎ(𝑣, 𝑒) = 1, if 𝑣 ∈ 𝑒; 0, if 𝑣 ∉ 𝑒,(8)" }, { "formula_coordinates": [ 7, 365.47, 234.74, 193.7, 12.61 ], "formula_id": "formula_8", "formula_text": "Y = 𝜎(D -1/2 𝑣 HWD -1 𝑒 H D -1/2 𝑣 X𝛩), (9" }, { "formula_coordinates": [ 7, 559.16, 238.34, 3.87, 8.64 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 7, 365.13, 430.27, 197.91, 11.05 ], "formula_id": "formula_10", "formula_text": "L 𝑀 𝑠𝑠𝑙 = -E ( ȳ𝑇 ,𝑦 𝑇 ) trans(𝑦 𝑇 )𝑙𝑜𝑔( ȳ𝑇 ),(10)" }, { "formula_coordinates": [ 7, 381.54, 545.54, 181.5, 10.39 ], "formula_id": "formula_11", "formula_text": "L = L 𝑠𝑢 𝑝 + 𝛼L 𝑠𝑠𝑙 + 𝛽L 𝑀 𝑠𝑠𝑙 ,(11)" } ]
10.18653/v1/2021.emnlp-main.552
2023-05-17
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b5", "b11", "b16", "b22", "b14", "b10", "b15", "b10", "b7", "b5", "b6", "b15", "b10", "b25" ], "table_ref": [], "text": "The performance of sequence-to-sequence (Seq2Seq) neural models for abstractive summarization (Lewis et al., 2020;Nallapati et al., 2016;See et al., 2017;Zhang et al., 2020) has improved significantly. The dominant training paradigm of Seq2Seq models is that of Maximum Likelihood Estimation (MLE), maximizing the likelihood of each output given the gold history of target sequences during training. However, since the models generate the sequence in an auto-regressive manner at inference, the errors made in the previous steps accumulate in the next step thereby affecting the entire sequence. This phenomenon is known as exposure bias (Bengio et al., 2015;Ranzato et al., 2016). To mitigate this problem, re-ranking systems (Liu et al., 2021;Liu and Liu, 2021;Liu et al., 2022;Ravaut et al., 2022) have recently been introduced to generate a more appropriate summary.\nThere are two training objectives for applying reranking to abstractive summarization: contrastive learning and multi-task learning. The contrastive learning-based approaches deploy margin-based losses. SimCLS (Liu and Liu, 2021) and BRIO-Ctr (Liu et al., 2022) train a large pre-trained model, such as RoBERTa (Liu et al., 2019) and BART (Lewis et al., 2020), to align the candidate summaries according to the quality. The authors use the ROUGE (Lin, 2004) score as a quality measurement. The multi-task learning-based approaches combine at least two losses that perform different roles. SummaReranker (Ravaut et al., 2022) minimizes the average over the binary cross-entropy losses optimized for each evaluation metric. In addition, BRIO-Mul (Liu et al., 2022) demonstrates that the combination of the contrastive and crossentropy loss works complementarily and has better performance.\nIn this paper, we analyze the three main drawbacks of existing re-ranking approaches. First, we argue that current methods focus excessively on ranking summaries in terms of lexical overlap. Inspired by Zhong et al. (2020), we conduct a preliminary study, by sorting candidate summaries in descending order based on the ROUGE score and then defining z as the rank index of the highest BERTScore summary. As demonstrated in Fig. 1, we can observe that there is a large gap between lexical overlap and semantic similarity. In a majority (52%) of cases z > 1. Second, despite more than half of the candidates with the same ROUGE score, previous studies do not accurately reflect quality measurements as they are trained with different ranks even if they have equal scores (Appendix F). Lastly, for the first time, we find summaries with high lexical overlap but low semantic similarity as false positives (Appendix G). They can be noises during training phrase, which are not considered substantially in the prior works.\nTo address these issues, we propose a novel training method in which a re-ranker balances lexical and semantic quality. Based on a two-stage framework, our model, named BalSum, is trained on multi-task learning. We directly reflect the ROUGE score difference on a ranking loss to preserve the lexical quality as much as possible. Then, we use a contrastive loss with instance weighting to identify summaries whose meanings are close to the document. 
Specifically, we define novel false positives (semantic mistakes) and present a strategy to reduce their influence in ranking. Experiments on CNN/DM and XSum datasets demonstrate the effectiveness of our method. Notably, BalSum achieves an 89.67 BERTScore on CNN/DM, reaching a new state-of-the-art performance." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Our method follows the two-stage framework. Given a source document D, a function g is to generate a pool of candidate summaries C = {C 1 , C 2 , ..., C m } at the first stage:\nC ← g(D)(1)\nThen, a function f is to assign scores to each candidate and select the best summary C * with the highest score at the second stage:\nC * = argmax C i ∈C {f (C i , D)} (2)\nOur goal is to train the ranking model f that identifies the correct summary from the outputs of the generation model g." }, { "figure_ref": [ "fig_1" ], "heading": "Model Architecture", "publication_ref": [ "b7", "b4" ], "table_ref": [], "text": "We start with a bi-encoder using RoBERTa-base (Liu et al., 2019) as a back-bone neural network.\nInspired by Khattab and Zaharia (2020), we aim to capture rich semantic units at the sentence level.\nAs shown in Fig. 2, we insert the [CLS] tokens in front of K sentences in the document D to let them encode into multi-vector representations. Then, we compute the individual score Score k which is modeled as an inner-product:\nScore k = sim(E 1 (C i ), E k (D))(3)\nwhere E 1 (C i ) and E k (D)(k = 1, 2, ..., K) mean the representations of [CLS] tokens for candidate summary C i and document D, respectively. We calculate the similarity score f (C i , D):\nf (Ci, D) = K k=1 Score k K j=1 Scorej Score k = K k=1 w k • Score k (4)\nIn Appendix E, we show that our model can capture more information from documents at the sentence level." }, { "figure_ref": [], "heading": "Training objective", "publication_ref": [ "b10" ], "table_ref": [], "text": "Ranking Loss The core idea is that the higher the quality of the candidate summary, the closer to the document. We introduce a ranking loss to f (•): where S is the reference summary and λ is the hyper-parameter.1 Here, cost(C i , S) = 1 -M (C i , S) is the margin, and M is the automatic evaluation metric. We define it as ROUGE. We use the same metric in previous work (Liu and Liu, 2021;Liu et al., 2022), but the difference is that our loss directly reflects the quality measure during training. In other words, the quality was not properly reflected before because different margin ((j -i) * λ) was assigned even if the candidate summaries had the same ROUGE score.\nL rank = i j>i max(0, f (Cj, D) -f (Ci, D) +(-cost(Ci, S) + cost(Cj, S)) * λ)(5)" }, { "figure_ref": [ "fig_2" ], "heading": "Contrastive Loss with Instance Weighting", "publication_ref": [ "b26", "b2" ], "table_ref": [], "text": "The construction of positive and negative pairs is the critical point in constrative learning. Therefore, we consider generated summaries from the same document as positive samples and irrelevant summaries from other documents as negative samples. Thus, we design a set of candidate summaries C in Eq. 
1 as positive and a set of randomly sampled summaries N as negative.2 To identify summaries whose meanings are close to the document, we introduce a contrastive learning objective with instance weighting:\nL ctr = 1 |C| C i ∈C -log α C i × e f (C i ,D) e f (C i ,D) + s i ∈N e f (s i ,D) (6)\nWe newly define summaries that have a high lexical matching but a low semantic similarity as false positives. Inspired by Zhou et al. (2022), we design an instance weighting method to reduce the influence of false positives. We produce the weights for positives using the SimCSE (Gao et al., 2021) which is the state-of-the-art model for the sentence representation task:\nα C i = 0, sim(C i , S) < φ 1, sim(C i , S) ≥ φ (7)\nwhere φ is a hyper-parameter of the instance weighting threshold, and sim(•) is the cosine similarity score evaluated by the SimCSE model. Finally, as shown in Fig. 3, we combine the ranking (Eq. 5) and contrastive (Eq. 6) losses:\nL = γ 1 L rank + γ 2 L ctr (8\n)\nwhere γ is the scale factor of each loss and we find the optimal values (γ 1 = 10, γ 2 = 0.1) in Appendix H." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b12" ], "table_ref": [], "text": "We experiment on two datasets, whose statistics are shown in Appendix C. CNN/DailyMail (Hermann et al., 2015) is the most commonly used summarization dataset which contains articles from the CNN and DailyMail newspapers.\nXSum (Narayan et al., 2018) is a one-sentence summary dataset from the British Broadcasting Corporation (BBC) for the years 2010 -2017." }, { "figure_ref": [], "heading": "Training Details", "publication_ref": [ "b19", "b7" ], "table_ref": [], "text": "We use diverse beam search (Vijayakumar et al., 2016) to generate 16 candidate summaries. We start from pre-trained checkpoints of RoBERTabase (Liu et al., 2019). We train BalSum for five epochs. It takes 33 hours on CNN/DM and 22 hours on XSum on a single RTX 3090 GPU. More details are described in Appendix D." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b10" ], "table_ref": [ "tab_0", "tab_1" ], "text": "In terms of the two-stage framework, we compare our results with SimCLS (Liu and Liu, 2021), Sum-maReranker (Ravaut et al., 2022), and BRIO (Liu et al., 2022). We apply BalSum on top of each base model which is BART or PEGASUS.\nThe results on CNN/DM are described in Table 1 our model can estimate the meaning of summaries without seriously degrading the lexical aspect. We argue that this is because BalSum decreases more false positives than other ranking models. We provide fine-grained analyses for this result and present a case study in Sec.3.4.\nIn addition, we apply our method on XSum, as shown in Table 2. Though we use a different strategy to generate the validation and test data3 , our method improves a base PEGASUS with a small margin. We believe the one of reasons is that XSum is restricted to capturing diverse semantic units because it consists of much shorter summaries (onesentence) than CNN/DM. Model BS@1 BS@3 BS@5 R@1 R@3 R@5\nOracle " }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [ "tab_3", "tab_4", "tab_4", "tab_9" ], "text": "Weighting Threshold φ Intuitively, the larger the weighting threshold, the lower false positives. We train our model with different instance weighting thresholds from 0.7 to 0.9. 
In Table 3, the highest threshold (φ = 0.9) shows the best performance and it rises largely to 0.3 BERTScore compared to when not applied. We also find that increasing the threshold leads to performance improvement. Therefore, we demonstrate that false positives can be considered noise in training.\nRanking Evaluation Regardless of the number of candidates, an ideal ranking model should yield oracle results considering diverse aspects of summarization. We conduct an experiment to measure the qualities by selecting the top-k summaries after aligning the candidates through different models. As shown in Table 4, we can see that our model shows consistent performance in both evaluation metrics depending on the k (about ±0.06 BERTScore, ±0.34 ROUGE average score). Compared to SimCLS and BRIO-Ctr, the second block in Table 4 demonstrates that BalSum captures semantic similarity best while maintaining the intermediate level from the perspective of lexical overlap quality. Moreover, we find that BalSum has the lowest drop ratio of BERTScore (-1.52%) from the perfect ranking \"oracle\" scores. We also investigate whether all ranked summaries by models satisfy both lexical and semantic quality. We evaluate models using F 1 which measures the cases where the higher-ranked summary Case Study on CNN/DM Table 10 presents an intriguing pattern we observed when comparing the results of BRIO-Ctr and BalSum, which demonstrate that our model helps to capture precise details from documents. While BRIO-Ctr contains some irrelevant information in the summaries (shown as highlighted text in blue), BalSum selects the summaries where the last sentence is more consistent with the reference (shown as highlighted text in yellow). Furthermore, despite the comparable ROUGE scores of both models, we note that Bal-Sum's selected summaries consistently have higher BERTScore than those of BRIO-Ctr." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose BalSum which aims to evaluate summaries by considering the balance between lexical and semantic quality. To achieve this, we perform a multi-task learning, which aligns summaries according to their lexical overlap qualities and identifies whether they are similar to the document. In addition, to our best knowledge, our method is the first attempt to present a new perspective of false positives (semantic mistakes) in ranking and creating the model to reduce their in-fluence. Our experimental results and fine-grained analyses validate that our model achieves consistent improvements over competitive baselines." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b13", "b21" ], "table_ref": [ "tab_1" ], "text": "Candidate Summaries Dependency While we mainly investigate a training objective to select the best summary among a set of candidates, we find that our model has been dependent on those obtained from the generation model. Recently, several works have been presented to improve language generation. For example, Narayan et al. (2022) and Xu et al. (2022) improve decoding methods to generate diverse outputs. It will be beneficial when applying our method to these approaches.\nOne-sentence Summary Our approach can fail to capture the information from an extremely short summary. Since Table 2 shows that our approach has a smaller improvement than CNN/DM, we plan to investigate that our model aims to capture more detailed features from an input text." 
}, { "figure_ref": [], "heading": "E Effect of Model Architecture", "publication_ref": [], "table_ref": [], "text": "We train BalSum with different model architectures and evaluate them on CNN/DM test set. For a fair comparison, we use only ranking loss in Eq. 5.\nTable 7 shows that taking the weighted sum of scores in Eq. 4 leads to better performance than others.\nModel R-1 R-2 R-L Table 7: Ablation studies of different model architectures on CNN/DM. R-1/2/L denotes ROUGE-1/2/L.\n[CLS]: using the first [CLS] token. Avg.: averaging all scores in Eq. 3." }, { "figure_ref": [], "heading": "F Identical Candidates Scores", "publication_ref": [], "table_ref": [], "text": "As shown in Table 8, we note cases that have at least two identical R-avg on CNN/DM and XSum are a majority. Since we count after removing the same summaries in the pool, we ensure that it is the number of summaries with different content but the same R-avg score. " }, { "figure_ref": [], "heading": "G Examples for False Positive", "publication_ref": [], "table_ref": [], "text": "Table. 9 shows that #2 has 2.33 R-avg lower than #1, but 3.67 BERTScore higher. Also, when evaluated qualitatively, it can be seen that #2 is closer to the gold summary. While the sentence in green is discarded, the sentence in red is included in the reference summary." }, { "figure_ref": [ "fig_4" ], "heading": "H Negative Size and Scale Factors", "publication_ref": [], "table_ref": [], "text": "We have tuned the scale factor γ 1 of ranking loss and γ 2 of contrastive loss in Eq. 8 with different sizes of negative samples. As shown in Fig. 5, suitable scale factors (γ 1 = 10, γ 2 = 0.1) can improve more than others. Though size = 4 and size = 12 showed similar performance, we set the negative size to 4 due to memory efficiency. " }, { "figure_ref": [ "fig_5" ], "heading": "I Number of Candidate Summaries", "publication_ref": [], "table_ref": [], "text": "We set the size of the candidate summary pool to 16, as it is close to the maximum which could fit in a standard 24GB RAM GPU. Fig. 6 reports that our method is robust to the number of candidates. 28.39 91.17 Didier Drogba made his second Chelsea debut in pre-season friendly at Werder Bremen. The 36-yearold was a half-time substitute as Chelsea lost 3-0. Drogbba was captain after John Terry left the pitch in the second half. The Ivorian striker missed a penalty and failed to make an impact on the game. Reference ---ross barkley has been repeatedly linked with a move to manchester city. former city star gareth barry says his everton team-mate is too young. the toffees face manchester united in the premier league on sunday.\nBRIO-Ctr 47.19 27.59 29.21 88.85 everton team-mate gareth barry has advised ross barkley against moving to manchester city. the 21-year-old has been linked with a move away from goodison park. barry believes it is too early for the youngster to decide on his future. the veteran midfielder spent four seasons at the etihad before joining everton. Reference ----\nthe gruesome vision was captured in australia and uploaded last week. the lizard swings its neck back and forth in a bid to swallow the rabbit. goannas can unhinge their lower jaws allowing them to swallow large prey.\nBRIO-Ctr 51.16 23.81 27.91 88.75 two-metre long reptile is filmed balancing on top of a power pole to swallow rabbit. the lizard swings its neck back and forth as it battles to swallow its catch. 
it finishes the feat in under a minute, and the video was uploaded to youtube last week.\nBalSum 46.91 20.25 34.57 90.72 two-metre long lizard filmed battling to swallow rabbit in under one minute. video shows lizard balance at the top of a power pole while swallowing its prey. goannas can unhinge their lower jaws when feeding, allowing them to eat over-sized prey. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Soohyeong Kim and anonymous reviewers for valuable feedback and helpful suggestions. This work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(*MSIT) (No.2018R1A5A7059549 , No.2020R1A2C1014037) and supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(*MSIT) (No.2020-0-01373). *Ministry of Science and ICT" }, { "figure_ref": [], "heading": "A Distribution of z on XSum", "publication_ref": [], "table_ref": [], "text": "The result in Fig. 4 shows that there is a majority (53%) of cases where z > 1. " }, { "figure_ref": [], "heading": "B Evaluation Metrics", "publication_ref": [ "b6", "b24", "b17" ], "table_ref": [], "text": "We examine our model with two evaluation metrics.\n• ROUGE (Lin, 2004) is a widely used metric for summarization evaluation. We use the standard ROUGE Perl package 4 for evluation.\n• BERTScore (Zhang et al., 2019) is a semantic similarity metric for multiple tasks. We use the public bert-score package 5 shared by the authors. Training Settings We train our models for 5 epochs using an Adafactor optimizer (Shazeer and Stern, 2018). The batch size is 4 and the learning rate is 2e-3. During training, we randomly select 4 negative samples for each input document. We evaluate the model every 1000 steps on the validation set." }, { "figure_ref": [], "heading": "C Datasets Statistics", "publication_ref": [], "table_ref": [], "text": "" } ]
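To make the scoring function (Eq. 3-4) and the multi-task objective (Eq. 5, 6 and 8) of Section 2 above easier to follow, here is a minimal PyTorch-style sketch. It is an illustrative reconstruction, not the authors' released code (see the linked repository for that): the sentence-by-sentence document encoding, the lambda_margin value, and the way the instance weight is applied multiplicatively to each positive term are simplifying assumptions made for this example.

```python
# Illustrative BalSum-style scorer and losses (not the authors' implementation).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
enc = AutoModel.from_pretrained("roberta-base")

def doc_representations(doc_sents):
    # The paper inserts a [CLS] token before each of the K document sentences and
    # encodes once; encoding each sentence separately and taking its first token
    # is a simplification of Fig. 2 used here for brevity.
    reps = []
    for s in doc_sents:
        out = enc(**tok(s, return_tensors="pt", truncation=True)).last_hidden_state
        reps.append(out[0, 0])                 # sentence-level vector E_k(D)
    return torch.stack(reps)                   # shape (K, hidden)

def summary_representation(summary):
    out = enc(**tok(summary, return_tensors="pt", truncation=True)).last_hidden_state
    return out[0, 0]                           # E_1(C_i)

def balsum_score(summary, doc_reps):
    scores = doc_reps @ summary_representation(summary)   # Score_k, Eq. 3
    weights = scores / scores.sum()                        # w_k, Eq. 4
    return (weights * scores).sum()                        # f(C_i, D)

def ranking_loss(cand_scores, rouge, lambda_margin=0.01):
    # Candidates assumed sorted by decreasing ROUGE; margin follows Eq. 5,
    # i.e. (M(C_i) - M(C_j)) * lambda.  lambda_margin is an assumed value.
    loss = 0.0
    for i in range(len(cand_scores)):
        for j in range(i + 1, len(cand_scores)):
            margin = (rouge[i] - rouge[j]) * lambda_margin
            loss = loss + F.relu(cand_scores[j] - cand_scores[i] + margin)
    return loss

def weighted_contrastive_loss(pos_scores, neg_scores, alphas):
    # Eq. 6 with the 0/1 instance weight applied to each positive's loss term:
    # alpha = 0 simply drops a false positive from training.
    neg_term = torch.exp(torch.stack(neg_scores)).sum()
    loss = 0.0
    for s, a in zip(pos_scores, alphas):
        loss = loss - a * torch.log(torch.exp(s) / (torch.exp(s) + neg_term))
    return loss / len(pos_scores)
```

Following Eq. 8 and Appendix H, the two terms would then be combined as `total_loss = 10 * ranking_loss + 0.1 * weighted_contrastive_loss`, with each weight alpha obtained by thresholding the SimCSE similarity between a candidate and the reference at phi (0.9 in the best-performing setting reported in Table 3).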
An important problem of the sequence-tosequence neural models widely used in abstractive summarization is exposure bias. To alleviate this problem, re-ranking systems have been applied in recent years. Despite some performance improvements, this approach remains underexplored. Previous works have mostly specified the rank through the ROUGE score and aligned candidate summaries, but there can be quite a large gap between the lexical overlap metric and semantic similarity. In this paper, we propose a novel training method in which a re-ranker balances the lexical and semantic quality. We further newly define false positives in ranking and present a strategy to reduce their influence. Experiments on the CNN/DailyMail and XSum datasets show that our method can estimate the meaning of summaries without seriously degrading the lexical aspect. More specifically, it achieves an 89.67 BERTScore on the CNN/DailyMail dataset, reaching new state-of-the-art performance. Our code is publicly available at https://github.com/ jeewoo1025/BalSum.
Balancing Lexical and Semantic Quality in Abstractive Summarization
[ { "figure_caption": "Figure 1 :1Figure 1: Distribution of z (%) for a base BART model on CNN/DM. Since a BART model generates a pool of 16 diverse beam search candidates, the X-axis ranges from 1 to 16. If z = 1, it means that both ROUGE and BERTscore are high. As z increases, the gap between ROUGE and BERTScore tends to increase. The Y-axis represents the proportion of z in the test set. The distribution for XSum is in Appendix A.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: BalSum model architecture. The model predicts scores for candidate summaries based on the document. The thickness of the red dashed line indicates the magnitude of each score's weight.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Overview of our proposed training objective.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Number of pools with at least two same Ravg (%). A pool consists of 16 diverse beam search candidates generated on different datasets (CNN/DM, XSum) with different base models (PEGASUS, BART). R-avg is the average of ROUGE-1/2/L scores.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: ROUGE-1 on CNN/DM w.r.t scale factors and N negative samples at inference time, with N ∈ {4, 8, 12, 16}.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: ROUGE-1 with different numbers of candidate summaries on CNN/DM. The gray dashed line denotes the performance of a base model (BART).", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": ". BalSum outperforms a base BART model, according to gains of 2.54/1.27/2.63 R-1/2/L. Notably, while it has comparable performances on ROUGE to previous models, it achieves an 89.67 BERTScore, reaching a new state-of-the-art performance. When ranking the candidate summaries, Results on CNN/DM. R-1/2/L are the ROUGE-1/2/L F 1 scores. BS denotes BERTScore. *: results reported in the original papers. ‡: results from our own evaluation script. †: significantly better than the baseline model (BART).", "figure_data": "ModelR-1R-2R-LBSBART*44.1621.2840.90-BART ‡44.0421.0640.8688.12Pegasus*44.1621.5641.30-BRIO-Mul*47.7823.5544.57-BRIO-Mul ‡47.5023.4844.0189.08BRIO-Ctr*47.2822.9344.15-BRIO-Ctr ‡47.0823.0344.0689.03SummaReranker* 47.1622.5543.8787.74SimCLS*46.6722.1543.54-SimCLS ‡46.3422.0743.3088.92BalSum46.58 † 22.33 † 43.49 † 89.67 †ModelR-1R-2R-LBSBART*45.1422.27 37.25 -Pegasus*47.2124.56 39.25 -Pegasus ‡46.8224.44 39.07 91.93BRIO-Mul*49.0725.59 40.40 -BRIO-Mul ‡48.7425.38 40.16 92.60BRIO-Ctr*48.1325.13 39.84 -BRIO-Ctr ‡48.1225.24 39.96 91.72SummaReranker* 48.1224.95 40.00 92.14SimCLS*47.6124.57 39.44 -SimCLS ‡47.3724.49 39.31 91.48BalSum47.17 † 24.23 39.09 91.48", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "BERTScore (noted BS) results with different weighting threshold φ on CNN/DM. \"N/A\": no instance weighting.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Analysis of re-ranking performance on CNN/DM. 
BS and R denote BERTScore and the mean ROUGE F 1 score, respectively. Oracle (R) is ordered by ROUGE scores, while Oracle (BS) is ordered by BERTScore.", "figure_data": "(R)90.7790.4290.1844.85 42.68 41.16Oracle (BS) 91.0690.6690.3843.32 41.46 40.18SimCLS88.9288.8788.8237.24 36.95 36.65BRIO-Ctr89.0388.9388.8538.06 37.55 37.14BalSum89.6789.6089.5437.46 37.08 36.78", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "F 1 score and percentage of false positives on all two datasets. The high F 1 score indicates how well the ranking model estimates both lexical and semantic quality of all candidate summaries in the pool. FP stands for false positives.", "figure_data": "CNNDMXSumModelF1FP(%)F1FP(%)BRIO-Ctr 78.5010.9676.9510.01BalSum78.8410.7376.3210.49has both larger ROUGE and BERTScore than thelower-ranked summary. In addition, we calculatethe percentage of false positives. Following Table5, while BalSum has worse (+0.48% FP, -0.63F 1 ) than BRIO-Ctr on XSum, it has better rankingperformance (-0.23% FP, +0.34 F", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "False positive examples from fine-tuned BART model on CNN/DM. R-avg is the average of ROUGE-1/2/L scores. BS denotes BERTScore. The related sentences in the reference are in bold. wenger will have chat with theo walcott ahead of arsenal clash. walcott was substituted after 55 minutes of england's draw with italy. arsenal boss is wenger is concerned by the winger's confidence. the gunners take on liverpool at the emirates stadium on saturday.BRIO-Ctr 60.61 41.24 46.46 89.93 theo walcott played just 55 minutes of england's 1-1 draw with italy. arsene wenger says he is concerned by the winger's confidence. the arsenal manager will speak with walcott ahead of liverpool clash. walcott could start against liverpool on saturday with alex oxlade-chamberlain out and danny welbeck a doubt. have voiced concerns over diy brain stimulation kits for children. for a few hundred dollars, one can be purchased online from various sites. it promises to help children with math homework and claims to help adhd. professor colleen loo from the black dog institute strongly believes that the equipment poses a danger to amateurs and children. the equipment is currently being used to treat people with speech impediments but is still very much in trial stages. hundred dollars, you can purchase a brain stimulation kit online. experts have voiced concerns over the potential side effects. the kits are being sold online for as little as $ 55 us. one site even advertises how to make your own electrodes using a household sponge. diy brain stimulation kits for their children. the kits are being sold online for as little as $ 55 us. experts are concerned about the potential side effects of the equipment. the devices are used to improve speaking in those with speech problems. the equipment is still relatively new and experimental.", "figure_data": "SystemR-1R-2R-L BSSummaryReference ----BalSum61.5438.2041.7692.36arsenal winger theo walcott struggled for england against italy. arsene wenger says he is concernedby the winger's confidence. walcott was replaced after 55 minutes of england's 1-1 draw in turin. 
thegunners face liverpool on saturday in a top-four clash.Reference ----BRIO-Ctr 40.0 for a few BalSum 16.26 19.20 87.11 36.92 17.19 27.69 89.90 parents are buying", "figure_id": "tab_7", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "advised ross barkley against moving to manchester city. the everton midfielder believes it is too early for the 21-year-old to decide on his future. barry spent four seasons at the etihad before arriving on merseyside. the toffees face manchester united on sunday. councils are urged to draw up maps of the residents who are at risk. essex and gloucestershire have already made 'loneliness maps' experts warn that being lonely can lead to serious health problems.BRIO-Ctr 50.57 28.24 29.89 90.30 two county councils have already implemented 'loneliness maps' to target 'danger zones' being lonely can lead to health problems including dementia and high blood pressure. campaigners say councils should draw up maps of the places where pensioners are most at risk. study by university of kent and campaign to end loneliness recommends maps. should draw up maps of places where pensioners and others are most likely to suffer from social isolation. two county councils, essex and gloucestershire, have already implemented the maps. they allow them to target 'danger zones' of loneliness. being lonely can lead to health problems including dementia and high blood pressure.", "figure_data": "BalSum gareth barry has Reference -46.34 25.0 34.15 91.16 ---BalSum50.027.9143.1891.28campaigners say councils", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Case Study on CNN/DM. R-1/2/L are the ROUGE-1/2/L F 1 scores. BS denotes BERTScore. The related sentences in the reference are in bold.", "figure_data": "", "figure_id": "tab_9", "figure_label": "10", "figure_type": "table" } ]
Jeewoo Sul; Yong Suk Choi
[ { "authors": "Samy Bengio; Oriol Vinyals; Navdeep Jaitly; Noam Shazeer", "journal": "", "ref_id": "b0", "title": "Scheduled sampling for sequence prediction with recurrent neural networks", "year": "2015" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b1", "title": "", "year": "" }, { "authors": "Tianyu Gao; Xingcheng Yao; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "SimCSE: Simple contrastive learning of sentence embeddings", "year": "2021" }, { "authors": "Karl Moritz Hermann; Tomas Kocisky; Edward Grefenstette; Lasse Espeholt; Will Kay; Mustafa Suleyman; Phil Blunsom", "journal": "", "ref_id": "b3", "title": "Teaching machines to read and comprehend", "year": "2015" }, { "authors": "Omar Khattab; Matei Zaharia", "journal": "Association for Computing Machinery", "ref_id": "b4", "title": "Colbert: Efficient and effective passage search via contextualized late interaction over bert", "year": "2020" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b7", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Yixin Liu; Zi-Yi Dou; Pengfei Liu", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "RefSum: Refactoring neural summarization", "year": "2021" }, { "authors": "Yixin Liu; Pengfei Liu", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "SimCLS: A simple framework for contrastive learning of abstractive summarization", "year": "2021" }, { "authors": "Yixin Liu; Pengfei Liu; Dragomir Radev; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "BRIO: Bringing order to abstractive summarization", "year": "2022" }, { "authors": "Ramesh Nallapati; Bowen Zhou; Çaglar Cicero Dos Santos; Bing Guçlçehre; Xiang", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond", "year": "2016" }, { "authors": "Shashi Narayan; Shay B Cohen; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization", "year": "2018" }, { "authors": "Shashi Narayan; Gonçalo Simões; Yao Zhao; Joshua Maynez; Dipanjan Das; Michael Collins; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "A well-composed text is half done! 
composition sampling for diverse conditional generation", "year": "2022" }, { "authors": "Aurelio Marc; Sumit Ranzato; Michael Chopra; Wojciech Auli; Zaremba", "journal": "", "ref_id": "b14", "title": "Sequence level training with recurrent neural networks", "year": "2016-05-02" }, { "authors": "Shafiq Mathieu Ravaut; Nancy Joty; Chen", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "SummaReranker: A multi-task mixture-of-experts re-ranking framework for abstractive summarization", "year": "2022" }, { "authors": "Abigail See; Peter J Liu; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Get to the point: Summarization with pointergenerator networks", "year": "2017" }, { "authors": "Noam Shazeer; Mitchell Stern", "journal": "", "ref_id": "b17", "title": "Adafactor: Adaptive learning rates with sublinear memory cost", "year": "2018" }, { "authors": " Pmlr", "journal": "", "ref_id": "b18", "title": "", "year": "" }, { "authors": "K Ashwin; Michael Vijayakumar; Ramprasaath R Cogswell; Qing Selvaraju; Stefan Sun; David J Lee; Dhruv Crandall; Batra", "journal": "", "ref_id": "b19", "title": "Diverse beam search: Decoding diverse solutions from neural sequence models", "year": "2016" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Jiacheng Xu; Siddhartha Jonnalagadda; Greg Durrett", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Massive-scale decoding for text generation using lattices", "year": "2022" }, { "authors": "Jingqing Zhang; Yao Zhao; Mohammad Saleh; Peter Liu", "journal": "", "ref_id": "b22", "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b23", "title": "", "year": "" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b24", "title": "Bertscore: Evaluating text generation with BERT", "year": "2019" }, { "authors": "Ming Zhong; Pengfei Liu; Yiran Chen; Danqing Wang; Xipeng Qiu; Xuanjing Huang", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Extractive summarization as text matching", "year": "2020" }, { "authors": "Kun Zhou; Beichen Zhang; Xin Zhao; Ji-Rong Wen", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Debiased contrastive learning of unsupervised sentence representations", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 155.87, 722.74, 133.27, 10.18 ], "formula_id": "formula_0", "formula_text": "C ← g(D)(1)" }, { "formula_coordinates": [ 2, 357.97, 269.09, 166.44, 21.75 ], "formula_id": "formula_1", "formula_text": "C * = argmax C i ∈C {f (C i , D)} (2)" }, { "formula_coordinates": [ 2, 344.02, 495.63, 180.39, 10.77 ], "formula_id": "formula_2", "formula_text": "Score k = sim(E 1 (C i ), E k (D))(3)" }, { "formula_coordinates": [ 2, 307.21, 577.36, 217.2, 35.87 ], "formula_id": "formula_3", "formula_text": "f (Ci, D) = K k=1 Score k K j=1 Scorej Score k = K k=1 w k • Score k (4)" }, { "formula_coordinates": [ 2, 327.11, 743.89, 197.3, 31.37 ], "formula_id": "formula_4", "formula_text": "L rank = i j>i max(0, f (Cj, D) -f (Ci, D) +(-cost(Ci, S) + cost(Cj, S)) * λ)(5)" }, { "formula_coordinates": [ 3, 71.92, 608.05, 217.22, 43.15 ], "formula_id": "formula_5", "formula_text": "L ctr = 1 |C| C i ∈C -log α C i × e f (C i ,D) e f (C i ,D) + s i ∈N e f (s i ,D) (6)" }, { "formula_coordinates": [ 3, 349.32, 113.33, 175.09, 26.89 ], "formula_id": "formula_6", "formula_text": "α C i = 0, sim(C i , S) < φ 1, sim(C i , S) ≥ φ (7)" }, { "formula_coordinates": [ 3, 364.62, 232.01, 155.55, 10.77 ], "formula_id": "formula_7", "formula_text": "L = γ 1 L rank + γ 2 L ctr (8" }, { "formula_coordinates": [ 3, 520.17, 232.36, 4.24, 9.46 ], "formula_id": "formula_8", "formula_text": ")" } ]
10.1080/08874417.2015.11645796
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b3" ], "table_ref": [], "text": "In a sample of points which constitute the dataset, there is always a set of data points / observations which observes to have a peculiar behaviour. Such kind of data points / observations are considered \"Outliers\" [1], which are observed to have a strong different properties than the usual data points in a population.\nThere are various definitions for an outlier, we will take the definition of Grubbs [2]: \" An outlying observation, or outlier, is one that appears to deviate markedly from other memebers of the sample which it occurs.\"\nWhereas, the Barnett & Lewis [3] quoted as follows:\n\"An observation (or subset of observations) which appears to be inconsistent with the remainder of that set of data.\"\nThese outliers might be the cause of the following events:  Wrong data entry by the humans  Instrumental errors due to mismanagement / instrumental errors.\n Faulty processing / presenting the original data.\n Sampling errors due to the heterogenous sources of the data.\nAs we know that the data is fuel to the machine learning models. Due to the presence of these outliers, may result in the bad data which ultimately builds bad models. For whatever reason, identifying and detecting these sets of data points has occupied its own essentiality in handling the bias-variance trade-off of the machine learning models. Such kinds of methods is formally known as \"Outlier Detection (OD) methods\" [4].\nOD is considered as the primary data mining task because presence of outlier will ultimately decreases the efficacy of the machine learning / deep learning models. This will ultimately hampers the importance of the automating things using machine learning / deep learning models. There are various ways one can perform these OD using various statistical / machine learning methods. In general, the OD is performed on the datasets without prior knowledge, based on the distance measures. This says that the farther the distance, the probability of becoming an outlier is increases. OD is very important in tasks which can be succefully implemented by constant monitoring which is suitable to detect the sudden change in the data which may result in fraudulent events. For example, in the case of loan application processing or credit card transactions, the fraudulent events / transactions are considered to have an abrupt behaviour which ultimately deviates the regular original behaviour of the authentic transactions. If the OD systems are deployed then it will be taken care of these systems so that the payment is not slipped down to the hands of fraudsters. Similarly in the case of health care, where the failure of heart is an abnormal activity which may a save a life of a human, if identified at the proper time. Hence, this motivated me to work on these two set of sector related problems.\nIn real-world applications, the data is keep on being generated continuously by various heterogeneous sources. This kind of generating data at a large variety of data is known as streaming data [4]. For example, the user clicks on the google search engine, transactions done on ATM, credit cards and debit cards, and heart rate monitoring of the patients etc., Such kind of data has to be processed in a sequential and incremental way which is done in record-byrecord basis/sliding window approaches. This is quite different from batch processing, where is data is static and fixed. 
Hence, in batch processing, a model that is built offline and tested offline works fine. This mechanism does not carry over to streaming applications, which are challenging and require incremental modelling. The following aspects need to be considered while building models to handle streaming data:
 Latency needs to be handled.
 Individual records / micro-batches need to be processed efficiently.
The major highlights of this work are as follows:
 We studied the effectiveness of various Outlier Detection algorithms for streaming data.
 We designed a methodology for an online, incremental outlier detection framework for solving various finance and health care problems in a streaming environment.
 We compared the performance of incremental model training with the offline model building strategy to prove the efficacy of the proposed methodology.
The rest of the work is organized as follows: Section 2 focuses on the literature review, and Section 3 covers the various techniques employed. Section 4 discusses the proposed methodology. Section 5 gives the dataset description and environmental setup. Section 6 discusses the results, and Section 7 presents the conclusion." }, { "figure_ref": [], "heading": "Literature Survey", "publication_ref": [ "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27" ], "table_ref": [], "text": "In this section, we describe the literature review conducted on OD methods to emphasize their importance in solving various problems in the financial and health care domains.
The American Heritage dictionary [5] defines fraud as follows: \"Fraud is defined as a deception deliberately practiced in order to secure unfair or unlawful gain.\" Fraud and financial crime occur when a person or entity loses money or property, or when it is misused in an illicit manner, with the perpetrator intending to profit from it. Davia et al. [6] state: \"Fraud always involves one or more persons who, with intent, act secretly to deprive another of something of value, for their own enrichment.\"
Wells et al. [7] place more emphasis on 'deception', which is the linchpin of fraud. Occupational fraud and abuse may be defined as: \"The use of one's occupation for personal enrichment through the deliberate misuse or misapplication of the employing organization's resources or assets.\" This definition encompasses a wide variety of conduct.
Rikhardsson and Yigitbasioglu [8] primarily focused on Business Intelligence (BI), which encompasses technology-driven processes such as data warehousing, data mining, data analytics, information management, and applications. These five components synergize to provide users with the knowledge necessary for making informed business decisions (p. 38). BI tools possess significant capabilities and can be seamlessly integrated across various functions, enabling individuals within an organization with analytical skills to delve deep and extract specific insights to address particular business problems [9,10]. While Business Intelligence has gained substantial attention from both practitioners and academics in the field of fraud analytics [11][12][13], efforts have been made to explore its effectiveness, particularly in the realm of fraud detection and prevention. 
Currently, prevailing solutions for identifying data anomalies in fraud analytics rely heavily on manual and inflexible data mining techniques [14]. Business Intelligence techniques have been leveraged to enhance data mining solutions, primarily in the context of Fraud Detection and Prevention (FFD), by integrating human-centered evaluations [15][16]. Notably, Ngai et al. [17], followed by Pilon et al. [18] and Tang and Karim [19], emphasize in their respective works that a critical challenge in analytics-based fraud detection research is the need to focus attention on exploring Business Intelligence tools. Data visualization, described as an interactive representation of data, plays a crucial role in facilitating methodological inquiries to acquire knowledge about specific phenomena [20].
Based on Knorr's global fraud report [21], drawn from the survey conducted between 2013 and 2014, it is evident that the incidence of fraud rose across all metrics over the course of 12 months. In total, approximately 70% of companies reported experiencing at least one form of fraud in the previous year, an increase from 61% in the previous survey. Moreover, individual businesses faced a wider range of threats on average compared to those in 2012. Notably, the economic impact of these fraudulent activities escalated significantly, with costs rising from an average of 0.9% to 1.4% of revenue. Additionally, one in ten businesses reported costs exceeding 4% of their revenue.
The primary objective of the works [22][23][24][25] is to employ visual analytics to extract knowledge from raw data across various domains. These works present an opportunity for scholarship in visual analytics and also delve into the emerging field of fraud detection, both of which are discussed in interdisciplinary literature. The authors emphasize the importance of exploring and analyzing data to identify trends and patterns before investigators can predict fraudulent activities and formulate hypotheses. Visual analytics stands out as a valuable subject since it represents data graphically and provides in-depth insights by filtering out irrelevant observations, which cannot be achieved through manual, human-centered approaches [26,27]. With this introduction to visual analytics, we can now focus on a more comprehensive exploration of how it aids in detecting CCF." }, { "figure_ref": [], "heading": "Overview of the Methods employed", "publication_ref": [], "table_ref": [], "text": "In this work, we employed the following Outlier Detection (OD) methods:" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "One class support vector machine (OCSVM)", "publication_ref": [], "table_ref": [], "text": "OCSVM is a natural extension of the support vector machine algorithm to the case of unlabelled data. The main distinction between them is that OCSVM is an unsupervised algorithm, whereas SVM is a supervised model. OCSVM learns a boundary around the normal samples and identifies the data points lying outside that border as anomalies. The functionality of OCSVM is depicted in Fig. 1." },
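As a concrete illustration of this step (a hedged sketch, not the authors' implementation), the snippet below fits a one-class SVM on a window of mostly-normal records with scikit-learn and flags newly arriving records that fall outside the learned boundary. The synthetic feature matrices and the chosen nu value are assumptions made purely for the example.

```python
# Minimal OCSVM sketch: fit on (mostly normal) historical records, score new ones.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))     # placeholder for real transaction features
X_new = rng.normal(size=(20, 8))        # newly arriving records to be scored

scaler = StandardScaler().fit(X_train)
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")   # nu ~ expected outlier fraction
ocsvm.fit(scaler.transform(X_train))

labels = ocsvm.predict(scaler.transform(X_new))             # +1 = inlier, -1 = outlier
scores = ocsvm.decision_function(scaler.transform(X_new))   # lower = more anomalous
print(int(np.sum(labels == -1)), "records flagged as outliers")
```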
{ "figure_ref": [], "heading": "Isolation Forest ASD (IForest ASD)", "publication_ref": [], "table_ref": [], "text": "IForest is also an unsupervised model, as there are no pre-defined labels. It isolates observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of that feature. IForest is an outlier ensemble technique and follows the algorithm given below:
1. A random sub-sample of the data is selected and assigned to a binary tree.
2. Branching of the tree starts by selecting a random feature. Branching is then done on a random threshold.
3. If the value of a data point is less than the selected threshold, it goes to the left branch, otherwise to the right. Thus a node is split into left and right branches.
4. This process is continued recursively from step 2 until each data point is completely isolated or until the maximum depth (if defined) is reached.
5. The above steps are repeated to construct random binary trees.
Along with the above, an adaptive sliding window approach is adopted while building the IForest model, which turns it into IForest ASD." }, { "figure_ref": [], "heading": "Local Outlier Factor (LOF)", "publication_ref": [], "table_ref": [], "text": "LOF is an algorithm proposed by Breunig et al. for finding anomalous data points by measuring the local deviation of a given data point with respect to its neighbours. LOF is depicted in Fig. 3. LOF is based on the concept of local density, where locality is given by the k nearest neighbours and their distances are used to estimate the density. By comparing the local density of a point with the local densities of its neighbours, one can identify regions of similar density as well as points that have a substantially lower density than their neighbours. These distances are used to define the so-called reachability distance: reachability_distance_k(A, B) = max{k-distance(B), d(A, B)}.
Fig. 3. Local Outlier Factor (LOF)" }, { "figure_ref": [ "fig_1" ], "heading": "Angle based Outlier Detection (ABOD)", "publication_ref": [], "table_ref": [], "text": "ABOD was proposed by H. Kriegel et al. to overcome the limitations of distance-based comparisons. The working principle of ABOD is depicted in Fig. 4. Comparing distances between data points and classifying them on that basis becomes meaningless as the data dimensionality increases. ABOD instead assesses the variance of the angles between the difference vectors from a data point to the other points.
ABOD works on the following core principles:
 The farther apart the points, the lower the variance of the angles.
 The variance of the angles among points within a cluster differs widely.
 The variance of the angles becomes smaller at the border of a cluster.
 For outliers, the variance becomes very small. " },
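As a rough illustration of the ABOD principle just described (not the exact method of Kriegel et al., which uses a distance-weighted variance), the numpy sketch below scores each point by the variance of the angles between its difference vectors to pairs of other points; a small variance suggests an outlier. The toy data and the unweighted cosine-based variance are simplifying assumptions.

```python
# Toy ABOD-style score: variance of angles between difference vectors (unweighted).
import numpy as np

def abod_scores(X: np.ndarray) -> np.ndarray:
    n = X.shape[0]
    scores = np.empty(n)
    for i in range(n):
        diffs = np.delete(X, i, axis=0) - X[i]        # difference vectors from point i
        norms = np.linalg.norm(diffs, axis=1)
        unit = diffs / norms[:, None]
        cos = unit @ unit.T                           # cosines of angles between pairs
        iu = np.triu_indices(len(diffs), k=1)         # each unordered pair once
        scores[i] = np.var(np.arccos(np.clip(cos[iu], -1.0, 1.0)))
    return scores                                     # small variance -> likely outlier

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, size=(60, 2)), [[8.0, 8.0]]])   # one obvious outlier
print(np.argsort(abod_scores(X))[:3])                 # indices with the smallest variance
```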
{ "figure_ref": [], "heading": "Exact Storm", "publication_ref": [], "table_ref": [], "text": "Exact Storm is an algorithm made suitable for streaming data; it consists of two procedures, namely the stream manager and the query manager.
 The stream manager receives the data streams, which arrive either as single points or as micro-batches.
 After receiving the stream data, the data structure is updated according to the computed summary of the current window. The ISB (Indexed Stream Buffer) is a combination of multiple nodes, where each node is responsible for a different stream object.
 Later, based on the query, the query manager detects and decides which stream objects are outliers." }, { "figure_ref": [ "fig_2" ], "heading": "KitNET", "publication_ref": [], "table_ref": [], "text": "KitNET (Kitsune's online algorithm) is a neural-network-based method for identifying outliers in the population. It is designed to have low complexity. The algorithm is depicted in Fig. 5. It is composed of four different components." }, { "figure_ref": [], "heading": "KNN CAD", "publication_ref": [], "table_ref": [], "text": "Based on the previously processed data, KNN CAD assigns an anomaly score to the current stream object. It then attaches a probabilistic interpretation to this anomaly score, based on the conformal paradigm. The downside of this approach is that it is a univariate model, so the combined effect of multiple features is not considered while assigning the anomaly score." }, { "figure_ref": [], "heading": "Proposed Methodology", "publication_ref": [], "table_ref": [], "text": "The proposed methodology for detecting outliers in stream data is as follows (a simplified code sketch of this loop is given after the dataset description below):
o A sliding window is kept running over the data stream; it covers the group of stream records comprising the window length.
o The window length is a user-defined parameter.
o Sliding window approach: as explained earlier, the window is slid over a group of data streams and moves across the stream at the specified time interval.
o As an example, the first window w1 contains the data streams that arrived between 0-10 seconds, while the second window contains the data streams from 2-12 seconds. This process is continued for all of the data streams.
o The model is built on the first window, which yields model m1. Thereafter the window slides, and m1 is taken as the base model for the second window, which yields model m2.
 Testing: once the model is built in the above phase, it is tested on the test data, which also arrives as a stream of points.
The configuration of the experimental study is as follows: Intel i5 8th generation, octa-core, 2.4 GHz. The Python version used for the experimental study is Python 3.8, and we set up an Anaconda environment to work in Jupyter notebooks. All experiments were run under the same environment." }, { "figure_ref": [], "heading": "Dataset description", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "In this work, we considered the datasets presented in Table 1. " },
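The following is a rough, self-contained sketch of the scenario-2 style loop referenced above: fit a detector on an initial window, score each arriving record, then slide the window and refit. scikit-learn's IsolationForest re-fitted per window is used as a stand-in for the streaming detectors (IForest ASD, Exact Storm, etc.); the window length, slide step, and synthetic stream are illustrative assumptions, not the authors' actual configuration.

```python
# Illustrative sliding-window incremental outlier scoring loop (scenario 2 style).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
stream = rng.normal(size=(2000, 10))      # stand-in for a simulated record-by-record stream
stream[::97] += 6.0                        # inject a few abnormal records

window_len, step = 200, 50                 # assumed values for the example
model = IsolationForest(n_estimators=100, random_state=0).fit(stream[:window_len])

scores = []
for t in range(window_len, len(stream)):
    x_t = stream[t].reshape(1, -1)                     # current record arrives
    scores.append(-model.score_samples(x_t)[0])        # higher value = more anomalous
    if (t - window_len) % step == 0 and t >= window_len + step:
        window = stream[t - window_len:t]              # slide the window and refit
        model = IsolationForest(n_estimators=100, random_state=0).fit(window)

print("mean anomaly score:", float(np.mean(scores)))
print("most suspicious stream positions:", np.argsort(scores)[-5:] + window_len)
```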
{ "figure_ref": [], "heading": "Results & Discussion", "publication_ref": [], "table_ref": [ "tab_1", "tab_1" ], "text": "The evaluation compares the proposed approach against a model that is trained offline and then deployed for testing. Hence, the experimental analysis is conducted under two scenarios.
 Scenario 1: the model is trained offline and tested thereafter.
 Scenario 2: the model is trained incrementally using the sliding window approach. Once deployed in the testing environment, the data points arrive as single records, as explained in the proposed methodology, and the model is tested thereafter.
The results are reported in Table 2, which lists the AUC achieved by all of the models. Across the chosen datasets and algorithms, the Brain-stroke Prediction dataset gives the best results for Scenario 2, which involves training on streaming data, and for most of the datasets Scenario 2 performs better than Scenario 1. This shows that training on online/streaming data performs better than offline training. We also observe that the Brain-stroke Prediction dataset is the most highly imbalanced dataset, so the approach performs particularly well on highly imbalanced data. It also performs well on the Ethereum fraud detection dataset, the second most imbalanced dataset, which likewise gives its best result under Scenario 2.
Overall, the results indicate that the approach works well for highly imbalanced datasets, i.e. datasets with a large majority negative class and a small minority positive class, and that it works best under Scenario 2, where training is done on preprocessed streaming data and testing is done on the streaming/online data." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "In this project, the proposed methodology corresponds to Scenario 2, and it is compared with offline-built models to evaluate the importance of incremental models in a streaming framework. According to the results, Scenario 2 performed well on the highly imbalanced datasets, i.e. those with a majority negative class and a minority positive class. Among all the models, the ensemble strategy IForest ASD performed best in most cases, standing in the top 3 models in almost all of the cases.
In future work, a more robust model should be designed by improving the performance of the individual models. Robustness can be increased by ensembling various models and enhancing the voting-based mechanism so that the advantages of the different models are combined." } ]
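To illustrate how the Scenario 1 vs. Scenario 2 comparison reported in Table 2 can be computed, a small hedged sketch: given ground-truth labels and the anomaly scores collected from an offline model and from the incremental loop, the per-scenario AUC comes from scikit-learn's roc_auc_score. The variable names and the faked incremental scores are placeholders, not the authors' code or results.

```python
# Hedged sketch of the AUC comparison between the two scenarios (placeholder data).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
X_test = rng.normal(size=(500, 10))
y_test = (rng.random(500) < 0.05).astype(int)       # 1 = outlier / positive class
X_test[y_test == 1] += 5.0                           # make positives separable for the demo

offline = IsolationForest(random_state=0).fit(rng.normal(size=(1000, 10)))  # scenario 1
scores_offline = -offline.score_samples(X_test)

# scores_incremental would be collected from the sliding-window loop sketched earlier;
# here it is faked with small noise so the snippet runs end-to-end.
scores_incremental = scores_offline + rng.normal(scale=0.05, size=500)

print("Scenario 1 AUC:", round(roc_auc_score(y_test, scores_offline), 3))
print("Scenario 2 AUC:", round(roc_auc_score(y_test, scores_incremental), 3))
```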
In this paper, we build online models incrementally using online outlier detection algorithms in a streaming environment. We identify a strong need for streaming models to handle streaming data, and the objective of this project is to study and analyze the importance of streaming models applicable in real-world environments. In this work, we build various Outlier Detection (OD) algorithms, viz., One-class Support Vector Machine (OC-SVM), Isolation Forest Adaptive Sliding window approach (IForest ASD), Exact Storm, Angle-based outlier detection (ABOD), Local outlier factor (LOF), KitNet, and KNN CAD. We evaluate the effectiveness and validity of these models on several finance problems, namely credit card fraud detection, churn prediction, and Ethereum fraud prediction. Further, we also analyze the performance of the models on health care prediction problems, namely brain-stroke prediction, diabetes prediction, and heart failure prediction. The results show that the approach performs well on highly imbalanced datasets, i.e., datasets where the negative class is the majority and the positive class is the minority. Among all the models, the ensemble strategy IForest ASD performed best in most cases, ranking among the top three models in almost all of them.
Incremental Outlier Detection Modelling Using Streaming Analytics in Finance & Health Care
[ { "figure_caption": "Fig. 11Fig.1 Functionality of the OCSVM", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Angle based Outlier Detection : ABOD", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. KitNet : Kitsunes algorithm with an ensemble of AE's", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "In the proposed methodology, the data which is available for the incremental model training is considered training data. And thus trained model is tested on the testing data.  The proposed methodology is divided into two different steps. (i) Incremental model building phase, (ii) Testing the model.  Incremental model building: o The training data is simulated into multiple data streams.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "are considered. are seven different datasets used in our approach related to various problem solving in the finance and the health care sector. All of these datasets are available in opensource platforms such as UCI repository [28] and Kaggle [29] repositories. Hence, they are free to download and work on. Description of the datasets", "figure_data": "Dataset#Datapoints#FeaturesNegative: PositiveCredit card Churn prediction dataset10,0001380:20Default of Credit Card Clients Dataset30,0002478:22Auto insurance claims fraud dataset1,0004075:25Ethereum fraud detection dataset9,8415186.3 : 13.7Diabetes prediction dataset9811255 : 45Brain stroke prediction dataset4,9811195: 5Heart failure prediction dataset3686174.8 : 25.2", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "AUC results obtained by various OD models", "figure_data": "KNNCAD0.50.4890.50.5010.50.40.50.5530.50.5240.50.5030.50.50KitNET0.50.5460.50.6680.50.50.50.4780.50.50.50.4280.50.5ExacgtStorm0.50.4620.5930.5080.6210.5230.50.4090.50.4740.50.5360.50.447IForestASD0.4960.6110.3360.5270.4120.4750.6310.5570.4710.5890.5950.5710.4390.528OCSVM0.50.50.5440.4750.4680.50.5370.50.50.50.4980.50.4050.5LOF0.4690.5700.4320.6250.3800.6070.4900.7250.4360.4860.4100.5560.490.413ABOD0.5180.5710.5030.5790.5020.5720.5800.7510.4870.4900.3800.5460.390.41Scenario 1Scenario 2Scenario 1Scenario 2Scenario 1Scenario 2Scenario 1Scenario 2Scenario 1Scenario 2Scenario 1Scenario 2Scenario 1Scenario 2DatasetCredit card ChurnPredicitionEthereum frauddetection datasetDiabetes predictiondatasetBrain Stroke datasetAuto Insuranceclaims datasetDefault of creditcard clients datasetHeart failureprediction dataset", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Ch Priyanka
[ { "authors": "C C Aggarwal; P S Yu", "journal": "", "ref_id": "b0", "title": "Outlier Detection for High Dimensional Data", "year": "2001" }, { "authors": "F E Grubbs", "journal": "Technometrics", "ref_id": "b1", "title": "Procedures for detecting outlying observations in samples", "year": "1969" }, { "authors": "V Barnett; T Lewis", "journal": "John Wiley & Sons", "ref_id": "b2", "title": "Outliers in Statistical Data", "year": "1994" }, { "authors": "V J Hodge; Orcid", "journal": "", "ref_id": "b3", "title": "0002-2469-0224", "year": "2004" }, { "authors": "", "journal": "Artificial Intelligence Review", "ref_id": "b4", "title": "A survey of outlier detection methodologies", "year": "" }, { "authors": "", "journal": "", "ref_id": "b5", "title": "American heritage dictionary", "year": "2013-11-29" }, { "authors": "P C J W Davia; H R ; J Kastantin", "journal": "John Wiley and Sons", "ref_id": "b6", "title": "Accountant's guide to fraud detection and control", "year": "2000" }, { "authors": "J Wells", "journal": "John Wiley and Sons", "ref_id": "b7", "title": "Principles of fraud examination", "year": "2005" }, { "authors": "P Rikhardsson; O Yigitbasioglu", "journal": "International Journal of Accounting Information Systems", "ref_id": "b8", "title": "Business intelligence and analytics in management accounting research: status and future focus", "year": "2018" }, { "authors": "S Mathrani", "journal": "Journal of Applied Computing and Information Technology", "ref_id": "b9", "title": "Using enterprise systems to enhance organizational agility", "year": "2014" }, { "authors": "P Schlesinger; N Rahman", "journal": "Journal of Computer Information Systems", "ref_id": "b10", "title": "Self-service business intelligence resulting in disruptive technology", "year": "2016" }, { "authors": "F Glancy; S Yadav", "journal": "Decision Support Systems", "ref_id": "b11", "title": "A computational model for financial reporting fraud detection", "year": "2011" }, { "authors": "M Ahmed; A Mahmood; M Islam", "journal": "Future Generation Computer Systems", "ref_id": "b12", "title": "A survey of anomaly detection techniques in financial domain", "year": "2016" }, { "authors": "N Carneiro; G Figueira; M Costa", "journal": "Decision Support Systems", "ref_id": "b13", "title": "A data mining based system for creditcard fraud detection in e-tail", "year": "2017" }, { "authors": "R Leite; T Gschwandtner; S Miksch; E Gstrein; J Kuntner", "journal": "Visual Informatics", "ref_id": "b14", "title": "Visual analytics for event detection: focusing on fraud", "year": "2018" }, { "authors": "J Tang; K Karim", "journal": "Managerial Auditing Journal", "ref_id": "b15", "title": "Financial fraud detection and big data analyticsimplications on auditors' use of fraud brainstorming session", "year": "2019" }, { "authors": "S Das", "journal": "", "ref_id": "b16", "title": "A risk-reduction-based incentivization model for human-centered multifactor authentication", "year": "2020" }, { "authors": "E Ngai; Y Hu; Y Wong; Y Chen; X Sun", "journal": "Decision Support Systems", "ref_id": "b17", "title": "The application of data mining techniques in financial fraud detection: a classification framework and an academic review of literature", "year": "2011" }, { "authors": "B Pilon; J Murillo-Fuentes; J Da Costa; R De Sousa Júnior; A Serrano", "journal": "", "ref_id": "b18", "title": "Gaussian process for regression in business intelligence: a fraud detection application", "year": "2015" }, { "authors": "J Tang; K Karim", "journal": 
"Managerial Auditing Journal", "ref_id": "b19", "title": "Financial fraud detection and big data analyticsimplications on auditors' use of fraud brainstorming session", "year": "2019" }, { "authors": "W Dilla; R Raschke", "journal": "International Journal of Accounting Information Systems", "ref_id": "b20", "title": "Data visualization for fraud detection: practice implications and a call for future research", "year": "2015" }, { "authors": "E M Knorr; R T Ng", "journal": "", "ref_id": "b21", "title": "Algorithms for Mining Distance-Based Outliers in Large Datasets", "year": "1998" }, { "authors": "E Argyriou; A Symvonis; V Vassiliou", "journal": "", "ref_id": "b22", "title": "A fraud detection visualization system utilizing radial drawings and heat-maps", "year": "2014" }, { "authors": "S Ko; I Cho; S Afzal; C Yau; J Chae; A Malik; K Beck; Y Jang; W Ribarsky; D S Ebert", "journal": "Computer Graphics Forum", "ref_id": "b23", "title": "A survey on visual analysis approaches for financial data", "year": "2016" }, { "authors": "K Singh; P Best", "journal": "Managerial Auditing Journal", "ref_id": "b24", "title": "Interactive visual analysis of anomalous accounts payable transactions in SAP enterprise systems", "year": "2016" }, { "authors": "R Leite; T Gschwandtner; S Miksch; S Kriglstein; M Pohl; E Gstrein; J Kuntner", "journal": "IEEE Transactions on Visualization and Computer Graphics", "ref_id": "b25", "title": "Eva: visual analytics to identify fraudulent events", "year": "2017" }, { "authors": "S Dutta; C M Chen; G Heinlein; H W Shen; J P Chen", "journal": "IEEE Transactions on Visualization and Computer Graphics", "ref_id": "b26", "title": "In situ distribution guided analysis and visualization of transonic jet engine simulations", "year": "2016" }, { "authors": "E Novikova; M Bestuzhev; I Kotenko", "journal": "Computer Security, Springer", "ref_id": "b27", "title": "Anomaly detection in the HVAC system operation by a RadViz based visualization-driven 'approach", "year": "2019" } ]
[ { "formula_coordinates": [ 8, 110.52, 263.93, 5.17, 10.34 ], "formula_id": "formula_0", "formula_text": "" }, { "formula_coordinates": [ 9, 110.52, 132.52, 5.17, 10.35 ], "formula_id": "formula_1", "formula_text": "" } ]
2023-07-08
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b17", "b12", "b19", "b20", "b1", "b15", "b39", "b0", "b27", "b42" ], "table_ref": [], "text": "Unbiased Learning to Rank (ULTR), leveraging implicit user feedback for optimizing learning-to-rank systems, has been extensively studied in information retrieval [18]. Usually, directly optimizing ranking models with click data will suffer from the intrinsic noise and bias in user interaction. In particular, the position bias [13] that occurs because users are more likely to examine documents at higher ranks is known to distort ranking optimization if not handled properly [20,21]. Among existing solutions, ULTR algorithms that jointly estimate click bias and construct unbiased ranking models, namely the AutoULTR algorithms, have drawn a lot of attention [2,16,40]. Because they do not need to conduct separate user studies or online experiments to estimate user bias (i.e., the propensity models), and can be deployed on existing systems without hurting user experiences.\nDespite the theoretical soundness, their effectiveness is usually justified under a weak logging policy, where the ranking model can barely rank documents correctly according to their relevance to the query. However, when the logging policy is strong, especially an industry-deployed ranking policy, the reported effectiveness cannot be reproduced. Since relevant documents are more likely to be presented on top positions under a strong logging policy, the observed click-through rates on top positions would be greater than those of weak logging policy, i.e., users click more on the top positions. Therefore, the estimated propensity on top positions would be larger than actual values, which we refer to as propensity overestimation problem (Detailed empirical results and analysis in Section 6.2). Since industrial LTR systems will be updated dynamically, implicit feedback will only be valuable when the propensity overestimation is addressed.\nIn this paper, we investigate propensity overestimation in ULTR through the lens of causality. By analyzing the causal relations in ULTR, we identify the confounder, i.e., query-document relevance, and derive the propensity's compositions in existing ULTR methods: (1) the causal effect between the position and examination, i.e., the desired propensity; and (2) the confounding effect caused by the relevance, i.e., the overestimated part. To eliminate the confounding effect, a straightforward solution is to adopt backdoor adjustment [28], which, in ULTR, means building a propensity model that takes both the position and relevance into account. However, optimizing this propensity model is non-trivial because separating ranking and propensity models in AutoULTR algorithms is infeasible when they share a common input (i.e., query-document features) and target (i.e., user clicks) [43].\nFor unconfounded propensity estimation in ULTR, which we refer to as UPE, we propose a novel propensity model, namely Logging-Policy-aware Propensity Model, and its distinct two-step optimization strategy: (1) logging-policy-aware confounding effect learning learns a mapping from query-document feature vectors to logging policy scores, which captures the confounding effect caused by the relevance confounder and thereby separates the effects of ranking and propensity model. 
(2) joint propensity learning learns a mapping from the query-document features and position to the examination by locking the confounding effect part and solely optimizing the position-related parameters. Thereafter, we are able to conduct unconfounded inference via backdoor adjustment and actualize AutoULTR. Extensive experiments on two benchmarks with synthetic clicks on online and offline simulations demonstrate superiority of UPE.\nThe contributions of this work can be summarized as follows:\n• We propose the propensity overestimation phenomenon and firstly conduct the causal analysis in ULTR to identify the confounding effect for the overestimation problem. • We propose a novel Logging-Policy-aware Propensity Model and its distinct two-step optimization strategy: (1) logging-policyaware confounding effect learning and (2) joint propensity learning, which solves the difficulty of backdoor adjustment in ULTR. • We conduct extensive experiments on two benchmarks with synthetic clicks on online and offline simulations to demonstrate the superiority of our proposal." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b5", "b10", "b12", "b26", "b25", "b21", "b38", "b0", "b7", "b15", "b28", "b35", "b6", "b8", "b41", "b48", "b34", "b35", "b43", "b36", "b7", "b9", "b14", "b30", "b37", "b47", "b27", "b37", "b47", "b22", "b40", "b22" ], "table_ref": [], "text": "We summarize the related literature on Unbiased Learning to Rank and Deconfounding in Information Retrieval as follows.\nUnbiased Learning to Rank. To leverage implicit user feedback for optimizing LTR systems, there are two streams for ULTR. One school depends on click modeling, which usually maximizes the likelihood of the observed data, models the examination probability and infers accurate relevance feedback from user clicks [6,11,13,27] However, click models usually requires multiple times appearances of the same query-document pair for reliable inference [26]; thus they may fall short for tail queries. The other school derives from counterfactual learning, which treats bias as a counterfactual factor and debiases user clicks via inverse propensity weighting [22,39]. Among them, automatic unbiased learning to rank methods, which jointly learn user bias models (i.e., propensity models) with unbiased rankers, have received a lot of attention due to their superior performance and low deployment cost.\nBased on automatic ULTR methods, recent work has investigated various biased within ULTR, including position bias [1,8,16,29,36], contextual position bias [7,9,42,49], trust bias [35,36], exploitation bias [44], click unnecessary bias [37], factorizability of the examination hypothesis [8]. Unfortunately, their effectiveness has only been justified under weak logging policies, which cannot be reproduced under a strong policy. Our work offers an answer from a causal perspective by identifying relevance as a confounder, and demonstrating a propensity overestimation problem. Our proposal not only improves the ranking performance of ranking models, but also provides an explanation for the improvement and strong theoretical guarantees of unbiasedness.\nDeconfounding in Information Retrieval. Recently, causal-aware methods have thrived in information retrieval. In particular, some efforts have been made to address confounding problems in recommendation systems. 
Those methods adopt causal inference to analyze the root causes of bias problems [10,15,31,38,48] and apply backdoor adjustment [28] during the training or inference to address the bias problems. For example, Wang et al. [38] identify the distribution of historical interactions as a confounder for bias amplification. Zhang et al. [48] identify popularity as a confounder that affects both item exposures and user clicks. However, these methods require the confound variables to be observable, while in ULTR the confounder -document relevance -is unobservable.\nThere are also a few efforts that address confounding effects without the need to be observable [23,41]. For example, Liu et al. [23] learn a biased embedding vector with independent biased and unbiased components in the training phase. In testing, only the unbiased component is used to deliver more accurate recommendations. Unlike those methods, extracting separate ranking and propensity models in unbiased learning to rank is difficult when they share a common input and target. In summary, these differences make existing deconfounding methods not applicable in ULTR." }, { "figure_ref": [], "heading": "PRELIMINARIES", "publication_ref": [], "table_ref": [], "text": "In this section, we formulate ULTR with implicit feedback, and introduce the inverse propensity weighting for ULTR." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [ "b16", "b5", "b21" ], "table_ref": [], "text": "Let D be the universal set of documents, and Q be the universal set of queries. For a user-issued query ∈ Q, is the ranked list retrieved for query , ∈ is the document presented at position , x ∈ X and ∈ {0, 1} are the feature vector and binary relevance of the query document pair ( , ), respectively. The goal of learning to rank is to find a mapping function from a query document feature vector x to its relevance . In most cases, we are only concerned with the position of relevant documents ( = 1) in retrieval evaluations (e.g., MAP, nDCG [17], ERR [6]), so we can formulate the ideal local ranking loss L as:\nL ( , | ) = ∈ , =1 Δ( (x ), | ),(1)\nwhere Δ is a function that computes the individual loss on each document. An alternative to relevance is implicit feedback from users, such as clicks. If we conduct learning to rank by replacing the relevance label with click label in Eq. 1, then the empirical\nR E X K C R (a) X R E X K C R (b)\nFigure 1: (a) The causal graph for ULTR. (b) The causal graph with intervention ′ used in UPE. In particular, we apply backdoor adjustment to cut off the association → for propensity model learning. A gray node indicates that the variable is observable.\nlocal ranking loss is derived as follows,\nL ( , | ) = ∈ , =1 Δ( (x ), | ),(2)\nwhere is a binary variable indicating whether the document at position is cliked in the ranked list . However, this naive loss function is biased, due to factors such as position bias [22]. To address this issue, unbiased learning-to-rank aims to train a ranking model with the biased user clicks collected but immune to the bias in the implicit feedback." }, { "figure_ref": [], "heading": "AutoULTR Algorithms", "publication_ref": [ "b29", "b0" ], "table_ref": [], "text": "According to the position bias assumption that the examination only depends on the position, AutoULTR algorithms usually assume the examination and relevance are independent under a weak logging policy. 
Thereafter, according to the Examination Hypothesis [30]:\n= 1 ⇐⇒ ( = 1 and = 1),(3)\nthe problem of learning a propensity model from click data (i.e., the estimation of bias in clicks) can be treated as a dual problem of constructing an unbiased learning-to-rank model; and its unbiasedness has been justified in Ai et al. [1]. Let be the binary variables that represent whether document is examined by a user. In AutoULTR, an unbiased ranking system and a propensity model can be jointly learned by optimizing the local ranking loss as:\nL ( , | ) = ∈ , =1 Δ( (x ), | ) ( )(4a)\nL ( , | ) = ∈ , =1 Δ( ( ), | ) (x )(4b)\nwhere ( ) is the instantiation of ( = 1) estimating the propensity at position . A nice propensity is that both estimations are only affected by clicked documents = 1, respectively." }, { "figure_ref": [], "heading": "CAUSAL ANALYSIS ON PROPENSITY OVERESTIMATION 4.1 Causal View of ULTR", "publication_ref": [ "b29", "b27" ], "table_ref": [], "text": "To illustrate propensity overestimation in ULTR, we first scrutinize its causal relations and present a causal graph, as shown in Fig. 1a, which consists of six variables { , , ˆ , , , } as follows:\n• represents the real relevance between the query and document.\nFor simplicity, we assume it is a binary variable with ∈ {0, 1}. However, is unobservable and it is typically estimated by the ranking model in ULTR algorithms with raw click data. • is the representation for query-document pair ( , ), where a particular value x ∈ R is the feature vector for the pair. • ˆ with ˆ ∈ R is the estimated relevance score generated by the logging policy. • denotes 's rank on the search result page, where ∈ {1..K} and K is the maximum number of observable documents. • is a binary variable ∈ {0, 1} to indicate whether document is examined, which is unobservable and typically estimated by the propensity model. • is a binary variable ∈ {0, 1} to denote whether the user clicked document .\nThe graph's edges describe causal relations between variables:\n• → : this edge shows that there exists a mapping from querydocument representations to their relevance , which is the goal of unbiased learning to rank. • → ˆ : feature representation determines the estimated relevance score, based on the logging policy.\n• ˆ → : the logging policy presents a list of documents in descending order by the estimated relevance scores. Without losing generality, we assume the estimated relevance score of the document decides its ranked position, and ignores the influence of comparing with other documents. • → : Position bias is formally modeled by assuming the examination only depends on the position. • ( , ) → : the edge follows the examination hypothesis [30] that a user would only click a document when it is observed by the user and considered relevant to the user's need.\nAccording to causal theory [28], query-document feature representation is a confounder as ← → , which leads to a spurious correlation in formulating the propensity model." }, { "figure_ref": [], "heading": "Analysis of Propensity Overestimation", "publication_ref": [ "b0" ], "table_ref": [], "text": "Based on the causal graph illustrated, we derive the propensity estimand in existing ULTR methods. As discussed previously, they mainly formulate the propensity model conditioned on the clicked items as ( | ). In causality language, this formulation means: is the cause, and is the effect. 
Thereafter, we can derive propensity estimand ( | ) as follows:\n( | ) propensity estimand = x ( , x| ) (5a) = x ( |x, ) (x| ),(5b)\n∝ x ( |x, ) (x) causal • ( |x) confounding ,(5c)\nwhere Eq. 5a is the definition of the law of total probability; Eq. 5b and Eq. 5c follow the Bayes rules, respectively. In particular, the proportion operation ∝ does not affect the effectiveness of the proposal, please refer to Section 5.2.2 for more details.\nWe can clearly see the propensity estimand ( | ) in existing ULTR methods consists of two parts: (1) ( |x, ) (x) that contributes to a causal effect between the position and examination, i.e., the desired propensity, which will be illustrated in Eq. 6 later; and (2) ( |x) that contributes to a confounding effect. Remarkably, ( |x) fundamentally changes the propensity estimand ( | ), especially when users' clicks are collected based on an industry-deployed logging policy. Suppose K is a top position, a more relevant document will have a higher chance to be ranked on position K, resulting in a larger value of ( |x) for that document. Therefore, propensity estimand ( | ) on observed data will be larger than the actual value. In short, the query-document relevance confounder leads to the propensity overestimation.\nAccording to the justification by Ai et al. [1], the unbiasedness of the propensity model in Eq. 4b requires the unbiasedness of the ranking model, and vice versa. Since ( | ) is a biased estimand for the examination, conventional AutoULTR algorithms would fail to converge to the unbiased ranking and propensity models.\nNote that this confounding effect widely exists across all logging policies, but its effect is unintentionally concealed in the general ULTR setting, i.e., clicks are collected under a weak logging policy, where ( |x) is almost the same for all documents. A weak logging policy can barely rank documents correctly according to their relevance to the query, so the confounding effect is minor but non-negligible. Experimental results in Section 6.4 verify that our solution could obtain a better-performance ranking model even when the backdoor path leads to a minor confounding effect under a weak logging policy." }, { "figure_ref": [], "heading": "METHODOLOGY", "publication_ref": [ "b27", "b40" ], "table_ref": [], "text": "In this section, we detail our solution, which is referred to as Unconfounded Propensity Estimation (UPE). We first resort to the backdoor adjustment [28,41], which enables the causal effect estimation without causal intervention. Then we propose Loggingpolicy-aware Propensity (LPP) model, and its novel learning strategy, namely logging-policy-aware confounding effect learning and joint propensity learning. Thereafter, we conduct backdoor adjustment on LPP model for unconfounded propensity inference." }, { "figure_ref": [], "heading": "Backdoor adjustment", "publication_ref": [ "b27", "b27", "b27" ], "table_ref": [], "text": "Experimentally, one can resolve the impact of relevance confounder by collecting intervened data where the document on specific position is forcibly adjusted to eliminate the impact of the confounder. However, such an experiment is too costly, and faces the risk of doing harm to user experience in practice. Therefore, we resort to the causal technique: backdoor adjustment [28].\nAccording to the theory of backdoor adjustment [28], our unconfounded propensity estimand is formulated as ( | ( )), where ( ) can be intuitively seen as cutting off the edge ˆ → in , as illustrated in Fig. 1b. 
Remarkably, ( | ( )) estimates the causal effect between position and examination , rather than ( | ) estimated by existing methods. Now we are ready to derive the specific expression of backdoor adjustment as\n( | ( )) = ′ ( | ) (6a) = x ′ ( |x, ) ′ (x| ) (6b) = x ′ ( |x, ) ′ (x) (6c) = x ( |x, ) (x),(6d)\nwhere ′ denotes the probability function evaluated on intervened causal graph ′ in Fig. 1b. In particular, Eq. 6a is because of backdoor criterion [28] as the only backdoor path ← → has been blocked by ( ); Eq. 6b is obtained by Bayes rules; since and x are independent in ′ , ′ (x) = ′ (x| ) in Eq. 6c; in Eq. 6d, ( |x, ) = ′ ( |x, ) because the causal relation/association → and → are not changed when cutting off ˆ → , and (x) = ′ (x) has the same prior on the two graphs. To this end, we have demonstrated that the causal component in Eq. 5c exactly contributes to the causal effect, which is the need for unconfounded propensity desired.\nUnlike conventional estimand ( | ), our estimand ( | ( )) estimates the examination probability for each position with consideration of every possible value of document x subject to the prior (x) in Eq. 6d, rather than (x| ) in Eq.5c. Therefore, in our method, the documents with high relevance will not receive high examination probability purely because of a higher rank position via (x| ), which addresses propensity overestimation. Remarkably, the ranking model is unbiased at the presence of an unconfounded propensity model and learned by an in-principled unbiased algorithm. On the basis of causal theory, our unconfounded estimand learns the causal effect between the position and examination; therefore, its unbiasedness can be guaranteed when jointly works with in-principled unbiased algorithms.\nTheoretically, the sample space of X is infinite, which makes the calculation of ( | ( )) in Eq. 6d intractable. Therefore, we further devise an approximation of backdoor adjustment for unconfounded propensity inference by empirically averaging over the training samples as:\n( | ( )) = 1 |Q | • | | ∈ Q ∈ ( |x , ),(7)\nwhere Q is a batch of queries in the raw click data, is the ranked list for query , |Q | and | | denotes the number of queries within the batch and number of documents in the ranked list, respectively." }, { "figure_ref": [ "fig_0" ], "heading": "Logging-Policy-aware Propensity Model", "publication_ref": [ "b42", "b37", "b44", "b45", "b47", "b8", "b41", "b48" ], "table_ref": [], "text": "To facilitate the backdoor adjustment in Eq. 7, we need to instantiate a propensity model as ( | , ). However, it is difficult to extract separate ranking models and propensity models when they share the common input (i.e., document) and target (i.e., user clicks) [43].\nInspired by the derivation in Eq. 5, we propose a novel propensity model that considers both the relevance and position, named Logging-Policy-aware Propensity Model (LPP) (as shown in Fig. 2), and its novel two-step optimization strategy: logging-policy-aware confounding effect learning and joint propensity learning. Our model consists of three parts: a confounder encoder, a position encoder and shared feed-forward network. The encoders learn a dense representations for each query-document pair and its ranking position, respectively. Afterward, a vector sum integrates relevance and position representations. The feed-forward network receives the output from the encoder(s) and generates the final output for optimization with respect to different targets. 
In particular, we discuss the design choice of LPP model as follows.\nObservable confounder variable. Existing deconfounding methods [38,45,46,48] usually represent the joint effect by a simple multiplication among each decoupled factor. Unfortunately, it is not feasible in ULTR because neither ( | ) nor ( |x) is observable. In LPP, we leverage a shared feed-forward network to capture the joint effect of relevance features and position .\nThe motivation behind LPP model. At first glance, readers may confuse the LPP in our work and contextual position bias [9,42,49], since they both model ( | , ). The key difference between them is: the propensity model desired in our work is ( | ( )) according to the position-based bias assumption; whereas it is ( | , ) according to the contextual position bias. In this work, we need to build a propensity model ( | , ) because we need to address propensity overestimation caused by the relevance confounder as illustrated in Eq. 6. More importantly, we need to separate the influence over propensity estimation by the position and relevance from raw click data, which facilitates backdoor adjustment and obtain the unconfounded propensity estimation ( | ( ))." }, { "figure_ref": [], "heading": "5.2.1", "publication_ref": [ "b0", "b9" ], "table_ref": [], "text": "Logging-policy-aware Confounding Effect Learning. The first step is to estimate the confounding effect caused by query-document features under a logging policy, which corresponds to ( | ) in Eq. 5c. To this end, we propose logging-policy-aware confounding effect learning, which learns a mapping from raw document features to logging policy scores ˆ . Careful readers may notice that the target is logging policy scores ˆ instead of rank positions . We will compare the effectiveness of different fitting targets in Section 6.5.1.\nDirectly learning the mapping in a point-wise way, however, is problematic. Since the logging policies, i.e., ranking models, are usually optimized pairwise or listwise, the logging policy scores may follow different distributions under different queries. This issue would restrict the expressive ability of neural encoders.\nTherefore, we propose to learn the mapping in a list-wise way, which is invariant to different score distributions under different queries. Formally, given a query and its associated documents\n= [ 1 , • • • , ],\neach feature vector for query-document pair x is transformed into an -dimensional latent feature representation:\nm = Encoder ( , ), where m ∈ R .(8)\nEncoder is a confounder encoder that projects the raw querydocument feature vector x to latent representations m . Afterward, the latent feature vector is passed into a feed-forward network, and generate the predictive scores as:\nˆ = FFN(m ), where ˆ ∈ R,(9)\nFFN is a point-wise feed-forward network that projects the latent representation to real-value scalar as prediction scores ( |x ). Let be the logging policy score of document in query that is observed in the logging policy, we optimize in the list-wise paradigm through an attention rank loss function [1]:\nL ( pt | ) = - ∈ exp( ) ∈ exp( ) • log exp( ˆ ) ∈ exp( ˆ )\n, (10) where pt denotes the parameters including in the confounder encoder Encoder and feed-forward network FFN. To sum up, the logging-policy-aware confounding effect learning captures confounding component ( | ) in Eq. 5c, and enables the separation of the ranking and propensity model from raw clicks.\nA powerful confounder encoder is needed. As shown in Eq. 
5c, a more accurate estimation of the confounding effect naturally leads to a more better unconfounded propensity models, given a logging policy. Thus, an expressive confounder encoder is needed for separating the ranking and propensity models from raw clicks, and consequently, the performance of ranking models can be enhanced." }, { "figure_ref": [ "fig_0" ], "heading": "Joint Propensity", "publication_ref": [ "b1", "b3", "b24" ], "table_ref": [], "text": "Learning. Next, we present how to capture the influence of position . In particular, we propose to learn the mapping from position and query-document feature to the examination by fixing the confounding effect model and solely tuning the position-related parameters, as marked lock in Fig. 2. The rationality is that fixing the confounder encoder and feed-forward network keeps the relevance confounding effect unchanged; therefore, the position encoder is able to correctly capture the influence of position over examination.\nFormally, we design a position embedding function Encoder . It encodes the position of a document that is ranked on position to a vector with the same dimension as m :\np = Encoder (rank( )), where p ∈ R ,(11)\nand rank( ) is the ranked position for document generated by the logging policy. Afterward, we obtain predictions for joint propensity scores ( | ) through the frozen FFN:\nˆ = FFN(m + p ).(12)\nLet be the estimated propensity score of the document ranked at position by existing ULTR algorithms, which only depends on position . We solely update the position embedding function Encoder during the optimization via the attention rank loss function, which is formally defined as,\nL ( pos | ) = - ∈ exp( ) ∈ exp( ) •log exp( ˆ ) ∈ exp( ˆ ) ,(13)\nwhere pos denotes the parameters of position embedding Encoder .\nTo this end, the joint propensity learning captures the influence of position by fixing that of confounder . As shown in Eq. 10 and Eq. 13, the use of the softmax function assumes that the relevance probabilities and examination probabilities on different documents in will sum up to 1, which is not true in practice. This, however, does not hurt the effectiveness of models training. In fact, the predicted values of ˆ have a minor effect on the unconfounded propensity learning as long as their relative proportions are correct. Due to this reason, we show normalized propensity against position 10, which reflects the relative proportion, throughout this paper. Such technique has been widely applied in existing work, and its effectiveness has been extensively verified in prior work [2,4,25]." }, { "figure_ref": [], "heading": "Unconfounded Propensity Inference with Backdoor Adjustment.", "publication_ref": [], "table_ref": [], "text": "Given the LPP model, we first estimate the propensity under querydocument relevance confounder:\n( |x , ) := FFN Encoder (x ) + Encoder ( ) .(14)\nAfterward, we leverage backdoor adjustment approximation in Eq. 7 to obtain unconfounded propensity estimation ( | ( ), ) from raw user clicks, and integrate with any IPW-based ULTR algorithm to obtain unbiased ranking models. Given the uncoufounded propensity score, one can conduct any IPW-based ULTR algorithm with our proposal, as it is a plug-in model, which can seamlessly integrated into existing automatic ULTR framework. We summarize UPE for ULTR in Algorithm 1." 
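As a concrete illustration of the backdoor-adjustment approximation in Eq. 7, the sketch below averages the LPP model's examination probabilities over every query-document feature vector in a batch to obtain the unconfounded propensity for each position. The toy logistic stand-in for FFN(Encoder_x(x) + Encoder_k(k)) in Eq. 14 and all tensor shapes are assumptions for illustration only; they are not the paper's trained model.

```python
# Hedged sketch of the backdoor-adjustment approximation (Eq. 7):
# P(E=1 | do(K=k)) ~= mean over batch features x of P(E=1 | x, K=k).
import numpy as np

rng = np.random.default_rng(0)
n_queries, list_size, feat_dim = 32, 10, 8
X = rng.normal(size=(n_queries, list_size, feat_dim))   # query-document feature vectors

# Toy stand-in for the trained LPP model: FFN(Encoder_x(x) + Encoder_k(k)).
w_feat = 0.1 * rng.normal(size=feat_dim)                 # assumed "confounder encoder" weights
pos_effect = 1.0 / np.arange(1, list_size + 1)           # assumed "position encoder" effect

def lpp_examination_prob(x, k):
    """Assumed stand-in returning P(E=1 | x, K=k) for feature rows x and 1-based position k."""
    logit = x @ w_feat + np.log(pos_effect[k - 1])
    return 1.0 / (1.0 + np.exp(-logit))

# Backdoor adjustment: for each position k, average over *every* document feature
# vector in the batch, instead of only the documents the policy placed at k.
flat_X = X.reshape(-1, feat_dim)
p_do_k = np.array([lpp_examination_prob(flat_X, k).mean()
                   for k in range(1, list_size + 1)])

# Normalized as propensity@K / propensity@10, matching how propensities are reported.
print(np.round(p_do_k / p_do_k[-1], 3))
```

In practice, only the relative proportions of these averaged propensities matter, and they would replace the conventional P(E|K) estimate inside an IPW-based ULTR loss such as Eq. 4a.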
}, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [ "b2" ], "table_ref": [], "text": "We conduct extensive experiments to demonstrate the effectiveness of our UPE by investigating the following research questions:\n• RQ1: Can empirical results justify the propensity overestimation problem and how it affects ranking models? In particular, we analyze two learning paradigms that are proposed in Ai et al. [3]. The first one, referred to as the deterministic online paradigm (OnD), is an online setting where the displayed ranked list is created by the current logging policy, and the ranking model is updated based on collected online. The second one, which is referred to as the offline paradigm (Off ), is a classic setting where we obtain a logging policy, and then both the displayed ranked list and the clicks on it are fixed and observed in advance." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b1", "b15", "b4" ], "table_ref": [], "text": "Datasets. We conduct empirical studies on two of the largest publicly available LTR datasets:\n• Yahoo! LETOR1 comes from the Learn to Rank Challenge version 2.0 (Set 1), and is one of the largest benchmarks of unbiased learning to rank [2,16]. It consists of 29,921 queries and 710K documents. Each query-document pair is represented by a 700-D feature vector and annotated with 5-level relevance labels [5].\nTable 1: Overall performance comparison between UPE and the baselines on Yahoo! and Istella-S datasets with deterministic online learning (OnD). \" * \" indicates statistically significant improvement over the best baseline without result randomization." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b21", "b1", "b21", "b21", "b5", "b7", "b1", "b39", "b15", "b21", "b38", "b33", "b1", "b13", "b11", "b32", "b16", "b5", "b1", "b31" ], "table_ref": [], "text": "Yahoo! LETOR Istella-S NDCG@K ERR@K NDCG@K ERR@K Click Simulation. We generate synthesized click with a two-step process as in Joachims et al. [22] and Ai et al. [2]. First, we generate the initial ranked list for each query based on learning paradigms, i.e., OnD and Off . Then, we simulate the user browsing process based on PBM [22] and sample clicks from the initial ranked list by utilizing the simulation model. The PBM models user browsing behavior based on the assumption that the bias of a document only depends on its position:\nK = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K =\n( ) = , (15\n)\nwhere represents position bias at position and ∈ [0, +∞] is a parameter controlling the degree of position bias. The position bias is obtained from an eye-tracking experiment in Joachims et al. [22] and the parameter is set as 1 by default. Following the methodology proposed by Chapelle et al. [6], we sample clicks with:\nPr( = 1| ) = + (1 -) 2 -1 2 max -1 , (16\n)\nwhere ∈ [0, max ] is the relevance label of the document , and max is the maximum value of , which is 4 on both datasets. is the noise level, which models click noise such that irrelevant documents (i.e., = 0) have non-zero probability to be perceived as relevant and clicked. We fix = 0.1 as the default setting.\nBaselines. 
To demonstrate the effectiveness of our proposed method, we compare it with baseline methods which are widely used in ULTR problems.\n• Vectorization: The Vectorization [8] expands the examination hypothesis to a vector-based one, which formulates the click probability as a dot product of two vector functions instead of two scalar functions.\n2 http://quickrank.isti.cnr.it/istella-dataset/\n• DLA: The Dual Learning Algorithm [2] treats the problem of unbiased learning to rank and unbiased propensity estimation as a dual problem, such that they can be optimized simultaneously. • REM: The Regression EM model [40] uses an EM framework to estimate the propensity scores and ranking scores. • PairD: The Pairwise Debiasing (PairD) Model [16] uses inverse propensity weighting for pairwise learning to rank. • IPW-Random: The Inverse Propensity Weighting [22,39] uses result randomization to estimate the examination probabilities against positions and optimizes the ranking models accordingly. Its performance can be considered as the upper bound for learningto-rank with user implicit feedback.\n• Naive: This model just uses the raw click data to train the ranking model, without any correction. Its performance can be considered as a lower bound for the ranking model.\nExperimental Protocols. We implement UPE and use the baselines in ULTRA [34] to conduct our experiments. In particular, UPE is integrated with DLA, as it is the state-of-the-art automatic ULTR algorithm. For each query, only the top = 10 documents are assumed to be displayed to the users. For both datasets, all models are trained with synthetic clicks. Following the setting in [2], the click sessions for training are generated on the fly. We fix the batch size to 256 and train each model for 10K steps. We use the AdaGrad optimizer [14] and tune learning rates from 0.01 to 0.05 for each unbiased learning-to-rank algorithm on the validation dataset.\nIn the experiments, we train neural networks for our ranking functions. All reported results are produced using a model with three hidden layers with size [512, 256, 128] respectively, with the ELU [12] activation function and 0.1 dropout [33]. To construct a sufficiently expressive confounder encoder in LPP model, it is configured to have two stacked transformer blocks, each with 256 hidden units and 8 heads. The hidden size of the document and position representation are both set to 64. The feed-forward network is a neural network with two layers with size [64, 256].\nTo evaluate all methods, we use the normalized Discounted Cumulative Gain (nDCG) [17] and the Expected Reciprocal Rank (ERR) [6]. For both metrics, we report the results at ranks 1, 3, 5, and 10 to show the performance of models on different positions. Following [2], statistical differences are computed based on the Fisher randomization test [32] with ≤ 0.05. propensity@K propensity@10 ) under different logging policies on Yahoo! LETOR." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Empirical Results for Propensity Overestimation(RQ1)", "publication_ref": [ "b1" ], "table_ref": [], "text": "We first justify the propensity overestimation problem. For a better illustration, we demonstrate the normalized estimated propensities against ranking positions and ranking performance, accordingly. Fig. 3(a) investigates the ranking performance of DLA [2], a state-of-the-art automatic ULTR method, under weak and strong logging policies, respectively. 
It shows that DLA's ranking performance suffers from a significant drop under a strong logging policy. Furthermore, in Fig. 3(b), we show the learned propensity score and indeed observe a propensity overestimation problem in which normalized estimated propensities for top positions are much larger than their actual values. This observation confirms the propensity overestimation problem, and this problem is detrimental to ranking models' optimization. Therefore, propensity overestimation needs to be addressed during the dynamic learning and serving of LTR systems, where the logging policies are not longer weak." }, { "figure_ref": [ "fig_2", "fig_1" ], "heading": "Dynamic LTR Simulation (RQ2)", "publication_ref": [ "b14" ], "table_ref": [], "text": "In practice, the logging policies are updated periodically: we can collect sufficient user clicks in a few months and update the logging policy with the latest ranking policy. To simulate this process, we perform deterministic online simulation ( ) and update the logging policy after every 2.5K steps with the most recent ranking model. From Table 1, we can see that UPE significantly outperforms all the ULTR baseline methods without result randomization on both datasets. This demonstrates the effectiveness of UPE with dynamic training and serving of LTR systems. Remarkably, UPE achieves similar performance to IPW-Random, this indicates that UPE can accurately estimate the examination probability for each position, yet without the need for result randomization.\nTo provide a more insightful understanding for the benefit of UPE, we also illustrate the learning curve of estimated propensity in Fig. 4. In particular, we investigate the normalized estimated propensity against position 1 on UPE, compared with DLA, the state-of-the-art automatic ULTR algorithm. We select position 1 for illustration because it suffers from propensity overestimation most (will see Fig. 3). The \"ground truth\" is computed by 1 10 , where is the position bias defined in Eq. 15.\nWe can see that DLA suffers from propensity overestimation, as evidenced by its learning curves deviating from the ground truth during dynamic training. This demonstrates that existing ULTR methods are unable to leverage implicit feedback to enhance the ranking model during dynamic training and serving of LTR systems, due to the issue of propensity overestimation. In contrast, UPE's propensity estimation is robust to changes in the logging policy, and its curves generally align with the ground truth. As more user clicks are collected, the ranking performance improves, highlighting the effectiveness of UPE in enhancing the ranking model during training. This nice property actualizes the value of implicit feedback in the practical ULTR scenario, i.e., the LTR systems are updated dynamically. Remarkably, though the performance of logging policy varies from scratch to being in high accuracy, UPE can accurately estimate propensity in a consistent manner. This suggests that the superiority of UPE is not related to the performance of logging policies, but the accurate estimation for confounding effect under a logging policy, which justifies the necessity of an expressive confounder encoder in LPP.\nIt is worth explaining a counter-intuitive observation: the proposed UPE outperforms the ideal IPW-Random (theoretical upper bound), which uses result randomization to estimate the examination probability against positions. 
While result randomization would produce the accurate estimation of the true position bias and leads to the optimality of IPW-Random in theory, it also introduces large variance in practice. The \"unexpected\" results observed in Table 1 and Table 2 are mostly due to such variances." }, { "figure_ref": [], "heading": "Offline Simulation (RQ3)", "publication_ref": [ "b21", "b1", "b18" ], "table_ref": [], "text": "To investigate the generalizability of UPE, we conduct experiments on the commonly used ULTR setting, i.e., offline learning paradigm (Off ) as in Joachims et al. [22] and Ai et al. [2]. Unlike the online paradigm, the ranked lists are generated by a Rank SVM model [19] that trained with 1% of the training data with real relvance judgements, i.e., weak logging policy.\nTable 2 shows the experimental results. We can see that UPE also achieves the best performance among all baseline methods without result randomization, and UPE achieves similar performance as IPW-Random on the offline paradigm Off . Besides, UPE outperforms the best baseline methods without result randomization in most of the metrics. This observation indicates that UPE can effectively address propensity overestimation even when there exists a minor confounding effect by query-document relevance confounder under a weak logging policy, and more importantly such confounding effect is indeed non-negligible.\nTo validate the effectiveness of UPE in propensity estimation under different logging policies, we have also demonstrated the Table 2: Overall performance comparison between UPE and the baselines on Yahoo! and Istella-S datasets with offline learning (Off ). \" * \" indicates statistically significant improvement over the best baseline without result randomization." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Yahoo! LETOR", "publication_ref": [], "table_ref": [], "text": "Istella-S NDCG@K ERR@K NDCG@K ERR@K K overall distribution of normalized estimated propensity against position in Fig. 3. The results demonstrate the high generalizability of UPE, which can consistently obtain unconfounded propensity estimation under both strong and weak policies. Its ranking performance is also shown in Fig. 3, where UPE can consitenly achieve the SOTA ranking performance under both weak and strong policies.\n= 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K =\nMethods NDCG@K ERR@K K = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K =" }, { "figure_ref": [ "fig_2" ], "heading": "Ablation Study (RQ4)", "publication_ref": [ "b46" ], "table_ref": [ "tab_3" ], "text": "Our ablation studies investigate how variant designs of LPP affect its optimization, and consequentially affect the ranking performance.\n6.5.1 Is it more beneficial to model ( ˆ |x) than ( |x) in propensity model training? In this section, we compare different fitting targets for LPP optimization. To instantiate ( |x), we transform the ranked positions as relevance labels for ranking optimization as in Zhang et al. [47]:\nMRR-UPE : MRR-LPP@ = 1 (17a) DCG-UPE : DCG-LPP@ = 1 log 2 ( + 1) . (17b\n)\nThe experimental results are summarized in Table 3. We can see that UPE significantly outperforms the two variants, i.e., MRR-UPE and DCG-UPE. 
This observation confirms that the logging policy scores are more informative than the ranked positions, because logging policy scores not only provide the order of the documents, i.e., ranked positions, but also their relevance strengths. We refer to the naive propensity learning framework as UPE , which optimizes ( | ) and ( | , ) from raw click data alternatively. For better illustration, we show the logarithm of the normalized estimated propensity at positions 1, 2, 3 and 9 in Fig. 5(a). We can observed that from step 3000, the estimated propensity by UPE against position 1 has been much smaller than that against position 9, and those at other positions are almost identical to that at position 9. It indicates that UPE fails to learn the correct examination probability for each position, which should have a larger value for a higher-ranked position. We side-by-side present the ranking performance curve of UPE in Fig. 5(b). Starting from step 3000, nDCG@10 of UPE does not increase; this means that collecting more click data does not improve the ranking model because the propensity model has failed in separating the impact of relevance and position on propensity. This phenomenon also verifies that an accurate propensity model is indeed necessary for optimizing ranking models.\nUnlike UPE , the proposed logging-policy-aware confounding effect learning and joint propensity learning enable us to obtain an unconfounded propensity through backdoor adjustment, which correctly estimates the causal effect between the propensity and position, as has been shown in Fig. 4. Moreover, the learning curve of UPE in Fig. 5(b) shows that the ranking performance of our proposal consistently improves when more click data are collected. Therefore, the two-step optimization strategy is necessary for LPP optimization, it enables the separation of ranking and propensity models from raw clicks." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we investigate unbiased learning to rank through the lens of causality and identify query-document representation as a confounder, which leads to propensity overestimation. For unconfounded propensity overestimation, we propose a novel propensity model, i.e., Logging-Policy-aware Propensity Model, and its distinct two-step optimization strategy: (1) logging-policy-aware confounding effect learning, which captures the confounding effect caused by the query-document feature confounder and thereby separates ranking and propensity models from raw clicks; and (2) joint propensity learning, which learns the mapping from position and query-document feature to the examination by fixing the confounding effect model and solely tuning the position-related parameters. Given the fine-tuned LPP model, we conduct backdoor adjustment for unconfounded propensity estimation, which serves for ULTR. Extensive experiments on two benchmarks with synthetic clicks with online and offline simulations validate the effectiveness of our proposal in addressing propensity overestimation and improving ranking performance. A natural future direction would be to extend current work to pairwise learning to explore the feasibility of UPE across more ULTR frameworks." } ]
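For completeness, the position-based click simulation used in the experiments (Eqs. 15-16) can be reproduced with a short script. The sketch below is a hedged illustration: the relevance labels are random placeholders, and the 1/k values stand in for the eye-tracking position-bias parameters of Joachims et al. [22].

```python
# Hedged sketch of the PBM click simulation (Eqs. 15-16); labels and position
# biases here are placeholders, not the values used in the reported experiments.
import numpy as np

rng = np.random.default_rng(0)
list_size, eta, eps, y_max = 10, 1.0, 0.1, 4

labels = rng.integers(0, y_max + 1, size=list_size)    # 5-level relevance labels (placeholder)
rho = (1.0 / np.arange(1, list_size + 1)) ** eta       # stand-in position bias rho_k**eta (Eq. 15)

# Eq. 16: probability the document is perceived as relevant, with click noise eps.
p_rel = eps + (1.0 - eps) * (2.0 ** labels - 1) / (2.0 ** y_max - 1)

examined = rng.random(list_size) < rho                 # examination draw
perceived = rng.random(list_size) < p_rel              # perceived-relevance draw
clicks = (examined & perceived).astype(int)            # examination hypothesis (Eq. 3)
print(clicks)
```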
The goal of unbiased learning to rank (ULTR) is to leverage implicit user feedback for optimizing learning-to-rank systems. Among existing solutions, automatic ULTR algorithms that jointly learn user bias models (i.e., propensity models) with unbiased rankers have received a lot of attention due to their superior performance and low deployment cost in practice. Despite their theoretical soundness, the effectiveness is usually justified under a weak logging policy, where the ranking model can barely rank documents according to their relevance to the query. However, when the logging policy is strong, e.g., an industry-deployed ranking policy, the reported effectiveness cannot be reproduced. In this paper, we first investigate ULTR from a causal perspective and uncover a negative result: existing ULTR algorithms fail to address the issue of propensity overestimation caused by the query-document relevance confounder. Then, we propose a new learning objective based on backdoor adjustment and highlight its differences from conventional propensity models, which reveal the prevalence of propensity overestimation. On top of that, we introduce a novel propensity model called Logging-Policy-aware Propensity (LPP) model and its distinctive two-step optimization strategy, which allows for the joint learning of LPP and ranking models within the automatic ULTR framework, and actualize the unconfounded propensity estimation for ULTR. Extensive experiments on two benchmarks demonstrate the effectiveness and generalizability of the proposed method.
Unconfounded Propensity Estimation for Unbiased Ranking
[ { "figure_caption": "Figure 2 :2Figure 2: The workflow of the proposed LPP framework. Its optimization strategy consists of two steps: (1) confounding effect learning; (2) joint propensity learning. The unconfounded propensity is obtained by conducting backdoor adjustment over the LPP model.Our model consists of three parts: a confounder encoder, a position encoder and shared feed-forward network. The encoders learn a dense representations for each query-document pair and its ranking position, respectively. Afterward, a vector sum integrates relevance and position representations. The feed-forward network receives the output from the encoder(s) and generates the final output for optimization with respect to different targets. In particular, we discuss the design choice of LPP model as follows.Observable confounder variable. Existing deconfounding methods[38,45,46,48] usually represent the joint effect by a simple multiplication among each decoupled factor. Unfortunately, it is not feasible in ULTR because neither ( | ) nor ( |x) is observable. In LPP, we leverage a shared feed-forward network to capture the joint effect of relevance features and position .The motivation behind LPP model. At first glance, readers may confuse the LPP in our work and contextual position bias[9,42,49], since they both model ( | , ). The key difference between them is: the propensity model desired in our work is ( | ( )) according to the position-based bias assumption; whereas it is ( | , ) according to the contextual position bias. In this work, we need to build a propensity model ( | , ) because we need to address propensity overestimation caused by the relevance confounder as illustrated in Eq. 6. More importantly, we need to separate the influence over propensity estimation by the position and relevance from raw click data, which facilitates backdoor adjustment and obtain the unconfounded propensity estimation ( | ( )).", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Ranking performance and normalized estimated propensity (", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The learning curve of normalized estimated propensity for position 1 with deterministic online learning (OnD).", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "6. 5 . 2 Figure 5 :525Figure 5: The learning curve of normalized estimated propensity and ranking performance of UPE and UPE on Yahoo! LETOR with deterministic online learning (OnD). methods cannot extract separated ranking model ( | ) and propensity model ( | , ) from raw click data in UPE. In this section, we justify this argument, and demonstrate the indispensability of our novel two-step optimization.We refer to the naive propensity learning framework as UPE , which optimizes ( | ) and ( | , ) from raw click data alternatively. For better illustration, we show the logarithm of the normalized estimated propensity at positions 1, 2, 3 and 9 in Fig.5(a). We can observed that from step 3000, the estimated propensity by UPE against position 1 has been much smaller than that against position 9, and those at other positions are almost identical to that at position 9. It indicates that UPE fails to learn the correct examination probability for each position, which should have a larger value for a higher-ranked position. 
We side-by-side present the ranking performance curve of UPE in Fig.5(b). Starting from step 3000, nDCG@10 of UPE does not increase; this means that collecting more click data does not improve the ranking model because the propensity model has failed in separating the impact of relevance and position on propensity. This phenomenon also verifies that an accurate propensity model is indeed necessary for optimizing ranking models.Unlike UPE , the proposed logging-policy-aware confounding effect learning and joint propensity learning enable us to obtain an unconfounded propensity through backdoor adjustment, which correctly estimates the causal effect between the propensity and position, as has been shown in Fig.4. Moreover, the learning curve of UPE in Fig.5(b)shows that the ranking performance of our proposal consistently improves when more click data are collected. Therefore, the two-step optimization strategy is necessary for LPP", "figure_data": "", "figure_id": "fig_3", "figure_label": "525", "figure_type": "figure" }, { "figure_caption": "0.703 * 0.723 * 0.768 * 0.355 * 0.433 * 0.454 * 0.469 * 0.666 * 0.633 * 0.657 * 0.718 * 0.595 * 0.704 * 0.720 * 0.727 * We follow the predefined data split of training, validation, and testing of all datasets. The Yahoo! set splits the queries arbitrarily and uses 19,944 for training, 2,994 for validation, and 6,983 for testing.", "figure_data": "10IPW-Random 0.693 0.702 0.722 0.767 0.352 0.432 0.452 0.469 0.667 0.637 0.660 0.720 0.596 0.705 0.722 0.728UPE 0.695 Vectorization 0.670 0.678 0.702 0.753 0.343 0.423 0.446 0.460 0.663 0.630 0.653 0.711 0.593 0.701 0.716 0.724DLA0.671 0.677 0.701 0.750 0.345 0.423 0.445 0.460 0.663 0.629 0.653 0.712 0.592 0.701 0.717 0.724REM0.674 0.678 0.699 0.747 0.349 0.425 0.446 0.462 0.642 0.611 0.631 0.690 0.574 0.684 0.702 0.709PairD0.602 0.614 0.642 0.700 0.319 0.394 0.416 0.433 0.609 0.569 0.593 0.653 0.545 0.656 0.675 0.684Naive0.634 0.644 0.670 0.723 0.334 0.409 0.431 0.447 0.639 0.601 0.624 0.683 0.571 0.681 0.699 0.706• Istella-S 2 contains 33K queries and 3,408K documents (roughly103 documents per query) sampled from a commercial Italiansearch engine. Each query-document pair is represented by 220features and annotated with 5-level relevance judgments [24].", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "0.692 * 0.715 * 0.762 * 0.351 0.429 0.451 0.466 0.669 * 0.637 * 0.659 * 0.720 * 0.598 * 0.706 * 0.722 * 0.729 * The performance of different fitting targets in LLP optimization on Yahoo! LETOR with deterministic online learning (OnD). \" * \" indicates statistically significant improvement over the best baseline.", "figure_data": "10", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "10 UPE 0.695 * 0.703 * 0.723 * 0.768 * 0.355 * 0.433 * 0.454 * 0.469 * MRR-UPE 0.688 0.694 0.715 0.762 0.352 0.429 0.451 0.466 DCG-UPE 0.686 0.691 0.714 0.760 0.350 0.428 0.451 0.465", "figure_data": "", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" } ]
Dan Luo; Lixin Zou; Zhiyu Chen; Dawei Yin; Brian D Davison
[ { "authors": "Qingyao Ai; Keping Bi; Jiafeng Guo; W Bruce Croft", "journal": "", "ref_id": "b0", "title": "Learning a Deep Listwise Context Model for Ranking Refinement", "year": "2018-07-08" }, { "authors": "Qingyao Ai; Keping Bi; Cheng Luo; Jiafeng Guo; W Bruce Croft", "journal": "", "ref_id": "b1", "title": "Unbiased Learning to Rank with Unbiased Propensity Estimation", "year": "2018-07-08" }, { "authors": "Qingyao Ai; Tao Yang; Huazheng Wang; Jiaxin Mao", "journal": "ACM Trans. Inf. Syst", "ref_id": "b2", "title": "Unbiased Learning to Rank: Online or Offline?", "year": "2021" }, { "authors": "Yinqiong Cai; Jiafeng Guo; Yixing Fan; Qingyao Ai; Ruqing Zhang; Xueqi Cheng", "journal": "", "ref_id": "b3", "title": "Hard Negatives or False Negatives: Correcting Pooling Bias in Training Neural Ranking Models", "year": "2022-10-17" }, { "authors": "Olivier Chapelle; Yi Chang", "journal": "", "ref_id": "b4", "title": "Yahoo! Learning to Rank Challenge Overview", "year": "2010" }, { "authors": "Olivier Chapelle; Donald Metlzer; Ya Zhang; Pierre Grinspan", "journal": "", "ref_id": "b5", "title": "Expected reciprocal rank for graded relevance", "year": "2009" }, { "authors": "Mouxiang Chen; Chenghao Liu; Zemin Liu; Jianling Sun", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b6", "title": "LBD: Decouple Relevance and Observation for Individual-Level Unbiased Learning to Rank", "year": "2022" }, { "authors": "Mouxiang Chen; Chenghao Liu; Zemin Liu; Jianling Sun", "journal": "ACM", "ref_id": "b7", "title": "Scalar is Not Enough: Vectorization-based Unbiased Learning to Rank", "year": "2022-08-14" }, { "authors": "Mouxiang Chen; Chenghao Liu; Jianling Sun; Steven C H Hoi", "journal": "", "ref_id": "b8", "title": "Adapting Interactional Observation Embedding for Counterfactual Learning to Rank", "year": "2021-07-11" }, { "authors": "Konstantina Christakopoulou; Madeleine Traverse; Trevor Potter; Emma Marriott; Daniel Li; Chris Haulk; Ed H Chi; Minmin Chen", "journal": "", "ref_id": "b9", "title": "Deconfounding User Satisfaction Estimation from Response Rate Bias", "year": "2020-09-22" }, { "authors": "Aleksandr Chuklin; Ilya Markov; Maarten De Rijke", "journal": "", "ref_id": "b10", "title": "Click Models for Web Search and their Applications to IR: WSDM 2016 Tutorial", "year": "2016-02-22" }, { "authors": "Djork-Arné Clevert; Thomas Unterthiner; Sepp Hochreiter", "journal": "", "ref_id": "b11", "title": "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)", "year": "2016-05-02" }, { "authors": "Nick Craswell; Onno Zoeter; Michael J Taylor; Bill Ramsey", "journal": "", "ref_id": "b12", "title": "An experimental comparison of click position-bias models", "year": "2008-02-11" }, { "authors": "John C Duchi; Elad Hazan; Yoram Singer", "journal": "J. Mach. Learn. Res", "ref_id": "b13", "title": "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization", "year": "2011" }, { "authors": "Priyanka Gupta; Ankit Sharma; Pankaj Malhotra; Lovekesh Vig; Gautam Shroff", "journal": "", "ref_id": "b14", "title": "CauSeR: Causal Session-based Recommendations for Handling Popularity Bias", "year": "2021-11-01" }, { "authors": "Ziniu Hu; Yang Wang; Qu Peng; Hang Li", "journal": "", "ref_id": "b15", "title": "Unbiased LambdaMART: An Unbiased Pairwise Learning-to-Rank Algorithm", "year": "2019-05-13" }, { "authors": "Kalervo Järvelin; Jaana Kekäläinen", "journal": "ACM Trans. Inf. 
Syst", "ref_id": "b16", "title": "Cumulated gain-based evaluation of IR techniques", "year": "2002" }, { "authors": "Thorsten Joachims", "journal": "ACM", "ref_id": "b17", "title": "Optimizing search engines using clickthrough data", "year": "2002-07-23" }, { "authors": "Thorsten Joachims", "journal": "", "ref_id": "b18", "title": "Training linear SVMs in linear time", "year": "2006-08-20" }, { "authors": "Thorsten Joachims; Laura A Granka; Bing Pan; Helene Hembrooke; Geri Gay", "journal": "", "ref_id": "b19", "title": "Accurately interpreting clickthrough data as implicit feedback", "year": "2005-08-15" }, { "authors": "Thorsten Joachims; Laura A Granka; Bing Pan; Helene Hembrooke; Filip Radlinski; Geri Gay", "journal": "ACM Trans. Inf. Syst", "ref_id": "b20", "title": "Evaluating the accuracy of implicit feedback from clicks and query reformulations in Web search", "year": "2007" }, { "authors": "Thorsten Joachims; Adith Swaminathan; Tobias Schnabel", "journal": "", "ref_id": "b21", "title": "Unbiased Learning-to-Rank with Biased Feedback", "year": "2017-02-06" }, { "authors": "Dugang Liu; Pengxiang Cheng; Hong Zhu; Zhenhua Dong; Xiuqiang He; Weike Pan; Zhong Ming", "journal": "", "ref_id": "b22", "title": "Mitigating Confounding Bias in Recommendation via Information Bottleneck", "year": "2021-09-27" }, { "authors": "Claudio Lucchese; Maria Franco; Salvatore Nardini; Raffaele Orlando; Fabrizio Perego; Salvatore Silvestri; Trani", "journal": "", "ref_id": "b23", "title": "Post-Learning Optimization of Tree Ensembles for Efficient Ranking", "year": "2016-07-17" }, { "authors": "Dan Luo; Lixin Zou; Qingyao Ai; Zhiyu Chen; Dawei Yin; Brian D Davison", "journal": "", "ref_id": "b24", "title": "Model-based Unbiased Learning to Rank", "year": "2022" }, { "authors": "Jiaxin Mao; Zhumin Chu; Yiqun Liu; Min Zhang; Shaoping Ma", "journal": "", "ref_id": "b25", "title": "Investigating the Reliability of Click Models", "year": "2019-10-02" }, { "authors": "Jiaxin Mao; Cheng Luo; Min Zhang; Shaoping Ma", "journal": "", "ref_id": "b26", "title": "Constructing Click Models for Mobile Search", "year": "2018-07-08" }, { "authors": "Judea Pearl", "journal": "Cam-bridgeUniversityPress", "ref_id": "b27", "title": "Models, reasoning and inference", "year": "2000" }, { "authors": "Yi Ren; Hongyan Tang; Siwen Zhu", "journal": "ACM", "ref_id": "b28", "title": "Unbiased Learning to Rank with Biased Continuous Feedback", "year": "2022-10-17" }, { "authors": "Matthew Richardson; Ewa Dominowska; Robert Ragno", "journal": "", "ref_id": "b29", "title": "Predicting clicks: estimating the click-through rate for new ads", "year": "2007-05-08" }, { "authors": "Masahiro Sato; Sho Takemori; Janmajay Singh; Tomoko Ohkuma", "journal": "", "ref_id": "b30", "title": "Unbiased Learning for the Causal Effect of Recommendation", "year": "2020-09-22" }, { "authors": "D Mark; James Smucker; Ben Allan; Carterette", "journal": "", "ref_id": "b31", "title": "A comparison of statistical significance tests for information retrieval evaluation", "year": "2007-11-06" }, { "authors": "Nitish Srivastava; Geoffrey E Hinton; Alex Krizhevsky; Ilya Sutskever; Ruslan Salakhutdinov", "journal": "J. Mach. Learn. 
Res", "ref_id": "b32", "title": "Dropout: a simple way to prevent neural networks from overfitting", "year": "2014" }, { "authors": "Anh Tran; Tao Yang; Qingyao Ai", "journal": "", "ref_id": "b33", "title": "ULTRA: An Unbiased Learning To Rank Algorithm Toolbox", "year": "2021-11-01" }, { "authors": "Ali Vardasbi; Maarten De Rijke; Ilya Markov", "journal": "ACM", "ref_id": "b34", "title": "Mixture-Based Correction for Position and Trust Bias in Counterfactual Learning to Rank", "year": "2021-11-01" }, { "authors": "Ali Vardasbi; Harrie Oosterhuis; Maarten De Rijke", "journal": "", "ref_id": "b35", "title": "When Inverse Propensity Scoring does not Work: Affine Corrections for Unbiased Learning to Rank", "year": "2020-10-19" }, { "authors": "Nan Wang; Zhen Qin; Xuanhui Wang; Hongning Wang", "journal": "ACM", "ref_id": "b36", "title": "Non-Clicks Mean Irrelevant? Propensity Ratio Scoring As a Correction", "year": "2021-03-08" }, { "authors": "Wenjie Wang; Fuli Feng; Xiangnan He; Xiang Wang; Tat-Seng Chua", "journal": "", "ref_id": "b37", "title": "Deconfounded Recommendation for Alleviating Bias Amplification", "year": "2021-08-14" }, { "authors": "Xuanhui Wang; Michael Bendersky; Donald Metzler; Marc Najork", "journal": "", "ref_id": "b38", "title": "Learning to Rank with Selection Bias in Personal Search", "year": "2016-07-17" }, { "authors": "Xuanhui Wang; Nadav Golbandi; Michael Bendersky; Donald Metzler; Marc Najork", "journal": "", "ref_id": "b39", "title": "Position Bias Estimation for Unbiased Learning to Rank in Personal Search", "year": "2018-02-05" }, { "authors": "Yixin Wang; Dawen Liang; Laurent Charlin; David M Blei", "journal": "", "ref_id": "b40", "title": "Causal Inference for Recommender Systems", "year": "2020-09-22" }, { "authors": "Le Yan; Zhen Qin; Honglei Zhuang; Xuanhui Wang; Michael Bendersky; Marc Najork", "journal": "", "ref_id": "b41", "title": "Revisiting Two-tower Models for Unbiased Learning to Rank", "year": "2022-07-11" }, { "authors": "Tao Yang; Shikai Fang; Shibo Li; Yulan Wang; Qingyao Ai", "journal": "", "ref_id": "b42", "title": "Analysis of Multivariate Scoring Functions for Automatic Unbiased Learning to Rank", "year": "2020-10-19" }, { "authors": "Tao Yang; Chen Luo; Hanqing Lu; Parth Gupta; Bing Yin; Qingyao Ai", "journal": "ACM", "ref_id": "b43", "title": "Can Clicks Be Both Labels and Features?: Unbiased Behavior Feature Collection and Uncertainty-aware Learning to Rank", "year": "2022-07-11" }, { "authors": "Xun Yang; Fuli Feng; Wei Ji; Meng Wang; Tat-Seng Chua", "journal": "", "ref_id": "b44", "title": "Deconfounded Video Moment Retrieval with Causal Intervention", "year": "2021-07-11" }, { "authors": "Ruohan Zhan; Changhua Pei; Qiang Su; Jianfeng Wen; Xueliang Wang; Guanyu Mu; Dong Zheng; Peng Jiang; Kun Gai", "journal": "", "ref_id": "b45", "title": "Deconfounding Duration Bias in Watch-time Prediction for Video Recommendation", "year": "2022-08-14" }, { "authors": "Junqi Zhang; Jiaxin Mao; Yiqun Liu; Ruizhe Zhang; Min Zhang; Shaoping Ma; Jun Xu; Qi Tian", "journal": "", "ref_id": "b46", "title": "Context-Aware Ranking by Constructing a Virtual Environment for Reinforcement Learning", "year": "2019-11-03" }, { "authors": "Yang Zhang; Fuli Feng; Xiangnan He; Tianxin Wei; Chonggang Song; Guohui Ling; Yongdong Zhang", "journal": "", "ref_id": "b47", "title": "Causal Intervention for Leveraging Popularity Bias in Recommendation", "year": "2021-07-11" }, { "authors": "Honglei Zhuang; Zhen Qin; Xuanhui Wang; Michael Bendersky; Xinyu Qian; Po Hu; Dan 
Chary; Chen ", "journal": "", "ref_id": "b48", "title": "Cross-Positional Attention for Debiasing Clicks", "year": "2021-04-19" } ]
[ { "formula_coordinates": [ 2, 356.16, 632.16, 202.59, 19.58 ], "formula_id": "formula_0", "formula_text": "L ( , | ) = ∈ , =1 Δ( (x ), | ),(1)" }, { "formula_coordinates": [ 3, 69.04, 87.47, 205.95, 77.15 ], "formula_id": "formula_1", "formula_text": "R E X K C R (a) X R E X K C R (b)" }, { "formula_coordinates": [ 3, 81.24, 246.72, 213.39, 19.58 ], "formula_id": "formula_2", "formula_text": "L ( , | ) = ∈ , =1 Δ( (x ), | ),(2)" }, { "formula_coordinates": [ 3, 123.96, 420.48, 170.67, 8.84 ], "formula_id": "formula_3", "formula_text": "= 1 ⇐⇒ ( = 1 and = 1),(3)" }, { "formula_coordinates": [ 3, 92.16, 526.61, 202.47, 26.49 ], "formula_id": "formula_4", "formula_text": "L ( , | ) = ∈ , =1 Δ( (x ), | ) ( )(4a)" }, { "formula_coordinates": [ 3, 98.64, 558.17, 195.99, 26.61 ], "formula_id": "formula_5", "formula_text": "L ( , | ) = ∈ , =1 Δ( ( ), | ) (x )(4b)" }, { "formula_coordinates": [ 3, 347.28, 576.24, 211.47, 52.22 ], "formula_id": "formula_6", "formula_text": "( | ) propensity estimand = x ( , x| ) (5a) = x ( |x, ) (x| ),(5b)" }, { "formula_coordinates": [ 3, 410.76, 633.96, 147.99, 26.81 ], "formula_id": "formula_7", "formula_text": "∝ x ( |x, ) (x) causal • ( |x) confounding ,(5c)" }, { "formula_coordinates": [ 4, 372.6, 105.72, 186.15, 82.46 ], "formula_id": "formula_8", "formula_text": "( | ( )) = ′ ( | ) (6a) = x ′ ( |x, ) ′ (x| ) (6b) = x ′ ( |x, ) ′ (x) (6c) = x ( |x, ) (x),(6d)" }, { "formula_coordinates": [ 4, 357.12, 518.76, 201.63, 25.35 ], "formula_id": "formula_9", "formula_text": "( | ( )) = 1 |Q | • | | ∈ Q ∈ ( |x , ),(7)" }, { "formula_coordinates": [ 5, 329.76, 333.17, 56.09, 9.48 ], "formula_id": "formula_10", "formula_text": "= [ 1 , • • • , ]," }, { "formula_coordinates": [ 5, 361.68, 358.92, 197.07, 8.84 ], "formula_id": "formula_11", "formula_text": "m = Encoder ( , ), where m ∈ R .(8)" }, { "formula_coordinates": [ 5, 379.44, 421.68, 179.31, 8.84 ], "formula_id": "formula_12", "formula_text": "ˆ = FFN(m ), where ˆ ∈ R,(9)" }, { "formula_coordinates": [ 5, 324.96, 496.08, 210.11, 28.35 ], "formula_id": "formula_13", "formula_text": "L ( pt | ) = - ∈ exp( ) ∈ exp( ) • log exp( ˆ ) ∈ exp( ˆ )" }, { "formula_coordinates": [ 6, 94.2, 178.8, 200.43, 8.84 ], "formula_id": "formula_14", "formula_text": "p = Encoder (rank( )), where p ∈ R ,(11)" }, { "formula_coordinates": [ 6, 136.32, 231.36, 158.31, 8.84 ], "formula_id": "formula_15", "formula_text": "ˆ = FFN(m + p ).(12)" }, { "formula_coordinates": [ 6, 58.44, 308.52, 236.19, 27.39 ], "formula_id": "formula_16", "formula_text": "L ( pos | ) = - ∈ exp( ) ∈ exp( ) •log exp( ˆ ) ∈ exp( ˆ ) ,(13)" }, { "formula_coordinates": [ 6, 88.92, 538.56, 205.71, 9.21 ], "formula_id": "formula_17", "formula_text": "( |x , ) := FFN Encoder (x ) + Encoder ( ) .(14)" }, { "formula_coordinates": [ 7, 116.64, 140.04, 422.29, 8.73 ], "formula_id": "formula_18", "formula_text": "K = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K =" }, { "formula_coordinates": [ 7, 157.2, 448.2, 134.01, 8.84 ], "formula_id": "formula_19", "formula_text": "( ) = , (15" }, { "formula_coordinates": [ 7, 291.21, 448.2, 3.43, 8.73 ], "formula_id": "formula_20", "formula_text": ")" }, { "formula_coordinates": [ 7, 105.12, 529.08, 186.08, 20.97 ], "formula_id": "formula_21", "formula_text": "Pr( = 1| ) = + (1 -) 2 -1 2 max -1 , (16" }, { "formula_coordinates": [ 7, 291.2, 535.2, 3.43, 8.73 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 9, 131.04, 140.04, 401.41, 8.73 ], "formula_id": 
"formula_23", "formula_text": "= 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K =" }, { "formula_coordinates": [ 9, 60.36, 296.05, 217.63, 22.04 ], "formula_id": "formula_24", "formula_text": "Methods NDCG@K ERR@K K = 1 K = 3 K = 5 K = 10 K = 1 K = 3 K = 5 K =" }, { "formula_coordinates": [ 9, 93, 559.44, 201.63, 44.57 ], "formula_id": "formula_25", "formula_text": "MRR-UPE : MRR-LPP@ = 1 (17a) DCG-UPE : DCG-LPP@ = 1 log 2 ( + 1) . (17b" }, { "formula_coordinates": [ 9, 291, 587.04, 3.63, 8.73 ], "formula_id": "formula_26", "formula_text": ")" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b36", "b8", "b9", "b20", "b36", "b9", "b41", "b40", "b14", "b38", "b48", "b32" ], "table_ref": [], "text": "Transformer [37] and its variations have achieved great success in both the visual and linguistic areas. The Transformer, for instance, is the backbone architecture of several advanced pre-trained language models in natural language processing, including BERT [9] and GPT [25]. In vision-related tasks, such as classification [10], object detection [3], semantic segmentation [50], pose estimation [17] and video captioning [52], Transformer also shows significant potential. Core to a Transformer model is the self-attention mechanism, which allows the Transformer to represent the contexts within an input sequence [21]. Due to the fact that self-attention computes the dot-product between each pair of input representations, its complexity is quadratic to the length of the input sequence [37]. Therefore, it is challenging for standard Transformer models to effectively process lengthy input sequences [35]. In computer vision, however, many tasks [10,42,41] demand high-resolution images that are transformed into lengthy series of image patches ahead of being processed using the Transformer model. Consequently, it is essential to construct an effective attention mechanism capable of modeling lengthy sequences.\nFigure 1: The overview of activation guided attention in our model. After partitioning the input image into patches of fixed size, the heat map produced by Grad-CAM++ via the auxiliary convolutional model M conv is utilized to choose the major tokens and the minor tokens. Then, for each patch, we perform a linear embedding. The minor tokens (grey tokens) are combined into several representation tokens (N f < N m ) via a fusion network in order to decrease computation while preserving background knowledge. After incorporating positional embedding and fusion embedding into tokens, the output is fed into an efficient transformer encoder.\nUtilizing more efficient attention processes, many approaches have been proposed to accelerate the vanilla Transformer [8, 15,22]. In one direction, the approaches lower the length of input tokens, such as [39,32] by leveraging low-rank projections. These methods employ complex projection techniques to reduce computing effort while retaining feature representation. Due to the loss of fine-grained token-wise information, however, they frequently perform worse than full attention on tasks requiring fine-grained local knowledge, as evaluated by traditional language [34] and vision [48] benchmarks. In another direction, the approaches also attempt to sparsify the attention matrix using predefined patterns such as [33,22,2,46]. These methods employ powerful inductive biases to enhance computing efficiency and model performance, but the self-attention layer's capabilities are severely constrained because each token can only attend to a subset of tokens.\nTo address the above limitations, we present the Convolutional Activation Guided Efficient Vision Transformer (CageViT), which speeds up the vanilla Transformer in both directions. Specifically, as shown in Figures 1 and2, our CageViT is developed in three steps:\n1) Before feeding tokens into the linear projection layer, we execute an additional selection and rearrangement step based on the class activation map provided by Grad-CAM++. 
This process enables the network to identify in advance which patches are significant (major tokens) and which are not (minor tokens).\n2) After the linear projection of flattened patches, we combine minor tokens into multiple fusion tokens while retaining major tokens. Then, positional and fusion embeddings are added to tokens, and the results are sent to the Transformer. This step can preserve the fine-grained information of major tokens, which dominate the classification process because determining the contours and textures is more significant in this task.\n3) We redesign the attention mechanism to improve communication between major tokens and fusion tokens, thereby facilitating the use of background information contained in fusion tokens to assist with classification. This step permits all major tokens to interact with minor tokens under the supervision of fusion tokens, i.e., they are gated by the fusion tokens, requiring the major tokens to be aware of the background information.\nWe thoroughly validate the effectiveness of the proposed CageViT using widely used benchmarks, such as image classification on ImageNet-1k. Compared to the most recent state-of-the-art Vision Transformers and CNNs, experimental results demonstrate the efficiency and generalization abilities of the proposed CageViT. Using a model size comparable to ResNet-18 (69.7%) and PVT-Tiny (75.1%), our CageViT-Small achieves Top-1 accuracy of 80.4% on ImageNet-1k." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b36", "b0", "b36", "b8", "b14", "b14", "b4", "b9", "b38", "b23" ], "table_ref": [], "text": "The Transformer [37] is a sequence model that maintains the hidden activation of each time step and combines this data with the help of an attention operator [1]. The Transformer therefore represents the past with a tensor of past observations (depth × memory size × dimension). With this granular memory, the Transformer has brought about a step-change in performance, particularly in machine translation [37], language modelling [8, 29], video captioning [52], and a multitude of language understanding benchmarks [9,44]. The computational expense of attending to every time step and the storage cost of retaining this enormous memory, i.e. O(n^2) in both time and space complexity, is a disadvantage of storing everything. Multiple strategies [8, 15,22] have been proposed to reduce the computational cost of attention. These efficient transformers have a common name scheme, X-former, for instance Reformer [15] and Performer [5]. Based on previous research, efficient transformers may be loosely divided into three categories, namely fixed/factorized/random pattern-based approaches, low rank/kernels-based methods, and recurrence-based methods.\nFixed/factorized/random pattern-based methods work by simply sparsifying the attention matrix. For instance, Image Transformer [10], which is inspired by CNNs, restricts the receptive field of self-attention to small local regions. This allows the model to process larger batch sizes while keeping the likelihood loss tractable. In addition, applying the notion of locality can be a desired inductive bias for image processing.\nUtilizing low-rank approximations of the self-attention matrix to enhance efficiency has recently become a popular strategy. The central idea is to assume that the N • N matrix has a low-rank structure.
A classic approach is shown by the Linformer [39], which projects the length dimension of keys and values to a lower-dimensional representation (N → k). Because the N • N matrix has been reduced to N • k, it is evident that the low-rank technique alleviates the memory complexity issue of self-attention.\nRecurrence-based methods are an extension of the blockwise methods [24,22] that link these chunked input blocks through recurrence. Transformer-XL [8] is the first to propose a system that connects multiple segments and blocks through segment-level recurrence. By using the last state as a memory state for the current segment, instead of computing the hidden states from scratch, the information is passed from state to state and not lost. This method effectively addresses the problem of semantic information loss and allows for longer-term dependencies." }, { "figure_ref": [], "heading": "CageViT", "publication_ref": [ "b36" ], "table_ref": [], "text": "CageViT tries to approximate the full attention of the vanilla Transformer by aggregating all minor tokens into a few background tokens, while completely keeping the significant tokens. This section presents a preliminary overview of the multi-headed attention mechanism in the Transformer [37]. Then, we will detail how to discriminate major tokens from minor ones and fuse the minor ones using an existing convolutional neural network. After that, we propose a new attention mechanism, called Gated Linear SRA, to further improve the model's efficiency. See Fig. 1 for an illustration of our convolutional activation guided attention." }, { "figure_ref": [], "heading": "Preliminaries and Notations", "publication_ref": [ "b36" ], "table_ref": [], "text": "Following [37], the dot-product attention mechanism in the vanilla Transformer can be defined as:\nAttention(Q, K, V ) = softmax( QK T √ d )V,(1)\nwhere Q, K, V ∈ R N ×d are the query, key, and value embeddings, with N as the sequence length and d as the hidden dimension.\nThe Transformer's multi-headed attention (MHA) algorithm computes contextual representations for each token by paying attention to the entire input sequence across various representation subspaces. MHA is characterized as:\nMHA(Q, K, V ) = Concat(head 0 , ..., head N h )W O ,(2)\nhead i = Attention(Q i , K i , V i ),(3)\nfor the i th head, where N h is the number of heads, and W O ∈ R N h d×d is the projection matrix that projects the N h d-dimensional concatenation back into d dimensions.
Since Grad-CAM++ requires the class label to produce the class activation map, and this label is not available during testing, we choose the top-K labels based on the confidence of the final layer and calculate a weighted average using this confidence. The resulting salience map from the weighted average Grad-CAM++ is defined as:\nS ij = K k ( z k Z • L c k ij ),(4)\nwhere z k is the confidence of the last layer for class label c k , Z = K k z k is the regularization term, and L ij is the value of the salience map given by Grad-CAM++ at the i th row and j th column.\nAfter obtaining the class activation map, the importance of each patch can be calculated by summing the activation values within the patch. In this paper, we choose a ratio of ρ major tokens out of the total tokens. These tokens are then rearranged and fed into the linear projection layer, as shown in Fig. 1.\nAfter passing through the linear projection layer, the selected minor tokens are fed into a multi-head fusion layer composed of multiple multi-layer MLPs to reduce the token dimension. The multi-head fusion layer is defined as follows:\nMHF i (m) = MLP(Concat(m 0 , m 1 , m 2 , ..., m Nm )),(5)\nwhere for the i th fusion head,\nN m = N b • (1 -ρ)\nis the number of minor tokens, N b is the total number of patches, and m i is the i th selected minor token.\nExtra model. We add an additional convolutional model M conv , specifically MixNet-S [31], to the Transformer, requiring extra compute and memory during training and testing. However, based on the results presented in Sec. 4.2, we found that we do not need a high-accuracy convolutional model to achieve good results. We can use a convolutional model with average performance (50% accuracy on ImageNet), which is computationally efficient. Since every experiment involves the forward pass of M conv , a lightweight convolutional model is required. The performance of MixNet-S can be found in Tab. 2." }, { "figure_ref": [], "heading": "Gated Linear Spatial Reduction Attention", "publication_ref": [ "b39", "b39", "b40", "b40", "b40" ], "table_ref": [], "text": "Here we present Gated Linear Spatial Reduction Attention (Gated Linear SRA), a redesigned layer that better fits our proposed CAGA. We start by introducing the related layers:\nSpatial Reduction Attention (SRA). SRA [40] is proposed to reduce the attention operation's computational and memory costs for high-resolution feature maps. The key idea is to reshape the key and value from (hw) × d to hw R 2 × d in each encoder layer. SRA(Q, K, V ) = Concat(head 0 , ..., head N )W O .(6)\nFigure 2: Comparison of SRA in PVTv1 [40], Linear SRA in PVTv2 [41], and our proposed Gated Linear SRA, which can make it easier to use the contextual information stored in the fusion tokens to aid classification. All the fusion tokens are concatenated together and fed into the Gate layer.\nHere each head is obtained by\nhead = Attention(Q, SR(K), SR(V )),(7)\nwhere SR(•) is the operation for reducing the spatial dimension of K and V , which can be written as:\nSR(x) = Norm(Reshape(x, R)W S ).(8)\nHere, R is the reduction ratio, x ∈ R hw×d denotes the input sequence.
Reshape is an operation to reshape the input sequence into hw R 2 × d, which can be implemented with a convolution over the 2-d dimension of the feature map. The complexity of SRA is:\nΩ(SRA) = 2h 2 w 2 d R 2 + hwd 2 R 2 . (9\n)\nLinear SRA. Different from SRA, Linear SRA [41] uses average pooling for spatial dimension reduction, which can reduce the dimension into a fixed size (p 2 × d) before attention operation. Thus, the complexity of Linear SRA can be reduced to a linear level:\nΩ(LinearSRA) = 2hwp 2 d, (10\n)\nwhere p is pooling size of Linear SRA, and is set to 7 in PVTv2 [41].\nGated Linear SRA. The primary concept behind our proposal is to incorporate the fusion token, introduced in Section 3.2, deeper into the attention calculation process. By merging all minor tokens into fusion tokens, the fusion token carries a wealth of environmental information that can better guide the attention. Also, since the locality information disrupted by the selection and rearrangement is still preserved in the environment representations, the introduced gating mechanism can facilitate the communication between major tokens and fusion tokens. Specifically, Gated Linear SRA can be formulated as follows:\nhead = Attention(Q, SR(K), SR(V ) Gate(V f )),(11)\nwhere stands for element-wise multiplication, V f is the concatenated value of fusion tokens in each layer, and SR(•) is the same operation used in Linear SRA, i.e. average pooling to a fixes size of (p 2 × d). Note that the Gate module is a two-layer MLP that is used to map multiple fusion tokens into dimensions that can exchange information with the average-pooled value token.\nSince the complexity of MLP and element-wise multiplication is linear, the complexity of Gated Linear SRA is equivalent to that of Linear SRA:\nΩ(GatedLinearSRA) = Ω(LinearSRA) = 2hwp 2 d.(12)" }, { "figure_ref": [], "heading": "Model Variants", "publication_ref": [], "table_ref": [], "text": "In summary, the hyperparameters of our method are listed as follows:\n• L: the number of encoder layers; • d: the size of hidden dimension in the encoder layer;\n• D: the size of hidden dimension in MLP head layer;\n• p: a constant value that is used to reduce the spatial dimension in Gated Linear SRA;\n• N h : the head number of the Efficient Self-Attention;\n• N f : the number of fusion heads in the multi-head fusion module in Sec. 3.2;\n• K: the number of class labels used to calculate the weighted average of activation in Eq. (4);\n• ρ: the ratio of major tokens selected by CAGA in Sec. 3.2;\nTo provide instances for discussion, we describe a series of CageViT model variants with different scales, namely CageViT-T, CageViT-S, CageViT-B, CageViT-L (Tiny, Small, Base, and Large, respectively), in Tab. 1. As is said in Sec. 3.2, a light-weighted convolutional model is required for the calculation of Grad-CAM++, so MixConv-s [31] is chosen in the experiments. We also report the total parameters by adding the MixConv-s (4.10M) together with our CageViT in Tab. 1. In this section, we conduct experiments on ImageNet-1k for classification task. In the following subsections, we first compared the proposed CageViT with the previous state-of-the-arts on image classification task. Then we adopt ablation studies to validate the core design elements of CageViT." 
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Image Classification on ImageNet-1k", "publication_ref": [ "b5", "b5", "b47", "b45", "b51", "b3", "b42" ], "table_ref": [], "text": "Experimental settings. We evaluate the proposed CageViT model on ImageNet-1k dataset for image classification. The dataset contains 1.28 million training images and 50,000 validation images from 1,000 classes. Most of the experimental setup follows [36]. We use AdamW optimizer [19] for 300 epochs with cosine decay learning rate scheduler and 5 epochs of linear warm-up. We use batch size of 2048 distributed over 8 GPUs with 256 images per GPU. The initial learning rate is set to 5e-4, weight decay is 0.05 and gradient clipping with a max norm of 5 is applied.\nWe employ most of the augmentation and regularization strategies from [36], including RandAugment [7], Mixup [47], Cutmix [45], Random erasing [51], and stochastic depth [14]. The degree of stochastic depth augmentation increases with the size of the model, i.e., 0. with similar complexities: +5.3% for CageViT-S (80.4%) over PVT-T (75.1%). For larger models, CageViT also significantly outperforms the counterpart Swin[18] and Twins[6] architectures with similar complexities: +0.3% for CageViT-B (82.0%) over Twins-SVT-S (81.7%), and +0.1% for CageViT-L(83.4%) over Swin-S(83.3%) using 224×224 input. Note that although the Top-1 accuracy of CageViT-B and CageViT-L is slightly lower than that of the same level Focal Transformer [43], this is reasonable given that CageViT has fewer parameters and requires less computational resources." }, { "figure_ref": [ "fig_0" ], "heading": "The impact of M conv accuracy", "publication_ref": [], "table_ref": [], "text": "Experimental settings. To investigate the impact of M conv accuracy on CageViT performance, we obtained a series of M i conv (Top-1 Acc = 20, 30, 40, 50, 60, 70, 75), models with different Top-1 accuracies using early stopping. For each M i conv , we employ the same experimental setting reported in Sec. 4.1 for CageViT-Tiny on ImageNet-1k dataset.\nResults. Fig. 3 presents the empirical results on the impact of different M i conv . We can see, that after improving the Top-1 accuracy of M conv above 50%, the accuracy of CageViT-Tiny generally converges to a constant value. Therefore, we can leverage a M conv that is not fully trained into a high accuracy, which means we were able to save a lot of computational costs during the training of M conv . Since we would take a weighted average of top-K class activations in CAGA (Sec. 3.2), there is a high probability for the true class label to appear in the top-K labels, despite the moderate accuracy of M conv . Table 3: The impact of the value ρ on the performance of CageViT. Most of the hyper-parameters are the same as the experimental setting for CageViT-T reported in Sec. 4.1, except for the ρ value, resulting in the ratio of selected major tokens. The highest results for Top- Results. Tab. 4 presents the empirical results on the impact of different k values. We can see, that the CageViT-Tiny performs relatively worse with K = 1, 2, which means that the M conv cannot precisely classify the correct labels. As we increase the K value, the model performs much better when K is around 10. However, the accuracy gets lower with K > 10. 
This means that an increase in the value of K may make the major tokens come more from small items in the background, which disrupts the model's judgment of fine-grained information." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "In this section, we report the ablation studies of the proposed CageViT, using ImageNet-1k image classification. All experimental setups are consistent with those reported in Sec. 4.1, except for the different modules (e.g. CAGA, Linear SRA) equipped on the model." }, { "figure_ref": [], "heading": "Ablation on the proposed designs", "publication_ref": [], "table_ref": [], "text": "From Tab. 5, we see that all three designs can improve the model in terms of performance, parameter number, or computation overhead.\nCAGA can vastly reduce the computational cost. After adding the CAGA module into ViT, the number of parameters is reduced from 64.8M to 18.4M, while losing some accuracy. Gated Linear SRA is more suitable than Linear SRA when the model is equipped with CAGA. The top-1 accuracy is improved from 75.40% to 78.38% after replacing Linear SRA with Gated Linear SRA, while only adding 0.75M parameters." }, { "figure_ref": [], "heading": "Ablation on different class activation methods", "publication_ref": [ "b27", "b37", "b19" ], "table_ref": [], "text": "In Tab. 6, we present the performance of various class activation methods for selecting important tokens. These include Grad-CAM [28], Grad-CAM++ [4], Score-CAM [38], Ablation-CAM [27], and Eigen-CAM [20]. All implementations of the class activation methods come from PyTorch-Grad-CAM [12]. Among the methods evaluated, Grad-CAM++ achieved the highest Top-1 accuracy (78.38%) and Top-5 accuracy (94.12%), while also being computationally efficient (5813 throughput).\nWe also report the results of removing the influence of confidence when using Grad-CAM++, i.e. setting all the z k to 1 in Eq. (4). The top-1 accuracy is reduced from 78.38% to 75.40%, which means that the quality of the selected major tokens is lower, demonstrating the effectiveness of the weighted-average Grad-CAM++ described in Sec. 3.2.\nAs a point of comparison, we also report the results of randomly selecting ρ% of the tokens as a lower bound for token selection methods. It is evident that the performance of randomly selected tokens is significantly lower than that of using CAM methods." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce a novel, efficient transformer model called CageViT. We propose a two-fold approach to accelerating transformers. Specifically, we use convolutional activation to preprocess the tokens after patchifying the image to select and rearrange major and minor tokens, which greatly reduces computation cost through the use of an additional fusion layer. Instead of directly using the class activation map of the convolutional model, we design a new weighted class activation to reduce the quality requirements on the additionally introduced convolutional model. Additionally, we propose Gated Linear SRA to bring the fusion tokens deeper into the core calculation of the attention mechanism, facilitating communication between major and fusion tokens. We evaluate CageViT on the image classification task and conduct an ablation study to validate the design of various modules. Corresponding results demonstrate the effectiveness of CageViT." } ]
Recently, Transformers have emerged as the go-to architecture for both vision and language modeling tasks, but their computational efficiency is limited by the length of the input sequence. To address this, several efficient variants of Transformers have been proposed to accelerate computation or reduce memory consumption while preserving performance. This paper presents an efficient vision Transformer, called CageViT, that is guided by convolutional activation to reduce computation. Our CageViT, unlike current Transformers, utilizes a new encoder to handle the rearranged tokens, bringing several technical contributions: 1) Convolutional activation is used to pre-process the tokens after patchifying the image to select and rearrange the major tokens and minor tokens, which substantially reduces the computation cost through an additional fusion layer. 2) Instead of using the class activation map of the convolutional model directly, we design a new weighted class activation to lower the model requirements. 3) To facilitate communication between major tokens and fusion tokens, Gated Linear SRA is proposed to further integrate fusion tokens into the attention mechanism. We perform a comprehensive validation of CageViT on the image classification challenge. Experimental results demonstrate that the proposed CageViT outperforms the most recent state-of-the-art backbones by a large margin in terms of efficiency, while maintaining a comparable level of accuracy (e.g. a moderate-sized 43.35M model trained solely on 224 × 224 ImageNet-1K can achieve a Top-1 accuracy of 83.4%).
CageViT: Convolutional Activation Guided Efficient Vision Transformer
[ { "figure_caption": "Figure 3 :3Figure 3: Impact of convolutional model M conv accuracy (Top-1 accuracy) on CageViT-Tiny performance (Top-1 accuracy) on ImageNet-1k dataset.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Details of CageViT variants. We also report the total parameters adding the MixConv-s [31] (4.10M) between brackets.", "figure_data": "ModelLdDp N h N f K ρ (%) #Params (M)CageViT-T87681024 7849209.91 (14.01)CageViT-S87681024 712892013.53 (17.63)CageViT-B 127682048 712892024.34 (28.44)CageViT-L 16 1024 2048 716892043.35 (47.45)", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Base, and CageViT-Large, respectively. During testing on the validation set, the shorter side of the input image is first resized to 256, and a center crop of 224 × 224 is used for evaluation. All images are patchified into 256 patches, each containing 14 × 14 pixels. All experiments are carried out using the PyTorch[23] framework. Comparison with state-of-the-art backbones on ImageNet-1k benchmark. Throughput(images / s) is measured on a single V100 GPU, following[36]. All models are trained and evaluated on 224 × 224 resolution. The parameters of our models are reported in two parts: CageViT parameters and total parameters (in parentheses), where total parameters take the extra convolutional model M conv into account.", "figure_data": "1, 0.1, 0.2, 0.3 for", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "1 and Top-5 are shown in bold. The second highest results are marked with underline. We test the effect of different ρ values on CageViT-Tiny performance. Most of the hyperparameters are the same as the experimental setting for CageViT-Tiny reported in Sec. 4.1, except for the value of ρ, which determines the ratio of selected major tokens.Results. Tab. 3 presents the empirical results on the impact of different ρ. We can see, that the CageViT-Tiny performs best at ρ values of 20 and 50, with a relatively small number of parameters (9.5 and 21.2, respectively).4.4 The impact of different K valuesExperimental settings. To investigate the impact of the different number of selected top-K labels during calculating weighted average Grad-CAM++, i.e. K values, on CageViT-Tiny performance, we conduct experiments on different K values. Most of the hyper-parameters are the same as the experimental setting for CageViT-Tiny reported in Sec. 4.1, except for the K value.", "figure_data": "0102030405060708090100Top-1 (%)29.3 62.1 78.4 77.5 76.9 78.4 76.1 77.5 76.0 72.9 75.3Top-5 (%)51.9 79.9 94.1 92.5 92.2 93.9 92.5 93.1 92.3 91.8 92.4#Params (M) 0.514.79.511.3 15.8 21.2 27.1 32.7 38.9 45.1 54.34.3 The impact of different ρ valuesExperimental settings.", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The impact of the value K on the performance of CageViT. Most of the hyper-parameters are the same as the experimental setting for CageViT-Tiny reported in Sec. 4.1, except for the K value, resulting in the selected top-K labels while calculating the weighted average Grad-CAM++. 
The best result on Top-1 and Top-5 are in bold.", "figure_data": "K123456789102050100Top-1 (%) 61.4 62.2 69.4 68.3 71.6 74.1 71.4 71.9 78.4 76.3 72.8 72.1 69.4Top-5 (%) 78.8 75.6 78.8 83.8 90.2 91.3 89.5 90.0 94.1 90.7 89.8 89.4 81.6", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation of CageViT-Tiny: We report Top-1, Top-5 accuracy and number of parameters for different model designs on 224 × 224 image resolution. Our results indicate that the use of CAGA (as described in Section 3.2) significantly reduces computational cost, while the specifically designed Gated Linear SRA improves performance compared to the original Linear SRA.", "figure_data": "ModelTop-1 (%) Top-5 (%) #Params (M)ViT77.9193.8364.8ViT + CAGA72.5890.3119.2ViT + Linear SRA77.9493.9025.5ViT + CAGA + Linear SRA75.4091.539.16ViT + CAGA + Gated Linear SRA78.3894.129.91", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablations of CageViT-Tiny on different class activation methods. We report the Top-1 accuracy, Top-5 accuracy, and class activation method throughput. All the models are trained and evaluated on 224 × 224 resolution. The best records are marked in bold.", "figure_data": "CAMTop-1 (%) Top-5 (%) ThroughputGrad-CAM [28]74.5092.178492Grad-CAM++ [4]78.3894.125813Score-CAM [38]73.8191.846924Ablation-CAM [27]77.1993.352750Eigen-CAM [20]76.4392.904815Grad-CAM++ w/o confidence75.4292.586245Random Selection47.5361.40N/A", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Hao Zheng; Jinbao Wang; Xiantong Zhen; Hong Chen; Jingkuan Song; Feng Zheng
[ { "authors": "Dzmitry Bahdanau; Kyunghyun Cho; Yoshua Bengio", "journal": "", "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "year": "2014" }, { "authors": "Iz Beltagy; Matthew E Peters; Arman Cohan", "journal": "", "ref_id": "b1", "title": "Longformer: The long-document transformer", "year": "2020" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b2", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "Aditya Chattopadhay; Anirban Sarkar; Prantik Howlader; Vineeth N Balasubramanian", "journal": "IEEE", "ref_id": "b3", "title": "Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks", "year": "2018" }, { "authors": "Krzysztof Choromanski; Valerii Likhosherstov; David Dohan; Xingyou Song; Andreea Gane; Tamas Sarlos; Peter Hawkins; Jared Davis; Afroz Mohiuddin; Lukasz Kaiser", "journal": "", "ref_id": "b4", "title": "Rethinking attention with performers", "year": "2020" }, { "authors": "Xiangxiang Chu; Zhi Tian; Yuqing Wang; Bo Zhang; Haibing Ren; Xiaolin Wei; Huaxia Xia; Chunhua Shen", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b5", "title": "Twins: Revisiting the design of spatial attention in vision transformers", "year": "2021" }, { "authors": "Barret Ekin D Cubuk; Jonathon Zoph; Quoc V Shlens; Le", "journal": "", "ref_id": "b6", "title": "Randaugment: Practical automated data augmentation with a reduced search space", "year": "2020" }, { "authors": "Zihang Dai; Zhilin Yang; Yiming Yang; Jaime Carbonell; Ruslan Quoc V Le; Salakhutdinov", "journal": "", "ref_id": "b7", "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "year": "2019" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b8", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b9", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Bo Haoqi Fan; Karttikeya Xiong; Yanghao Mangalam; Zhicheng Li; Jitendra Yan; Christoph Malik; Feichtenhofer", "journal": "", "ref_id": "b10", "title": "Multiscale vision transformers", "year": "2021" }, { "authors": "", "journal": "", "ref_id": "b11", "title": "Jacob Gildenblat and contributors", "year": "" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b12", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Gao Huang; Yu Sun; Zhuang Liu; Daniel Sedra; Kilian Q Weinberger", "journal": "Springer", "ref_id": "b13", "title": "Deep networks with stochastic depth", "year": "2016" }, { "authors": "Nikita Kitaev; Łukasz Kaiser; Anselm Levskaya", "journal": "", "ref_id": "b14", "title": "Reformer: The efficient transformer", "year": "2020" }, { "authors": "Youngwan Lee; Jonghee Kim; Jeffrey Willette; Sung Ju Hwang", "journal": "", "ref_id": "b15", "title": "Mpvit: Multi-path vision transformer for dense prediction", "year": "2022" }, { "authors": "Kevin Lin; Lijuan Wang; Zicheng Liu", "journal": "", "ref_id": "b16", "title": "End-to-end 
human pose and mesh reconstruction with transformers", "year": "2021" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b17", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b18", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Mohammed Bany; Muhammad ; Mohammed Yeasin", "journal": "IEEE", "ref_id": "b19", "title": "Eigen-cam: Class activation map using principal In", "year": "2020" }, { "authors": "P Ankur; Oscar Parikh; Dipanjan Täckström; Jakob Das; Uszkoreit", "journal": "", "ref_id": "b20", "title": "A decomposable attention model for natural language inference", "year": "2016" }, { "authors": "Niki Parmar; Ashish Vaswani; Jakob Uszkoreit; Lukasz Kaiser; Noam Shazeer; Alexander Ku; Dustin Tran", "journal": "PMLR", "ref_id": "b21", "title": "Image transformer", "year": "2018" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Kopf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b22", "title": "Pytorch: An imperative style, highperformance deep learning library", "year": "" }, { "authors": "Jiezhong Qiu; Hao Ma; Omer Levy; Scott Wen-Tau Yih; Sinong Wang; Jie Tang", "journal": "", "ref_id": "b23", "title": "Blockwise self-attention for long document understanding", "year": "2019" }, { "authors": "Alec Radford; Jeff Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b24", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Ilija Radosavovic; Raj Prateek Kosaraju; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b25", "title": "Designing network design spaces", "year": "2020" }, { "authors": "Guruprasad Harish; Ramaswamy", "journal": "", "ref_id": "b26", "title": "Ablation-cam: Visual explanations for deep convolutional network via gradient-free localization", "year": "2020" }, { "authors": "Michael Ramprasaath R Selvaraju; Abhishek Cogswell; Ramakrishna Das; Devi Vedantam; Dhruv Parikh; Batra", "journal": "", "ref_id": "b27", "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "year": "2017" }, { "authors": "Mohammad Shoeybi; Mostofa Patwary; Raul Puri; Patrick Legresley; Jared Casper; Bryan Catanzaro", "journal": "", "ref_id": "b28", "title": "Megatron-lm: Training multi-billion parameter language models using model parallelism", "year": "2019" }, { "authors": "Aravind Srinivas; Tsung-Yi Lin; Niki Parmar; Jonathon Shlens; Pieter Abbeel; Ashish Vaswani", "journal": "", "ref_id": "b29", "title": "Bottleneck transformers for visual recognition", "year": "2021" }, { "authors": "Mingxing Tan; V Quoc; Le", "journal": "", "ref_id": "b30", "title": "Mixconv: Mixed depthwise convolutional kernels", "year": "2019" }, { "authors": "Yi Tay; Dara Bahri; Donald Metzler; Da-Cheng Juan; Zhe Zhao; Che Zheng", "journal": "icml", "ref_id": "b31", "title": "Synthesizer: Rethinking self-attention in transformer models", "year": "2020" }, { "authors": "Yi Tay; Dara Bahri; Liu Yang; Donald Metzler; Da-Cheng Juan", "journal": "PMLR", "ref_id": "b32", "title": "Sparse sinkhorn attention", 
"year": "2020" }, { "authors": "Yi Tay; Mostafa Dehghani; Samira Abnar; Yikang Shen; Dara Bahri; Philip Pham; Jinfeng Rao; Liu Yang; Sebastian Ruder; Donald Metzler", "journal": "", "ref_id": "b33", "title": "Long range arena: A benchmark for efficient transformers", "year": "2020" }, { "authors": "Yi Tay; Mostafa Dehghani; Dara Bahri; Donald Metzler", "journal": "", "ref_id": "b34", "title": "Efficient transformers: A survey", "year": "2020" }, { "authors": "Hugo Touvron; Matthieu Cord; Matthijs Douze; Francisco Massa; Alexandre Sablayrolles; Hervé Jégou", "journal": "PMLR", "ref_id": "b35", "title": "Training data-efficient image transformers & distillation through attention", "year": "2021" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b36", "title": "Attention is all you need", "year": "2017" }, { "authors": "Haofan Wang; Zifan Wang; Mengnan Du; Fan Yang; Zijian Zhang; Sirui Ding; Piotr Mardziel; Xia Hu", "journal": "", "ref_id": "b37", "title": "Score-cam: Score-weighted visual explanations for convolutional neural networks", "year": "2020" }, { "authors": "Sinong Wang; Belinda Z Li; Madian Khabsa; Han Fang; Hao Ma", "journal": "", "ref_id": "b38", "title": "Linformer: Self-attention with linear complexity", "year": "2020" }, { "authors": "Wenhai Wang; Enze Xie; Xiang Li; Deng-Ping Fan; Kaitao Song; Ding Liang; Tong Lu; Ping Luo; Ling Shao", "journal": "", "ref_id": "b39", "title": "Pyramid vision transformer: A versatile backbone for dense prediction without convolutions", "year": "2021" }, { "authors": "Wenhai Wang; Enze Xie; Xiang Li; Deng-Ping Fan; Kaitao Song; Ding Liang; Tong Lu; Ping Luo; Ling Shao", "journal": "Computational Visual Media", "ref_id": "b40", "title": "Pvt v2: Improved baselines with pyramid vision transformer", "year": "2022" }, { "authors": "Haiping Wu; Bin Xiao; Noel Codella; Mengchen Liu; Xiyang Dai; Lu Yuan; Lei Zhang", "journal": "", "ref_id": "b41", "title": "Cvt: Introducing convolutions to vision transformers", "year": "2021" }, { "authors": "Jianwei Yang; Chunyuan Li; Pengchuan Zhang; Xiyang Dai; Bin Xiao; Lu Yuan; Jianfeng Gao", "journal": "", "ref_id": "b42", "title": "Focal attention for long-range interactions in vision transformers", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b43", "title": "", "year": "2021" }, { "authors": "Zhilin Yang; Zihang Dai; Yiming Yang; Jaime Carbonell; Russ R Salakhutdinov; Quoc V Le", "journal": "Advances in neural information processing systems", "ref_id": "b44", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "year": "2019" }, { "authors": "Sangdoo Yun; Dongyoon Han; Seong Joon Oh; Sanghyuk Chun; Junsuk Choe; Youngjoon Yoo", "journal": "", "ref_id": "b45", "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "year": "2019" }, { "authors": "Manzil Zaheer; Guru Guruganesh; Avinava Kumar; Joshua Dubey; Chris Ainslie; Santiago Alberti; Philip Ontanon; Anirudh Pham; Qifan Ravula; Li Wang; Yang", "journal": "NeurIPS", "ref_id": "b46", "title": "Big bird: Transformers for longer sequences", "year": "2020" }, { "authors": "Hongyi Zhang; Moustapha Cisse; David Yann N Dauphin; Lopez-Paz", "journal": "", "ref_id": "b47", "title": "mixup: Beyond empirical risk minimization", "year": "" }, { "authors": "Pengchuan Zhang; Xiyang Dai; Jianwei Yang; Bin Xiao; Lu Yuan; Lei Zhang; Jianfeng Gao", 
"journal": "", "ref_id": "b48", "title": "Multi-scale vision longformer: A new vision transformer for high-resolution image encoding", "year": "2021" }, { "authors": "Zizhao Zhang; Han Zhang; Long Zhao; Ting Chen; Sercan Ö Arik; Tomas Pfister", "journal": "", "ref_id": "b49", "title": "Nested hierarchical transformer: Towards accurate, data-efficient and interpretable visual understanding", "year": "2022" }, { "authors": "Sixiao Zheng; Jiachen Lu; Hengshuang Zhao; Xiatian Zhu; Zekun Luo; Yabiao Wang; Yanwei Fu; Jianfeng Feng; Tao Xiang; Philip Hs Torr", "journal": "", "ref_id": "b50", "title": "Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers", "year": "2021" }, { "authors": "Zhun Zhong; Liang Zheng; Guoliang Kang; Shaozi Li; Yi Yang", "journal": "", "ref_id": "b51", "title": "Random erasing data augmentation", "year": "2020" }, { "authors": "Luowei Zhou; Yingbo Zhou; Jason J Corso; Richard Socher; Caiming Xiong", "journal": "", "ref_id": "b52", "title": "End-to-end dense video captioning with masked transformer", "year": "2018" } ]
[ { "formula_coordinates": [ 3, 219.42, 643.64, 284.58, 25.24 ], "formula_id": "formula_0", "formula_text": "Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V (1)" }, { "formula_coordinates": [ 4, 200.21, 89.07, 303.79, 12.39 ], "formula_id": "formula_1", "formula_text": "MHA(Q, K, V) = Concat(head_0, ..., head_{N_h}) W^O (2)" }, { "formula_coordinates": [ 4, 242.22, 123.51, 257.91, 9.65 ], "formula_id": "formula_2", "formula_text": "head_i = Attention(Q_i, K_i, V_i) (3)" }, { "formula_coordinates": [ 4, 262.97, 334.81, 241.03, 30.55 ], "formula_id": "formula_3", "formula_text": "S_{ij} = sum_k^K ((z_k / Z) * L^{c_k}_{ij}) (4)" }, { "formula_coordinates": [ 4, 198.63, 489.79, 305.37, 9.65 ], "formula_id": "formula_4", "formula_text": "MHF_i(m) = MLP(Concat(m_0, m_1, m_2, ..., m_{N_m})) (5)" }, { "formula_coordinates": [ 4, 230.54, 506.81, 82.68, 9.65 ], "formula_id": "formula_5", "formula_text": "N_m = N_b * (1 - rho)" }, { "formula_coordinates": [ 4, 169.1, 692.93, 331.03, 29.92 ], "formula_id": "formula_6", "formula_text": "SRA(Q, K, V) = Concat(head_0, ..., head_N) W^O (6)" }, { "formula_coordinates": [ 4, 500.13, 713.51, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": "[Figure 2: Comparison of SRA in PVTv1 -- panel titles recovered from the figure: SRA, Linear SRA, and Gated Linear SRA; the remaining diagram content is omitted.]" }, { "formula_coordinates": [ 5, 223.93, 252.61, 280.07, 8.96 ], "formula_id": "formula_8", "formula_text": "head = Attention(Q, SR(K), SR(V)) (7)" }, { "formula_coordinates": [ 5, 229.35, 287.43, 274.65, 11.03 ], "formula_id": "formula_9", "formula_text": "SR(x) = Norm(Reshape(x, R) W^S) (8)" }, { "formula_coordinates": [ 5, 238.74, 353.06, 261.38, 23.89 ], "formula_id": "formula_10", "formula_text": "Omega(SRA) = 2 h^2 w^2 d / R^2 + h w d^2 / R^2 (9)" }, { "formula_coordinates": [ 5, 500.13, 361.7, 3.87, 8.64 ], "formula_id": "formula_11", "formula_text": "" }, { "formula_coordinates": [ 5, 247.78, 425.82, 252.08, 11.03 ], "formula_id": "formula_12", "formula_text": "Omega(LinearSRA) = 2 h w p^2 d (10)" }, { "formula_coordinates": [ 5, 499.85, 428.22, 4.15, 8.64 ], "formula_id": "formula_13", "formula_text": "" }, { "formula_coordinates": [ 5, 196.82, 543.1, 307.18, 11.03 ], "formula_id": "formula_14", "formula_text": "head = Attention(Q, SR(K), SR(V) Gate(V_f)) (11)" }, { "formula_coordinates": [ 5, 194.07, 639.6, 309.93, 11.03 ], "formula_id": "formula_15", "formula_text": "Omega(GatedLinearSRA) = Omega(LinearSRA) = 2 h w p^2 d (12)" } ]
10.18653/v1/2023.emnlp-main.59
2024-03-22
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b66", "b74", "b46", "b63", "b37", "b39", "b56", "b13", "b25", "b24", "b4", "b81", "b21", "b82", "b88", "b87", "b7" ], "table_ref": [], "text": "Large language models (LLMs) have demonstrated an impressive ability to encode world knowledge in model parameters (Petroni et al., 2019; Roberts et al., 2020). However, they still face various challenges in knowledge-intensive tasks and contexts: they suffer from hallucination (Kryściński et al., 2020; Pagnoni et al., 2021; Ji et al., 2023), struggle to encode long-tail facts (Kandpal et al., 2023; Mallen et al., 2023), and cannot be easily updated with new and emerging knowledge (De Cao et al., 2021; Hase et al., 2021). Existing works propose addressing these limitations through retrieval augmentation or generated knowledge prompting. Retrieval-augmented LMs (Guu et al., 2020; Borgeaud et al., 2022; Shi et al., 2023) employ retrieval systems to fetch relevant documents from a general and fixed retrieval corpus (e.g., Wikipedia or the Pile (Gao et al., 2020)), leveraging external knowledge from non-parametric sources to aid LLM generation. Generated knowledge prompting approaches (Shin et al., 2020; Liu et al., 2022a; Sun et al., 2022) prompt LLMs to incorporate and generate contextual documents to encourage knowledge-aware generation.\nWhile the two lines of work have achieved some success, these existing systems struggle to reflect two key properties of knowledge. Knowledge is modular (Stuckenschmidt et al., 2009): it is an \"archipelago\" rather than a single \"continent\", encapsulating information that exists in diversified forms, domains, sources, perspectives, and more. The lack of knowledge modularity has made generalization to new domains and targeted updates of knowledge stored in LMs difficult. Knowledge is collaborative (Cayzer, 2004): LLMs should be able to represent and incorporate diverse and evolving knowledge, from multi-faceted sources and perspectives, while enabling collaborative contribution from various stakeholders. Community-driven knowledge could aggregate new knowledge from domain experts and enable the development of specialized LLMs, tailored to specific industries or applications. That being said, existing approaches and systems do not employ modular or collaborative knowledge sources that enable plug-and-play updates and contributions from various stakeholders. While approaches such as retrieval augmentation could be extended for modularity, different domains and stakeholders have different definitions and requirements for knowledge. Wikipedia factoids, biomedical literature, mathematical formulae, and commonsense knowledge graphs are all valuable knowledge components in various contexts, thus LLMs should be able to represent and incorporate knowledge contributed by stakeholders across multi-faceted domains and industries.\nTo this end, we propose to curate knowledge cards, specialized LMs that are much smaller than black-box LLMs, trained on diversified knowledge corpora from a wide range of domains and sources. Concretely, we obtain n knowledge cards C = {c 1 , c 2 , • • • , c n }, each starting from an existing LM checkpoint and further trained on a specific knowledge corpus D i with the causal language modeling objective. 
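To make this training recipe concrete, below is a minimal sketch of how one such knowledge card could be trained with the causal language modeling objective; it assumes the Hugging Face transformers and datasets libraries, and the checkpoint name, corpus file, and hyperparameters are illustrative stand-ins rather than the exact configuration used for the released knowledge cards.

```python
# Minimal sketch: turn a small LM into a domain-specific "knowledge card" by
# continued training with the causal language modeling objective.
# Checkpoint, file names, and hyperparameters below are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "facebook/opt-1.3b"                      # starting LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# One knowledge corpus D_i: a plain-text file with one document per line.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM loss

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="knowledge_card_domain",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
    train_dataset=train_set,
    data_collator=collator,
)
trainer.train()
trainer.save_model("knowledge_card_domain")     # shareable, plug-and-play card
```

The resulting checkpoint can then be shared on a model hub and plugged into the framework like any other knowledge card.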
Given a query to the LLM, these knowledge cards are selectively activated and used with prompted generation. Formally, given query q, specialized LM c defines a mapping c(q) : q → d q where q is used as prompt to generate a continuation as the knowledge document d q , which is later prepended into the context of general-purpose LLMs through various mechanisms ( §2.3).\nIn this way, the modularity of knowledge is demonstrated through the effortless addition, removal, or selective activation of various knowledge cards during the LLM generation process. Similarly, the collaborative nature of knowledge is reflected by enabling individuals to contribute trained knowledge cards on their desired knowledge source to KNOWLEDGE CARD, expanding the knowledge of general-purpose LLMs through community-driven efforts." }, { "figure_ref": [], "heading": "KNOWLEDGE SELECTORS", "publication_ref": [ "b30", "b88", "b81", "b40", "b62", "b102", "b37", "b46", "b79" ], "table_ref": [], "text": "While it is possible to directly adopt d q as relevant knowledge, we identify three key challenges in the successful integration of knowledge cards and general-purpose LLMs: relevance, brevity, and factuality. We design three respective selectors to control for such factors.\nRelevance Selector While we expect knowledge cards to generate background information that is relevant and helpful to the query q, LMs sometimes deviate from the query (Holtzman et al., 2019). Furthermore, only a handful of knowledge cards would be relevant for a given query. To this end, we propose to select and retain knowledge documents based on relevance. Concretely, given a set of m generated documents {d 1 , • • • , d m } and the query q, we aim to retain the top-k relevant documents and discard irrelevant information. We adopt a separate encoder-based LM enc(•) that maps a token sequence to a feature vector and cosine similarity sim(•, •) to measure relevance. Formally, we retain d i if i ∈ top-k_j(sim(enc(d j ), enc(q))), where top-k is the top-k argmax operation.\nPruning Selector Existing works mostly integrate one piece of external knowledge into LLMs (Sun et al., 2022; Shi et al., 2023), while tasks requiring the integration of multiple domains of information, such as misinformation detection (Karimi et al., 2018) and multi-hop QA (Nishida et al., 2019), are not well supported by existing paradigms. To effectively incorporate generated documents from multiple LMs while fitting into the LLM context length limit, we propose to prune knowledge documents. Formally, given m documents {d 1 , • • • , d m }, we adopt a pruning model prune(•), operationalized most simply as a summarization system (Zhang et al., 2020; Liu et al., 2022b), to obtain their condensed versions {d̂ 1 , • • • , d̂ m } separately. This pruning step allows information from multiple domains to be integrated into the main LLM while preserving space for in-context learning.
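As an illustration of the relevance and pruning selectors, the sketch below keeps the top-k documents by embedding cosine similarity and condenses each survivor with an off-the-shelf summarizer. The paper names MPNet and Pegasus for these roles; the specific checkpoints and generation settings here are assumptions made for the sake of a runnable example.

```python
# Minimal sketch of the relevance selector (enc + cosine similarity) and the
# pruning selector (summarization); model checkpoints are illustrative.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

encoder = SentenceTransformer("all-mpnet-base-v2")                    # enc(.)
summarizer = pipeline("summarization", model="google/pegasus-xsum")   # prune(.)

def relevance_select(query, docs, k):
    """Retain the k generated documents whose embeddings are closest to the query."""
    q_emb = encoder.encode(query, convert_to_tensor=True)
    d_emb = encoder.encode(docs, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, d_emb)[0]          # sim(enc(d_j), enc(q)) for each j
    top = scores.topk(min(k, len(docs))).indices.tolist()
    return [docs[i] for i in top]

def prune_select(docs, max_len=64):
    """Condense each retained document so several can share the LLM's context window."""
    return [summarizer(d, max_length=max_len, truncation=True)[0]["summary_text"]
            for d in docs]
```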
Factuality Selector Language models are prone to hallucination (Ji et al., 2023) and the knowledge cards are no exception. Given a set of m pruned knowledge documents {d̂ 1 , • • • , d̂ m }, their original versions {d 1 , • • • , d m }, and the query q, we filter out the non-factual knowledge and retain ℓ documents. Specifically, we evaluate the factuality of knowledge documents with two measures.\nWe first evaluate summarization factuality, ensuring that the pruned version d̂ i factually captures the important points in the original d i . Concretely, we adopt factuality evaluation models (Kryściński et al., 2020; Feng et al., 2023a) as a scoring function sum-fact(•, •), where each knowledge document d is assigned a summarization factuality score s^sum_d = sum-fact(d̂ | d) ∈ [0, 1]. We then propose to evaluate whether the generated knowledge document is well-supported by real-world knowledge through retrieval-augmented fact checking. Specifically, given a knowledge document d, we retrieve k documents t 1 , . . . , t k from a retrieval corpus, then employ a fact-checking model (Schuster et al., 2021) as a scoring function fact-check(•, •). We then assign a fact-checked factuality score to each d based on the retrieved document that most supports d, formally s^fact_d = max_{1≤i≤k} fact-check(d | t i ) ∈ [0, 1]. We then average the summarization factuality score and the fact-checking score for each document to obtain s_d.\nWhile it is straightforward to greedily select ℓ knowledge documents with the highest s_d scores, new and more recent knowledge might not be well-supported by existing fact-checking tools. As a result, we propose top-k factuality sampling to allow for flexibility while remaining stringent towards knowledge documents that are clearly wrong. Formally, we first obtain D_k as the set of knowledge documents with the top-k factuality scores, where k > ℓ is a hyperparameter. We then define a sampling probability distribution over all m knowledge documents: p(d̂ i | q) = exp(s_{d̂ i}) / Σ_{d̂ j ∈ D_k} exp(s_{d̂ j}) if d̂ i ∈ D_k, and p(d̂ i | q) = 0 if d̂ i ∉ D_k. We sample ℓ knowledge documents from {d̂ 1 , • • • , d̂ m } with probabilities {p(d̂ 1 | q), • • • , p(d̂ m | q)}. In this way, knowledge documents with very low factuality scores are strictly removed while flexibility is built in through sampling from the knowledge with factuality scores near the top." }, { "figure_ref": [], "heading": "KNOWLEDGE INTEGRATION", "publication_ref": [ "b30", "b105", "b38", "b94", "b69" ], "table_ref": [ "tab_0", "tab_0" ], "text": "After defining the modular components in KNOWLEDGE CARD (a general-purpose LLM, knowledge cards, and knowledge selectors), we propose two approaches, bottom-up and top-down, to integrate the general-purpose LLM with external knowledge sources, which are selected outputs of knowledge cards. Specifically, bottom-up activates all available knowledge cards at once and employs the three knowledge selectors to control for knowledge quality. Bottom-up enables multi-domain knowledge synthesis across all available sources, but these might occasionally introduce irrelevant information which may adversely impact LLM inference. We additionally propose a top-down approach, in which the LLM proactively seeks external information from selected knowledge cards. top-down is advantageous in tasks and domains where external knowledge is not always necessary.\nBottom-Up Approach Bottom-up starts by prompting available knowledge cards, then progressively goes through the three knowledge selectors, and these outputs are incorporated into the LLM via the prompt context. Formally, given n knowledge cards C = {c 1 , • • • , c n } and the query q, we generate n 1 documents with each knowledge card through temperature sampling (Holtzman et al., 2019) to obtain {d 1 , • • • , d n×n1 }. We first apply the relevance selector to retain the n 2 most relevant documents {d 1 , • • • , d n2 }, then conduct knowledge pruning through the pruning selector to obtain {d̂ 1 , • • • , d̂ n2 }, and finally leverage the factuality selector to obtain n 3 high-quality knowledge documents {d̂ 1 , • • • , d̂ n3 }. The final prompt for the LLM is a concatenation of the knowledge documents and the query, formally [\"Knowledge: \" ∥ d̂ 1 ∥ d̂ 2 ∥ • • • ∥ d̂ n3 ∥ q], where ∥ denotes concatenation.
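Putting the factuality selector and the prompt construction together, the following simplified sketch shows how the bottom-up flow could be wired up. The scoring and retrieval components (sum_fact, fact_check, retrieve) are passed in as callables standing in for FactKB, VitaminC, and the retrieval system, and the equal-weight averaging of the two scores mirrors the description above; the default values of n1, n2, and n3 and the expectation that select_factual is pre-bound to its scoring functions (e.g., via functools.partial) are assumptions for illustration.

```python
import numpy as np

def top_k_factuality_sampling(pruned, originals, sum_fact, fact_check, retrieve, k, ell):
    """Score each document, restrict to the top-k scores, then sample ell documents."""
    scores = []
    for d_hat, d in zip(pruned, originals):
        s_sum = sum_fact(d_hat, d)                            # summarization factuality
        s_fact = max(fact_check(d, t) for t in retrieve(d))   # retrieval-augmented fact check
        scores.append(0.5 * (s_sum + s_fact))                 # averaged score s_d
    top = sorted(range(len(pruned)), key=lambda i: scores[i], reverse=True)[:k]
    probs = np.exp([scores[i] for i in top])
    probs = probs / probs.sum()                               # softmax restricted to the top-k;
    chosen = np.random.choice(top, size=min(ell, len(top)),   # documents outside it get p = 0
                              replace=False, p=probs)
    return [pruned[i] for i in chosen]

def bottom_up_prompt(query, cards, select_relevant, select_pruned, select_factual,
                     n1=5, n2=10, n3=3):
    """Generate n1 documents per card, run the three selectors, and build the final prompt."""
    docs = [card(query) for card in cards for _ in range(n1)]   # temperature-sampled generations
    relevant = select_relevant(query, docs, n2)
    pruned = select_pruned(relevant)
    selected = select_factual(pruned, relevant, n3)             # e.g. top_k_factuality_sampling
    return "Knowledge: " + " ".join(selected) + "\n" + query
```

In this sketch, n1, n2, and n3 play exactly the roles described above, so the knowledge stream can be widened or narrowed by changing three integers.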
We expect the bottom-up approach to be strong in multi-domain knowledge synthesis since multiple knowledge cards could be activated at once to provide background knowledge from diverse perspectives. In addition, hyperparameters n 1 , n 2 , and n 3 enable fine-grained control over the knowledge synthesis process.\nTop-Down Approach In bottom-up, we assume that every query would benefit from external knowledge generated by knowledge cards. However, this could introduce unnecessary information in the LLM's prompt context (Zhao et al., 2023). Following Kadavath et al. (2022), who showed that LLMs possess preliminary abilities to identify their inherent knowledge limitations, we propose the top-down approach, putting the LLM in charge of iteratively identifying whether external knowledge is needed and selectively activating relevant knowledge cards through various strategies.\nConcretely, for the n knowledge cards C = {c 1 , • • • , c n }, we also ask the knowledge card contributors to submit a textual description of the LMs S = {s 1 , • • • , s n }, such as \"biomedical literature\", \"college calculus\", or \"commonsense knowledge graph\". We first ask the LLM a yes/no question to determine whether external knowledge is needed for the given query q, specifically \"Do you need more information? (Yes or No)\". We encourage better-calibrated answers to the yes/no question through in-context learning (Wei et al.; Press et al., 2022): specifically, we introduce a set of in-context learning examples that encompass two distinct categories of questions posed to the LLM. The first category consists of questions that the LLM is capable of answering accurately without the need for any extra information. For these questions, the response to the query \"Do you need more information (Yes or No)?\" is \"No.\" The second category comprises questions that the LLM cannot answer correctly without the provision of additional information. In this case, the corresponding output label for the query is \"Yes.\" In this way, we prompt the LLM to learn to request external knowledge through in-context learning; we analyze the effectiveness of this approach in Section 5. If the LLM answers \"No\", we directly prompt the LLM to generate based on the query, without resorting to knowledge cards. If the LLM requests external knowledge by answering \"Yes\", we employ two strategies (Algorithm 2) to select a relevant knowledge card and generate background knowledge.\n• Automatic Selection (AUTO) We further prompt the LLM with \"What kind of information do you need?\" and select one knowledge card based on its response r q . Concretely, we identify which LM description {s 1 , • • • , s n } is most relevant to r q with the relevance selector ( §2.2) and activate the corresponding LM to generate multiple knowledge documents, then select one with the highest factuality score based on the factuality selector ( §2.2) to obtain d.\n• Explicit Selection (EXP) Alternatively, we ask the LLM to directly select one knowledge card by prompting with \"Choose an information source from the following: s 1 , . . . , s n \". If the LLM responds with s i , we activate the corresponding knowledge card c i to generate multiple knowledge documents and select one with the factuality selector ( §2.2) to obtain d.\nUpon obtaining the document, we append \"Knowledge: d\" to the LLM context. We then iteratively ask \"Do you need more information? (Yes or No)\" again and repeat the above process, until the LLM answers \"No\" and generates a knowledge-informed response.
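To make the iterative procedure above easier to follow, here is a compact sketch of the top-down loop with the automatic selection strategy; `llm` stands for any black-box completion API, `cards` maps each contributed description s_i to its knowledge card's generation function, and `most_relevant` / `most_factual` stand in for the relevance and factuality selectors of §2.2 (the plain prompts and the cap on rounds are simplifying assumptions).

```python
def top_down_auto(query, llm, cards, most_relevant, most_factual, max_rounds=3):
    """Iteratively ask the LLM whether it needs knowledge; if so, activate one card."""
    context = ""
    for _ in range(max_rounds):
        need = llm(context + query + "\nDo you need more information? (Yes or No)")
        if need.strip().lower().startswith("no"):
            break
        request = llm(context + query + "\nWhat kind of information do you need?")
        # AUTO selection: match the LLM's request against the card descriptions s_1..s_n.
        description = most_relevant(request, list(cards.keys()))
        documents = [cards[description](query) for _ in range(4)]  # sample several documents
        context += "Knowledge: " + most_factual(documents) + "\n"  # keep the most factual one
    return llm(context + query)
```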
We expect top-down to perform better when external knowledge is not always necessary. In this way, the top-down approach enables LLMs to take charge in identifying their inherent knowledge limitations and seeking help from external knowledge cards proactively. We provide prompt examples in Tables 10 and 11 in the Appendix." }, { "figure_ref": [], "heading": "EXPERIMENT SETTINGS", "publication_ref": [ "b103", "b21", "b50", "b86", "b95", "b90", "b64", "b16", "b104", "b19", "b85", "b102", "b79", "b9", "b36", "b14", "b41", "b9", "b11", "b88", "b99", "b9", "b34", "b84", "b81" ], "table_ref": [], "text": "Implementation For knowledge cards, we use OPT-1.3B (Zhang et al., 2022) as the starting point and separately train 25 specialized LMs on a wide range of knowledge sources and domains, including corpora in the Pile (Gao et al., 2020), branch-train-merge (Li et al., 2022), knowledge graphs (Speer et al., 2017; West et al., 2022; Vrandečić & Krötzsch, 2014; Pellissier Tanon et al., 2020; Feng et al., 2021; Zhang et al., 2021), news and social media (Liu et al., 2022c; Feng et al., 2023b), and more (Appendix E). We use MPNet (Song et al., 2020) as the encoder in the relevance selector, Pegasus (Zhang et al., 2020) as the summarization model in the pruning selector, the WikiSearch API as the retrieval system in the factuality selector, and FactKB (Feng et al., 2023a) and VitaminC (Schuster et al., 2021) as the summarization and fact-checking factuality scoring functions. We use Codex (CODE-DAVINCI-002) (Chen et al., 2021) as the default, general-purpose, black-box LLM. MIDTERMQA presents three evaluation datasets and settings: open-book, 2-way, and 4-way multiple choice; 5-shot in-context learning is adopted to evaluate KNOWLEDGE CARD and baselines. We did not consider existing temporal QA datasets (Jang et al., 2021; Dhingra et al., 2022; Kasai et al., 2022) since they do not focus on any specific event or knowledge domain.\nBaselines We compare KNOWLEDGE CARD with a wide range of baseline methods in three categories: 1) vanilla black-box LLMs: Codex (Chen et al., 2021), PaLM (Chowdhery et al., 2022), and Flan-PaLM (Chung et al., 2022); 2) generated knowledge prompting approaches: GKP (Liu et al., 2022a), recitation (Sun et al., 2022), and GRTR (Yu et al., 2022) (note that we apply these methods to the same LLM, Codex (Chen et al., 2021), for a fair comparison); 3) retrieval-augmented language models: Atlas (Izacard et al., 2022), Si et al. (2022), RePlug, and RePlug LSR (Shi et al., 2023)." }, { "figure_ref": [], "heading": "RESULTS", "publication_ref": [ "b27", "b73" ], "table_ref": [ "tab_1", "tab_2" ], "text": "MMLU For general-purpose knowledge QA, we use the MMLU benchmark (Hendrycks et al., 2020). As shown in Table 1, all three configurations of KNOWLEDGE CARD significantly improve vanilla Codex. Among them, the top-down approach with explicit selection performs best, improving Codex by 6.6% overall accuracy. Concurrently, top-down approaches surpass all baselines, including Flan-PaLM with a few hundred billion more parameters. These results suggest that we present an effective approach for making general-purpose LLMs better in knowledge-intensive contexts. In addition, top-down generally outperforms bottom-up, likely because MMLU contains math-related questions that do not necessitate external knowledge. 
This observation suggests that top-down approaches are better at tasks where external knowledge is not always necessary.\nMisinformation Detection To examine whether KNOWLEDGE CARD successfully integrates multifaceted knowledge from diversified sources, we adopt the LUN misinformation dataset (Rashkin et al., 2017) with two-and four-way classification settings. Table 2 demonstrates that KNOWLEDGE CARD significantly improves Codex by at least 31.7% and 19.4% in balanced accuracy scores for both settings. In addition, bottom-up outperforms both variants of top-down, thanks to its methodology to jointly activate knowledge cards from various domains and enable multi-domain knowledge synthesis. MidtermQA To examine whether KNOWLEDGE CARD could update the parametric knowledge of LLMs, we train an additional knowledge card on news articles regarding the 2022 U.S. midterm elections and plug it into KNOWLEDGE CARD. We present model performance on MidtermQA in Table 3, which demonstrates that KNOWLEDGE CARD substantially outperforms all baselines in the open-book setting by as much as 57.3% in exact match scores (EM). This indicates that one knowledge card with 1.3B parameters successfully updates the parametric knowledge of the 175B Codex through KNOWLEDGE CARD. In addition, top-down outperforms bottom-up, indicating that the selective activation of knowledge cards is better when there is a specific knowledge card tied to the task domain. KNOWLEDGE CARD also outperforms SI ET AL. (Codex + Contriever) that uses the same midterm election news as retrieval corpora. In addition, generated knowledge prompting approaches (GKP, recitation, GRTR) underperform vanilla Codex, showing that probing LLMs for explicit knowledge is counterproductive when internal LLM knowledge is outdated or wrong." }, { "figure_ref": [], "heading": "ANALYSIS", "publication_ref": [], "table_ref": [], "text": "Patching LLM Knowledge When general-purpose LLMs struggle at tasks due to knowledge limitations, KNOWLEDGE CARD could serve as an efficient approach to patch LLM weaknesses by adding specialized language models. To this end, we evaluate the change in performance when five knowledge cards are separately added to augment Codex with the top-down approach. Results in Figure 2 demonstrate that patching the LLM with all five LMs leads to various levels of performance gains on misinformation detection, while the most in-domain LMs (Wikipedia and news) lead to greater improvements. This suggests that when LLMs perform poorly on knowledge-intensive tasks, an additional knowledge card trained on in-domain corpora could help with KNOWLEDGE CARD. Knowledge Selector Study In Section 2.2, we propose three levels of knowledge selectors to control for various factors and ensure knowledge quality. We conduct ablation studies to remove each knowledge selector in the bottom-up approach and re-evaluate on misinformation detection. Figure 3 demonstrates that while all three knowledge selectors are helpful, the factuality selector contributes most to model performance and thus plays a crucial role in ensuring the quality of generated knowledge documents.\nRetrieval vs. Specialized LMs In order to assess the effectiveness of modular specialized LMs as compared to non-parametric sources like retrieval, we exclusively use the Wikipedia LM in KNOWLEDGE CARD and compare with the state-of-the-art retrieval LM REPLUG that also uses Wikipedia as the retrieval knowledge source. 
Table 4 demonstrates that KNOWLEDGE CARD outperforms REPLUG on both settings of misinformation detection, suggesting that knowledge cards present a better knowledge repository. Note that KNOWLEDGE CARD is also compatible with multiple knowledge formats (e.g. retrieval and search engine) while they could be complementary (Appendix A)." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Knowledge Stream Analysis", "publication_ref": [ "b6", "b88", "b81", "b38", "b105" ], "table_ref": [ "tab_4" ], "text": "In bottom-up, three hyperparameters ( §2.3) govern the \"knowledge stream\" from knowledge cards to the general-purpose LLMs. Specifically, n 1 controls how many documents each LM generates, n 2 controls how many are retained after the three knowledge selectors, and n 3 controls how many are put into the context of LLMs. We investigate these control measures and report performance in Figure 4. It is illustrated that: 1) n 1 has a marginal impact, suggesting that knowledge cards generate largely homogeneous knowledge even with temperature sampling (Caccia et al., 2018); 2) larger n 2 leads to performance drops, suggesting that the three knowledge selectors ensure knowledge quality; 3) n3 = 1, where only one knowledge document is adopted at a time (as in previous works (Sun et al., 2022;Shi et al., 2023)) is worse than larger values, showing the advantage of multi-domain knowledge synthesis uniquely enabled by KNOWLEDGE CARD. We illustrate LLM responses along with the correctness of their answer in Figure 6. The vast majority of queries are mapped to the \"yes, correct\" and \"no, correct\" categories, suggesting that LLMs have preliminary abilities to \"know what they know\" and seek external information if necessary. However, this ability is far from perfect, evident in the non-negligible category of \"no, incorrect\", suggesting that prompting LLMs to acknowledge knowledge limitations requires further research (Kadavath et al., 2022;Zhao et al., 2023), while new approaches to abstain could be easily integrated into KNOWLEDGE CARD. In addition, the \"yes, incorrect\" categories suggest that specialized LMs occasionally fail to provide enough information. These confusion matrices provide fine-grained error analysis and guidance as to whether the general-purpose LLM, the yes/no question, or knowledge cards require further improvements. Qualitative Analysis We curated MIDTERMQA to evaluate whether KNOWLEDGE CARD enables efficient knowledge update. We examine the 88 races where the incumbent was not re-elected: Codex answered 1 out of the 88 questions correctly, while bottom-up and top-down with automatic and explicit selection answered 63, 77, and 42 correctly. Table 5 shows that Codex states the incumbents would win again in 2022, while KNOWLEDGE CARD successfully updates LLMs with 100x more parameters." 
}, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b24", "b34", "b49", "b33", "b105", "b33", "b4", "b61", "b34", "b4", "b80", "b76", "b43", "b106", "b56", "b84", "b44", "b81", "b69", "b66", "b14", "b26", "b89", "b83", "b93", "b108", "b3", "b88", "b99", "b37", "b56", "b13", "b60", "b29", "b57", "b22", "b75", "b48", "b47", "b68", "b31", "b67", "b100", "b58", "b59", "b35", "b98", "b50", "b23", "b15", "b5", "b36", "b70", "b42", "b71", "b45", "b55" ], "table_ref": [], "text": "Retrieval-Augmented Language Models Augmenting language models with retrieval has advanced the state-of-the-art in open-domain QA (Guu et al., 2020;Izacard et al., 2022;Lewis et al., 2020;Hu et al., 2022), text classification (Zhao et al., 2023), and language modeling (Hu et al., 2022;Borgeaud et al., 2022;Min et al., 2023). The retrieval system could be integrated into encoder-decoder (Izacard et al., 2022) and decoder-only models (Borgeaud et al., 2022;Shi et al., 2022;Rubin et al., 2022), or leveraged to interpolate the next token probability distributions (Khandelwal et al., 2019;Zhong et al., 2022). Recent advances incorporated frozen (Mallen et al., 2023;Si et al., 2022;Khattab et al., 2022) and trainable retrievers (Shi et al., 2023) as well as search engines (Press et al., 2022) to augment LLMs. Compared to retrieval models and search engines, KNOWLEDGE CARD enables flexible information seeking, searching over knowledge domains, and employing private knowledge sources. In addition, these works often leverage only one retrieval corpora and assume that it's \"omniscient\" while suffering from various issues such as domain coverage and knowledge update.\nIn contrast, we propose to reflect the modularity and community-driven nature of knowledge by integrating plug-and-play knowledge cards with general-purpose LLMs. Generated Knowledge Prompting LMs acquire knowledge through training on gargantuan textual corpora (Petroni et al., 2019;Dhingra et al., 2022;He et al., 2021). Generated knowledge prompting (Liu et al., 2022a) is one of the early approaches to tap into the parametric knowledge of LLMs by prompting them to generate background information and re-using it for QA. Related works also propose to use LM parametric knowledge for retrieval (Tay et al., 2022), answer commonsense questions with self-talk (Shwartz et al., 2020), generate queries (Wang et al., 2022;Zhuang et al., 2022) or token sequences (Bevilacqua et al., 2022) for document augmentation. In addition, recitationaugmented language models (Sun et al., 2022) propose to augment QA examples with diversified knowledge recitations, while (Yu et al., 2022) shows that generated knowledge is, under certain circumstances, better than retrieval. However, this line of work assumes that the encoded knowledge in LLM parameters is all we need, while LLM knowledge suffers from hallucination (Ji et al., 2023), struggles to encode long-tail facts (Mallen et al., 2023), and can not be efficiently updated (De Cao et al., 2021). While recent works propose to edit LLM knowledge (Meng et al., 2022;Hernandez et al., 2023), they are hardly compatible with black-box LLMs. In addition, parametric knowledge in LLMs is far from modular and collaborative, while LMs should be able to incorporate knowledge contributed by all stakeholders in LLM research and applications. 
To this end, we propose KNOWLEDGE CARD as a community-driven initiative to empower general-purpose LLMs with modular and collaborative knowledge through the sharing and re-using of knowledge cards. Modular LMs Mixture-of-Experts (MoE) (Masoudnia & Ebrahimpour, 2014) aims to activate one expert based on the input instance, which has been adopted in language model research (Gururangan et al., 2022;Roller et al., 2021;Lewis et al., 2021;Kudugunta et al., 2021;Pfeiffer et al., 2022). Adapters are also proposed for task transfer and parameter-efficient fine-tuning (Houlsby et al., 2019;Pfeiffer et al., 2020;Zaken et al., 2022). In addition, parameter averaging (Matena & Raffel, 2022;McMahan et al., 2017;Izmailov et al., 2018;Wortsman et al., 2022;Li et al., 2022;Gururangan et al., 2023), model fusion (Don-Yehiya et al., 2022;Borzunov et al., 2022), continual learning (Jang et al., 2021;Qin et al., 2022;Ke et al., 2022;Qin et al., 2023), and other collaborative approaches (Köpf et al., 2023;Sha, 2023;Luo et al., 2023) have also shed light on the possibility of distributed LM training. However, existing modular LMs mostly operate in the white-box setting, i.e. assuming access to the model parameters, token probabilities, and more. Since the most prominent LLMs are only released behind API calls, we propose KNOWLEDGE CARD with the aim of empowering black-box general-purpose LLMs with community-driven and collaborative knowledge." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "We propose KNOWLEDGE CARD, a novel framework to empower general-purpose LLMs with modular and collaborative knowledge. We first present knowledge cards, specialized LMs trained on various domains and sources of knowledge, and propose three knowledge selectors to ensure knowledge quality. We then propose bottom-up and top-down approaches to integrate knowledge cards with general-purpose LLMs to enable multi-domain knowledge synthesis and grounding in external information when necessary. Extensive experiments demonstrate that KNOWLEDGE CARD outperforms vanilla LLMs, retrieval LMs, and generated knowledge prompting approaches across three tasks and six datasets, showcasing its ability to integrate multiple sources of information, efficiently update LLM's knowledge, and more. We envision KNOWLEDGE CARD as a communitydriven initiative to empower general-purpose LLMs with modular and collaborative knowledge." }, { "figure_ref": [], "heading": "B LIMITATIONS", "publication_ref": [ "b105", "b38" ], "table_ref": [], "text": "Knowledge cards are not perfect knowledge generators. While knowledge cards could be of any size or model architecture, we used OPT-1.3B, a relatively small language model to initialize knowledge cards trained on different domains and sources. As a result, not all of the generated knowledge documents are high-quality knowledge statements, occasionally suffering from degeneration, topic deviation, and more. While the three knowledge selectors in part alleviate the impact of low-quality generated knowledge documents, we hypothesize that improving the knowledge generation of autoregressive language models is an important, yet orthogonal, research question for future work. Two potential solutions include 1) increasing the model size of knowledge cards and 2) using specialized training objectives for knowledge cards, while both approaches require additional training and more computational resources. 
In addition, Appendix A discusses KNOWLEDGE CARD's compatibility with diverse knowledge sources, including retrieval, knowledge graphs, and search engines, while these knowledge repositories have their respective pros and cons. We leave integrating multiple types of external knowledge stores to extend KNOWLEDGE CARD to future work.\nThe factuality selector is biased towards information-rich domains and existing knowledge. To ensure the factuality of generated knowledge documents, we employed a retrieval-augmented factuality selector based on both summarization factuality metrics and fact-checking models, while enabling flexibility through our proposed top-k factuality sampling. However, domains with more Wikipedia entries might be better supported by retrieved documents and might receive higher factuality scores. In addition, new and emerging knowledge might not be well supported by existing retrieval corpora and could therefore receive low factuality scores. We quantitatively evaluate this bias in Appendix D. Although top-k factuality sampling enables flexibility to some extent, it remains an important problem to design factuality evaluation measures that are generalizable and adaptable to varying and emerging domains.\nPrompting LLMs to seek help through yes/no questions is not perfect. Inspired by the findings that LLMs do not need external knowledge for every query (Zhao et al., 2023) and that language models (mostly) know what they know (Kadavath et al., 2022), we propose to ask yes/no questions to decide whether to activate knowledge cards and encourage well-calibrated answers through in-context learning. Our analysis ( §5) shows that this strategy is effective but far from perfect: LLMs are occasionally over-confident about their knowledge capabilities. We leave it to future work to design better strategies for LLMs to abstain, acknowledge knowledge limitations, and seek help from external information sources." }, { "figure_ref": [], "heading": "C ETHICS STATEMENT", "publication_ref": [ "b2", "b65", "b97" ], "table_ref": [], "text": "We envision KNOWLEDGE CARD as a community-driven and collaborative initiative to improve general-purpose LLMs in knowledge-intensive tasks and contexts. An important risk is dual use and exploitation by malicious actors. Since modular knowledge cards have the ability to change or update LLM knowledge, malicious actors could advance their agenda by submitting malicious knowledge cards trained on disinformation, hyperpartisan content, propaganda, and more, while framing them as benign knowledge domains and deceiving LLM users. We envision two lines of approaches towards this ethical risk: on the technical side, research on adversarial manipulation of language models and corresponding defense tactics (Bagdasaryan & Shmatikov, 2022; Perez et al., 2022) could be integrated to alleviate the impact of malicious knowledge cards; on the social side, we could rely on and reinforce the existing rules for model sharing on popular infrastructures (Wolf et al., 2020) to prevent such malicious contributions from happening. We encourage the responsible use of KNOWLEDGE CARD, while we also call on users and researchers to be mindful of this dual-use risk." }, { "figure_ref": [ "fig_6" ], "heading": "D ANALYSIS (CONT.)", "publication_ref": [], "table_ref": [], "text": "Knowledge Card Selection In the top-down approach, we ask large language models to choose relevant knowledge cards and obtain external knowledge. 
We illustrate the selection results of the automatic selection strategy on the MMLU dataset, separated into its 57 sub-tasks. Figure 8 demonstrates that for most tasks knowledge selection exhibits spike-like patterns on Wikipedia corpora and encyclopedic knowledge graphs, suggesting that the majority of tasks have a few clearly relevant knowledge cards. In addition, for other tasks (e.g., jurisprudence and high school U.S. history), it is not clear which knowledge cards would be most helpful, thus the selection is more spread out. These results suggest that the selection patterns could also indicate whether a new and more in-topic knowledge card is needed for any given task. Yes/No Template Sensitivity In the top-down approach, we prompt LLMs with \"Do you need more information? (Yes/No)\" to identify if external knowledge is required and use in-context learning to encourage well-calibrated responses. Since language models are sensitive to minor changes in prompts, we devise two more questions, \"Is more information needed here?\" and \"Would you like additional information?\", and report the results on the 2-way misinformation task in Figure 7. It is demonstrated that LLMs give moderately consistent responses: 79.9% of cases received a unanimous yes or no from the three prompts, while 20.1% of examples received mixed results. This suggests that a potential improvement to KNOWLEDGE CARD is to employ multiple yes/no questions to probe knowledge limitations and use an ensemble of answers to improve robustness." }, { "figure_ref": [ "fig_7" ], "heading": "Factuality Scores of Knowledge Cards", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We use the MMLU dataset as queries to prompt different knowledge cards, generate knowledge documents, and evaluate their factuality with the factuality selector ( §2.2). We illustrate the factuality score distributions of different knowledge cards in Figure 9, which shows that different knowledge cards have varying inherent factuality. We hypothesize that the distribution of factuality scores given by the factuality selector could guide efforts to evaluate the quality of community-contributed knowledge cards. Working Examples We present the specific prompts and generated knowledge documents for the bottom-up approach and for the top-down approach with automatic and explicit selection in Tables 9, 10, and 11, respectively."
}, { "figure_ref": [], "heading": "E EXPERIMENT DETAILS", "publication_ref": [ "b8", "b54", "b95", "b107", "b86", "b104", "b91", "b16", "b28", "b78", "b101", "b19", "b17", "b90", "b64", "b27" ], "table_ref": [ "tab_6" ], "text": "Algorithm Details We present an algorithmic summary of the bottom-up and top-down approach in Algorithm 1 and 2.\nKnowledge Cards Domains We train a total of 25 knowledge cards from the following corpora and domains: one billion tokens (Chelba et al., 2013), ACL papers (Lo et al., 2020), commonsense knowledge graph ATOMIC (West et al., 2022), book corpus (Zhu et al., 2015), ConceptNet (Speer et al., 2017), biomedical knowledge graph UMLS (Zhang et al., 2021) IMDB movie reviews (Wang et al., 2023), political knowledge graph KGAP (Feng et al., 2021), legal contracts (Hendrycks et al., 2021), math problems (Saxton et al., 2019), midterm election news (ours), open subtitles2 , political news corpora POLITICS (Liu et al., 2022c), biomedical literature3 , RealNews (Zellers et al., 2019), Reddit (Feng et al., 2023b), Twitter (Feng et al., 2022), Wikidata knowledge graph (Vrandečić & Krötzsch, 2014), Wikipedia dump4 , YAGO (Pellissier Tanon et al., 2020), and Yelp reviews5 . For knowledge graphs, we follow Feng et al. (2023a) to construct textual corpora and use them as training data.\nHyperparameters We present hyperparameter settings in Table 6.\nDataset Details 1) The MMLU dataset (Hendrycks et al., 2020) " }, { "figure_ref": [], "heading": "ACKNOWLEDGEMENTS", "publication_ref": [], "table_ref": [], "text": "We thank the reviewers, the area chair, members of Tsvetshop, and the UW NLP Group for their feedback. This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the HIATUS Program contract #2022-22072200004. This material is also funded by the DARPA Grant under Contract No. HR001120C0124. We also gratefully acknowledge support from NSF CAREER Grant No. IIS2142739, NSF Grants No. IIS2125201, IIS2203097, and the Alfred P. Sloan Foundation Fellowship. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." }, { "figure_ref": [], "heading": "A DISCUSSION", "publication_ref": [ "b97", "b77", "b1", "b10", "b50" ], "table_ref": [], "text": "Modularity at every turn. All components in KNOWLEDGE CARD are modular and easily substituted with future state-of-the-art. 1) While Codex is the default LLM in the experiments, KNOWL-EDGE CARD also works with TEXT-DAVINCI-003 and GPT-3.5-TURBO ( §5) and could be easily adapted to future LLMs. 2) If better models for embedding space similarity, abstractive summarization, and fact-checking are developed, the three knowledge selectors ( §2.2) could be seamlessly updated. 3) When new knowledge, information, and domains emerge, more knowledge cards could be trained and uploaded to a model-sharing infrastructure (Wolf et al., 2020) by any member of the machine learning community and adopted to improve general-purpose LLMs.\nUser-centric LLM adaptation. 
When general-purpose LLMs are released, everyone uses the same LLM with the same API calls, while real-world users have heterogeneous use cases and expectations that require personalization (Salemi et al., 2023). For example, grade school students might expect LLMs to be absolutely factual about knowledge and information in common textbooks, NLP researchers might expect LLMs to have a basic understanding of current NLP research, cooking amateurs might expect LLMs to understand the basic recipes and cuisines for different occasions, and more. As a result, KNOWLEDGE CARD presents a preliminary approach by letting the user select and activate knowledge cards to empower LLMs with different skill sets and domain expertise.\nCompatible with diversified forms of knowledge. By default, KNOWLEDGE CARD employs language models trained on varying domains and corpora as modular knowledge sources. In addition, KNOWLEDGE CARD is also compatible with 1) retrieval systems, where the retrieved text could similarly go through the three knowledge selectors and enrich LLM context, while retrieval corpora are harder to share and use than modular language models; 2) knowledge graphs, when combined with various proposals to construct natural language corpora out of symbolic knowledge bases (Agarwal et al., 2021;Chen et al., 2020;Feng et al., 2023a), which is already included in our prototype; 3) search engines, where content on the web could also be integrated into the black-box LLMs through KNOWLEDGE CARD. Such flexibility and compatibility are possible since KNOWLEDGE CARD conducts knowledge integration through natural language. Compared to retrieval models, using language models as knowledge sources enables flexible information seeking (rather than rigid token exact match), searching over knowledge domains, and employing private knowledge sources.\nKnowledge cards heterogeneity. While existing modular LM proposals often require modular sub-models to be of the same size and architecture for parameter averaging and model fusion (Li et al., 2022), the knowledge cards in this work could be fully heterogeneous. 1) Different knowledge cards could have different sizes. While OPT-1.3B is adopted as the default architecture in this work, other sizes of OPT, from 125M all the way up to tens of billions, could all be used to initialize knowledge cards. In addition, 2) knowledge cards could have different model architectures. Since the integration of general-purpose LLMs and modular knowledge cards happens at the natural language level, any language generation models could be adopted as knowledge cards. These two levels of heterogeneity allow for flexibility in knowledge card training: larger and more capable models could be trained on large corpora and extensive knowledge domains by compute-rich individuals, while smaller knowledge cards trained on small and dedicated domains by computationally underprivileged researchers could also help improve black-box LLMs, democratizing LLM research.\nKnowledge cards hierarchy. We believe that knowledge cards could reflect the hierarchical nature of knowledge. If KNOWLEDGE CARD is adopted for general question answering, then a general biomedical knowledge card trained on PubMed corpus would suffice. However, if KNOWLEDGE CARD is adopted for more fine-grained use cases, the \"biomedical\" domain could be further divided into sub-domains and one knowledge card could be trained for each. 
Similar divisions could be applied to sub-fields in NLP research, political news in different countries, and more.\nCombining bottom-up and top-down. One straightforward way to combine the two knowledge integration approaches would be: in each step of top-down, the LLM proposes multiple knowledge cards as candidates, then employs the bottom-up approach with the pool of these knowledge cards for knowledge generation. We leave further explorations to future work.\n<in-context examples with the same format> ... Knowledge: . . . San Mateo is located in the northwest of California . . . Dianne Feinstein, the senior senator from California, is rumored to retire . . . Tom Brady returned to his hometown of San Mateo . . . Question: Who is the senior senator from Tom Brady's birth place? Answer:\n[Figure 8: distribution of selected knowledge cards across the 57 MMLU sub-tasks (axis labels recovered from the figure include world religions, college mathematics, public relations, abstract algebra, and high school chemistry); the per-task heatmap values are omitted here.] " } ]
By design, large language models (LLMs) are static general-purpose models, expensive to retrain or update frequently. As they are increasingly adopted for knowledge-intensive tasks, it becomes evident that these design choices lead to failures to generate factual, relevant, and up-to-date knowledge. To this end, we propose KNOWLEDGE CARD, a modular framework to plug in new factual and relevant knowledge into general-purpose LLMs. We first introduce knowledge cards-specialized language models trained on corpora from specific domains and sources. Knowledge cards serve as parametric repositories that are selected at inference time to generate background knowledge for the base LLM. We then propose three content selectors to dynamically select and retain information in documents generated by knowledge cards, specifically controlling for relevance, brevity, and factuality of outputs. Finally, we propose two complementary integration approaches to augment the base LLM with the (relevant, factual) knowledge curated from the specialized LMs. Through extensive experiments, we demonstrate that KNOWLEDGE CARD achieves state-of-the-art performance on six benchmark datasets. Ultimately, KNOWLEDGE CARD framework enables dynamic synthesis and updates of knowledge from diverse domains. Its modularity will ensure that relevant knowledge can be continuously updated through the collective efforts of the research community.
KNOWLEDGE CARD: FILLING LLMS' KNOWLEDGE GAPS WITH PLUG-IN SPECIALIZED LANGUAGE MODELS
[ { "figure_caption": "Tom Brady returned to his hometown of San Mateo, CA . . .", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Overview of KNOWLEDGE CARD. We train knowledge cards on various knowledge domains and employ three knowledge selectors for quality control. We propose bottom-up and top-down to integrate general-purpose LLMs with modular and specialized LMs for multi-domain knowledge synthesis (bottom-up) and proactively seeking external knowledge (top-down).", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "to obtain {d 1 , • • • , d n×n1 }. We first apply the relevance selector to retain n 2 most relevant documents {d 1 , • • • , d n2 }, then conduct knowledge pruning through the pruning selector { d1 , • • • , d n2 }, and finally leverage the factuality selector to obtain n 3 high-quality knowledge documents { d1 , • • • , d n3 }.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :Figure 3 :23Figure 2: Performance on misinformation detection when each knowledge card is separately added. KNOWLEDGE CARD enables modular patching of LLMs while in-domain knowledge cards help the most.", "figure_data": "", "figure_id": "fig_3", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Investigating the impact of n 1 , n 2 , and n 3 , which govern the knowledge stream from modular knowledge cards to general-purpose LLMs. These hyperparameters enable finegrained control over the knowledge synthesis process.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Confusion matrices of yes/no and correctness in top-down, enabling finegrained error analysis.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Yes/No questions in top-down are mostly consistent across three prompt templates, while there is space for improvement in future work.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Factuality score distributions of the 25 knowledge cards when prompted with questions in the MMLU benchmark. Different knowledge cards do have varying factuality score distributions.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Model performance on the MMLU Benchmark. KNOWLEDGE CARD improves Codex by at least 3.5% while top-down outperforms all baselines.", "figure_data": "TypeModelHuman. 
Social STEM Other AllTypeModelTwo-Way BAcc MaF BAcc MaF Four-WayVanilla LMCODEX PALM74.2 77.076.9 81.057.8 55.670.1 69.668.3 69.3Vanilla LMCODEX65.651.052.844.0FLAN-PALM ATLAS-46.1-54.6-38.8-52.872.2 47.9RetrievalREPLUG REPLUG LSR78.8 78.867.8 68.555.8 57.553.0 54.4RetrievalREPLUG76.079.758.872.171.4GKP73.560.361.146.3REPLUG LSR76.579.958.973.271.8GenerateRECITATION65.047.764.248.6GenerateGKP RECITATION73.3 76.974.5 78.159.5 59.071.4 74.070.0 71.9GRTR BOTTOM-UP66.1 89.849.1 87.351.6 70.636.9 67.3KNOWLEDGE CARDBOTTOM-UP TOP-DOWN AUTO77.2 77.776.7 78.957.9 59.272.2 73.070.7 72.0KNOWLEDGE CARDTOP-DOWN AUTO 86.4 TOP-DOWN EXP 91.378.7 86.063.0 69.460.2 65.5TOP-DOWN EXP78.680.959.674.372.8", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance on misinformation detection. BAcc and MaF are balanced accuracy and macro F1. bottom-up performs best due to multi-domain knowledge integration.", "figure_data": "TypeModelOpen-Book Multiple-Choice EM F1 2-way 4-wayVanilla LMCODEX55.1 57.990.960.8REPLUG44.8-85.762.8RetrievalREPLUG LSR37.2-86.965.3SI ET AL.52.1 54.584.761.4GKP45.0 46.989.153.5GenerateRECITATION44.4 46.489.352.3GRTR55.6 58.477.459.0BOTTOM-UP83.6 85.681.664.5KNOWLEDGE CARDTOP-DOWN AUTO 87.5 89.389.563.0TOP-DOWN EXP75.3 75.791.967.6", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance on MidtermQA. KNOWL--way classification settings. All models are evaluated based on 16-shot in-context learning. 3) To evaluate temporal knowledge update, we curate MIDTERMQA, a QA benchmark focusing on the 2022 U.S. midterm elections since the knowledge cutoff of black-box LLMs is often 2021 or earlier.", "figure_data": "Tasks and Datasets 1) For general-purposeQA, we adopt MMLU (Hendrycks et al., 2020),a multiple-choice QA dataset covering 57 tasksin humanities, STEM, social sciences, and oth-ers. Following previous works (Si et al., 2022;Shi et al., 2023), we adopt a 5-shot in-contextlearning setting. 2) To evaluate multi-domainknowledge synthesis, we adopt misinformationdetection, since news articles often encompassfacts and opinions at the intersection of differ-ent domains and perspectives. We leverage thewidely adopted LUN misinformation detectiondataset (Rashkin et al., 2017) with both 2-way", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "While vanilla Codex falsely claims that these incumbents won again in the 2022 elections, KNOWLEDGE CARD successfully updates the knowledge of black-box LLMs.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Hyperparameter settings.", "figure_data": "Knowledge Card Accumulation We expectKNOWLEDGE CARD to perform better whenrelevant knowledge cards are gradually addedto the system. To this end, we gradually addfive knowledge cards (PubMed, IMDB, Book-Corpus, News, and Wikipedia) to KNOWLEDGECARD and evaluate performance with the misin-formation dataset, 2-way setting, bottom-up ap-proach, and the ChatGPT model. Table 7 demon-strates that the addition of knowledge cards, es-", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": ", Gutenberg (Rae et al., 2019), // open-book setting Question: Who won the senate race of Oregon in the 2022 U.S. midterm elections? Answer: Ron Wyden // two-way setting Question: Who won the 24th congressional district of Texas in the 2022 U.S. midterm elections? A. Jan McDowell B. 
Beth Van Duyne Answer: B // four-way setting Question: Who won the 6th congressional district of Pennsylvania in the 2022 U.S. midterm elections? A. Christopher Hoeppner B. Doug Mastriano C. Chrissy Houlahan D. Guy Ciarrocchi Answer: C Examples of the MidtermQA dataset for the three settings.", "figure_data": "", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "contains a total of 15,908 fourchoice questions further divided into 57 sub-tasks in four domains: humanities, social sciences, STEM, and others. The official dataset also provides a demonstration set, i.e. 5-shot examples in each sub-task to enable few-shot in-context learning. We follow the official demonstration set and test set in our experiments. 2) The LUN dataset(Rashkin et al., 2017) is a misinformation detection dataset with two-or four-way classification settings, either with true/false only or fine-grained categories of trusted, hoax, propaganda, or satire. We use the official test set in(Hu et al., 2021) with 2,999 examples throughout the experiments. 3) We curate MidtermQA, a QA dataset focusing on the 2022 U.S. midterm elections to evaluate KNOWLEDGE CARD's ability for temporal knowledge update. Specifically, we first collect the results of the 510 races in congressional, senate, or gubernatorial elections in the 2022 midterms. We then construct three datasets: a) open-book, where we ask LLMs to directly answer the name of the election winner for a given race, b) two-way, where we ask LLMs to choose the winner from the two front runners, and c) four-way, where we increase the difficulty by including two other politicians contesting in the same state to create a distraction. We present examples of the MidtermQA dataset in Table8.Computation Resources DetailsWe used a GPU cluster with 16 NVIDIA A40 GPUs, 1988G memory, and 104 CPU cores for the experiments. Training knowledge cards took from around 7 hours to 10 days depending on the training corpora size. For the black-box LLMs, we used the OpenAI API to access CODE-DAVINCI-002, TEXT-DAVINCI-003, and GPT-3.5-TURBO in the experiments.", "figure_data": "pubmed yelp realnews_1 wikipedia realnews_2 realnews_3 realnews_4 acl_papers math 1B kgap cpnet gutenberg POLITICS reddit IMDB ddb wikidata yago midterm bookcorpus legal_contracts atomic opensubtitles twitter00.20.40.60.81.0", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" } ]
Shangbin Feng; Weijia Shi; Yuyang Bai; Vidhisha Balachandran; Tianxing He; Yulia Tsvetkov
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Sharegpt", "year": "2023-09-27" }, { "authors": "Oshin Agarwal; Heming Ge; Siamak Shakeri; Rami Al-Rfou", "journal": "", "ref_id": "b1", "title": "Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training", "year": "2021" }, { "authors": "Eugene Bagdasaryan; Vitaly Shmatikov", "journal": "IEEE", "ref_id": "b2", "title": "Spinning language models: Risks of propaganda-as-aservice and countermeasures", "year": "2022" }, { "authors": "Michele Bevilacqua; Giuseppe Ottaviano; Patrick Lewis; Scott Yih; Sebastian Riedel; Fabio Petroni", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b3", "title": "Autoregressive search engines: Generating substrings as document identifiers", "year": "2022" }, { "authors": "Sebastian Borgeaud; Arthur Mensch; Jordan Hoffmann; Trevor Cai; Eliza Rutherford; Katie Millican; George Bm Van Den Driessche; Jean-Baptiste Lespiau; Bogdan Damoc; Aidan Clark", "journal": "PMLR", "ref_id": "b4", "title": "Improving language models by retrieving from trillions of tokens", "year": "2022" }, { "authors": "Alexander Borzunov; Dmitry Baranchuk; Tim Dettmers; Max Ryabinin; Younes Belkada; Artem Chumachenko; Pavel Samygin; Colin Raffel", "journal": "", "ref_id": "b5", "title": "Petals: Collaborative inference and fine-tuning of large models", "year": "2022" }, { "authors": "Massimo Caccia; Lucas Caccia; William Fedus; Hugo Larochelle; Joelle Pineau; Laurent Charlin", "journal": "", "ref_id": "b6", "title": "Language gans falling short", "year": "2018" }, { "authors": "Steve Cayzer", "journal": "Communications of the ACM", "ref_id": "b7", "title": "Semantic blogging and decentralized knowledge management", "year": "2004" }, { "authors": "Ciprian Chelba; Tomas Mikolov; Mike Schuster; Qi Ge; Thorsten Brants; Phillipp Koehn; Tony Robinson", "journal": "", "ref_id": "b8", "title": "One billion word benchmark for measuring progress in statistical language modeling", "year": "2013" }, { "authors": "Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Ponde De Oliveira Pinto; Jared Kaplan; Harri Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman", "journal": "", "ref_id": "b9", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "Wenhu Chen; Yu Su; Xifeng Yan; William Yang; Wang ", "journal": "", "ref_id": "b10", "title": "Kgpt: Knowledge-grounded pre-training for data-to-text generation", "year": "2020" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b11", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b12", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Nicola De Cao; Wilker Aziz; Ivan Titov", "journal": "", "ref_id": "b13", "title": "Editing factual knowledge in language models", "year": "2021" }, { "authors": "Bhuwan Dhingra; Jeremy R Cole; Julian Martin Eisenschlos; Daniel Gillick; Jacob Eisenstein; William W Cohen", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b14", "title": "Time-aware language models as temporal knowledge bases", "year": "2022" }, { "authors": "Elad 
Shachar Don-Yehiya; Colin Venezian; Noam Raffel; Yoav Slonim; Leshem Katz; Choshen", "journal": "", "ref_id": "b15", "title": "Cold fusion: Collaborative descent for distributed multitask finetuning", "year": "2022" }, { "authors": "Shangbin Feng; Zilong Chen; Wenqian Zhang; Qingyao Li; Qinghua Zheng; Xiaojun Chang; Minnan Luo", "journal": "", "ref_id": "b16", "title": "Kgap: Knowledge graph augmented political perspective detection in news media", "year": "2021" }, { "authors": "Shangbin Feng; Zhaoxuan Tan; Herun Wan; Ningnan Wang; Zilong Chen; Binchi Zhang; Qinghua Zheng; Wenqian Zhang; Zhenyu Lei; Shujie Yang", "journal": "", "ref_id": "b17", "title": "Twibot-22: Towards graph-based twitter bot detection", "year": "2022" }, { "authors": "Shangbin Feng; Vidhisha Balachandran; Yuyang Bai; Yulia Tsvetkov", "journal": "", "ref_id": "b18", "title": "FactKB: Generalizable factuality evaluation using language models enhanced with factual knowledge", "year": "2023-12" }, { "authors": "Shangbin Feng; Chan Young Park; Yuhan Liu; Yulia Tsvetkov", "journal": "", "ref_id": "b19", "title": "From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair NLP models", "year": "2023-07" }, { "authors": "Shangbin Feng; Zhaoxuan Tan; Wenqian Zhang; Zhenyu Lei; Yulia Tsvetkov", "journal": "", "ref_id": "b20", "title": "KALM: Knowledgeaware integration of local, document, and global contexts for long document understanding", "year": "2023-07" }, { "authors": "Leo Gao; Stella Biderman; Sid Black; Laurence Golding; Travis Hoppe; Charles Foster; Jason Phang; Horace He; Anish Thite; Noa Nabeshima", "journal": "", "ref_id": "b21", "title": "The pile: An 800gb dataset of diverse text for language modeling", "year": "2020" }, { "authors": "Suchin Gururangan; Mike Lewis; Ari Holtzman; Noah A Smith; Luke Zettlemoyer", "journal": "", "ref_id": "b22", "title": "Demix layers: Disentangling domains for modular language modeling", "year": "2022" }, { "authors": "Suchin Gururangan; Margaret Li; Mike Lewis; Weijia Shi; Tim Althoff; Noah A Smith; Luke Zettlemoyer", "journal": "", "ref_id": "b23", "title": "Scaling expert language models with unsupervised domain discovery", "year": "2023" }, { "authors": "Kelvin Guu; Kenton Lee; Zora Tung; Panupong Pasupat; Mingwei Chang", "journal": "PMLR", "ref_id": "b24", "title": "Retrieval augmented language model pre-training", "year": "2020" }, { "authors": "Peter Hase; Mona Diab; Asli Celikyilmaz; Xian Li; Zornitsa Kozareva; Veselin Stoyanov; Mohit Bansal; Srinivasan Iyer", "journal": "", "ref_id": "b25", "title": "Do language models have beliefs? 
methods for detecting, updating, and visualizing model beliefs", "year": "2021" }, { "authors": "Tianxing He; Jun Liu; Kyunghyun Cho; Myle Ott; Bing Liu; James Glass; Fuchun Peng", "journal": "", "ref_id": "b26", "title": "Analyzing the forgetting problem in pretrain-finetuning of open-domain dialogue response models", "year": "2021-04" }, { "authors": "Dan Hendrycks; Collin Burns; Steven Basart; Andy Zou; Mantas Mazeika; Dawn Song; Jacob Steinhardt", "journal": "", "ref_id": "b27", "title": "Measuring massive multitask language understanding", "year": "2020" }, { "authors": "Dan Hendrycks; Collin Burns; Anya Chen; Spencer Ball", "journal": "", "ref_id": "b28", "title": "Cuad: An expert-annotated nlp dataset for legal contract review", "year": "2021" }, { "authors": "Evan Hernandez; Belinda Z Li; Jacob Andreas", "journal": "", "ref_id": "b29", "title": "Measuring and manipulating knowledge representations in language models", "year": "2023" }, { "authors": "Ari Holtzman; Jan Buys; Li Du; Maxwell Forbes; Yejin Choi", "journal": "", "ref_id": "b30", "title": "The curious case of neural text degeneration", "year": "2019" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "PMLR", "ref_id": "b31", "title": "Parameter-efficient transfer learning for nlp", "year": "2019" }, { "authors": "Linmei Hu; Tianchi Yang; Luhao Zhang; Wanjun Zhong; Duyu Tang; Chuan Shi; Nan Duan; Ming Zhou", "journal": "", "ref_id": "b32", "title": "Compare to the knowledge: Graph neural fake news detection with external knowledge", "year": "2021" }, { "authors": "Yushi Hu; Hang Hua; Zhengyuan Yang; Weijia Shi; Noah A Smith; Jiebo Luo", "journal": "", "ref_id": "b33", "title": "Promptcap: Prompt-guided task-aware image captioning", "year": "2022" }, { "authors": "Gautier Izacard; Patrick Lewis; Maria Lomeli; Lucas Hosseini; Fabio Petroni; Timo Schick; Jane Dwivedi-Yu; Armand Joulin; Sebastian Riedel; Edouard Grave", "journal": "", "ref_id": "b34", "title": "Few-shot learning with retrieval augmented language models", "year": "2022" }, { "authors": "A G Izmailov; D Wilson; D Podoprikhin; Vetrov; Garipov", "journal": "", "ref_id": "b35", "title": "Averaging weights leads to wider optima and better generalization", "year": "2018" }, { "authors": "Joel Jang; Seonghyeon Ye; Sohee Yang; Joongbo Shin; Janghoon Han; Stanley Jungkyu Kim Gyeonghun; Minjoon Choi; Seo", "journal": "", "ref_id": "b36", "title": "Towards continual knowledge learning of language models", "year": "2021" }, { "authors": "Ziwei Ji; Nayeon Lee; Rita Frieske; Tiezheng Yu; Dan Su; Yan Xu; Etsuko Ishii; Ye ; Jin Bang; Andrea Madotto; Pascale Fung", "journal": "ACM Computing Surveys", "ref_id": "b37", "title": "Survey of hallucination in natural language generation", "year": "2023" }, { "authors": "Saurav Kadavath; Tom Conerly; Amanda Askell; Tom Henighan; Dawn Drain; Ethan Perez; Nicholas Schiefer; Zac Hatfield Dodds; Nova Dassarma; Eli Tran-Johnson", "journal": "", "ref_id": "b38", "title": "Language models (mostly) know what they know", "year": "2022" }, { "authors": "Nikhil Kandpal; Haikang Deng; Adam Roberts; Eric Wallace; Colin Raffel", "journal": "PMLR", "ref_id": "b39", "title": "Large language models struggle to learn long-tail knowledge", "year": "2023" }, { "authors": "Hamid Karimi; Proteek Roy; Sari Saba-Sadiya; Jiliang Tang", "journal": "", "ref_id": "b40", "title": "Multi-source multi-class fake news detection", "year": "2018" }, { 
"authors": "Jungo Kasai; Keisuke Sakaguchi; Yoichi Takahashi; Ronan Le Bras; Akari Asai; Xinyan Yu; Dragomir Radev; Noah A Smith; Yejin Choi; Kentaro Inui", "journal": "", "ref_id": "b41", "title": "Realtime qa: What's the answer right now?", "year": "2022" }, { "authors": "Zixuan Ke; Yijia Shao; Haowei Lin; Tatsuya Konishi; Gyuhak Kim; Bing Liu", "journal": "", "ref_id": "b42", "title": "Continual pre-training of language models", "year": "2022" }, { "authors": "Urvashi Khandelwal; Omer Levy; Dan Jurafsky; Luke Zettlemoyer; Mike Lewis", "journal": "", "ref_id": "b43", "title": "Generalization through memorization: Nearest neighbor language models", "year": "2019" }, { "authors": "Omar Khattab; Keshav Santhanam; Lisa Xiang; David Li; Percy Hall; Christopher Liang; Matei Potts; Zaharia", "journal": "", "ref_id": "b44", "title": "Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive nlp", "year": "2022" }, { "authors": "Andreas Köpf; Yannic Kilcher; Sotiris Dimitri Von Rütte; Zhi-Rui Anagnostidis; Keith Tam; Abdullah Stevens; Barhoum; Minh Nguyen; Oliver Duc; Richárd Stanley; Nagyfi", "journal": "", "ref_id": "b45", "title": "Openassistant conversations-democratizing large language model alignment", "year": "2023" }, { "authors": "Wojciech Kryściński; Bryan Mccann; Caiming Xiong; Richard Socher", "journal": "", "ref_id": "b46", "title": "Evaluating the factual consistency of abstractive text summarization", "year": "2020" }, { "authors": "Sneha Kudugunta; Yanping Huang; Ankur Bapna; Maxim Krikun; Dmitry Lepikhin; Minh-Thang Luong; Orhan Firat", "journal": "", "ref_id": "b47", "title": "Beyond distillation: Task-level mixture-of-experts for efficient inference", "year": "2021" }, { "authors": "Mike Lewis; Shruti Bhosale; Tim Dettmers; Naman Goyal; Luke Zettlemoyer", "journal": "PMLR", "ref_id": "b48", "title": "Base layers: Simplifying training of large, sparse models", "year": "2021" }, { "authors": "Patrick Lewis; Ethan Perez; Aleksandra Piktus; Fabio Petroni; Vladimir Karpukhin; Naman Goyal; Heinrich Küttler; Mike Lewis; Wen-Tau Yih; Tim Rocktäschel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b49", "title": "Retrieval-augmented generation for knowledge-intensive nlp tasks", "year": "2020" }, { "authors": "Margaret Li; Suchin Gururangan; Tim Dettmers; Mike Lewis; Tim Althoff; Noah A Smith; Luke Zettlemoyer", "journal": "", "ref_id": "b50", "title": "Branch-train-merge: Embarrassingly parallel training of expert language models", "year": "2022" }, { "authors": "Jiacheng Liu; Alisa Liu; Ximing Lu; Sean Welleck; Peter West; Le Ronan; Yejin Bras; Hannaneh Choi; Hajishirzi", "journal": "", "ref_id": "b51", "title": "Generated knowledge prompting for commonsense reasoning", "year": "2022" }, { "authors": "Yixin Liu; Pengfei Liu; Dragomir Radev; Graham Neubig", "journal": "", "ref_id": "b52", "title": "Brio: Bringing order to abstractive summarization", "year": "2022" }, { "authors": "Yujian Liu; Xinliang Frederick Zhang; David Wegsman; Nicholas Beauchamp; Lu Wang", "journal": "", "ref_id": "b53", "title": "Politics: Pretraining with same-story article comparison for ideology prediction and stance detection", "year": "2022" }, { "authors": "Kyle Lo; Lucy Lu Wang; Mark Neumann; Rodney Kinney; Daniel S Weld", "journal": "", "ref_id": "b54", "title": "S2orc: The semantic scholar open research corpus", "year": "2020" }, { "authors": "Ziyang Luo; Can Xu; Pu Zhao; Xiubo Geng; Chongyang Tao; Jing Ma; Qingwei Lin; Daxin Jiang", 
"journal": "", "ref_id": "b55", "title": "Augmented large language models with parametric knowledge guiding", "year": "2023" }, { "authors": "Alex Mallen; Akari Asai; Victor Zhong; Rajarshi Das; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "", "ref_id": "b56", "title": "When not to trust language models: Investigating effectiveness of parametric and non-parametric memories", "year": "2023" }, { "authors": "Saeed Masoudnia; Reza Ebrahimpour", "journal": "The Artificial Intelligence Review", "ref_id": "b57", "title": "Mixture of experts: a literature survey", "year": "2014" }, { "authors": "S Michael; Colin A Matena; Raffel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b58", "title": "Merging models with fisher-weighted averaging", "year": "2022" }, { "authors": "Brendan Mcmahan; Eider Moore; Daniel Ramage; Seth Hampson; Blaise Aguera Y Arcas", "journal": "PMLR", "ref_id": "b59", "title": "Communication-efficient learning of deep networks from decentralized data", "year": "2017" }, { "authors": "Kevin Meng; Sen Arnab; Alex J Sharma; Yonatan Andonian; David Belinkov; Bau", "journal": "", "ref_id": "b60", "title": "Mass-editing memory in a transformer", "year": "2022" }, { "authors": "Sewon Min; Weijia Shi; Mike Lewis; Xilun Chen; Wen-Tau Yih; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "", "ref_id": "b61", "title": "Nonparametric masked language modeling", "year": "2023-07" }, { "authors": "Kosuke Nishida; Kyosuke Nishida; Masaaki Nagata; Atsushi Otsuka; Itsumi Saito; Hisako Asano; Junji Tomita", "journal": "", "ref_id": "b62", "title": "Answering while summarizing: Multi-task learning for multi-hop qa with evidence extraction", "year": "2019" }, { "authors": "Artidoro Pagnoni; Vidhisha Balachandran; Yulia Tsvetkov", "journal": "", "ref_id": "b63", "title": "Understanding factuality in abstractive summarization with frank: A benchmark for factuality metrics", "year": "2021" }, { "authors": "Thomas Pellissier Tanon; Gerhard Weikum; Fabian Suchanek", "journal": "Springer", "ref_id": "b64", "title": "Yago 4: A reason-able knowledge base", "year": "2020-06-04" }, { "authors": "Ethan Perez; Saffron Huang; Francis Song; Trevor Cai; Roman Ring; John Aslanides; Amelia Glaese; Nat Mcaleese; Geoffrey Irving", "journal": "", "ref_id": "b65", "title": "Red teaming language models with language models", "year": "2022" }, { "authors": "Fabio Petroni; Tim Rocktäschel; Sebastian Riedel; Patrick Lewis; Anton Bakhtin; Yuxiang Wu; Alexander Miller", "journal": "", "ref_id": "b66", "title": "Language models as knowledge bases?", "year": "2019" }, { "authors": "Jonas Pfeiffer; Ivan Vulić; Iryna Gurevych; Sebastian Ruder", "journal": "", "ref_id": "b67", "title": "Mad-x: An adapter-based framework for multi-task cross-lingual transfer", "year": "2020" }, { "authors": "Jonas Pfeiffer; Naman Goyal; Xi Lin; Xian Li; James Cross; Sebastian Riedel; Mikel Artetxe", "journal": "", "ref_id": "b68", "title": "Lifting the curse of multilinguality by pre-training modular transformers", "year": "2022" }, { "authors": "Ofir Press; Muru Zhang; Sewon Min; Ludwig Schmidt; Noah A Smith; Mike Lewis", "journal": "", "ref_id": "b69", "title": "Measuring and narrowing the compositionality gap in language models", "year": "2022" }, { "authors": "Yujia Qin; Jiajie Zhang; Yankai Lin; Zhiyuan Liu; Peng Li; Maosong Sun; Jie Zhou", "journal": "", "ref_id": "b70", "title": "Elle: Efficient lifelong pre-training for emerging data", "year": "2022" }, { "authors": "Yujia Qin; Cheng Qian; Xu Han; Yankai 
Lin; Huadong Wang; Ruobing Xie; Zhiyuan Liu; Maosong Sun; Jie Zhou", "journal": "", "ref_id": "b71", "title": "Recyclable tuning for continual pre-training", "year": "2023" }, { "authors": "Anna Jack W Rae; Potapenko; M Siddhant; Chloe Jayakumar; Timothy P Hillier; Lillicrap", "journal": "", "ref_id": "b72", "title": "Compressive transformers for long-range sequence modelling", "year": "2019" }, { "authors": "Eunsol Hannah Rashkin; Jin Yea Choi; Svitlana Jang; Yejin Volkova; Choi", "journal": "", "ref_id": "b73", "title": "Truth of varying shades: Analyzing language in fake news and political fact-checking", "year": "2017" }, { "authors": "Adam Roberts; Colin Raffel; Noam Shazeer", "journal": "", "ref_id": "b74", "title": "How much knowledge can you pack into the parameters of a language model", "year": "2020" }, { "authors": "Stephen Roller; Sainbayar Sukhbaatar; Jason Weston", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b75", "title": "Hash layers for large sparse models", "year": "2021" }, { "authors": "Ohad Rubin; Jonathan Herzig; Jonathan Berant", "journal": "", "ref_id": "b76", "title": "Learning to retrieve prompts for in-context learning", "year": "2022" }, { "authors": "Alireza Salemi; Sheshera Mysore; Michael Bendersky; Hamed Zamani", "journal": "", "ref_id": "b77", "title": "Lamp: When large language models meet personalization", "year": "2023" }, { "authors": "David Saxton; Edward Grefenstette; Felix Hill; Pushmeet Kohli", "journal": "", "ref_id": "b78", "title": "Analysing mathematical reasoning abilities of neural models", "year": "2019" }, { "authors": "Tal Schuster; Adam Fisch; Regina Barzilay", "journal": "", "ref_id": "b79", "title": "Get your vitamin c! robust fact verification with contrastive evidence", "year": "2021" }, { "authors": "Weijia Shi; Julian Michael; Suchin Gururangan; Luke Zettlemoyer", "journal": "", "ref_id": "b80", "title": "Nearest neighbor zero-shot inference", "year": "2022" }, { "authors": "Weijia Shi; Sewon Min; Michihiro Yasunaga; Minjoon Seo; Rich James; Mike Lewis; Luke Zettlemoyer; Wen-Tau Yih", "journal": "", "ref_id": "b81", "title": "Replug: Retrieval-augmented black-box language models", "year": "2023" }, { "authors": "Taylor Shin; Yasaman Razeghi; Robert L Logan; I V ; Eric Wallace; Sameer Singh", "journal": "", "ref_id": "b82", "title": "Autoprompt: Eliciting knowledge from language models with automatically generated prompts", "year": "2020" }, { "authors": "Vered Shwartz; Peter West; Le Ronan; Chandra Bras; Yejin Bhagavatula; Choi", "journal": "", "ref_id": "b83", "title": "Unsupervised commonsense question answering with self-talk", "year": "2020" }, { "authors": "Chenglei Si; Zhe Gan; Zhengyuan Yang; Shuohang Wang; Jianfeng Wang; Jordan Lee Boyd-Graber; Lijuan Wang", "journal": "", "ref_id": "b84", "title": "Prompting gpt-3 to be reliable", "year": "2022" }, { "authors": "Kaitao Song; Xu Tan; Tao Qin; Jianfeng Lu; Tie-Yan Liu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b85", "title": "Mpnet: Masked and permuted pre-training for language understanding", "year": "2020" }, { "authors": "Robyn Speer; Joshua Chin; Catherine Havasi", "journal": "", "ref_id": "b86", "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "year": "2017" }, { "authors": "Heiner Stuckenschmidt; Christine Parent; Stefano Spaccapietra", "journal": "Springer", "ref_id": "b87", "title": "Modular ontologies: concepts, theories and techniques for knowledge modularization", "year": 
"2009" }, { "authors": "Zhiqing Sun; Xuezhi Wang; Yi Tay; Yiming Yang; Denny Zhou", "journal": "", "ref_id": "b88", "title": "Recitation-augmented language models", "year": "2022" }, { "authors": "Yi Tay; Vinh Tran; Mostafa Dehghani; Jianmo Ni; Dara Bahri; Harsh Mehta; Zhen Qin; Kai Hui; Zhe Zhao; Jai Gupta", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b89", "title": "Transformer memory as a differentiable search index", "year": "2022" }, { "authors": "Denny Vrandečić; Markus Krötzsch", "journal": "Communications of the ACM", "ref_id": "b90", "title": "Wikidata: a free collaborative knowledgebase", "year": "2014" }, { "authors": "Heng Wang; Wenqian Zhang; Yuyang Bai; Zhaoxuan Tan; Shangbin Feng; Qinghua Zheng; Minnan Luo", "journal": "", "ref_id": "b91", "title": "Detecting spoilers in movie reviews with external movie knowledge and user networks", "year": "2023" }, { "authors": "Xiaozhi Wang; Tianyu Gao; Zhaocheng Zhu; Zhengyan Zhang; Zhiyuan Liu; Juanzi Li; Jian Tang", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b92", "title": "Kepler: A unified model for knowledge embedding and pre-trained language representation", "year": "2021" }, { "authors": "Yujing Wang; Yingyan Hou; Haonan Wang; Ziming Miao; Shibin Wu; Qi Chen; Yuqing Xia; Chengmin Chi; Guoshuai Zhao; Zheng Liu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b93", "title": "A neural corpus indexer for document retrieval", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed H Chi; V Quoc; Denny Le; Zhou", "journal": "", "ref_id": "b94", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "" }, { "authors": "Peter West; Chandra Bhagavatula; Jack Hessel; Jena Hwang; Liwei Jiang; Ronan Le Bras; Ximing Lu; Sean Welleck; Yejin Choi", "journal": "", "ref_id": "b95", "title": "Symbolic knowledge distillation: from general language models to commonsense models", "year": "2022" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b96", "title": "Huggingface's transformers: State-of-the-art natural language processing", "year": "2019" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b97", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Mitchell Wortsman; Gabriel Ilharco; Ya Samir; Rebecca Gadre; Raphael Roelofs; Ari S Gontijo-Lopes; Hongseok Morcos; Ali Namkoong; Yair Farhadi; Simon Carmon; Kornblith", "journal": "PMLR", "ref_id": "b98", "title": "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time", "year": "2022" }, { "authors": "Wenhao Yu; Dan Iter; Shuohang Wang; Yichong Xu; Mingxuan Ju; Soumya Sanyal; Chenguang Zhu; Michael Zeng; Meng Jiang", "journal": "", "ref_id": "b99", "title": "Generate rather than retrieve: Large language models are strong context generators", "year": "2022" }, { "authors": "Elad Ben Zaken; Yoav Goldberg; Shauli Ravfogel", "journal": "", "ref_id": "b100", "title": "Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models", "year": "2022" }, { "authors": "Rowan Zellers; Ari Holtzman; Hannah Rashkin; Yonatan 
Bisk; Ali Farhadi; Franziska Roesner; Yejin Choi", "journal": "Advances in neural information processing systems", "ref_id": "b101", "title": "Defending against neural fake news", "year": "2019" }, { "authors": "Jingqing Zhang; Yao Zhao; Mohammad Saleh; Peter Liu", "journal": "PMLR", "ref_id": "b102", "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", "year": "2020" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin", "journal": "", "ref_id": "b103", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": "Xikun Zhang; Antoine Bosselut; Michihiro Yasunaga; Hongyu Ren; Percy Liang; Christopher D Manning; Jure Leskovec", "journal": "", "ref_id": "b104", "title": "Greaselm: Graph reasoning enhanced language models", "year": "2021" }, { "authors": "Xinran Zhao; Hongming Zhang; Xiaoman Pan; Wenlin Yao; Dong Yu; Jianshu Chen", "journal": "", "ref_id": "b105", "title": "Thrust: Adaptively propels large language models with external knowledge", "year": "2023" }, { "authors": "Zexuan Zhong; Tao Lei; Danqi Chen", "journal": "", "ref_id": "b106", "title": "Training language models with memory augmentation", "year": "2022" }, { "authors": "Yukun Zhu; Ryan Kiros; Rich Zemel; Ruslan Salakhutdinov; Raquel Urtasun; Antonio Torralba; Sanja Fidler", "journal": "", "ref_id": "b107", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "year": "2015" }, { "authors": "Shengyao Zhuang; Houxing Ren; Linjun Shou; Jian Pei; Ming Gong; Guido Zuccon; Daxin Jiang", "journal": "", "ref_id": "b108", "title": "Bridging the gap between indexing and retrieval for differentiable search index with query generation", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 108, 686.7, 158.48, 10.62 ], "formula_id": "formula_0", "formula_text": "d i if i ∈ top-k j (sim(enc(d j ), enc(q)))" }, { "formula_coordinates": [ 4, 294.66, 260.44, 129.89, 13.71 ], "formula_id": "formula_1", "formula_text": "s sum d = sum-fact( d | d) ∈ [0, 1]." }, { "formula_coordinates": [ 4, 108, 323.28, 396, 23.34 ], "formula_id": "formula_2", "formula_text": "s fact d = max 1≤i≤k fact-check(d | t i ) ∈ [0, 1]." }, { "formula_coordinates": [ 4, 189.17, 432.87, 232.46, 25.82 ], "formula_id": "formula_3", "formula_text": "p( di | q) = exp(s di )/ dj ∈D k exp(s dj ), if di ∈ D k . 0, if di / ∈ D k ." }, { "formula_coordinates": [ 5, 174.03, 94.83, 107.06, 12.28 ], "formula_id": "formula_4", "formula_text": "∥ d1 ∥ d2 ∥ • • • ∥ d n3 ∥ q]" }, { "formula_coordinates": [ 5, 256.67, 225, 75.44, 9.68 ], "formula_id": "formula_5", "formula_text": "C = {c 1 , • • • , c n }," } ]
2023-05-17
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b11", "b13", "b34", "b3", "b26", "b39", "b30", "b16", "b32", "b27", "b0", "b21", "b28", "b29", "b12", "b15", "b7", "b25", "b22", "b40", "b37", "b22", "b40", "b37", "b19", "b10" ], "table_ref": [], "text": "The Knowledge Graph (KG) has demonstrated significant potential to organize world knowledge and human commonsense in the form of factual triples (head entity, relation, tail entity) [12,14,35]. As KG is not possible to record innumerable world knowledge, there is an increasing research interest in KG reasoning techniques, aiming to deduce new facts from existing KG triples [4,27,40]. The fundamental triple-level KG reasoning task, denoted by the query (head entity, relation, ?), aims to predict the missing tail entity from the entity set of the KG. As an example shown in Fig. 1(a), the answer to the query (\"River of No Return\", \"sung_by\") is the entity \"Marilyn Monroe\". One of the mainstream KG reasoning techniques is Knowledge Graph Embedding (KGE) [31,17,33,28]. Represented by TransE [1] and RotatE [22], KGE models represent entities as d-dimensional trainable embedding vectors for further KG reasoning, but suffer from high storage costs and cannot handle unseen entities [29,30].\nRecent works generate \"relative\" entity embeddings without entity-specific parameters, thereby reducing storage costs and supporting inductive reasoning. Path-level methods [13,16,8,26] encode entities by aggregating the features along all the paths that reach the candidate entity from the query entity. Such path encoding usually processes short paths with up to only three triples, because of high computational costs caused by the exponential growth of paths To reduce inference complexity, recent subgraph-level methods based on Graph Neural Networks (GNN) further convert complicated path encoding to a subgraph message-passing process in GNNs [23,41,38]. GraIL [23] extracts an enclosing subgraph for each candidate entity. NBFNet [41] and RED-GNN [38] propagate the query features layer by layer in the L-hop neighborhood subgraph of the query entity. However, these state-of-the-art GNN-based methods follow the same GNN message-passing process, i.e. propagating messages freely through all edges in graphs, which can bring in redundancy and inefficiency. In particular, the L-layer graph propagation in GNNs is equivalent to encoding all possible paths with lengths up to L, which is redundant for entities near the query entity to generate relative knowledge embeddings. Meanwhile, the existence of self-loop and reverse edges introduce an exponential number of relational paths, resulting in high computation overhead. For example, in Fig. 1(b), the dozens of redundant paths like \"Song-Song-1954-Movie-Marilyn\" are computationally intensive in GNNs but contribute negligibly to the query compared with the shortest path \"Song-Movie-Marilyn\".\nTo overcome the path redundancy issues of traditional graph propagation, we build a novel GNNbased KG reasoning framework, called Graph Percolation Embeddings (GraPE). From the novel perspective of the Transformation Error Entropy, we first theoretically analyze the entropy increase caused by path redundancy in the path encoding schema, and derive a clear definition for redundant paths in KG reasoning. 
On this basis, we design an efficient Graph Percolation Process to maintain low-entropy paths and remove redundant paths in the GNN message passing, inspired by the Percolation process in Fluid Mechanics [20,11]. The illustration of this new paradigm based on entropy-guided percolation is shown in Fig. 1(c). After that, a lightweight but effective GNN-based architecture with only two GNN layers is proposed to conduct the multi-layer graph percolation on KG subgraphs. To verify the performance of GraPE, we conduct extensive KG reasoning experiments in both transductive and inductive settings on five datasets. In transductive reasoning, GraPE outperforms the second-best NBFNet with less than 50% of its training parameters and 10% of its triple calculations, while in inductive reasoning, GraPE obtains the best performance using around 30% of the parameters and 50% of the triple calculations of RED-GNN on average. The rest of the paper is organized as follows. We introduce the background and notations in Section 2. Section 3 details the GraPE framework and its theoretical analysis. Section 4 reports the experimental studies. Finally, we provide some concluding remarks in Section 5." }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Notations and Definitions", "publication_ref": [ "b37", "b40", "b24" ], "table_ref": [], "text": "A Knowledge Graph is in the form of G = {E, R, T}, where T = {(e_h, r, e_t) | e_h, e_t ∈ E, r ∈ R} is a set of factual triples, and E, R are the sets of entities and relations, respectively. Given a query (q, r_q) containing a query entity q ∈ E and a query relation r_q ∈ R, the knowledge graph reasoning task aims to find the target entity e_a ∈ E such that (q, r_q, e_a) or (e_a, r_q, q) belongs to the knowledge graph G. Following previous work [38,41,25], we augment the triples in G with reverse and identity relations. The augmented triple set T⁺ is defined as: T⁺ = T ∪ {(e_t, r', e_h) | (e_h, r, e_t) ∈ T} ∪ {(e, r_i, e) | e ∈ E}, where r' is the reverse relation of a relation r, r_i refers to the identity relation, and the number of augmented triples is |T⁺| = 2|T| + |E|. Furthermore, a relational path from the query entity q to an entity e_t is denoted as P_{q,e_t} = {(q, r_1, e_1), (e_1, r_2, e_2), ..., (e_{|P|-1}, r_{|P|}, e_t)}, which is a set of triples connected head-to-tail sequentially. |P| ≤ L is the number of triples in the path P_{e_t}. We define the relative distance γ_{q,e_t} as the length of the shortest relational path between q and e_t. Unless otherwise specified, we use P_{e_t} and γ_{e_t} as abbreviations of P_{q,e_t} and γ_{q,e_t} in this paper. Then, we denote N^ℓ_q as the ℓ-hop neighborhood entities of q, in which the relative distance of each entity from q is equal to ℓ, i.e. N^ℓ_q = {e | γ_e = ℓ, e ∈ E}. The L-hop neighborhood subgraph G_q ⊆ G of the query entity q consists of the triples whose head and tail entities belong to N^L_q. The main notations that will be used in this paper are summarized in Appendix A." },
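The notation above maps directly onto a few lines of preprocessing. The sketch below is illustrative only (function and relation names such as "_inv" and "identity" are ours, not the paper's): it builds the augmented triple set T⁺ and computes the relative distances γ and the hop sets N^ℓ_q of a query entity by breadth-first search.

from collections import defaultdict, deque

def augment(triples, entities):
    # T+: add a reverse triple for every fact and an identity self-loop per entity.
    aug = list(triples)
    aug += [(t, r + "_inv", h) for (h, r, t) in triples]
    aug += [(e, "identity", e) for e in entities]
    return aug

def hop_sets(aug_triples, q, L):
    # gamma[e] is the relative distance of e from the query entity q;
    # hops[l] is the l-hop neighborhood N^l_q.
    adj = defaultdict(list)
    for h, r, t in aug_triples:
        adj[h].append(t)
    gamma, hops = {q: 0}, defaultdict(set)
    hops[0].add(q)
    queue = deque([q])
    while queue:
        e = queue.popleft()
        if gamma[e] == L:
            continue
        for nxt in adj[e]:
            if nxt not in gamma:
                gamma[nxt] = gamma[e] + 1
                hops[gamma[nxt]].add(nxt)
                queue.append(nxt)
    return gamma, hops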
{ "figure_ref": [], "heading": "Related Work: Three Embedding Levels for KG Reasoning", "publication_ref": [ "b0", "b32", "b21", "b12", "b15", "b7", "b25", "b17", "b40", "b37" ], "table_ref": [], "text": "Triple-level Absolute Embedding: Traditional entity embedding models, such as TransE [1], DistMult [33] and RotatE [22], assign an individual, trainable d-dimensional vector e_i ∈ R^d to each entity. The embedding vector of one entity e_t is expected to be close to e_h ⊗ r in the embedding space for each training triple (e_h, r, e_t) ∈ T⁺, such that it can be represented as:
e_t = (1/n) Σ_{(e_h, r, e_t) ∈ T⁺} (e_h ⊗ r),    (1)
where the transformation operator ⊗ transforms the head entity vector e_h using the relation-specific parameter r. Such absolute embeddings are effective but cannot handle unseen entities after training.
Path-level Relative Embedding: Path encoding-based methods [13,16,8,26,18] aim to capture local entity semantics by encoding relational paths in the KG without entity-specific parameters. Given the query entity q as the start node, the basic idea is to represent an entity e_t as the feature aggregation of all relational paths from q to e_t in the L-hop neighborhood subgraph G_q, such as:
e_{t|q} = F(P_{e_t}) = (1/n) Σ_{P ∈ P_{e_t}} ( q ⊗ r_1 ⊗ ··· ⊗ r_{|P|} )|_{(e_i, r_i, e'_i) ∈ P},    (2)
where P_{e_t} = {P_{q,e_t}} is the path set and n is the number of relational paths. However, this is time-consuming since the number of paths grows exponentially w.r.t. the path length.
Subgraph-level Iterative Embedding: Recent GNN-based methods, such as NBFNet [41] and RED-GNN [38], utilize the iterative process of graph message passing to avoid the exponentially growing triple calculations. Given e^0_{q|q} = q, the formula in the ℓ-th iteration is as follows:
e^ℓ_{t|q} = ϕ( W^ℓ φ({ e^{ℓ-1}_{i|q} ⊗ r | (e_i, r, e_t) ∈ G_q }), e^{ℓ-1}_{t|q} ),    (3)
in which φ(•) is the aggregation function that aggregates all triple messages on the 1-hop neighborhood subgraph, W^ℓ is a weighting matrix in the ℓ-th layer, and ϕ(•) is the update function. Although path encoding in this way can be completed in polynomial time, this technique still encodes all possible L-hop relational paths for each entity, which leads to redundancy." }, { "figure_ref": [], "heading": "Entropy-Guided Path Redundancy", "publication_ref": [], "table_ref": [], "text": "In order to reduce redundant paths in KG reasoning models, we theoretically analyze path redundancy from the novel view of Transformation Error Entropy.
Transformation Error. Based on the descriptions in Sec. 2.2, we observe that Equation (2) and Equation (3) can be regarded as nested forms of the triple-level Equation (1); hence, path encoding is also based on the same assumption that e_t ≈ e_h ⊗ r. However, when training on the whole KG, the trainable parameters ultimately converge to a sub-optimal solution for each training triple, such that we have e_t = e_h ⊗ r + ε. We call the error ε between e_t and e_h ⊗ r the transformation error. Note that the transformation error objectively exists as a result of model training, as long as the model employs a trainable relation vector for one relation r to encode all r-involved triples.
Considering the transformation error in path encoding, we can measure path redundancy by computing the error entropy. A high-entropy path means that the message passing through this path entails more uncertainty. For instance, a redundant path "A-B-A-B-C" conveys repetitive information and more transformation errors for node C if the path "A-B-C" has already been calculated. Because the transformation error per triple exists but is hard to measure, without loss of generality, we assume that the transformation errors of all triples are independent and identically distributed (i.i.d.). Then, our findings about Transformation Error Entropy are formalized in the theorems below:
Theorem 1.
(Error Propagation) Given the k-th triple (e_{k-1}, r, e_k) of a relational path starting from q, the transformation error entropy of the relative embedding vector e_{k|q} is not smaller than that of e_{k-1|q}, i.e., H(e_{k|q}) ≥ H(e_{k-1|q}).
Theorem 1 indicates that transformation errors can propagate through paths and that longer relational paths accumulate larger entropy. It motivates us to encode the shortest paths to gather low-entropy entity embeddings. However, longer paths may contain more semantics, and the mean entropy of multiple paths may decrease in Equation (2). For instance, the path "A-D-E-C" has larger entropy than the path "A-B-C", but the mean entropy of the two paths is lower than that of each of them. Therefore, taking the inevitable entropy increase as the criterion, the definition of redundant paths is clarified as follows:
Definition 1. (Redundant Path) Suppose a path P from the query entity q to an entity e_t whose length is larger than the relative distance γ_{q,e_t}, and at least γ_{q,e_t} triples in P also exist in the shortest paths from q to e_t; then the path P is a redundant path for the relative entity embedding e_{t|q}.
Theorem 2. (Entropy Increase) Let P be the set of all shortest paths between the query entity q and a candidate entity e_t in G. If a redundant path P' is added to the path set P, then the transformation error entropy of the mean-aggregated vector e_{t|q} increases, i.e. H(F(P ∪ {P'})) > H(F(P)).
Theorem 2 indicates that adding a redundant path P' inevitably increases the entropy of the mean-aggregated vector of all shortest paths (calculated as Equation (2)). This is reasonable because the "effective" part of P' overlaps with existing paths while the longer P' carries larger transformation errors, such as "A-B-D-B-C" and "A-B-A-B-C". We prove the two theorems in Appendix B.1 and B.2.
In summary, to minimize the transformation error entropy in path encoding, the above theoretical analysis guides us to solve the path redundancy issue by maintaining the shortest paths (Theorem 1) and removing redundant paths (Theorem 2)." }, { "figure_ref": [ "fig_1" ], "heading": "Graph Percolation Embeddings", "publication_ref": [ "b38", "b41", "b19", "b10", "b9" ], "table_ref": [], "text": "Integrating all possible paths in path encoding or GNNs requires huge computation costs and accumulates transformation errors. In order to reduce the number of paths, previous work [39,42] has attempted to select the top K edges in each iteration by random sampling strategies or learnable attention mechanisms, but this unavoidably causes information loss resulting from the unprocessed triples. Meanwhile, although Theorem 2 points out definite redundant paths, removing them in the GNN message-passing process is still intractable.
The Percolation phenomenon in Fluid Mechanics [20,11,10] motivates us to propose a query-specific message-passing process that maintains shortest paths while avoiding redundant paths, without complicated calculations or triple loss. Similar to a river flowing downhill through a porous medium under the control of gravity, we model GNN message passing as a graph percolation process by removing "uphill edges" to avoid redundant paths, which will be detailed in Sec. 3.1. On this basis, we propose the novel Graph Percolation Embeddings (GraPE) framework in Sec. 3.2. GraPE employs the graph percolation process and achieves efficient Knowledge Graph reasoning in both transductive and inductive settings, whose diagram is shown in Fig. 2." },
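Before turning to the percolation mechanism itself, the entropy argument of Theorems 1 and 2 can be checked numerically. The sketch below is purely illustrative and rests on our own assumptions (i.i.d. Gaussian errors, shared error samples on shared triples); for Gaussians, a larger variance means a larger differential entropy.

import numpy as np

rng = np.random.default_rng(0)
sigma, trials = 0.1, 200000
e1, e2, e3, e4 = (rng.normal(0.0, sigma, trials) for _ in range(4))

shortest = e1 + e2                # accumulated error of the shortest path A-B-C
redundant = e1 + e2 + e3 + e4     # a longer path reusing A-B and B-C plus a detour
independent = e3 + e4             # a different path of the same length as A-B-C

print("shortest path alone          :", shortest.var())                          # about 2*sigma^2
print("mean with a redundant path   :", ((shortest + redundant) / 2).var())      # larger (Theorem 2)
print("mean with an independent path:", ((shortest + independent) / 2).var())    # smaller (averaging helps)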
{ "figure_ref": [], "heading": "Graph Percolation Process", "publication_ref": [ "b5", "b19", "b37" ], "table_ref": [], "text": "In Fluid Mechanics, the flow of a fluid through a porous medium is called percolation; the percolation model computes the percolation flux following Darcy's law [6,20]: q = κ(p_b - p_a) / (μh) = (κ/μ) ∆h, where the unit potential difference (also called the pressure gradient) is denoted as ∆h = (p_b - p_a)/h. If there is no potential difference over a distance h (i.e. ∆h = 0), no flow occurs. Otherwise, a downhill flow will occur from high potential towards low potential because of gravity or artificial pressure.
Percolation Paths. Simulating the above gravitational potential in KGs as a potential proportional to the relative distance between each entity and the query entity q, it is obvious that a path is a shortest path from q to an entity if and only if each triple (e_h, r, e_t) of the path is "downhill" (i.e. γ_{e_h} < γ_{e_t}). Meanwhile, as in Proposition 1, we discover that redundant paths would be introduced by "uphill" triples (γ_{e_h} ≥ γ_{e_t}).
Proposition 1. (See Appendix B.3 for proof) Let Ĝ be the KG subgraph constructed by the triples of all shortest paths starting from the query entity q. Suppose a new triple t = (e_1, r, e_2) whose head and tail entities exist in Ĝ and whose relative distances satisfy γ_{e_1} ≥ γ_{e_2}. Adding this triple t into Ĝ leads to at least one new redundant path passing through t.
To this end, we are motivated to define "percolation paths" without uphill triples in the middle. Specifically, given the ℓ-th triple (e_1, r, e_2) in the path P (1 ≤ ℓ ≤ |P|), the potential difference ∆h_ℓ between the two entities is defined as follows:
∆h_ℓ(e_1, e_2) = max(γ_{e_2} - γ_{e_1}, 0)  if ℓ < |P|;   min(γ_{e_2} - γ_{e_1} + 1, 1)  if ℓ = |P|.    (4)
Weighting each path in Equation (2) by the product of its ∆h values then gives
e_{t|q} = (1/n) Σ_{P ∈ P_{e_t}} ∆h_{|P|} ··· ∆h_1 ( q ⊗ r_1 ⊗ ··· ⊗ r_{|P|} )|_{(e_i, r_i, e'_i) ∈ P}.    (5)
Based on the percolation model, a percolation path is valid only when all the ∆h values along the path are positive. As a result, the whole set of percolation paths in a subgraph G_q satisfies three principles: (1) all shortest paths are included; (2) no redundant path is involved; (3) all knowledge facts in the subgraph are calculated. In particular, as defined in Equation (4), the last triple of a percolation path is allowed to be a same-potential triple (γ_{e_2} - γ_{e_1} = 0), because such paths are also not redundant by Definition 1. Nevertheless, if we only maintained downhill triples for the shortest paths, the knowledge facts in the same-potential triples would unavoidably be lost.
Graph Percolation Process. We further integrate the percolation paths into the GNN message passing. Due to the directionality of the percolation paths, the relative embedding vector of an ℓ-hop entity e_t ∈ N^ℓ_q cannot be influenced by the low-potential entities in the deeper layers. Therefore, inspired by the efficiency gained by iterative embeddings in previous work [38], we combine the percolation paths of all neighborhood entities into L layer-wise subgraphs Ĝ_q = {{E^ℓ_q, R, T^ℓ_q} | 1 ≤ ℓ ≤ L} as follows:
E^ℓ_q = N^{ℓ-1}_q ∪ N^ℓ_q,   T^ℓ_q = {(e_1, r, e_2) | e_1 ∈ N^{ℓ-1}_q, e_2 ∈ E^ℓ_q}.    (6)" },
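Equation (6) translates into a simple filtering rule: a triple enters the layer-ℓ subgraph exactly when it leaves an (ℓ-1)-hop entity and ends in the (ℓ-1)- or ℓ-hop set, so no uphill edge survives. A minimal sketch (ours, reusing the gamma/hops outputs of the BFS sketch given earlier):

def percolation_subgraphs(aug_triples, gamma, hops, L):
    # Builds T^l_q from Equation (6): triples from N^{l-1}_q into N^{l-1}_q ∪ N^l_q.
    layers = []
    for l in range(1, L + 1):
        allowed_tails = hops[l - 1] | hops[l]
        T_l = [(h, r, t) for (h, r, t) in aug_triples
               if gamma.get(h) == l - 1 and t in allowed_tails]
        layers.append(T_l)
    return layers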
{ "figure_ref": [ "fig_0" ], "heading": "Algorithm 1 Graph Percolation Process", "publication_ref": [], "table_ref": [], "text": "Require: a KG G, a query (q, r_q), and the layer number L.
1: initialize e^0_{q|q} = q, N^0_q = {q}, e^0_{t|q} = 0;
2: for ℓ = 1 . . . L do
3:    collect the ℓ-hop entities N^ℓ_q from G;
4:    E^ℓ_q = N^{ℓ-1}_q ∪ N^ℓ_q;
5:    collect the ℓ-hop triples T^ℓ_q w.r.t. E^ℓ_q;
6:    message passing with Equation (7) for E^ℓ_q;
7:    e^ℓ_{t|q} = e^ℓ_{t|q} + e^{ℓ-1}_{t|q} for entities e_t ∈ N^{ℓ-1}_q;
8: end for
9: return relative embeddings e_{t|q} for all e_t ∈ E.
Consistent with the percolation paths, the subgraph Ĝ^ℓ_q in the ℓ-th layer only involves the triples from the (ℓ-1)-hop entities to the entities in the same hop or the deeper ℓ-th hop. By integrating the layer-wise subgraphs Ĝ_q with the basic GNN process in Equation (3), the graph percolation process recursively constructs the relative knowledge embeddings of neighborhood entities layer by layer as:
e^ℓ_{t|q} = δ( W φ({ e^{ℓ-1}_{i|q} ⊗ r_{i|q} | (e_i, r_i, e_t) ∈ T^ℓ_q }) ),    (7)
where r_{i|q} denotes the relation parameters conditioned on the query relation r_q. The entire graph percolation process is shown in Algorithm 1.
It is worth noting that the L-layer graph percolation is computationally efficient, because each neighborhood entity conducts message passing only twice and each triple (with γ_{e_h} ≤ γ_{e_t}) is calculated only once. Fig. 1(b)(c) vividly illustrate the differences between the graph percolation and graph propagation processes. Different from basic graph propagation, which considers every triple in every layer, graph percolation is like the river of no return: it flows from the source q, percolates the entities layer by layer, and never goes backward." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "GraPE Architecture", "publication_ref": [ "b36", "b6", "b37" ], "table_ref": [], "text": "GNN-based KG reasoning usually requires deep GNNs with at least three layers to cover as many candidate entities as possible. As shown in previous research [37], deep GNNs suffer from over-smoothing and model degradation issues, which negatively impact model accuracy. Meanwhile, deep GNNs require more trainable parameters, the number of which scales linearly with the number of layers. In order to improve the efficiency of GNN-based KG reasoning, we design a novel GraPE framework that follows the encoder-decoder architecture, as shown in Fig. 2.
Graph Percolation Encoder. Given a query (q, r_q) and the layer-wise subgraphs Ĝ_q, the encoder conducts the graph percolation process in Algorithm 1 to generate relative knowledge embeddings. Different from previous work utilizing multiple GNN layers, the GraPE encoder employs only one GNN layer to process the L-layer subgraphs via Equation (7). The design space of the relation parameters and other components is detailed in Appendix C. To further reduce complexity, we propose a compression module with two MLP layers to transfer the relative entity embeddings e_{t|q} from the d-dimensional space to a smaller d_l-dimensional space as ê_{t|q} = δ( W_{h2} · δ( W_{h1} · [e_{t|q} : r_q] ) ), where r_q is a trainable relation embedding vector of the query relation, concatenated with the relative entity vector, W_{h1} ∈ R^{2d×d} and W_{h2} ∈ R^{d×d_l} are weighting matrices, and δ is the activation function.
Graph Propagation Decoder. The GraPE decoder predicts the missing entity using the d_l-dimensional relative entity embeddings. Here we employ another GNN layer to conduct a basic graph propagation over the whole L-hop neighborhood subgraph, which aims to provide the 1-hop neighbor features to each candidate entity. This process is performed once as in Equation (3), and its time complexity compared with graph percolation will be discussed in Sec. 4.4. It is necessary because graph percolation ignores the neighbor information on the lower-potential nodes and may fail to distinguish similar candidate entities. For example, in Fig. 2(c), the entities 'Marilyn' and 'Robert' would get the same relative embedding vectors if we ignore the specific features of 'Singer'.
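The encoder applies Equation (7) to the layer-wise triple sets, while the decoder runs one Equation (3)-style pass over the full subgraph; both boil down to a transform-aggregate-update step. The sketch below is a deliberately simplified stand-in for such a step (ours, not the released code): sum aggregation instead of PNA, a plain DistMult-style Hadamard transform, and relation embeddings that are not conditioned on the query relation.

import torch

def message_passing_step(h_prev, triples, rel_emb, rel_id, W, act=torch.relu):
    # One simplified application of Equation (7) on a set of triples.
    # h_prev: dict mapping an entity to its d-dimensional vector from the previous
    # layer; entities not yet reached default to the zero vector.
    d = rel_emb.size(1)
    aggregated = {}
    for h, r, t in triples:
        msg = h_prev.get(h, torch.zeros(d)) * rel_emb[rel_id[r]]  # e_{i|q} ⊗ r (Hadamard product)
        aggregated[t] = aggregated.get(t, torch.zeros(d)) + msg   # sum aggregation (PNA in the paper)
    return {t: act(W @ m) for t, m in aggregated.items()}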
After that, for each entity e_t, GraPE predicts its plausibility score s_t = H(ê_{t|q}, r_q) via a two-layer MLP module, which is similar to the compression module but compresses embedding vectors from d_l dimensions to one. Following previous work [38], we optimize the GraPE parameters by minimizing the multi-class cross-entropy loss over each training triple (q, r_q, e_a):
L = Σ_{(q, r_q, e_a) ∈ T} ( -H(ê_{a|q}, r_q) + log Σ_{e_t ∈ E} e^{H(ê_{t|q}, r_q)} ).    (8)
Regardless of the number of layers in the neighborhood subgraph, GraPE only requires two GNN layers for path encoding, i.e. the GNN layer weights for the first L-1 layers are tied. Meanwhile, in practice, we conduct only graph percolation in the first L-1 layers, because the L-th layer calculations are overlapped with the decoder propagation. Furthermore, it is convenient to design different data flows in the two components, better adapting to the local propagation of the shallower layers and the global propagation of the final layer." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "Task Settings. To verify the performance of GraPE, we conduct experiments on KG reasoning tasks. There are two task settings in current KG reasoning studies: Transductive Reasoning and Inductive Reasoning, which are determined by the scope of the KG facts used to make predictions. Specifically, given a knowledge graph G_tra = {E_tra, R, T_tra}, the transductive KG reasoning task trains and evaluates a model with the same G_tra. In contrast, the inductive KG reasoning task evaluates the trained model on a new knowledge graph G_tst = {E_tst, R, T_tst}. G_tra and G_tst contain the same set of relations R but disjoint sets of entities and triples, i.e. E_tra ∩ E_tst = ∅ and T_tra ∩ T_tst = ∅." }, { "figure_ref": [], "heading": "Datasets.", "publication_ref": [ "b1", "b23", "b22", "b1", "b23", "b31", "b0", "b22", "b32", "b4", "b40" ], "table_ref": [ "tab_6", "tab_7" ], "text": "Our experimental studies are conducted on five commonly used datasets. WN18RR [2] and FB15k237 [24] are used for the transductive reasoning task. The two datasets are extracted from the English lexical database WordNet and the knowledge base Freebase, respectively. For the inductive reasoning task, we use the three series of benchmark datasets [23] created on WN18RR [2], FB15k237 [24] and NELL-995 [32]. The statistics of the datasets are given in Table 7 and Table 8 in the Appendix. We use two evaluation metrics for both task settings, following previous work [1,23]. MRR (Mean Reciprocal Rank) is the average inverse rank of the test triples and Hits@N is the proportion of correct entities ranked in the top N. A higher MRR and Hits@N indicate improved performance.
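Both metrics reduce to simple statistics over the rank of the correct entity among all scored candidates. A generic computation sketch (ours, not the released evaluation code; it assumes the per-query ranks have already been produced, e.g. under the usual filtered protocol):

import numpy as np

def mrr_hits(ranks, ks=(1, 3, 10)):
    # ranks: 1-based rank of the correct entity for each test query.
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MRR": float((1.0 / ranks).mean())}
    for k in ks:
        metrics[f"Hits@{k}"] = float((ranks <= k).mean())
    return metrics

# example: three test queries whose answers were ranked 1st, 4th and 12th
print(mrr_hits([1, 4, 12]))  # MRR ≈ 0.44, Hits@1 ≈ 0.33, Hits@3 ≈ 0.33, Hits@10 ≈ 0.67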
For the model architecture, we set the default transform operator ⊗ as the Hadamard product in DistMult [33] and the aggregation function φ(•) as the PNA aggregator [5]. Following the data preprocessing of NBFNet [41], we drop out triples that directly connect query entities during training on FB15k237. More hyperparameter configurations on different datasets are shown in Appendix C. All experiments are performed on Intel Xeon Gold 6238R CPU @ 2.20GHz and NVIDIA RTX A5000 GPU, and are implemented in Python using the PyTorch framework." }, { "figure_ref": [], "heading": "Main Experiments", "publication_ref": [], "table_ref": [ "tab_0", "tab_2", "tab_2" ], "text": "Transductive KG Reasoning. We compare GraPE with 14 baselines, including six KGE-based, four path-based, and four GNN-based models. The experimental results on WN18RR and FB15k237 are shown in Table 1. We observe that GraPE outperforms existing methods on all metrics of the two datasets. Compared with traditional KGE-based and path-based baselines, GraPE achieves significant performance gains. Especially, the MRR of GraPE has a more than 10% increase on both datasets which improves from 0.496 to 0.568 on WN18RR and from 0.366 to 0.423 on FB15k237.\nIn the five GNN-based models, we find that two models based on the absolute knowledge embeddings, RGCN and CompGCN, are significantly weaker than the latter three. It proves the effectiveness of the relative knowledge embeddings on KG reasoning. Benefiting from the filtered relation paths and lower transformation errors, GraPE outperforms the state-of-the-art NBFNet, especially on MRR and Hits@1. It indicates that the relative embeddings of GraPE can better distinguish similar entities and achieve accurate predictions. Although the performance gains in Hit@3 and Hit@10 are smaller than 1%, the computational complexity and model cost of GraPE are much lower than NBFNet, which will be discussed in Sec. 4.4. Inductive KG Reasoning. Considering most previous models cannot handle the inductive settings, we compare GraPE against three path-based and three GNN-based baselines and summarize the MRR results on three series of inductive datasets in Table 2.\nFrom Table 2, we have the following observations. On all inductive subsets of three datasets, GraPE achieves state-of-the-art performance. Compared with the previous best method RED-GNN, GraPE obtains an average 8% relative performance gain in MRR. Especially, the MRR of GraPE improves from 0.369 to 0.422 on the FB15k237-v1 dataset, and from 0.419 to 0.517 on NELL-995-v2. In the inductive task, the generalization ability of relational features learned from paths is much more important than that in the transductive one. Instead of encoding all possible relational paths in RED-GNN, GraPE only extracts features from the limited low-entropy paths, thus improving precision and efficiency simultaneously." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [ "tab_3", "tab_3" ], "text": "We further verify the GraPE performance with different numbers of layers and dimensions and different GNN functions. The experimental results evaluated with MRR are shown in Table 3. In contrast, the KGs on the inductive datasets usually have fewer entities, and sometimes a four-layer propagation already covers the whole entity set. 
Although the 64-dimensional GraPE outperforms the lightweight one in several metrics, the performance gains are not significant.\nWith the same subgraph layers, the 64-dimensional GraPE only obtains an average 1% relative performance gain in MRR. Considering the model inference time and GPU memory costs, we recommend using the 32-dimensional setting in inductive reasoning to balance both accuracy and efficiency.\nCore Functions in GNN Layers. Table 3(c) compares the results of GraPE with different transform operators and aggregation functions. Compared with the PNA aggregation containing multiple aggregators and scalars, the pure SUM and Mean aggregations perform weaker on the four datasets, especially on WN18RR. Meanwhile, using vector addition in TransE or vector rotation in RotatE as the transform operator has relatively fewer performance changes. Overall, the group of the PNA aggregation and Hadamard product in DistMult achieves better and more robust performance on all datasets." }, { "figure_ref": [], "heading": "Complexity Analysis", "publication_ref": [], "table_ref": [ "tab_4", "tab_4" ], "text": "Benefiting from the graph percolation and the lightweight architecture, GraPE achieves a lower time and space complexity than previous GNN-based methods and requires less inference time than the previous two methods. In Appendix D, we compare the computational complexity of GraPE and previous methods. We further compare the actual parameter amounts of three GNN-based methods when they achieve state-of-the-art performance. Table 4(a) shows the parameter amount on five datasets. All data is calculated based on the best hyperparameter settings reported by these methods. Because of the lightweight architecture, GraPE requires the fewest parameters to get the best performance on all datasets. Comparing GraPE and RED-GNN, we find that RED-GNN requires much more parameters on the WN-v1 and NE-v1 datasets. It is because the number of vector dimensions d required by RED-GNN on the two datasets is two times larger than that of GraPE. While GraPE still gets better performance with fewer parameters. Meanwhile, NBFNet requires much more parameters than the other two methods, because it utilizes a linear layer for relation embeddings with O L|R|d 2 costs. Therefore, adopting NBFNet on complicated KGs is very resource-intensive.\nWe also compare the total computations of triples involved by the three state-of-the-art methods in the inference period. The average number of triples that a model calculates for one query is shown in Table 4(b). We observe that GraPE needs the fewest triple calculations benefiting from the graph percolation. NBFNet encodes all triples in the subgraph with L times, hence it costs the highest calculations. In the two datasets of transductive tasks, GraPE only calculates ten percent of triples and obtains better performance than NBFNet. Meanwhile, RED-GNN encodes triples in the -hop with (L -+ 1) times in order to encode all relational paths. The redundant paths negatively influence the embedding effect and force RED-GNN to do reasoning with more layers. We further compare the inference time of three GNN-based methods. The running time per inference epoch on the validation dataset is shown in Table 4(c). We observe that GraPE achieves much less inference time than RED-GNN and NBFNet on both transductive datasets. Especially on FB15k-237, NBFNet gets the second-best performance requiring a ten times longer inference phase. 
The efficiency differences are relatively small on the three inductive datasets, because of the small graph scale.\nIn summary, GraPE outperforms the second-best NBFNet with less than 50% parameters and 10% triple calculations in the transductive task, while in the inductive task, GraPE gets the best performance using around 30% parameters and 50% triple calculations of RED-GNN on average." }, { "figure_ref": [], "heading": "Discussion and Conclusions", "publication_ref": [], "table_ref": [], "text": "Limitation and Future Work. There are two limitations to GraPE. First, our theoretical analysis is based on the basic path encoding process and the i.i.d. assumption. We will further extend GraPE by designing more refined percolation paths for different subgraphs and GNN architectures. Second, to process large-scale KGs containing millions or billions of entities, only algorithm improvement of GraPE is not enough, we will further conduct system design and training optimization.\nSocietal Impact. Our work drastically reduces the computational time of reasoning models, aiding in the control of carbon emissions. However, as the efficacy of these models improves, their potential for misuse increases, such as exposing sensitive relationships in anonymized data.\nConclusions. We propose a novel GNN-based KG reasoning framework, Graph Percolation Embeddings (GraPE). According to the theoretical analysis of error entropy, we design a potential-involved path encoding method and extend it to GNN message passing. GraPE achieves state-of-the-art performance in both transductive and inductive reasoning tasks. Besides, GraPE has a relatively lower time and space complexity than previous GNN-based methods." }, { "figure_ref": [], "heading": "A Summary of Notations", "publication_ref": [], "table_ref": [], "text": "The main notations used in this paper and their descriptions are summarized in Table 5.\nTable 5: Summary of the major notations in this paper." }, { "figure_ref": [], "heading": "Symbol Description G", "publication_ref": [], "table_ref": [], "text": "A knowledge graph (KG) T\nThe set of existing triples in a KG T\nThe set of augmented triples E, R\nThe entity set and relation set in a KG" }, { "figure_ref": [], "heading": "|T |, |E|", "publication_ref": [], "table_ref": [], "text": "The item number in a specific set e, r\nAn entity (e) or a relation (r) in a KG (q, r q )\nA query with an entity q and a relation\nr q G q ⊆ G\nThe neighborhood subgraph of the entity q e i\nThe absolute embedding vector of the entity e i e i|q\nThe relative embedding vector of e i relative to q r\nThe parameters corresponding to the relation r d, d l Dimension of embedding vectors or features L Number of GNN layers or neighborhood hops P q,et , P q,et The path set and a relational path from q to e t F(P et )\nThe path encoding process with the path set P et γ q,et , γ et\nThe relative distance from q to e t N q\nThe -th hop neighbors of q H( )\nThe information entropy of the error ∆h(e 1 , e 2 ) The potential difference between two entities e h ⊗ r\nThe transform operator of a KGE model φ(•)\nThe aggregation function of a GNN model" }, { "figure_ref": [], "heading": "B Proof B.1 Proof for Theorem 1", "publication_ref": [ "b0", "b32", "b21" ], "table_ref": [], "text": "Given the k-th triple (e k-1 , r, e k ) of a relational path starting from q, the transformation error entropy of the relative embedding vector e k|q is not smaller than that of e k-1|q , i.e., H(e k|q ) H(e k-1|q ).\nProof. 
In most embedding-based models, affine transformations are employed to model the interactions between the relative embedding vector of the head entity and the relation-specific parameter within a triple. Given the triple (e k-1 , r, e k ) in the path, we denote the general form of a relational affine transformation as follows:\ne k|q = e k-1|q ⊗ r k + = A k e k-1|q + b k + ,(9)\nwhere A k ∈ R d×d , b k ∈ R d are the linear part (matrix) and the translation part (vector) of the affine transformation. Without loss of generality, we assume that the transformation error k in each triple is independent and identically distributed Gaussian noise, i.e.: ∼ N (0, σ2 ). Then, we can represent the variance of the entity vector e k|q as:\nVar(e k|q ) = Var(A k e k-1|q + b + ) = A k Var(e k-1|q )A T k + σ2 .(10)\nMeanwhile, we already know the information entropy of Gaussian distribution H(N (µ, σ 2 )) = 1 2 ln 2πeσ 2 . In this formula, when the variance σ 2 increases, the information entropy will also increase accordingly. Therefore, we have:\nH(e k|q ) H(e k-1|q ) ⇔ Var(e k|q ) Var(e k-1|q )\n⇔ A k Var(e k-1|q )A T k + σ2 Var(e k-1|q ).(11)\nTo prove Equation (12), we discuss three major types of affine transformations in KGE models:\n1 Translation Transformation (used in TransE [1]): A k is the identity matrix (I).\n2 Scaling Transformation (used in DistMult [33]): A k is a diagonal matrix. 3 Rotation Transformation (used in RotatE [22]): A k is an orthogonal matrix.\nFor the types 1 and 3 , the A k is orthogonal (A k A T k = I). Because the orthogonal matrix A k preserves the variances in all directions when transforming Var(e k-1|q ), we have\nA k Var(e k-1|q )A T k\nVar(e k-1|q ). Therefore, Equation ( 12) holds.\nFor the type 2 , it is complicated to analyze each relation-specific diagonal matrix, so we assume one general diagonal matrix A = diag(a 1 , a 2 , • • • , a d ) for k steps. And the single item of embedding variance (AVar(e k-1|q\n)A T ) i = a 2 i σ 2 (k-1,i) .\nAs the embedding error is Gaussian distributed, Equation ( 12) can be represented as:\nd i=0 σ 2 (k,i) = d i=0 a 2 i σ 2 (k-1,i) + d i=0 σ2 i d i=0 σ 2 (k-1,i) ⇐ a 2 i σ 2 (k-1,i) + σ2 i σ 2 (k-1,i) ,(13)\nwhere σ 2 (k-1,i) and σ2 i are the i-th dimentional variance of Var(e k-1|q ) and σ2 . Therefore, we further prove Equation ( 13) for the type 2 using mathematical induction:\nBase case (k=1): The variance of the query vector Var(q) is zero, i.e. σ 2 (0,i) = 0, and σ2 i 0. Thus, Equation ( 13) holds for k = 1.\nInductive step: Assume that Equation ( 13) is true for k = n, where n is an arbitrary natural number. That is, we assume that:\na 2 i σ 2 (n-1,i) + σ2 i σ 2 (n-1,i) ⇔ (a 2 i -1)σ 2 (n-1,i) + σ2 i 0.(14)\nNow, we aim to show that Equation ( 13) is also true for k = n + 1:\nVar(e n+1|q ) -Var(e n|q ) =\nd i=0 σ 2 (n+1,i) - d i=0 σ 2 (n,i)(15)\n= d i=0 a 2 i σ 2 (n,i) + d i=0 σ2 i - d i=0 σ 2 (n,i)(16)\n= d i=0 a 2 i [a 2 i σ 2 (n-1,i) + σ2 i ] - d i=0 a 2 i σ 2 (n-1,i)(17)\n= d i=0 a 2 i [(a 2 i -1)σ 2 (n-1,i) + σ2 i ](18)\nUsing our inductive assumption in Equation ( 14) and a 2 i 0, it shows that Var(e n+1|q )-Var(e n|q ) 0 is true for k = n + 1. By mathematical induction, Equation ( 13) is proven to be true for all natural numbers n.\nAs a result, H(e k|q ) H(e k-1|q ) holds in three types of affine transformations utilized in this paper, the theorem is proved." 
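The variance argument above can also be checked numerically. The following small Monte-Carlo sketch simulates a relational path with orthogonal linear parts and i.i.d. Gaussian transformation noise, mirroring the affine form in Equation (9); the dimension, step count, and noise scale are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps, samples, noise_std = 8, 5, 20000, 0.1

def random_rotation(d):
    # orthogonal linear part, covering the translation/rotation cases of the proof
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

x = np.zeros((samples, d))                 # Var(q) = 0 at the query entity
for k in range(1, steps + 1):
    A, b = random_rotation(d), rng.normal(size=d)
    eps = rng.normal(scale=noise_std, size=(samples, d))
    x = x @ A.T + b + eps                  # e_k = A e_{k-1} + b + eps, as in Eq. (9)
    print(f"step {k}: total variance = {x.var(axis=0).sum():.4f}")
# the printed total variance grows with k, consistent with H(e_k|q) >= H(e_{k-1}|q)
```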
}, { "figure_ref": [], "heading": "B.2 Proof for Theorem 2", "publication_ref": [], "table_ref": [], "text": "Let P be the set of all shortest paths between the query entity q and a candidate entity e t in G. If a redundant path P is added to the path set P, then the transformation error entropy of the mean-aggregated vector e t|q would increase, i.e. H(F(P ∪ P )) > H(F(P)).\nProof. Given the path set P = {P 1 , P 2 , • • • , P m } in which m is the number of the shortest paths, we can compute the transformation error variance of F(P) as follows:\nVar(F(P)) = 1 m 2 m i=1 Var(P i ) + i =j Cov( i , j ) ,(19)\nwhere P i , i denote the embedding vector and the transformation error of the path P i , and Cov() is the covariance of two error variables. To simplify the following derivation, we adjust the single-path error variance in Equation 10: Var(P i ) = Var(e |q ) = Var(e -1|q ) + σ2 = Var(q) + σ2 = σ2 , (20) which means the variance of input errors won't be changed (neither enlarged nor shrunk) by embedding or GNN operations. Such that, given the path P i from q containing triples, the error variance of the path σ 2 Pi = σ2 > 0. Then, we can represent the aggregated variance Var(F(P)) as:\nVar(F(P)) = 1 m 2 m i=1 σ 2 Pi + i =j Cov( i , j ) = 1 m 2 m i=1 k i σ2 + C m m 2 = m σ2 + C m m 2 , (21\n)\nwhere C m = i =j Cov( i , j ) denotes the covariance value derived from the path overlapping. Now, we discuss the changes in the error variance after adding a new path P . Because P contains at least triples that exist in P, without loss of generality, we first assume that P internally contains an independent part with length α > 0, and the remainder of P with length is exactly an existing path p l ∈ P. Then, the changed error variance can be computed as follows:\nVar(F(P\n∪ { P })) = 1 (m + 1) 2 m i=1 σ 2 Pi + i =j Cov( i , j ) + σ 2 P + 2 m i=1 Cov( i , l )(22)\n≈ 1 (m + 1) 2 m i=1 σ 2 Pi + C m + σ 2 p + 2(Cov( P , l ) + 1 m C m )(23)\n= 1 (m + 1) 2 m σ2 + C m + ( + α)σ 2 + 2 σ2 + 2 m C m(24)\n= m + 3 + α (m + 1) 2 σ2 + m + 2 m(m + 1) 2 C m ,(25)\nNote that, in Equation 23, due to the path overlapping in P is agnostic, we assume the covariance of one path is the average value of the total covariance C m , i.e.\ni =l Cov( i , l ) ≈ 1 m C m .\nThen, we need to prove H(F(P ∪ P )) > H(F(P)), and we simplify the equation as follows:\nH(F(P ∪ P )) -H(F(P)) = m + 3 + α (m + 1) 2 σ2 + m + 2 m(m + 1) 2 C m - m σ2 + C m m 2 (26) = m 2 (m + 3 + α)σ 2 + m(m + 2)C m -m(m + 1) 2 σ2 -(m + 1) 2 C m m 2 (m + 1) 2 (27) = (m(m -1) + m 2 α)σ 2 -C m m 2 (m + 1) 2 . (28\n)\nMeanwhile, we know that the total covariance C m reaches its maximum value when all m paths fully overlap:\nC m = i =j Cov( i , j ) i =j σ2 = m(m -1) σ2 .(29)\nCombining Equation 28and Equation 29, we have:\nH(F(P ∪ P )) -H(F(P)) = m 2 ασ 2 + m(m -1) σ2 -C m m 2 (m + 1) 2 ασ 2 (m + 1) 2 > 0.(30)\nTherefore, H(F(P ∪ P )) > H(F(P)) holds, when the new path P can be divided into two parts, an independent part with length α > 0 and an existing path p l .\nThen, considering the general form of P having triples in P, we need to recompute the covariance term in Equation 22. Let P denotes the triples and Pα is the additional part with length α. Due to the assumption that the transformation errors in all triples are i.i.d, the covariance of P is equal to that of the path P l . Meanwhile, if the path part Pα is overlapped with some paths in P, the total covariance would further increase. 
Such that, we have:\nm i=1 Cov( i , P ) = ( m i=1 Cov( i , Pα ) + m i=1 Cov( i , P )) m i=1 Cov( i , l ),(31)\nTherefore, H(F(P ∪ P )) > H(F(P)) holds, when adding a new path P having triples in P. The theorem is proved." }, { "figure_ref": [], "heading": "B.3 Proof for Proposition 1", "publication_ref": [], "table_ref": [], "text": "Let Ĝ be the KG subgraph constructed by the triples of all shortest paths starting from the query entity q. Suppose a new triple t = (e 1 , r, e 2 ) whose head and tail entities exist in Ĝ and the relative distance γ e1 γ e2 . Adding this triple t into Ĝ leads to at least one new redundant path passing t.\nProof. Since the entity e 1 exists in Ĝ, there must be one shortest path P to a candidate entity e t passing e 1 .\nIf the path P also passes e 2 , we can divide P into three parts: P q,e2 , P e2,e1 , and P e1,et . It is easy to prove that the path sequentially connecting {P q,e2 , P e2,e1 , (e 1 , r, e 2 ), P e2,e1 , P e1,et } is a new redundant path.\nIf the path P does not pass e 2 , there must be another shortest path P to a candidate entity e t via e 2 .\nIn this case, we can divide P and P into four parts: P q,e1 , P e1,et , P q,e2 , and P e2,e t . There is a new path sequentially connecting {P q,e1 , (e 1 , r, e 2 ), P e2,e t }. Because the relative distance of e 1 is not smaller than that of e 2 , the length of P q,e1 is not shorter than that of P q,e2 . Such that, the length of the new path is longer than the shortest path. It is a new redundant path.\nOverall, a new redundant path exists after adding such a new triple, so the theorem is proved. " }, { "figure_ref": [], "heading": "C Model Details and Implementation", "publication_ref": [ "b0", "b32", "b21", "b8", "b2", "b27", "b4", "b40" ], "table_ref": [ "tab_5" ], "text": "GraPE is expected to absorb effective components in both KGE and GNN areas. To prove the feasibility of the entropy-guided percolation and the graph percolation process, we set the input query vector q as the all-ones vector. And each relative embedding vector e t|q is initialized as the zero vector. It will be our future work to explore more effective initialization settings. Then, we discuss the design space of the detailed functions in the GNN layer, including the transform operator ⊗, the aggregation function φ(•), and the relation parameters r.\nFor the transform operator ⊗, traditional KGE models provide multiple selections, such as vector addition in TransE [1], vector multiplication in Distmult [33] and vector rotation in RotatE [22]. Although there are more complicated scoring functions [9,3,28], we instantiate the above three efficient operators in GraPE. Compared with typical KGE models using static scoring functions, GraPE has a stronger representational capability due to the neural networks in GNN layers.\nThe aggregation function φ(•) is the key component of the GNN layer, and we specify it in GraPE to be SUM, MEAN, and PNA [5]. The original PNA aggregator jointly utilizes four types of aggregations which is computationally intensive. We simplify it by only using MEAN and STD aggregations. It is worth noting that we utilize the global degree value of each entity for averaging in MEAN and PNA, to avoid the problem of indistinguishable entities on local subgraphs.\nThe relation parameters r ∈ R d are the major trainable parameters in GraPE. 
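For concreteness, the transform operators and the simplified aggregation functions discussed above can be sketched as follows. This is illustrative code: the RotatE branch ignores the usual unit-modulus constraint on relations and assumes an even embedding dimension, and the PNA branch keeps only the mean and standard-deviation aggregators, as described in this appendix.

```python
import torch

def transform(e_head, rel, mode="distmult"):
    """Transform operator e_h (x) r from the design space above."""
    if mode == "transe":                      # vector addition
        return e_head + rel
    if mode == "distmult":                    # Hadamard product
        return e_head * rel
    if mode == "rotate":                      # rotation: pairs of dims as complex numbers
        h = torch.view_as_complex(e_head.reshape(*e_head.shape[:-1], -1, 2))
        r = torch.view_as_complex(rel.reshape(*rel.shape[:-1], -1, 2))
        return torch.view_as_real(h * r).flatten(-2)
    raise ValueError(mode)

def aggregate(msgs, tails, num_entities, degrees, mode="pna"):
    """SUM / MEAN / simplified PNA (mean + std) over incoming messages.
    `degrees` holds the global degree of each entity, used for averaging."""
    d = msgs.size(-1)
    sums = torch.zeros(num_entities, d).index_add_(0, tails, msgs)
    if mode == "sum":
        return sums
    mean = sums / degrees.clamp(min=1).unsqueeze(1)
    if mode == "mean":
        return mean
    sq = torch.zeros(num_entities, d).index_add_(0, tails, msgs ** 2)
    std = (sq / degrees.clamp(min=1).unsqueeze(1) - mean ** 2).clamp(min=0).sqrt()
    return torch.cat([mean, std], dim=-1)     # a following linear layer would map 2d -> d
```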
In order to capture the dependencies between the triple relation and the query relation r q , we follow NBFNet [41] About the model hyperparameters, we list the default hyperparameter configurations of GraPE on different datasets in Table 6. Note for inductive settings, we use the same hyperparameters for four sub-datasets. All the hyperparameters are chosen by the performance on the validation set." }, { "figure_ref": [], "heading": "D Model Computational Complexity", "publication_ref": [ "b22", "b40", "b37", "b22", "b40", "b37", "b37" ], "table_ref": [], "text": "Space Complexity: GraPE outperforms previous KG reasoning methods in terms of space complexity. Traditional KGE models using absolute entity embeddings require parameter storage costs as O |E|d + |R|d , GraPE and GNN-based methods using relative knowledge embeddings, such as GraIL [23], NBFNet [41] and RED-GNN [38] Time Complexity: GraPE has a lower time complexity in the inference period compared with the recent three GNN-based methods. Considering the inference process with one query, its time complexity O | T |d + |E|d) of these GNN-based models is mainly determined by the calculated triple amount | T |. Therefore, we compare these GNN-based methods by quantifying the calculation times of involved triples for a query. Given the standard L-hop neighborhood subgraph G q of the entity q, the -th hop contains triples as {(e h , r, e t )|e h , e t ∈ N -1 q ∪ N q } and we denote its amount as n . Then, the total triple amount in G q is denoted as N L q = L =1 n . A detailed comparison is as follows:\n• GraIL [23] extracts an enclosing subgraph from G eq and conducts the L-layer GNN propagation for each candidate entity e t . So the calculated triple amount is N GraIL = L|E|N L q,et > LN L q .\n• NBFNet [41] calculates once L-layer GNN propagation for a query on the whole graph. We can represent its calculated triple amount as N N BF N et = L(min(|T |, N L q )) < N GraIL . • RED-GNN [38] only calculates involved triples from the first hop to the -th hop in the -th GNN layer. Its calculated triples are fewer than the global propagation in NBFNet, i.e. N RED-GN N = N L q +\nL-1 =1\ni=1 n i < N N BF N et . • GraPE contains a L-1 layer graph percolation process and a 1-layer GNN propagation on the whole subgraph. Considering its calculated triples in each -th percolation layer are fewer than n , we have\nN GraP E < N L q + L-1 =1 n < N RED-GN N .\nTraining Complexity: The process of subgraph indexing and mini-batch training in GraPE is following the RED-GNN approach [38]. The -hop neighbors N q are constructed by the 1-hop neighbors of entities in the N -1 q . And we record the out-going triple ids of every entity in the form of a sparse matrix, whose reading complexity for one entity is O 1) and the space complexity is O | T |). Given the m entities in N -1 q and the average triple number D, we can collect the 1-hop neighbors and triples with the time complexity O mD).\nAs shown in Algorithm 1, for one query, we progressively load 1-hop neighbors and triples from the sparse matrix, and then index them for each GNN iteration. The space and time complexity of related operations can be linear with the number of entities or triples in each layer (The entity/triple amount in each layer is usually much smaller than the whole KG). Besides, GraPE is parallelizable and supports mini-batch training. 
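As a reference point for the parameter counts compared above, the sketch below shows one possible way to generate query-conditioned relation parameters with O(|R|d + d^2) trainable parameters, i.e. a shared linear map over the query relation instead of a per-relation linear layer. This is an assumed parameterization for illustration only and may differ from the exact construction used in GraPE.

```python
import torch
import torch.nn as nn

class QueryConditionedRelations(nn.Module):
    """Illustrative module: relation parameters r_{i|q} produced from the query
    relation with O(|R|d + d^2) parameters (names and exact form are assumptions)."""

    def __init__(self, num_relations, dim):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)  # O(|R|d) base relation vectors
        self.proj = nn.Linear(dim, dim)                  # O(d^2) map, shared by all relations

    def forward(self, query_relation):
        r_q = self.proj(self.rel_emb(query_relation))    # linear function of the query relation
        return self.rel_emb.weight * r_q                 # [|R|, d] query-conditioned parameters

# usage: rels = QueryConditionedRelations(num_relations=12, dim=32)(torch.tensor(3))
```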
Following RED-GNN, in each iteration we reindex all the nodes in the different query subgraphs and construct a single combined subgraph (two identical entities appearing in different query subgraphs are treated as distinct nodes). Therefore, given the batch size B, the space and time cost of the batch subgraph-indexing operations is approximately B times that of a single query." } ]
We study Graph Neural Network (GNN)-based embedding techniques for knowledge graph (KG) reasoning. For the first time, we link the path-redundancy issue in state-of-the-art KG reasoning models based on path encoding and message passing to the transformation error in model training, which yields new theoretical insights into KG reasoning as well as high efficacy in practice. On the theoretical side, we analyze the entropy of the transformation error along KG paths and show that query-specific redundant paths cause entropy increases. These findings guide us to keep the shortest paths and remove redundant paths for minimum-entropy message passing. To this end, on the practical side, we propose an efficient Graph Percolation Process motivated by the percolation model in fluid mechanics, and design a lightweight GNN-based KG reasoning framework called GraPE. GraPE outperforms previous state-of-the-art methods in both transductive and inductive reasoning tasks, while requiring fewer training parameters and less inference time.
River of No Return: Graph Percolation Embeddings for Efficient Knowledge Graph Reasoning
[ { "figure_caption": "Figure 1 :1Figure 1: (a) Example of KG reasoning in a Freebase subgraph. \"Marilyn Monroe\" is the answer to the query. (b) A multi-layer GNN propagation with augmented triples traverses all possible paths. (c) The graph percolation only calculates a few triples to encode the entity 'Marilyn' and 'Robert'.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Graphical illustration of the GraPE architecture.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "to generate relation embeddings {r i|q } via a linear function over the query relation but only use O |R|d + d 2 parameters. Furthermore, for two GNN layers in GraPE, we utilize different dimensions of relation parameters, i.e. d and d l .", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "have a lower complexity around O |R|d + d 2 , because |E| |R| in large-scale KGs. Furthermore, denote the parameter amount in each GNN layer as dΘ R , GNN-based methods require at least O LdΘ R ) parameters. GraPE with two GNN layers contains significantly fewer parameters whose amount is O dΘ R + d l Θ R ).", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Transductive reasoning results on the WN18RR and FB15k237 datasets. The boldface numbers indicate the best performance and the underlined means the second best.", "figure_data": "TypeMethodsMRRWN18RRFB15k237", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Inductive reasoning results on three series of datasets (evaluated with MRR).", "figure_data": "MethodsV1WN18RR V2 V3V4V1FB15k237 V2 V3V4V1NELL-995 V2 V3V4RuleN [15].668 .645 .368 .624 .363 .433 .439 .429 .615 .385 .381 .333NeuralLP [34].649 .635 .361 .628 .325 .389 .400 .396 .610 .361 .367 .261DRUM [19].666 .646 .380 .627 .333 .395 .402 .410 .628 .365 .375 .273GraIL [23].627 .625 .323 .553 .279 .276 .251 .227 .481 .297 .322 .262NBFNet [41].685 .659 .417 .610 .306 .344 .328 .312 .481 .379 .385 .203RED-GNN [38] .701 .690 .427 .651 .369 .469 .445 .442 .637 .419 .436 .363GraPE.742 .707 .472 .653 .415 .488 .481 .470 .777 .494 .450 .383(±).007 .003 .006 .003 .006 .007 .009 .006 .012 .004 .011 .011", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation studies of GraPE on four datasets (evaluated with MRR).", "figure_data": "ModelWN18RRWNv1FBv1NEv1origin0.5700.7420.4150.777(a) w all paths0.5700.7350.3960.769(b) w shortest paths0.5650.7410.4010.634(c) w long paths0.5340.5160.2900.657(d) w/o decoder0.5480.7250.3850.726(e) w/o rel feature0.5660.7400.3980.741(a) Different GraPE variantsDim(d) Layer(L) WN18RRWNv1FBv1NEv13240.5480.7390.4150.6963250.5610.7420.4120.7773260.5620.7410.4120.7596440.5500.7530.4160.7686450.5700.7500.4150.6836460.5670.7410.4040.778(b) Different layers and dimensionsAGGφTRA⊗WN18RRWNv1FBv1NEv1PNADistMult0.5700.7420.4150.777MEANDistMult0.5450.7310.4030.702SUMDistMult0.5420.7180.3760.743PNATransE0.5590.7400.3950.736PNARotatE0.5610.7390.4000.726(c) Different GNN functions", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Complexity comparison on training parameters, involved triples, and inference time. ×R denotes the ratio between the right method and GraPE. 
Bigger R means higher complexity.", "figure_data": "DatasetsGraPE RED-GNN×R NBFNet×RWN18RR45,65757,7191.389,6012.0FB15k237110,745 117,5001.13,103,10528.0WN-v112,79356,4394.488,7056.9FB-v140,15358,6361.52,377,15359.2NE-v113,59359,6394.490,9456.7(a) Training ParametersDatasetsGraPE RED-GNN×R NBFNet×RWN18RR10,35811,7641.1162,22315.7FB15k237300,002 823,3042.73,351,89811.2WN-v12393881.61,7427.3FB-v11,2731,4811.219,43315.3NE-v12,7206,4142.411,3464.2(b) Involved TriplesDatasetsGraPE RED-GNN×R NBFNet×RWN18RR24s33s1.4120s5FB15k237120s1,920s161,320s11WN-v1<1s1s12s2FB-v1<1s2s21s1NE-v1<1s3s31s1(c) Inference Time", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Hyperparameter configurations of GraPE on different datasets.", "figure_data": "Task SettingTransductive ReasoningInductive ReasoningHyperparameterFB15k-237 WN18RR FB15k-237 WN18RR NELL-995layer(L)35455Architectureencoder dim (d).6464323232decoder dim (d l ).88888Functiontransform aggregateDistMult PNADistMult PNADistMult PNADistMult PNADistMult PNAoptimizerAdamAdamAdamAdamAdamLearningbatch size learning rate16 5e-316 5e-316 5e-416 5e-416 5e-4epoch2020202020", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Statistics of transductive benchmark datasets.", "figure_data": "Dataset|E||R|#Train|F| #Validation #TestFB15k-237 [24] 14,541 237 272,11517,53520,466WN18RR [2]40,943 1186,8353,0343,134", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Statistics of inductive benchmark datasets.", "figure_data": "WN18RRFB15k-237NELL-995|R| |E||F||R| |E||F||R| |E||F|v1train 9 2,746 6,678 test 9 922 1,991183 2,000 5,226 146 1,500 2,40414 10,915 5,540 14 225 1,034v2train 10 6,954 18,968 203 3,000 12,085 88 2,564 10,109 test 10 2,923 4,863 176 2,000 5,092 79 4,937 5,521v3train 11 12,078 32,150 218 4,000 22,394 142 4,647 20,117 test 11 5,084 7,470 187 3,000 9,137 122 4,921 9,668v4train 9 3,861 9,842 test 9 7,208 15,157 204 3,500 14,554 61 3,294 8,520 222 5,000 33,916 77 2,092 9289", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" } ]
Kai Wang; Siqiang Luo; Dan Lin
[ { "authors": "Antoine Bordes; Alberto García-Durán; Jason Weston; Oksana Yakhnenko", "journal": "", "ref_id": "b0", "title": "Translating embeddings for modeling multi-relational data", "year": "2013" }, { "authors": "Antoine Bordes; Xavier Glorot; Jason Weston; Yoshua Bengio", "journal": "Machine Learning", "ref_id": "b1", "title": "A Semantic Matching Energy Function for Learning with Multi-relational Data", "year": "2014" }, { "authors": "Ines Chami; Adva Wolf; Da-Cheng Juan; Frederic Sala; Sujith Ravi; Christopher Ré", "journal": "", "ref_id": "b2", "title": "Lowdimensional hyperbolic knowledge graph embeddings", "year": "2020" }, { "authors": "Aaron Chan; Jiashu Xu; Boyuan Long; Soumya Sanyal; Tanishq Gupta; Xiang Ren", "journal": "", "ref_id": "b3", "title": "Salkg: Learning from knowledge graph explanations for commonsense reasoning", "year": "2021-12-06" }, { "authors": "Gabriele Corso; Luca Cavalleri; Dominique Beaini; Pietro Liò; Petar Velickovic", "journal": "", "ref_id": "b4", "title": "Principal neighbourhood aggregation for graph nets", "year": "2020-12-06" }, { "authors": "Henry Darcy", "journal": "Victor Dalmont", "ref_id": "b5", "title": "Les fontaines publiques de la ville de Dijon", "year": "1856" }, { "authors": "Rajarshi Das; Shehzaad Dhuliawala; Manzil Zaheer; Luke Vilnis; Ishan Durugkar; Akshay Krishnamurthy; Alex Smola; Andrew Mccallum", "journal": "", "ref_id": "b6", "title": "Go for a walk and arrive at the answer: Reasoning over knowledge bases with reinforcement learning", "year": "2017-12-08" }, { "authors": "Rajarshi Das; Arvind Neelakantan; David Belanger; Andrew Mccallum", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Chains of reasoning over entities, relations, and text using recurrent neural networks", "year": "2017" }, { "authors": "Tim Dettmers; Pasquale Minervini; Pontus Stenetorp; Sebastian Riedel", "journal": "", "ref_id": "b8", "title": "Convolutional 2d knowledge graph embeddings", "year": "2018" }, { "authors": "Barbara A Donald F Elger; Clayton T Lebret; John A Crowe; Roberson", "journal": "John Wiley & Sons", "ref_id": "b9", "title": "Engineering fluid mechanics", "year": "2020" }, { "authors": "John Finnemore; Joseph B Franzini", "journal": "McGraw-Hill Education", "ref_id": "b10", "title": "Fluid mechanics with engineering applications", "year": "2002" }, { "authors": "Shaoxiong Ji; Shirui Pan; Erik Cambria; Pekka Marttinen; Philip S Yu", "journal": "IEEE Trans. Neural Networks Learn. 
Syst", "ref_id": "b11", "title": "A survey on knowledge graphs: Representation, acquisition, and applications", "year": "2022" }, { "authors": "Yankai Lin; Zhiyuan Liu; Huan-Bo Luan; Maosong Sun; Siwei Rao; Song Liu", "journal": "The Association for Computational Linguistics", "ref_id": "b12", "title": "Modeling relation paths for representation learning of knowledge bases", "year": "2015" }, { "authors": "Shuwen Liu; Bernardo Cuenca Grau; Ian Horrocks; Egor V Kostylev", "journal": "", "ref_id": "b13", "title": "INDIGO: gnnbased inductive knowledge graph completion using pair-wise encoding", "year": "2021-12-06" }, { "authors": "Christian Meilicke; Manuel Fink; Yanjie Wang; Daniel Ruffinelli; Rainer Gemulla; Heiner Stuckenschmidt", "journal": "Springer", "ref_id": "b14", "title": "Fine-grained evaluation of rule-and embedding-based systems for knowledge graph completion", "year": "2018" }, { "authors": "Arvind Neelakantan; Benjamin Roth; Andrew Mccallum", "journal": "The Association for Computer Linguistics", "ref_id": "b15", "title": "Compositional vector space models for knowledge base completion", "year": "2015" }, { "authors": "Xuran Pan; Tianzhu Ye; Dongchen Han; Shiji Song; Gao Huang", "journal": "", "ref_id": "b16", "title": "Contrastive languageimage pre-training with knowledge graphs", "year": "2022" }, { "authors": "Meng Qu; Junkun Chen; A C Louis-Pascal; Yoshua Xhonneux; Jian Bengio; Tang", "journal": "", "ref_id": "b17", "title": "Rnnlogic: Learning logic rules for reasoning on knowledge graphs", "year": "2021" }, { "authors": "Ali Sadeghian; Mohammadreza Armandpour; Patrick Ding; Daisy Zhe Wang", "journal": "", "ref_id": "b18", "title": "DRUM: end-to-end differentiable rule mining on knowledge graphs", "year": "2019-12-08" }, { "authors": "A Joseph; Allen E Schetz; Fuhs", "journal": "John Wiley & Sons", "ref_id": "b19", "title": "Fundamentals of fluid mechanics", "year": "1999" }, { "authors": "Sejr Michael; Thomas N Schlichtkrull; Peter Kipf; Rianne Bloem; Van Den; Ivan Berg; Max Titov; Welling", "journal": "", "ref_id": "b20", "title": "Modeling relational data with graph convolutional networks", "year": "2018" }, { "authors": "Zhiqing Sun; Zhi-Hong Deng; Jian-Yun Nie; Jian Tang", "journal": "", "ref_id": "b21", "title": "Rotate: Knowledge graph embedding by relational rotation in complex space", "year": "2019" }, { "authors": "K Komal; Etienne G Teru; William L Denis; Hamilton", "journal": "PMLR", "ref_id": "b22", "title": "Inductive relation prediction by subgraph reasoning", "year": "2020-07" }, { "authors": "Kristina Toutanova; Danqi Chen", "journal": "", "ref_id": "b23", "title": "Observed versus latent features for knowledge base and text inference", "year": "2015" }, { "authors": "Shikhar Vashishth; Soumya Sanyal; Nitin Vikram; Partha P Talukdar", "journal": "", "ref_id": "b24", "title": "Composition-based multi-relational graph convolutional networks", "year": "2020" }, { "authors": "Hongwei Wang; Hongyu Ren; Jure Leskovec", "journal": "ACM", "ref_id": "b25", "title": "Relational message passing for knowledge graph completion", "year": "2021" }, { "authors": "Kai Wang; Yu Liu; Xiujuan Xu; Quan Z Sheng", "journal": "Computing", "ref_id": "b26", "title": "Enhancing knowledge graph embedding by composite neighbors for link prediction", "year": "2020" }, { "authors": "Kai Wang; Yu Liu; Dan Lin; Michael Sheng", "journal": "", "ref_id": "b27", "title": "Hyperbolic geometry is not necessary: Lightweight euclidean-based models for low-dimensional knowledge graph embeddings", 
"year": "2021-11-20" }, { "authors": "Kai Wang; Yu Liu; Qian Ma; Quan Z Sheng", "journal": "", "ref_id": "b28", "title": "Mulde: Multi-teacher knowledge distillation for low-dimensional knowledge graph embeddings", "year": "2021" }, { "authors": "Kai Wang; Yu Liu; Quan Z Sheng", "journal": "ACM", "ref_id": "b29", "title": "Swift and sure: Hardness-aware contrastive learning for low-dimensional knowledge graph embeddings", "year": "2022" }, { "authors": "Ruijie Wang; Zheng Li; Dachun Sun; Shengzhong Liu; Jinning Li; Bing Yin; Tarek F Abdelzaher", "journal": "", "ref_id": "b30", "title": "Learning to sample and aggregate: Few-shot reasoning over temporal knowledge graphs", "year": "2022" }, { "authors": "Wenhan Xiong; Thien Hoang; William Yang; Wang ", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Deeppath: A reinforcement learning method for knowledge graph reasoning", "year": "2017" }, { "authors": "Bishan Yang; Wen-Tau Yih; Xiaodong He; Jianfeng Gao; Li Deng", "journal": "", "ref_id": "b32", "title": "Embedding entities and relations for learning and inference in knowledge bases", "year": "2015" }, { "authors": "Fan Yang; Zhilin Yang; William W Cohen", "journal": "", "ref_id": "b33", "title": "Differentiable learning of logical rules for knowledge base reasoning", "year": "2017" }, { "authors": "Haotong Yang; Zhouchen Lin; Muhan Zhang", "journal": "", "ref_id": "b34", "title": "Rethinking knowledge graph evaluation under the open-world assumption", "year": "2022" }, { "authors": "Shuai Zhang; Yi Tay; Lina Yao; Qi Liu", "journal": "", "ref_id": "b35", "title": "Quaternion knowledge graph embeddings", "year": "2019-12-08" }, { "authors": "Wentao Zhang; Zeang Sheng; Ziqi Yin; Yuezihan Jiang; Yikuan Xia; Jun Gao; Zhi Yang; Bin Cui", "journal": "ACM", "ref_id": "b36", "title": "Model degradation hinders deep graph neural networks", "year": "2022" }, { "authors": "Yongqi Zhang; Quanming Yao", "journal": "ACM", "ref_id": "b37", "title": "Knowledge graph reasoning with relational digraph", "year": "2022" }, { "authors": "Yongqi Zhang; Zhanke Zhou; Quanming Yao; Xiaowen Chu; Bo Han", "journal": "", "ref_id": "b38", "title": "Learning adaptive propagation for knowledge graph reasoning", "year": "2022" }, { "authors": "Zhanqiu Zhang; Jie Wang; Jiajun Chen; Shuiwang Ji; Feng Wu", "journal": "", "ref_id": "b39", "title": "Cone: Cone embeddings for multi-hop reasoning over knowledge graphs", "year": "2021-12-06" }, { "authors": "Zhaocheng Zhu; Zuobai Zhang; A C Louis-Pascal; Jian Xhonneux; Tang", "journal": "", "ref_id": "b40", "title": "Neural bellmanford networks: A general graph neural network framework for link prediction", "year": "2021-12-06" }, { "authors": "Zhaocheng Zhu; Xinyu Yuan; Mikhail Galkin; Sophie Xhonneux; Ming Zhang; Maxime Gazeau; Jian Tang", "journal": "", "ref_id": "b41", "title": "A*net: A scalable path-based reasoning approach for knowledge graphs", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 349.01, 650.49, 78.22, 10.31 ], "formula_id": "formula_0", "formula_text": "|T + | = 2|T | + |E|." }, { "formula_coordinates": [ 3, 239.14, 207.85, 264.86, 22.31 ], "formula_id": "formula_1", "formula_text": "e t = 1 n (e h ,r,et)∈T + (e h ⊗ r),(1)" }, { "formula_coordinates": [ 3, 176.31, 317.12, 327.69, 22.83 ], "formula_id": "formula_2", "formula_text": "e t|q = F(P et ) = 1 n P ∈Pe t (q ⊗ r 1 ⊗ • • • ⊗ r |P | | (ei,ri,e i )∈P ),(2)" }, { "formula_coordinates": [ 3, 210.77, 421.72, 293.23, 13.77 ], "formula_id": "formula_3", "formula_text": "e t|q = ϕ W φ e -1 i|q ⊗ r | (ei,r,et)∈Gq , e -1 t|q ,(3)" }, { "formula_coordinates": [ 5, 186.8, 264.68, 317.2, 20.56 ], "formula_id": "formula_4", "formula_text": "∆h (e 1 , e 2 ) = { max(γ e2 -γ e1 , 0) < |P | min(γ e2 -γ e1 + 1, 1) = |P |(4)" }, { "formula_coordinates": [ 5, 215.5, 301.38, 288.5, 16.12 ], "formula_id": "formula_5", "formula_text": "P ∈Pe t ∆h |P | • • • ∆h 1 (q ⊗ r 1 ⊗ • • • ⊗ r |P | )| (ei,ri,e i )∈P .(5)" }, { "formula_coordinates": [ 5, 186.9, 476.71, 317.1, 12.69 ], "formula_id": "formula_6", "formula_text": "E q = N -1 q ∪ N q , T q = {(e 1 , r, e 2 )|e 1 ∈ N -1 q , e 2 ∈ E q } (6)" }, { "formula_coordinates": [ 5, 309.78, 562.85, 90.05, 21.52 ], "formula_id": "formula_7", "formula_text": "E q = N -1 q ∪ N q ; 5:" }, { "formula_coordinates": [ 5, 113.78, 606.79, 182.26, 13.43 ], "formula_id": "formula_8", "formula_text": "e t|q = δ W φ e -1 i|q ⊗ r i|q | (e i ,r i ,e t )∈T q ,(7)" }, { "formula_coordinates": [ 6, 123.27, 255.06, 149.31, 9.99 ], "formula_id": "formula_9", "formula_text": "êt|q = δ W h2 • δ W h1 • [e t|q : r q ]" }, { "formula_coordinates": [ 14, 180.99, 236.47, 222.11, 23.55 ], "formula_id": "formula_11", "formula_text": "r q G q ⊆ G" }, { "formula_coordinates": [ 14, 210.88, 605.84, 293.12, 9.98 ], "formula_id": "formula_12", "formula_text": "e k|q = e k-1|q ⊗ r k + = A k e k-1|q + b k + ,(9)" }, { "formula_coordinates": [ 14, 174.38, 678.66, 329.62, 12.69 ], "formula_id": "formula_13", "formula_text": "Var(e k|q ) = Var(A k e k-1|q + b + ) = A k Var(e k-1|q )A T k + σ2 .(10)" }, { "formula_coordinates": [ 15, 264.2, 94.78, 239.8, 26.71 ], "formula_id": "formula_14", "formula_text": "⇔ A k Var(e k-1|q )A T k + σ2 Var(e k-1|q ).(11)" }, { "formula_coordinates": [ 15, 416.77, 237.67, 75.13, 12.55 ], "formula_id": "formula_16", "formula_text": "A k Var(e k-1|q )A T k" }, { "formula_coordinates": [ 15, 222.47, 287.89, 84.5, 12.94 ], "formula_id": "formula_17", "formula_text": ")A T ) i = a 2 i σ 2 (k-1,i) ." 
}, { "formula_coordinates": [ 15, 129.44, 321.59, 374.56, 30.32 ], "formula_id": "formula_18", "formula_text": "d i=0 σ 2 (k,i) = d i=0 a 2 i σ 2 (k-1,i) + d i=0 σ2 i d i=0 σ 2 (k-1,i) ⇐ a 2 i σ 2 (k-1,i) + σ2 i σ 2 (k-1,i) ,(13)" }, { "formula_coordinates": [ 15, 187.12, 454.21, 316.88, 12.94 ], "formula_id": "formula_19", "formula_text": "a 2 i σ 2 (n-1,i) + σ2 i σ 2 (n-1,i) ⇔ (a 2 i -1)σ 2 (n-1,i) + σ2 i 0.(14)" }, { "formula_coordinates": [ 15, 313.32, 501.95, 190.68, 30.32 ], "formula_id": "formula_20", "formula_text": "d i=0 σ 2 (n+1,i) - d i=0 σ 2 (n,i)(15)" }, { "formula_coordinates": [ 15, 202.77, 536.98, 301.23, 30.32 ], "formula_id": "formula_21", "formula_text": "= d i=0 a 2 i σ 2 (n,i) + d i=0 σ2 i - d i=0 σ 2 (n,i)(16)" }, { "formula_coordinates": [ 15, 202.77, 572, 301.23, 30.32 ], "formula_id": "formula_22", "formula_text": "= d i=0 a 2 i [a 2 i σ 2 (n-1,i) + σ2 i ] - d i=0 a 2 i σ 2 (n-1,i)(17)" }, { "formula_coordinates": [ 15, 202.77, 607.03, 301.23, 30.32 ], "formula_id": "formula_23", "formula_text": "= d i=0 a 2 i [(a 2 i -1)σ 2 (n-1,i) + σ2 i ](18)" }, { "formula_coordinates": [ 16, 201.54, 167.66, 302.46, 30.55 ], "formula_id": "formula_24", "formula_text": "Var(F(P)) = 1 m 2 m i=1 Var(P i ) + i =j Cov( i , j ) ,(19)" }, { "formula_coordinates": [ 16, 122.86, 305.22, 376.99, 30.55 ], "formula_id": "formula_25", "formula_text": "Var(F(P)) = 1 m 2 m i=1 σ 2 Pi + i =j Cov( i , j ) = 1 m 2 m i=1 k i σ2 + C m m 2 = m σ2 + C m m 2 , (21" }, { "formula_coordinates": [ 16, 499.85, 315.95, 4.15, 8.64 ], "formula_id": "formula_26", "formula_text": ")" }, { "formula_coordinates": [ 16, 165.37, 410.13, 338.63, 30.55 ], "formula_id": "formula_27", "formula_text": "∪ { P })) = 1 (m + 1) 2 m i=1 σ 2 Pi + i =j Cov( i , j ) + σ 2 P + 2 m i=1 Cov( i , l )(22)" }, { "formula_coordinates": [ 16, 128.45, 444.9, 375.55, 30.32 ], "formula_id": "formula_28", "formula_text": "≈ 1 (m + 1) 2 m i=1 σ 2 Pi + C m + σ 2 p + 2(Cov( P , l ) + 1 m C m )(23)" }, { "formula_coordinates": [ 16, 128.45, 478.47, 375.55, 22.31 ], "formula_id": "formula_29", "formula_text": "= 1 (m + 1) 2 m σ2 + C m + ( + α)σ 2 + 2 σ2 + 2 m C m(24)" }, { "formula_coordinates": [ 16, 128.45, 505.44, 375.55, 22.31 ], "formula_id": "formula_30", "formula_text": "= m + 3 + α (m + 1) 2 σ2 + m + 2 m(m + 1) 2 C m ,(25)" }, { "formula_coordinates": [ 16, 361.61, 546.89, 96.67, 13.47 ], "formula_id": "formula_31", "formula_text": "i =l Cov( i , l ) ≈ 1 m C m ." }, { "formula_coordinates": [ 16, 128.03, 582.52, 375.97, 78.63 ], "formula_id": "formula_32", "formula_text": "H(F(P ∪ P )) -H(F(P)) = m + 3 + α (m + 1) 2 σ2 + m + 2 m(m + 1) 2 C m - m σ2 + C m m 2 (26) = m 2 (m + 3 + α)σ 2 + m(m + 2)C m -m(m + 1) 2 σ2 -(m + 1) 2 C m m 2 (m + 1) 2 (27) = (m(m -1) + m 2 α)σ 2 -C m m 2 (m + 1) 2 . 
(28" }, { "formula_coordinates": [ 16, 499.85, 645.89, 4.15, 8.64 ], "formula_id": "formula_33", "formula_text": ")" }, { "formula_coordinates": [ 16, 201.02, 698.86, 302.98, 20.14 ], "formula_id": "formula_34", "formula_text": "C m = i =j Cov( i , j ) i =j σ2 = m(m -1) σ2 .(29)" }, { "formula_coordinates": [ 17, 141.62, 91.93, 362.38, 23.88 ], "formula_id": "formula_35", "formula_text": "H(F(P ∪ P )) -H(F(P)) = m 2 ασ 2 + m(m -1) σ2 -C m m 2 (m + 1) 2 ασ 2 (m + 1) 2 > 0.(30)" }, { "formula_coordinates": [ 17, 155.25, 225.78, 348.75, 30.32 ], "formula_id": "formula_36", "formula_text": "m i=1 Cov( i , P ) = ( m i=1 Cov( i , Pα ) + m i=1 Cov( i , P )) m i=1 Cov( i , l ),(31)" }, { "formula_coordinates": [ 19, 151.17, 337.33, 183.8, 14.11 ], "formula_id": "formula_37", "formula_text": "N GraP E < N L q + L-1 =1 n < N RED-GN N ." } ]
10.18653/v1/P17-1074
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b14", "b0", "b4", "b10", "b17", "b12", "b7", "b8" ], "table_ref": [], "text": "Writing assistance is a widely used application of natural language processing (NLP) that helps millions of people. In addition to common features like grammatical error correction (Ng et al., 2014;Bryant et al., 2017), paraphrasing (Fader et al., 2013;Lin et al., 2014) and automatic essay scoring (Song et al., 2020), providing word suggestions is a desired feature to enhance the overall quality of the writing. As illustrated in figure 1, the word \"intimate\" in the first sentence should be replaced with \"close\", as \"intimate\" is not suitable for describing relationships between colleagues.\nFigure 1: Examples for Smart Word Suggestions (SWS). All samples consist of sentences annotated with multiple improvable targets, each of which is further annotated with multiple substitution suggestions. To save space, the sentences are simplified, and only one target and one suggestion are presented per case. The suggestions can be divided into two types: refine-usage and diversify-expression, which are described in section 3.1\nIn this paper, we introduce the task and benchmarks of Smart Word Suggestion (SWS). Figure 2 shows the definition of SWS. The goal of SWS is to identify potential improvable targets in the form of words or phrases within a given context, and provide substitution suggestions for every improvable target. These suggestions may include correcting improper word usage, ensuring that language usage conforms to standard written conventions, enhancing expression, and so on. Specifically, we categorize these suggestions into two types: refine-usage and diversify-expression.\nLexical Substitution (LS) (McCarthy and Navigli, 2007;Kremer et al., 2014;Lee et al., 2021) the most relevant research benchmark in the field. LS systems aim to provide substitute words that maintain the original meaning of a given word within a sentence. However, in practical situations, it is important to recognize words that can be improved or replaced. Identifying these targets is crucial for practical use and a necessary step for making accurate substitution suggestions. In order to reproduce the real-world scenarios, we design SWS as an end-to-end process that takes a sentence as input and provides substitution suggestions for all improvable targets as output.\nThe SWS benchmark includes human-labeled data for testing, a large distantly supervised dataset for training, and a corresponding framework for evaluation. For testing, we collect 1,000 segments from English learners' essays, and ask ten annotators to identify improvable targets and provide substitution suggestions. The high level of agreement among the annotators confirms the quality of the annotation. For weakly supervised training, we compile a large amount of distantly supervised data by using a synonym thesaurus to randomly substitute words in corpus. We also provide settings for both end-to-end evaluation and sub-task evaluation.\nTo investigate the challenges, we implemented seven baselines, including knowledge-driven methods, state-of-the-art lexical substitution methods, and end-to-end approaches for SWS. The experimental results show that the performance of the existing lexical substitution methods decreases significantly when applied to SWS. Additionally, the end-to-end methods we designed struggle to identify and improve targeted words or phrases. 
Detailed analysis and discussions on the results suggest several areas for further research.\nTo conclude, our contributions are as follows:\n• Introducing the SWS task for writing assis-tance, and providing a benchmark with highquality human-labeled testing data and large distantly supervised training data.\n• Developing the evaluation framework for SWS, and conducting extensive evaluations on the provided baselines. • Identifying several directions for further research on SWS through analysis." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "We begin by comparing SWS with three related tasks, highlighting the unique value of our work." }, { "figure_ref": [], "heading": "Lexical Substitution", "publication_ref": [ "b12", "b7", "b8" ], "table_ref": [], "text": "Lexical substitution (LS) (McCarthy and Navigli, 2007;Kremer et al., 2014;Lee et al., 2021) is the task of providing substitute words for a specific word in a sentence. There are some major distinctions between the SWS and LS.\n(1) In LS, the target word is already provided, while in SWS, the system needs to detect the improvable targets first.\n(2) LS focuses on finding synonyms that maintain the meaning of both the word and the sentence. On the other hand, SWS is designed for writing assistance scenarios, so the substitutions aim to improve the writing of the sentences. LS focuses on word sense disambiguation in the context, which doesn't require any \"improvement\".\nHere is an example in the LS07 dataset: This is clearly a terrible and shameful blot on UN peacekeeping. One of the substitutions is \"terrible\" → \"very bad\". This substitution doesn't meet the SWS's requirement as the use of \"very bad\" is less accurate, and the substitution worsens writing.\n(3) LS uses lemmatized annotations for the target word and substitutions, while SWS extracts annotations directly from the sentence and requires that the substitutions fit grammatically within the sentence to evaluate the model's end-to-end performance." }, { "figure_ref": [], "heading": "Grammatical Error Correction", "publication_ref": [ "b14", "b0", "b14", "b5" ], "table_ref": [], "text": "Grammatical error correction (GEC) (Ng et al., 2014;Bryant et al., 2017) also shares some similarities with SWS. Ng et al. (2014) pointed that more than 85% of the corrections in GEC are word-level and that these corrections improve users' writing as well. However, the substitution suggestions provided by SWS do not include suggestions for correcting grammatical errors. Instead, SWS focuses on identifying and improving word or phrase usage. It is worth noting that the source sentences in the SWS test set are first processed by a GEC model (Ge et al., 2018) and then further checked by human annotators to ensure no grammatical errors in the inputs. In the writing assistant, SWS is the next step following GEC." }, { "figure_ref": [], "heading": "Paraphrase Generation", "publication_ref": [ "b4", "b10", "b6", "b2", "b16" ], "table_ref": [], "text": "Paraphrase generation (PG) (Fader et al., 2013;Lin et al., 2014) aims to alter the form or structure of a given sentence while preserving its semantic meaning. PG has a variety of potential applications, such as data augmentation (Iyyer et al., 2018), query rewriting (Dong et al., 2017), and duplicate question detection (Shah et al., 2018). 
PG is different from SWS in two main ways: (1) SWS places a greater emphasis on improving writing by identifying and correcting inappropriate word usage or providing diverse expression options. (2) SWS focuses on substitution suggestions of words or phrases, and evaluations are based on word level. In contrast, PG directly measures performance at the sentence level." }, { "figure_ref": [], "heading": "Data Collection", "publication_ref": [ "b12", "b7", "b3", "b18" ], "table_ref": [], "text": "This work is to construct a Smart Word Suggestion benchmark that accurately represents writing assistance scenarios. For evaluation, we collect sentences from English learners and use human annotations in accordance with McCarthy and Navigli (2007) and Kremer et al. (2014). For training, we compile a large-scale, distantly supervised dataset from Wikipedia (Erxleben et al., 2014;Vrandečić and Krötzsch, 2014)." }, { "figure_ref": [], "heading": "Human-Annotated Data Collection", "publication_ref": [ "b5", "b12", "b7" ], "table_ref": [], "text": "Human-annotated data is obtained through a threestage process: (1) cleaning corpus data from En-glish learners' essays, (2) labeling improvable targets and corresponding substitution suggestions, and (3) merging annotations and filtering out lowconfidence annotations.\nStage 1: Corpus Cleaning. We collect essays written by undergraduate English learners via an online writing assistance platform1 . We divide them into individual sentences. To avoid annotators making corrections beyond SWS, the sentences are refined with following actions: (1) removing sentences that have unclear meanings. (2) applying a correction model (Ge et al., 2018) to correct grammatical errors. (3) asking human reviewers to double-check for any remaining grammatical errors. Additionally, we filter out short sentences as they may not provide enough context or contain sufficient words to improve. We thoroughly reviewed all sentences to ensure that they do not contain any information that could identify individuals or any offensive content.\nStage 2: Human Annotation. Ten native English-speaking undergraduate students majoring in linguistics were recruited as annotators to independently annotate each sentence. To ensure annotation quality, all annotators were required to pass test tasks before participating in the annotation.\nThe annotators carried out the annotations in three steps: (1) identifying words or phrases in the sentence that could be improved, (2) offering one or more suggestions for each identified target, and (3) assigning a type of improvement after the substitution.\nSpecifically, we define the substitution suggestions as two types. (1) Refine-usage refers to instances where the use of a specific word or phrase is inappropriate in the current context, such as when it has a vague meaning, is a non-native expression, or is an incorrect usage of English. For instance, in the second sentence shown in figure 1, the word \"possible\" is intended to convey the meaning of \"having the possibility\", and is not appropriate in the context of the sentence. The annotators replaced \"possible\" with \"likely.\" These suggestions are designed to help English learners understand the differences in word usage in specific contexts and to enable them to write in a way that is more consistent with native speakers. (2) Diversify-expression refers to instances where this word or phrase could be substituted with other words or phrases. 
These suggestions aim to help users use a more diverse range of expressions. The last case in figure 1 is a corresponding example.\nThe annotators were required to provide at least three suggestions for each sentence. For the entire dataset of 1000 sentences, each annotator was required to provide at least 1500 refine-usage type suggestions. The detailed annotation instruction is in appendix A.\nStage 3: Merging and Filtering. Previous lexical substitution tasks (McCarthy and Navigli, 2007;Kremer et al., 2014) merged all the annotators' results into a key-value dictionary, where the value indicates the number of annotators who provided this substitution suggestion. We merged the labeling results of 10 annotators in a similar way. Take the merging of two annotators' annotations as an example. One is {happy: glad/merry, possible: likely}, and the other is {help: aid, possible: likely/probable}. The result after merging would be:\n{happy: {glad: 1, merry: 1}, possible: {likely: 2, probable: 1}, help: {aid: 1}} where happy, possible, help are improvable targets, and the sub-level dictionaries are the substitution suggestions after merging. We also collect the type of refine-usage or diversify-expression for each improvable target by taking the majority of the type labeling.\nIn order to reduce subjective bias among annotators, we discarded all improvable targets that were only annotated by one annotator. Finally, the dataset was split into a validation set of 200 sentences and a test set of 800 sentences." }, { "figure_ref": [], "heading": "Distantly Supervised Data Collection", "publication_ref": [ "b15" ], "table_ref": [], "text": "We collect a large amount of distantly supervised data for weakly supervised training by using a synonym thesaurus to randomly substitute words in a corpus. The source corpus contains 3.7 million sentences from Wikipedia2 . The synonym thesaurus we use is the intersection of PPDB (Pavlick et al., 2015) and Merriam-Webster thesaurus3 . The sentences are processed in 3 steps: (1) Selecting all the words or phrases in the synonym thesaurus, and treating them as improvable targets. (2) Using a tagger to find the part of speech of the improvable targets. (3) Randomly substituting the improv-able targets with one synonyms of the same part of speech.\nNote that the random substitution with the synonym dictionary may result in a more inappropriate word or phrase usage than the original text. Therefore, we treat the generated substitutions as the improvable targets, and the original targets as substitution suggestions.\nIn contrast to the human-annotated dataset, the distantly supervised dataset only includes one suggestion for each improvable target and does not have the annotation of suggestion type. The code for generating distantly supervised datasets will be released for further studies. The distantly supervised dataset SWS DS contains over 12.7 million suggestions in 3.7 million sentences. 2.67% are multi-word phrases, and 0.3% of the suggestions are multi-word." }, { "figure_ref": [], "heading": "Data Statistics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Inner Annotator Agreements", "publication_ref": [ "b12", "b7", "b12", "b7" ], "table_ref": [], "text": "Previous studies on lexical substitution (McCarthy and Navigli, 2007;Kremer et al., 2014) evaluated the quality of the dataset with inter-annotator agreement (IAA). 
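Before turning to the agreement scores, here is a small sketch of the Stage 3 merging and filtering described above. It is an illustrative helper, not the released preprocessing code; the minimum-support threshold implements the rule of discarding targets annotated by only one annotator.

```python
from collections import defaultdict

def merge_annotations(annotator_dicts, min_support=2):
    """Merge per-annotator {target: [suggestions]} dicts into {target: {suggestion: count}},
    keeping only targets identified by at least `min_support` annotators."""
    counts = defaultdict(lambda: defaultdict(int))
    support = defaultdict(int)
    for ann in annotator_dicts:
        for target, suggestions in ann.items():
            support[target] += 1
            for s in suggestions:
                counts[target][s] += 1
    return {t: dict(c) for t, c in counts.items() if support[t] >= min_support}

a1 = {"happy": ["glad", "merry"], "possible": ["likely"]}
a2 = {"help": ["aid"], "possible": ["likely", "probable"]}
print(merge_annotations([a1, a2]))   # {'possible': {'likely': 2, 'probable': 1}}
```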
We adopt this approach and calculate pairwise inter-annotator agreement (PA) to assess the quality of the dataset.\nPA det measures the consistency of identifying improvable targets:\nPA det = 1 |P | (i,j)∈P PA det ij PA det ij = N k=1 1 N |s i k ∩ s j k | |s i k ∪ s j k |\nwhere P is the set of annotator pairs. We have ten annotators, so |P | = C 2 10 = 45. N is the number of all the sentences, and s i k , s j k are the improvable target sets of sentence k identified by annotator i and j, respectively.\nPA sug measures the consistency of substitution suggestions of a same improvable target:\nPA sug = 1 |P | (i,j)∈P PA sug ij PA sug ij = M ij l=1 1 M ij |t i l ∩ t j l | |t i l ∪ t j l |\nwhere M ij is the size of the intersection of the improvable target sets identified by annotator i and j. t i l , t j l are the suggestions for target l given by annotator i and j, respectively.\nIn the SWS benchmark, the PA det and the PA sug are 23.2% and 35.4%, respectively. Our PA sug is significantly higher compared to previous LS datasets, 27.7% of SemEval (McCarthy and Navigli, 2007) and 19.3% of COINCO (Kremer et al., 2014), thereby confirming the annotation quality." }, { "figure_ref": [], "heading": "Data Quality of the Distantly Supervised Dataset", "publication_ref": [], "table_ref": [], "text": "According to our statistics, 71.8% of the substitutions in the test set appear in the training set, and each substitution in the test set appears in the training set 10.4 times on average. Those data show the substitutions in the training set covers most of the substitutions in the test set, which verify the synthetic method is close to real-world scenarios." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the evaluation settings and metrics for SWS, including both the end-to-end evaluation and the sub-task evaluation.\nFor the end-to-end evaluation and the improvable target detection sub-task, we introduce precision, recall, and F 0.5 as metrics. For the substitution suggestion sub-task, we utilize accuracy to evaluate the quality of the predicted substitutions. Examples of calculating the metrics can be found in appendix B." }, { "figure_ref": [], "heading": "End-to-end Evaluation", "publication_ref": [], "table_ref": [], "text": "The end-to-end evaluation is computed based on each substitution suggestion. A true prediction is counted if and only if both the detected improvable target is in the annotated improvable target set and the suggested substitution is in the annotated substitutions of the target:\nTP e2e = N k=1 M k l=1 1 if s kl ∈ S k else 0\nwhere N is the number of all the sentences, M k is the number of targets in the sentence k, S k is the set of annotated suggestions of sentence k, and s kl is the l-th predicted suggestion of sentence k. The precision (P e2e ) and recall (R e2e ) for end-to-end evaluation are calculated as follows:\nP e2e = TP e2e N P , R e2e = TP e2e N G\nwhere N P and N G are the number of predicted suggestions and annotated suggestions, respectively.\nIn the writing assistance scenario, precision is more important than recall, so we calculate F e2e 0.5 as the overall metric.\nF e2e 0.5 = 1.25 • P e2e • R e2e 0.25 • P e2e + R e2e" }, { "figure_ref": [], "heading": "Sub-Task Evaluation", "publication_ref": [], "table_ref": [], "text": "Improvable Target Detection. In this task, model needs to find all the annotated improvable targets in the sentence. 
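The pairwise agreement measures defined above can be computed as in the short sketch below; the nested per-annotator, per-sentence data layout is an assumption made for illustration rather than the released data format.

```python
from itertools import combinations

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def pa_det(targets):
    """targets[i][k]: set of improvable targets annotator i marked in sentence k."""
    pairs = list(combinations(range(len(targets)), 2))   # |P| = C(10, 2) = 45 annotator pairs
    score = 0.0
    for i, j in pairs:
        n = len(targets[i])
        score += sum(jaccard(targets[i][k], targets[j][k]) for k in range(n)) / n
    return score / len(pairs)

def pa_sug(suggestions):
    """suggestions[i][k]: dict {target: set of substitutions} from annotator i for sentence k."""
    pairs = list(combinations(range(len(suggestions)), 2))
    score = 0.0
    for i, j in pairs:
        overlaps = [jaccard(s_i[t], s_j[t])
                    for s_i, s_j in zip(suggestions[i], suggestions[j])
                    for t in set(s_i) & set(s_j)]        # shared improvable targets only (M_ij)
        score += sum(overlaps) / len(overlaps) if overlaps else 0.0
    return score / len(pairs)
```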
The precision (P det ) and recall (R det ) for detection are calculated as follows:\nP det = N k=1 |s k ∩ s k | N k=1 |s k | , R det = N k=1 |s k ∩ s k | N k=1 |s k |\nwhere s k and s k are the annotated improvable target set and predicted improvable target set for sentence k, respectively. Same with end-to-end evaluation, we compute F det 0.5 to assess the performance for detection of improvable targets.\nF det 0.5 = 1.25 • P det • R det 0.25 • P det + R det\nSubstitution Suggestion. In this task, model needs to give suggestions for each improvable target. We calculate accuracy of the suggestions on those correctly detected targets:\nAcc sug = 1 N N k=1 1 M k M k l=1 1 if t l ∈ T l else 0\nwhere T l is the annotated recommendation set of target l, t l is the predicted recommendation for target l, and M k is the total number of correctly detected targets in sentence k." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b19", "b13", "b1", "b9" ], "table_ref": [], "text": "We test 7 methods on SWS. The methods could be divided into three groups: (1) Adopting external knowledge to give suggestions. ( 2) State-of-the-art lexical substitution methods. (3) End-to-end SWS baselines. We also list the human performance for reference.\nExternal Knowledge Methods. Here are two methods that use external knowledge to give suggestions. (1) Rule-based synonyms replacement as how we construct the distantly supervised data. We adopt a greedy replacement strategy, where all entries are replaced. (2) ChatGPT 4 , a large language model trained on massive data and further fine-tuned with human feedback. We ask ChatGPT to directly generate the suggestions in every giving sentence. The prompt and details for utilizing ChatGPT can be found in appendix C. Lexical Substitution Methods. Two state-ofthe-art lexical substitution methods are tested on SWS, i.e. BERT sp,sv (Zhou et al., 2019) and LexSubCon (Michalopoulos et al., 2022). We use the open-sourced code of LexSubCon and reimplement BERT sp,sv . We let the model give a substitution for each word, and if the substitution is different with the original word, the word is regarded as a detected improvable target.\n4 https://openai.com/blog/chatgpt/ End-to-end Baselines. In the end-to-end framework, we treat SWS as three training paradigms, and provide one baseline for each. (1) Masked language modeling (MLM): We use BERT-baseuncased (Devlin et al., 2019) with an MLM head as the baseline. ( 2) Sequence-to-sequence generation: We use BART-base (Lewis et al., 2020) as the baseline. (3) Token-level rewriting: We use CMLM (Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer, 2019) as the baseline. The distantly supervised dataset is utilized to train the end-to-end baselines. For the improvable targets, the model is expected to learn the suggestions. Otherwise, the model is expected to keep the original words." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Table 3 shows the experimental results of the baselines, from which we have the following observations:\n(1) The rule-based approach is similar to the process of creating distantly supervised data. Both the rule-based method and end-to-end baselines, which are trained using distantly supervised data, have high P det and low R det values. 
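The end-to-end and sub-task metrics defined in the evaluation section can be computed as in the following sketch. It assumes one suggestion per detected target, skips sentences without correctly detected targets when averaging Acc_sug, and omits other edge-case handling.

```python
def f_beta(p, r, beta=0.5):
    b2 = beta * beta
    return (1 + b2) * p * r / (b2 * p + r) if (p + r) > 0 else 0.0

def sws_metrics(pred, gold):
    """pred[k]: {target: suggestion}; gold[k]: {target: set of annotated suggestions}."""
    tp_e2e = n_pred = n_gold = n_det_tp = n_det_gold = 0
    acc_per_sentence = []
    for p_k, g_k in zip(pred, gold):
        n_pred += len(p_k)                       # one suggestion per predicted target
        n_gold += sum(len(s) for s in g_k.values())
        n_det_gold += len(g_k)
        detected = [t for t in p_k if t in g_k]  # correctly detected improvable targets
        n_det_tp += len(detected)
        hits = [p_k[t] in g_k[t] for t in detected]
        tp_e2e += sum(hits)
        if detected:                             # sentences without correct detections are skipped
            acc_per_sentence.append(sum(hits) / len(detected))
    p_e2e, r_e2e = tp_e2e / n_pred, tp_e2e / n_gold
    p_det, r_det = n_det_tp / n_pred, n_det_tp / n_det_gold
    return {"P_e2e": p_e2e, "R_e2e": r_e2e, "F0.5_e2e": f_beta(p_e2e, r_e2e),
            "P_det": p_det, "R_det": r_det, "F0.5_det": f_beta(p_det, r_det),
            "Acc_sug": sum(acc_per_sentence) / max(len(acc_per_sentence), 1)}
```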
This suggests that the synonym dictionary used in this work has high quality but low coverage.\n(2) Compared with the rule-based method, the end-to-end models trained on distantly supervised dataset show a decline in performance for the improvable target detection, but an increase in performance for substitution suggestion. The improvable targets of the distantly supervised data do not accurately reflect the words or phrases that need improvement, resulting in difficulty in effectively training the models in detecting. However, the substitution suggestions in the distantly supervised data are derived from original words in Wikipedia, enabling the models to learn a relatively appropriate word usage in context.\n(3) The results of the CMLM model show a decrease in performance compared to the pre-trained models, namely BERT and BART, particularly in terms of substitution suggestions. The pre-training of semantic knowledge may contribute to the superior performance of the pre-trained models for this task.\n(4) There is a notable decrease in SWS for LS methods. Moreover, different LS methods have significant differences in detecting improvable targets. Only 2.1% of the words in the input sentence are identified as improvable targets by BERT sp,sv , while LexSubCon detects 32.4%. The current LS methods are not compatible with the SWS task.\n(5) The results from ChatGPT are comparable with the end-to-end baselines trained on 3.7 million sentences, but it is still has room for improvement.\n(6) Human performance is significantly better than baselines. We believe there is a lot of room for the baselines to improve." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "We analyze the experimental results with two questions: (1) Does the model have the capability to accurately identify words that require improvement, or does it simply make random guesses? (2) Does the model have the ability to provide multiple useful suggestions for each target word?" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Detection Analysis", "publication_ref": [ "b15" ], "table_ref": [ "tab_6", "tab_8" ], "text": "Voting Index and Weighted Accuracy. After merging the annotations, we determine the voting index for each improvable target, i.e. the number of annotators who identified the word or phrase. The voting index reflects the necessary level of replacement for the word. Figure 3 shows R det for the improvable targets with different voting indexes. As depicted in Figure 3, improvable targets identified by a greater number of annotators are more easily detected by the models.\nThen, we design weighted accuracy (WA) to evaluate the detection performance, using the voting index as weighting factors. where s k is the predicted improvable target set of sentence k, s kl is the l-th annotated target in sentence k, w kl is the voting index of s kl , N is the number of total sentences, and M k is the size of annotated improvable target set of sentence k.\nWA det = N k=1 M k l=1 w kl if s kl ∈ s k else 0 N k=1 M k l=1 w kl\nTable 4 shows R det and WA det of baseline methods. Consistent with the trend of R det for different voting indexes, the WA det is relatively higher than R det . These results demonstrate that the baseline methods can detect the highconfidence improvable targets better.\nImprovable Ratio. The improvable ratio (ImpR) is defined as the proportion of the number of detected improvable words to the total number of words in sentences. 
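The voting-index-weighted detection accuracy WA_det defined above can be computed with the compact sketch below; the toy inputs are made up for illustration.

```python
def weighted_detection_accuracy(pred_targets, gold_targets):
    """pred_targets[k]: set of predicted improvable targets for sentence k.
    gold_targets[k]: dict {target: voting_index} for sentence k."""
    hit = total = 0.0
    for pred_k, gold_k in zip(pred_targets, gold_targets):
        for target, votes in gold_k.items():
            total += votes
            if target in pred_k:
                hit += votes
    return hit / total if total else 0.0

# Toy example: targets with a higher voting index dominate the score.
pred = [{"possible"}]
gold = [{"possible": 5, "situations": 2, "knowing": 1}]
print(weighted_detection_accuracy(pred, gold))  # 5 / 8 = 0.625
```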
As shown in Table 4, R det and WA det are positively correlated with ImpR. To investigate how to control the model to achieve a desired ImpR, we build another distantly supervised dataset for training. Different from the dataset construction described in section 3.2, we use the union of PPDB (Pavlick et al., 2015) and the Merriam-Webster thesaurus as a larger synonym thesaurus. As the thesaurus size increases, the proportion of artificial improvable targets in the constructed data increases from 13.2% to 25.4%.\nThe results of BERT trained on the two datasets are presented in Table 5. Comparing the two experiments, the number of constructed improvable targets in the training set is nearly doubled, while the ImpR of the trained model only increases from 9.3% to 13.6%. It is challenging to control the ImpR; thus, one open research direction is to steer the model toward a desired ImpR while maintaining good performance." }, { "figure_ref": [ "fig_1" ], "heading": "Multiple Suggestions Analysis", "publication_ref": [], "table_ref": [], "text": "It may be beneficial for users to have multiple suggestions for each improvable target. Therefore, we design a multiple-suggestion setting that allows the system to provide multiple substitution suggestions for each detected improvable target.\nAs the output suggestions are ranked, we propose using Normalized Discounted Cumulative Gain (NDCG), a metric commonly used in search engines, to measure the similarity between a ranked list and a weighted list: NDCG_m = (1/M) Σ_{k=1..M} DCG_m(T̂_k) / DCG_m(T_k), where DCG_m(T̂_k) = Σ_{i=1..m} ŵ_i / log(1+i) and DCG_m(T_k) = Σ_{j=1..m} w_j / log(1+j).\nIn this formula, M is the total number of correctly predicted improvable targets, and m is a parameter that specifies the number of suggestions for an improvable target. In the numerator, we accumulate the weights ŵ_i of the predicted suggestions from the first to the last: if prediction i is not in the human annotation, its weight is set to zero; otherwise, it is set to the voting index of that suggestion. In the denominator, the annotated weights w_j are sorted in descending order of voting index, which represents the optimal ranking when giving m predictions. We provide an example of calculating NDCG in appendix D.\nThe average number of substitution suggestions for each improvable target in the SWS benchmark is 3.3. When m exceeds the number of substitutions for a given target, DCG_m(T_k) remains constant. Thus, NDCG_m is only calculated for m = 1, 2, 3, 4. Figure 4 lists NDCG_m for different baselines.\nBERT may perform better than the other methods, but as the number of suggestions m increases, the NDCG_m of BERT drops significantly. This suggests that BERT struggles when providing multiple suggestions, which could be due to the lack of multiple substitution suggestions in the distantly supervised dataset. Future research could focus on improving the model's ability to provide multiple substitution suggestions.
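A short sketch of NDCG_m as defined above. The log base is taken as 2, which reproduces the per-rank factors (1, 0.63, ...) of the appendix D example but is otherwise an assumption, since the text writes log without a base; with unrounded logarithms the appendix D example evaluates to roughly 0.83 rather than the quoted 86.3%, which appears to use rounded intermediate values.

```python
import math

def dcg(gains, m):
    return sum(g / math.log2(1 + rank) for rank, g in enumerate(gains[:m], start=1))

def ndcg_m(predicted, annotated, m):
    """predicted: ranked list of suggestions for one correctly detected target.
    annotated: dict {suggestion: voting_index} from the merged human labels."""
    pred_gains = [annotated.get(s, 0) for s in predicted]       # 0 for suggestions not annotated
    ideal_gains = sorted(annotated.values(), reverse=True)      # denominator: sorted by voting index
    return dcg(pred_gains, m) / dcg(ideal_gains, m)

# Appendix D example:
annotated = {"respond to": 3, "respond": 2, "response": 1, "reply to": 1}
predicted = ["respond", "respond to", "tell", "response", "solution"]
print(round(ndcg_m(predicted, annotated, m=5), 3))  # ~0.833 with exact log2 arithmetic
```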
Sentence: Most students don't have sufficient self-control, which would lead to worse situations, like playing video games or watching TV all day, or playing outside for several days.\" Ground Truth: situations → {\"circumstances\": 5, \"conditions\": 2}, BERT Prediction: Target not found\nSentence: It may be true that knowing unrelated events doesn't provide convenience to our lives directly.\nGround Truth: knowing → {\"following\", \"memorizing\", \"recalling\", \"studying\"} BERT Prediction: knowing → understanding " }, { "figure_ref": [ "fig_2" ], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "Figure 5 gives two cases of BERT's predictions.\nIn the first case, BERT didn't detect this improvable target. However, in our distantly-supervised training data, there are dozens of cases substituting \"situations\" to \"circumstances\". We think controlling the initiative of detecting is a direction worthy of research.\nIn the second case, BERT give the suggestion of \"understanding\", which is the closest word to \"knowing\" if ignores the context. However, it's not the right meaning in the context of \"knowing events\". We think it's hard to train a model aware of word usage in different contexts with the current distantly-supervised training data. Because we think the one-substitute-one data doesn't provide enough information for model training on word usage. We regard this as a future research direction." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper introduces the first benchmark for Smart Word Suggestions (SWS), which involves detecting improvable targets in context and suggesting substitutions. Different from the previous benchmarks, SWS presents a more realistic representation of a writing assistance scenario. Our experiments and analysis highlight various challenges for future research and suggest opportunities for improvement in future work. We encourage further research on building more realistic training data, designing better data augmentation strategies, and developing unsupervised or self-supervised methods for SWS." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The SWS benchmark have two limitations: (1) The sentences in the SWS testing set come from students' essays, which limits the system's ability to test its performance in other specific domains such as laws or medicine. (2) the SWS corpus is at the sentence level, but some writing suggestions can only be made after reading the entire article, which are not included in our SWS dataset." }, { "figure_ref": [], "heading": "A Annotation Instructions", "publication_ref": [], "table_ref": [], "text": "We need to find at least 3 \"words/phrases to change\" in a sentence, and give \"substitutes\" for each. Every substitute should be classified as improve-usage or diversify-expression.\nA.1 What is the word/phrase that needs to change?\nOur aim is to find a word/phrase that needs to be better in writing scenarios. Suppose you are the teacher, and now you are helping the language learners to improve their English writing. We define a \"word to change\" as the substitution has influences as follows:\n• To express the original semantic meaning more appropriately. • To make the usage of the word much closer to the native speaker. • To change spoken language into written language. • To diversify the word usage for better expression. 
The substitution should NOT cause the influence as follows:\n• Rewrite the sentence, instead of words or phrases, into a better expression (e.g. \"it is advisable\" → \"advisably,\"). • Correct the mistakes in the sentence (e.g. \"a lot\" → \"a lot of\" in the sentence \"There are a lot of valuable tips\"). • Substitute the word with a synonym, but not help the English learners with better writing. After the definition, we also give some rules that you could refer to:\n• the word/phrase that needs to change is usually less than 3 words. • the word/phrase that needs to change is usually an adj./adv./noun/verb. • the word/phrase that needs to change is usually not a named entity." }, { "figure_ref": [], "heading": "A.2 How to give the substitutions?", "publication_ref": [], "table_ref": [], "text": "The substitution should:\n• have the same semantic meaning as the \"word to change\". • keep the sentence's meaning unchanged.\nSpecifically, there are two scenarios for substitution:\n• If the word to change is general, and we can clearly understand the sentence's meaning. In this case, the substitution should be more precise. (e.g. \"Schools in north-west China are our primary aiding individuals and we often start from our school when the summer vacation begins.\" \"aiding\"→\"helping\" is a good substitution) • If the word to change is confusing, and we could only guess the sentence's meaning. In this case, the substitution should be more general. (e.g. \"Successful individuals are characterized by various merits including ...\" \"various\"→\"plentiful\" is a bad substitution)\nAfter the substitution, the sentence must be fluent as the original sentence. Errors in preposition collocations, tenses, and mythologies should be avoided. (e.g. \"in a nutshell\", \"nutshell\" → \"essence\" is not right, should be \"in a nutshell\" → \"in essence\")" }, { "figure_ref": [], "heading": "A.3 Annotation Guidelines", "publication_ref": [], "table_ref": [], "text": "• Substitutions in a grid should be connected with \";\" (NOT ',' !). • If the original sentence has grammar or typo problems, just discard the sentence. • In the annotation table, the content in the column \"word to change\" should be EXACTLY THE SAME as the word/phrase in the original sentence, and there should not exist punctuation (except \";\" to connect multiple substitutions) • Substitute the smallest range of words, unless indivisible. (e.g. \"I think you deserve it again\" → \"I think you deserve another chance\" is a bad case, which should be \"it again\" → \"another chance\". \"in a nutshell\" → \"in essence\" is a good case, because \"in a nutshell\" is a phrase). • We don't need to paraphrase the sentence. • Please ensure that the \"substitute\" and \"word to change\" have the same tense, plural forms, and part of speech." }, { "figure_ref": [], "heading": "B Example of Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "For example, given a sentence: \"I am writing to answer the previous questions you asked.\" The annotation result of the sentence is as follows:\nanswer: {respond to: 3, reply to: 1}, writing: {connecting with: 3}, to answer: {in response to: 2}, questions: {queries: 2}\nIn improvable target detection, S k is {answer, writing, to answer, questions}. If the prediction S k is {answer, previous}, then P det = 1/2 and R det = 1/4.\nIn substitution suggestion metrics, take the true predicted target answer as an example. 
If the predicted suggestion is in {respond to, reply to}, then Acc sug = 1; otherwise Acc sug = 0.\nIn the end-to-end evaluation, if the predicted suggestions are {answer: respond, writing: connect with, asked: gave}, then P e2e = 1/3 and R e2e = 1/4." }, { "figure_ref": [], "heading": "C Prompt for ChatGPT", "publication_ref": [], "table_ref": [], "text": "The prompt we use is as follows:\nIn the following sentence, please give some suggestions to improve word usage. Please give the results with the json format of \"original word\": [\"suggestion 1\", \"suggestion 2\"], and the \"original word\" should be directly extracted from the sentence. [s] where [s] is the sentence. Amazingly, ChatGPT can generate substitution suggestions in this key-value format. We use regular expressions to extract the substitution suggestions; if the result is empty, we re-generate until substitution suggestions are obtained." }, { "figure_ref": [], "heading": "D Example of NDCG", "publication_ref": [], "table_ref": [], "text": "Take an example of NDCG 5 : for a detected improvable target, if the annotated set T j with voting indices is {respond to: 3, respond: 2, response: 1, reply to: 1} and the predicted, ordered list is {respond, respond to, tell, response, solution}, then the DCG 5 of the prediction and of the annotation are calculated as follows, and NDCG 5 = 4.4/5.1 = 86.3%.\nOrder | Sub. | Gain | Cumulative DCG 5\n1 | respond | 2 | 2 = 2 × 1\n2 | respond to | 3 | 3.9 = 2 + 3 × 0.63\n3 | tell | 0 | 3.9 = 3.9 + 0 × 0.5" } ]
Enhancing word usage is a desired feature for writing assistance. To further advance research in this area, this paper introduces the "Smart Word Suggestions" (SWS) task and benchmark. Unlike other works, SWS emphasizes end-to-end evaluation and presents a more realistic writing assistance scenario. This task involves identifying words or phrases that require improvement and providing substitution suggestions. The benchmark includes human-labeled data for testing, a large distantly supervised dataset for training, and the framework for evaluation. The test data includes 1,000 sentences written by English learners, accompanied by over 16,000 substitution suggestions annotated by 10 native speakers. The training dataset comprises over 3.7 million sentences and 12.7 million suggestions generated through rules. Our experiments with seven baselines demonstrate that SWS is a challenging task. Based on the experimental analysis, we suggest potential directions for future research on SWS. The dataset and related code are available at https://github.com/microsoft/SmartWordSuggestions.
Smart Word Suggestions for Writing Assistance
[ { "figure_caption": "Figure 3 :3Figure 3: The number of targets and R det on different voting index.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: NDCG m on different m of BERT, rulebased method, and LexSubCon.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Case study of the BERT's predictions.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "is arXiv:2305.09975v1 [cs.CL] 17 May 2023With the help of the intimate cooperation of our group members , we pointed out a new method .With the help of the intimate cooperation of our group members , we pointed out a new method .", "figure_data": "Input SentenceSub-task 1: Improvable Target DetectionImprovable TargetsSub-task 2: Substitution Suggestionsupport /close collaborationdevelopedSubstitution Suggestionsassistance /guide / aidFigure 2: Task definition of Smart Word Suggestions (SWS). SWS consists of two sub-tasks: improvable targetdetection and substitution suggestion. A sentence contains multiple improvable targets, and a target has multiplesubstitution suggestions.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Statistics of SWS and LS datasets. SWS DS stands for the distantly supervised dataset.", "figure_data": "Benchmark # Sentence# Target # Suggestion# LabelSemEval20102010802512,300COINCO247415,629112,742167,446SWORDS1250125071,813395,175SWS1000702716,03130,293SWSDS3,746,142 12,786,685 12,786,685 12,786,685", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "shows the comparison between SWS and lexical substitution benchmarks. Our SWS dataset consists of 7027 instances of improvable targets and 16031 suggestions in 1000 sentences. The average length of the sentences in this dataset is 27.8 words. The improvable targets in this dataset includes 2601 nouns, 2186 verbs, 1263 adjectives, 367 adverbs, 267 phrases, and 343 other parts of speech. 3.8% of the targets and 3.3% of the suggestions are multi-word phrases. 63.0% of the targets are the type of refine-usage. Table2shows the proportion of refine-usage or diversify-expression targets with different part-of-speech.", "figure_data": "POSnoun verb adj. adv. phrase others totalnumber 2601 2186 1263 367 2673437027RU (%) 57.8 63.7 66.7 64.9 70.876.7-DE (%) 42.2 36.3 33.3 35.1 29.223.3-", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Statistics of targets with different part-ofspeech. RU refers to the proportion of refine-usage targets, and DE refers to the proportion of diversifyexpression.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Evaluation results on SWS. *: As a reference, we offer human performance by taking the average of ten rounds of evaluations. 
In each round, each annotator is compared to the combined annotations of other annotators.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "R det , WA det are positively correlated with ImpR.", "figure_data": "ModelImpR R det WA detRule-based 0.125 0.3440.382ChatGPT0.224 0.4180.449BERT sp,sv0.021 0.0500.061LexSubCon 0.324 0.6670.694CMLM0.094 0.2220.239BART0.102 0.2430.272BERT0.093 0.2490.278Human0.212--", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Improvable ratio (ImpR), Detection Recall (R det ) and Weighted Accuracy (WA) for improvable targets detection on SWS benchmark sets.", "figure_data": "", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "ImpR WA det Acc sug P e2e R e2e F e2e Comparison of BERT trained on two distantly supervised datasets. The suffix stands for the constructed improvable target ratio of the dataset. The model trained on the dataset with more improvable targets yields a higher ImpR and a higher R det , but a worse performance in substitution suggestions.", "figure_data": "DatasetP det R det F det 0.5", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" } ]
Chenshuo Wang; Shaoguang Mao; Tao Ge; Wenshan Wu; Xun Wang; Yan Xia; Jonathan Tien; Dongyan Zhao
[ { "authors": "Christopher Bryant; Mariano Felice; Ted Briscoe", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Automatic annotation and evaluation of error types for grammatical error correction", "year": "2017" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Li Dong; Jonathan Mallinson; Siva Reddy; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Learning to paraphrase for question answering", "year": "2017" }, { "authors": "Fredo Erxleben; Michael Günther; Markus Krötzsch; Julian Mendez; Denny Vrandečić", "journal": "Cham. Springer International Publishing", "ref_id": "b3", "title": "Introducing wikidata to the linked data web", "year": "2014" }, { "authors": "Anthony Fader; Luke Zettlemoyer; Oren Etzioni", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Paraphrase-driven learning for open question answering", "year": "2013" }, { "authors": "Tao Ge; Furu Wei; Ming Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Fluency boost learning and inference for neural grammatical error correction", "year": "2018" }, { "authors": "Mohit Iyyer; John Wieting; Kevin Gimpel; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Adversarial example generation with syntactically controlled paraphrase networks", "year": "2018" }, { "authors": "Gerhard Kremer; Katrin Erk; Sebastian Padó; Stefan Thater", "journal": "", "ref_id": "b7", "title": "What substitutes tell us -analysis of an \"all-words\" lexical substitution corpus", "year": "2014" }, { "authors": "Mina Lee; Chris Donahue; Robin Jia; Alexander Iyabor; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Swords: A benchmark for lexical substitution with improved data coverage and quality", "year": "2021" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge J Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence Zitnick", "journal": "", "ref_id": "b10", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Marjan Ghazvininejad; Omer Levy; Yinhan Liu; Luke Zettlemoyer", "journal": "", "ref_id": "b11", "title": "Mask-Predict: Parallel Decoding of Conditional Masked Language Models", "year": "2019" }, { "authors": "Diana Mccarthy; Roberto Navigli", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "SemEval-2007 task 10: English lexical substitution task", "year": "2007" }, { "authors": "George Michalopoulos; Ian Mckillop; Alexander Wong; Helen Chen", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "LexSubCon: Integrating knowledge from lexical resources into contextual embeddings for lexical substitution", "year": "2022" }, { "authors": "Tou Hwee; Ng; Mei Siew; Ted Wu; Christian Briscoe; Raymond Hendy Hadiwinoto; 
Christopher Susanto; Bryant", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "The CoNLL-2014 shared task on grammatical error correction", "year": "2014" }, { "authors": "Ellie Pavlick; Pushpendre Rastogi; Juri Ganitkevitch; Benjamin Van Durme; Chris Callison-Burch", "journal": "", "ref_id": "b15", "title": "PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification", "year": "2015" }, { "authors": "Darsh Shah; Tao Lei; Alessandro Moschitti; Salvatore Romeo; Preslav Nakov", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Adversarial domain adaptation for duplicate question detection", "year": "2018" }, { "authors": "Wei Song; Ziyao Song; Lizhen Liu; Ruiji Fu", "journal": "International Joint Conferences on Artificial Intelligence Organization", "ref_id": "b17", "title": "Hierarchical multi-task learning for organization evaluation of argumentative student essays", "year": "2020" }, { "authors": "Denny Vrandečić; Markus Krötzsch", "journal": "Communications of the ACM", "ref_id": "b18", "title": "Wikidata: a free collaborative knowledgebase", "year": "2014" }, { "authors": "Wangchunshu Zhou; Tao Ge; Ke Xu; Furu Wei; Ming Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "BERT-based lexical substitution", "year": "2019" } ]
[ { "formula_coordinates": [ 5, 113.64, 238.26, 131.77, 73.83 ], "formula_id": "formula_0", "formula_text": "PA det = 1 |P | (i,j)∈P PA det ij PA det ij = N k=1 1 N |s i k ∩ s j k | |s i k ∪ s j k |" }, { "formula_coordinates": [ 5, 116.18, 414.38, 126.45, 75.23 ], "formula_id": "formula_1", "formula_text": "PA sug = 1 |P | (i,j)∈P PA sug ij PA sug ij = M ij l=1 1 M ij |t i l ∩ t j l | |t i l ∪ t j l |" }, { "formula_coordinates": [ 5, 332.89, 356.37, 164.77, 34.42 ], "formula_id": "formula_2", "formula_text": "TP e2e = N k=1 M k l=1 1 if s kl ∈ S k else 0" }, { "formula_coordinates": [ 5, 339.61, 494.31, 149.64, 27.52 ], "formula_id": "formula_3", "formula_text": "P e2e = TP e2e N P , R e2e = TP e2e N G" }, { "formula_coordinates": [ 5, 352.08, 610.72, 124.7, 26.41 ], "formula_id": "formula_4", "formula_text": "F e2e 0.5 = 1.25 • P e2e • R e2e 0.25 • P e2e + R e2e" }, { "formula_coordinates": [ 5, 306.14, 745.66, 221.81, 32.39 ], "formula_id": "formula_5", "formula_text": "P det = N k=1 |s k ∩ s k | N k=1 |s k | , R det = N k=1 |s k ∩ s k | N k=1 |s k |" }, { "formula_coordinates": [ 6, 117, 148.62, 124.3, 26.41 ], "formula_id": "formula_6", "formula_text": "F det 0.5 = 1.25 • P det • R det 0.25 • P det + R det" }, { "formula_coordinates": [ 6, 74.7, 245.77, 202.57, 34.41 ], "formula_id": "formula_7", "formula_text": "Acc sug = 1 N N k=1 1 M k M k l=1 1 if t l ∈ T l else 0" }, { "formula_coordinates": [ 7, 79.65, 745.19, 199.51, 32.86 ], "formula_id": "formula_8", "formula_text": "WA det = N k=1 M k l=1 w kl if s kl ∈ s k else 0 N k=1 M k l=1 w kl" }, { "formula_coordinates": [ 8, 103.18, 742.98, 152.45, 33.98 ], "formula_id": "formula_9", "formula_text": "NDCG m = 1 M M k=1 DCG m (T k ) DCG m (T k )" }, { "formula_coordinates": [ 8, 346.38, 292.95, 136.6, 105.7 ], "formula_id": "formula_10", "formula_text": "m (T k ) = m i=1 i ≤i w i log(1 + i) DCG m (T k ) = m j=1 j ≤j w j log(1 + j) w j = w i if t kj ∈ T k else 0" } ]
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b15", "b20", "b21", "b46", "b1", "b21", "b7", "b24", "b59", "b61", "b10", "b42", "b8", "b9", "b12", "b34", "b26", "b50", "b9" ], "table_ref": [], "text": "Multiple object tracking (MOT) is a fundamental task in computer vision with applications across domains, including scene understanding and autonomous driving. In many of these applications, tracking plays a safety-critical role for downstream planning and control algorithms. Accurate MOT requires precise detection of one or multiple object categories and correctly associating them throughout object presence in a dynamic scene. This task is challenging not only due to the similarity of object instances in the scene and highly dynamic object motion paths but also the fundamental problem of partial and full occlusions, which from the observer's view, can break object paths into separate segments.\nA large body of work has explored MOT methods during the last years [16,21,22,34,42,55], approaching the task from different viewpoints. For instance, Bergman et al. [2] extend an object detector to a tracker using a regression network that estimates the object displacement, highlighting the importance of object detection in MOT pipelines. In [22], the authors improve on the standard frame-wise data association in MOT by modeling the intraframe relationships between the tracks in the form of an undirected graph and formulating the problem as a graphmatching task. Similarly, [8,26,33,54,58] model the interaction between objects over multiple frames as a graph and utilize graph neural networks (GNN) to globally reason over object interactions and associations. Furthermore, recent directions that aim to tackle occlusion and association over longer time spans include architectures with attention mechanisms that equip the model with global context [39,66,68,70] and integrate memory to provide to utilize long-range information [11,19,64].\nHowever, most MOT methods are supervised and rely on a highly laborious data labeling process, which leads to using relatively small datasets such as KITTI [20] with 21 training sequences. Although the recently released Waymo dataset [48] is significantly larger than KITTI with 798 training videos, it is still small compared to the many hours of available unlabeled video data. As such, this often limits the full potential of tracking architectures, as more data can significantly improve the performance of deep learningbased methods [51]. In this work, we aim to utilize the large amount of unlabeled video data for MOT.\nObject detection is closely related to MOT, as one of the main strategies for solving MOT is Tracking by Detection which is learning to associate between the detected objects [9]. Object detection methods are trained on separate still images; thus, the labeling process is significantly simpler than MOT, which requires annotating sequences. Furthermore, the field has produced very accurate object detectors [28,65] that can be used to generate the detections for MOT. Although recent object detection practices utiliz- We propose S 3 Track, a self-supervised method for learning the object associations throughout a video by learning a robust appearance model. We use optimal transport for computing the soft object assignments, enabling end-to-end training of our model with association pseudo-labels. 
Our method shows strong performance in challenging scenarios such as occlusion and fast motion in the top row, severe weather conditions, and appearance change in the bottom row (see the objects pointed at with the white arrow). The track IDs are visualized by the bounding box color and the number inside. Data samples are from the nuScenes dataset [10] validation split using the provided detection bounding boxes.\ning transformers [13,37] show promising performance, twostage detection methods relying on region proposals [43] are still faring among the best-performing models in a wide range of detection tasks, thanks to employing techniques such as Region of Interest (RoI) pooling and hierarchical feature processing that has been proven crucial for object detection [23,35]. Nevertheless, it is an open question if we can rely on the accuracy of these existing mature detection models for MOT.\nIn our work, we assume access to a trained object detector for generating the detection bounding boxes and train an MOT model without using video-level association labels. With per-frame detections in hand, we propose a method to obtain association pseudo-labels using motion information over short video sequences. Using the detections and the RoI pooling layer, we extract the object features and compute an affinity matrix between the detections in the source and target frames. We propose differentiable optimal transport for finding the soft assignments between the detections, facilitating end-to-end training of our model using the association pseudo-labels. Thanks to this differentiable training, our model is able to compute robust and discriminative features optimal for object association, as shown in Figures 1 and2. We validate the method on multiple tracking benchmark datasets, including KITTI [20], Argo [59], nuScenes [10], and Waymo datasets [48], outperforming all tested unsupervised MOT methods.\nWe make the following contributions in this work:\n• We introduce a novel self-supervised MOT approach without video-level association annotations, hinging on a soft differentiable object assignment and object association pseudo-labels.\n• We introduce a novel high-resolution (8MP) HDR driving dataset for multi-view self-supervised training and a pseudo-labeling method for this dataset.\n• We validate the method on the KITTI, Argo, nuScenes, and Waymo datasets, outperforming other unsupervised MOT methods. We confirm the effectiveness of all method components in ablation experiments." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b1", "b7", "b13", "b14", "b58", "b8", "b11", "b12", "b34", "b2", "b51", "b52", "b42", "b41", "b17", "b43", "b10", "b10", "b62", "b23", "b44", "b44", "b22", "b23", "b5", "b53", "b0", "b0", "b36" ], "table_ref": [], "text": "Multiple Object Tracking is the task of detecting and associating multiple objects throughout a video sequence. MOT is an active area with different directions [2,8,14,15,25,30,67] that have been proposed for solving this task. In the following, we review existing threads in MOT, followed by recent works on self-supervised techniques for spatiotemporal correspondence learning in video data.\nTracking by detection is a popular paradigm that aims to learn MOT by first detecting the objects and then finding the associations between the detections over multiple frames [5,9,29]. Classical methods that fall into this category mostly rely on simple motion modeling [5, 7] and hence, complementary to our work which utilizes visual cues. 
The recent OC SORT method [12] makes several modifications to the Kalman-based formulation in [5], resulting in a considerable performance gain and better handling of occlusion. With the progress of deep learning methods and significantly enhanced accuracy in object detection [13,43], this improvement has naturally carried over to tracking by detection methods [3]. Moreover, deep networks have enabled learning of better features which improves the association accuracy [60,61]. In CenterTrack [69], a joint detection and tracking pipeline is developed for first detecting the object centers and then associating between them over consecutive frames via computing the distance between the object centers, taking the object motion offset into account. The follow-up work PermaTrack [51] utilizes the notion of physical object permanence and uses a recurrent module for memorizing the object track history and surmounting occlusion. [50] further improves this method by employing a consistency-based objective for object localization in occluded videos.\nMotivated by the success of transformer-based methods in vision applications [18,52], transformers have also been deployed in tracking algorithms [11,40,47, 66] to allow for better modeling of object relations. Sun et al. [47] were the first to suggest a transformer-based architecture for learning the object and track queries used for detecting objects in succeeding frames and performing association. Trackformer [40] proposes tracking by attention, a model which uses a transformer encoder and decoder to perform the task of set prediction between the object detections and the tracks in an autoregressive manner. MeMOT [11] additionally utilizes an external memory for modeling the temporal context information. In MOTR [66], Zeng et al. extend the deformable DETR [71] by building on the idea of object-to-track and joint modeling of appearance and motion by introducing a query interaction module and temporal context aggregation. Self-supervised Tracking aims to learn spatiotemporal feature correspondences from unlabeled video data using different pretext tasks [27,32,53]. In [53], the authors propose colorization as the proxy task for learning representative features that can be used for spatiotemporal feature matching. Lai et al. [31,32] improve the performance of this work by employing memory in the architecture and cycle consistency during training. Jabri et al. [6,27] formulate the problem of correspondence learning as a contrastive random walk, while alternative approaches formulate training objectives based on time cycle consistency [57], or utilize motion information as the main training signal [62]. However, these algorithms developed for learning pixel-wise correspondences work well for single object tracking and often fail in crowded scenes with many similar object instances. Moreover, the propagation-based formulation results in error accumulation for longer sequences.\nUnlike dense correspondence learning, in our task, we do not require pixel-wise correspondences, but we learn the associations between the objects. Most similar to our work are the following two approaches: Bastani et. al [1] attempt to learn the object associates over multiple frames using a cross-input consistency objective that encourages the same association via visual and location information. Wang et al. [56] learn 3D object association using Kalman filtering to generate the pseudo-labels. 
Contrary to using a pretext task in [1], we directly optimize our model on the final objective of learning to match between the detections. Unlike [56], we do not use the Kalman filter as the motion model but focus on learning the appearance model by acquiring the association pseudo-labels using motion information, specifically optical flow and disparity maps. Finally, our work is also related to SuperGlue [45], a method for key-point correspondence search based on graph neural networks and optimal transport for feature similarity learning.\nIn contrast, we attempt to solve object association between consecutive frames by utilizing features extracted from object region proposals and training our model using association pseudo-labels.\nFigure 2. Heatmaps a and b show the cosine distance between object embeddings from an instance-agnostic model trained for object detection and our model trained for object association using optimal transport soft assignment at frames t0 and t1. The soft assignment mechanism is essential for obtaining instance-aware discriminative object features. Without this, features from different objects are not well separated in the embedding space, resulting in a low distance between multiple object instances and false matches (red in heatmap a shows the false associations: cars 2, 5, 6, 7 at t1). Note that our method correctly matches all detections and adequately initializes a new track ID here for car 6 entering at t1. (*: unmatched detection resulting in a new track ID)." }, { "figure_ref": [], "heading": "Tracking with Soft Assignment Flow", "publication_ref": [], "table_ref": [], "text": "Figure 3. The S 3 Track architecture. We use a feature pyramid pooling network as the backbone and an RoI pooling layer for extracting the per-object features f RoI , followed by an RoI Enhancer Module generating the instance-specific discriminative representation for each object. We compute the final embeddings x i using an MLP and find the soft assignments between the embeddings using the differentiable optimal transport layer.\nIn conventional supervised MOT, a model is trained to predict a unique track ID for each object by minimizing a classification loss. In our setting, we do not have access to the track IDs; instead, we approach MOT as finding the frame-wise association between detected objects in the reference and target frames. Our goal is to learn an affinity matrix measuring the distance between detected objects in a reference (I r ) and target video image (I t ), using which we predict the unique correspondences between the objects such that objects with the closest distance are matched. At first glance, this resembles the common inference strategy in MOT approaches that formulate a distance matrix based on motion or appearance information and use the Hungarian algorithm to find the unique assignments. However, the bipartite matching via the Hungarian algorithm is non-differentiable and hence, does not facilitate learning correspondences. To tackle this challenge, we find soft associations between the reference and target objects by posing association as an optimal transport [17] problem.
Having defined a differentiable matching step, we learn feature embeddings optimal for matching in an end-to-end training approach. We find this differentiable matching step essential for the proposed method, see Figure 2. We train our model with a negative log-likelihood objective where the ground truth association labels are replaced with assignment pseudo-labels obtained from video motion information. The overall architecture is illustrated in Figure 3. In the following, we discuss all components of the proposed method." }, { "figure_ref": [], "heading": "Optimal Transport for Soft Object Association", "publication_ref": [ "b37", "b36" ], "table_ref": [], "text": "We solve the task of finding the corresponding objects in I r and I t with minimal assignment distance using optimal transport [17,41]. We will see that this approach allows for a fully differentiable matching process. Consider two discrete distributions of a and b and a matrix C, which represents the cost of transporting distribution a to b using a probability matrix P (or transport matrix). Optimal transport is a linear assignment algorithm [17] that finds the P which minimizes the overall transport cost, that is\nd C (a, b) := min P ∈U (a,b) P, C ,(1)\nwhere U (a, b) is the set of possible transport strategies\nU (a, b) := {P ∈ R + | P 1 = a, P T 1 = b}.(2)\nWe are interested in finding the assignments between the detections in I r and I t such that the objects with the highest similarity (lowest distance) are matched. In our work, we learn features optimal for object association. Consider X 1 := {x 1,1 , x 1,2 , . . . , x 1,n1 }, X 2 := {x 2,1 , x 2,2 , . . . , x 2,n2 } as the set of extracted detection embeddings from the reference and target frames, where x i,j ∈ R 256 . We define the following cost matrix for feature similarity\nC sim,ij = 1 - x 1,i x 1,i , x 2,j x 2,j ,(3)\nwhere ., . represents the Frobenius dot product.\nAdding an entropy regularization term to Equation (1) [17] turns the task into a convex optimization problem that can efficiently be solved using Sinkhorn algorithm [46]. This algorithm consists of differentiable operations, namely, iterative normalization of the rows and columns of matrix C until convergence (or with a fixed number of iterations), as shown in Figure 3. With this in hand, Equation (3) allows us to learn the feature embeddings optimal for matching. As n 1 and n 2 may differ due to entering and exiting objects, we augment the cost matrix C sim with an additional row and column, initialized with a learnable parameter γ representing the non-match class, similar to [45]." }, { "figure_ref": [ "fig_4" ], "heading": "Flow-based Pseudo-label Generation", "publication_ref": [], "table_ref": [], "text": "For training our association model, we recover pseudolabels from temporal cues in video sequences and multi- \nr,i = b r,i + M t bi,r ,(4)\nwhere M t bi,r is the computed motion vector at the center of detection box b r,i . In the next step, we assign pseudo-labels based on the Intersection over Union (IoU) between the motion-adjusted object bounding boxes. 
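A simplified PyTorch sketch of the soft-assignment step described above: cosine cost (Eq. 3), a dustbin row and column scored by a learnable parameter γ, log-domain Sinkhorn normalization, and the negative log-likelihood over pseudo-label pairs used for training. Unit marginals, the fixed iteration count, and all helper names are simplifying assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sinkhorn_assignment(x_ref, x_tgt, gamma, eps=0.1, iters=50):
    """Soft assignment between N reference and M target object embeddings.
    A dustbin row/column scored by the learnable scalar `gamma` absorbs objects
    that enter or leave the scene. Returns an (N+1) x (M+1) matrix of log
    assignment probabilities."""
    x_ref = F.normalize(x_ref, dim=-1)
    x_tgt = F.normalize(x_tgt, dim=-1)
    cost = 1.0 - x_ref @ x_tgt.t()                       # Eq. (3): cosine distance
    n, m = cost.shape
    cost = torch.cat([cost, gamma.expand(n, 1)], dim=1)  # dustbin column
    dust = torch.cat([gamma.expand(1, m), gamma.view(1, 1)], dim=1)
    cost = torch.cat([cost, dust], dim=0)                # dustbin row
    log_K = -cost / eps                                  # entropy-regularised kernel
    u = torch.zeros_like(log_K[:, :1])
    v = torch.zeros_like(log_K[:1, :])
    for _ in range(iters):                               # alternating row/column normalisation
        u = -torch.logsumexp(log_K + v, dim=1, keepdim=True)
        v = -torch.logsumexp(log_K + u, dim=0, keepdim=True)
    return log_K + u + v                                 # log of the soft assignment P

def association_nll(log_P, matches):
    """Negative log-likelihood over pseudo-label pairs (i, j)."""
    return -torch.stack([log_P[i, j] for i, j in matches]).mean()
```

At training time, the (i, j) pairs would come from the flow-based pseudo-labels described in this section.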
Assuming aligned bounding boxes b t and b r , we match objects with the highest overlap with the distance matrix\nC IoU ij = 1 -IoU(b r,i , b t,j ).\n(5)\nWe compute the unique and hard object association labels (i, j) using the Hungarian algorithm, which gives us object correspondences with the highest overlap (minimum cost).\nNote that, at this stage, we can employ the Hungarian algorithm since we do not require differentiability in the pseudolabel generation. When using temporal data, the I r and I t are temporally spaced video frames, and motion M is estimated using optical flow. When using stereo data, I r and I t are the left and right images, and M is the disparity between the two views. Occlusion Masks. Changes in camera view (from left to right stereo camera) and dynamic objects can result in occlusions. Although tracking methods should be robust to changes in appearance between frames, extreme occlusions can be detrimental to the training process. Drastic appearance shifts and occlusion can occur for large baselines, as shown in Figure 5. To handle this issue, we use an occlusion mask to discard objects that become heavily occluded. Specifically, we assume that for non-occluded regions, the disparity in one view should be consistent with the disparity in the other view. We first compute the disparity maps, D l and D r , for the left and right views respectively. Next, we warp D l to the right view, obtaining Dr . Subsequently, if the disparity difference for a pixel is above τ occ , that pixel is marked as occluded, that is,\nDr = W(D l , D r )(6)\n∆ r = | Dr -D r |, OM r = 1 ∆ r ≥ τ occ 0 otherwise ,(7)\nwhere the function W(D l , D r ) bi-linearly warps D l to the right view using disparity D r and OM r denotes the occlusion map in the right view. Similarly, we compute the occlusion mask for the left view (OM l ) and discard objects with more than 50% occluded pixels in either OM r or OM l from the training data." }, { "figure_ref": [], "heading": "Discriminative Feature Extraction", "publication_ref": [ "b34", "b26", "b3", "b0" ], "table_ref": [], "text": "With our differentiable assignment in hand, we train a feature extractor tailored to multi-object tracking in an endto-end fashion. We find that extracting features from the detectors fails for object instance association as object detector features are instance-agnostic and not sufficiently discriminative, as illustrated in Figure 2. To this end, we slightly modify existing object detection architectures [43] for our purpose; see Figure 3. As input, we feed the RGB image and the detection boxes from the separate detectors to the model. The RGB image is initially processed as a whole through a feature pyramid network [35] with ResNet50 [24] backbone; then, the RoI pooling layer extracts the contextaware object features using the detection bounding boxes. Note that this contrasts with directly cropping the object region and then extracting the features [1], resulting in the complete loss of informative contextual information. In the next step, we further process the extracted features with an RoI enhancer module consisting of a stack of convolution and non-linearity layers to obtain an instance-specific object representation specialized for the association task. Finally, the enhanced features are projected to an embedding space ∈ R 256 using a small MLP network. The resulting embeddings x i are used to construct the cost matrix in Equation (3)." 
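The flow-based pseudo-label generation of Eqs. (4)-(5) can be sketched as follows, assuming axis-aligned [x1, y1, x2, y2] boxes and a dense motion field (optical flow for temporal pairs, disparity for stereo pairs); the 0.1 IoU cut-off follows the implementation details, while the occlusion filtering of Eqs. (6)-(7) is omitted for brevity.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def box_iou(a, b):
    """IoU between two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def association_pseudo_labels(boxes_ref, boxes_tgt, flow, min_iou=0.1):
    """boxes_ref (N, 4), boxes_tgt (M, 4); flow: dense (H, W, 2) motion field.
    Shift each reference box by the motion sampled at its centre (Eq. 4), build the
    IoU cost (Eq. 5), and solve the hard assignment with the Hungarian algorithm."""
    h, w = flow.shape[:2]
    shifted = []
    for box in boxes_ref:
        cx = int(np.clip((box[0] + box[2]) / 2, 0, w - 1))
        cy = int(np.clip((box[1] + box[3]) / 2, 0, h - 1))
        dx, dy = flow[cy, cx]
        shifted.append(box + np.array([dx, dy, dx, dy]))
    cost = np.array([[1.0 - box_iou(r, t) for t in boxes_tgt] for r in shifted])
    rows, cols = linear_sum_assignment(cost)
    # Keep only pairs with sufficient overlap as association pseudo-labels.
    return [(i, j) for i, j in zip(rows, cols) if 1.0 - cost[i, j] >= min_iou]
```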
}, { "figure_ref": [], "heading": "Training Loss", "publication_ref": [], "table_ref": [], "text": "Assuming we have the correspondences between the detections in the reference and the target frame (from pseudolabels), we train our model using negative log-likelihood L N LL and triplet loss L trip , where\nL N LL = - (i,j)∈A log(P i,j ),(8)\nwith A being the set of association pseudo-labels between detections in I r and I t .\nThe additional triplet loss L trip helps in learning more discriminative features. Specifically, this loss minimizes the distance between the anchor and the positive samples and maximizes the distance between the anchor and the negative samples, up to a margin of m, that is\nL trip (a, p, n) = max{d(a i , p i ) -d(a i , n i ) + m, 0},(9)\nwhere a, p, and n stand for anchor, positive and negative samples, and d(x i , y j ) = |x iy j |. For this purpose, we select the anchor from I r and choose the positive and the negative samples from I t . The final loss is the weighted sum of the terms above, that is\nL train = α L trip + β L N LL ,(10)\nwhere α and β are training hyperparameters." }, { "figure_ref": [ "fig_7" ], "heading": "Implementation Details", "publication_ref": [ "b26", "b3", "b27", "b34", "b40" ], "table_ref": [], "text": "We employ a feature pyramid network [35] based on ResNet50 [24] as the backbone with 256 feature channels and initialize it with pre-trained weights on the COCO dataset [36]. The RoI pooling layer resizes the extracted regions to a fixed resolution of 21 × 21. The RoI enhancer module consists of a 4-layer convolutional network with Group Normalization and ReLU non-linearity. The output of this block is flattened and projected to the final embedding x ∈ R 256 using a two-layer MLP network and 2 normalization. We train our model using SGD optimizer with an initial learning rate of 0.0002, a momentum of 0.9, a weight decay of 0.0001, and a batch size of 8 on a single A100 GPU. We pre-train our model on temporal and multiview driving data described in subsection 4.1 where we resize the images to the same width as KITTI [20], keeping the aspect ratio unchanged. Additionally, we fine-tune our model on the training set of the datasets used for evaluation. We did not find additional data augmentation beneficial to the final performance. As training hyperparameters, we set the α and β in Equation ( 10) to 1.0 and 0.5, respectively. During inference, we define the cost matrix as the combination of appearance similarity and IoU, that is\nC inf = σC sim + (1 -σ)C IoU .(11)\nThe IoU information serves as a location prior that helps the model in cases where there are similar objects present in the scene. The relevance of IoU information highly depends on the data frame rate. Therefore, we use σ = 0.7 and σ = 1 for the test data captured at higher (e.g., 10 FPS) and lower (e.g., 2 FPS) frame rates, respectively.\nPseudo-label Generation. We generate the initial detections per frame using a FasterRCNN meta-architecture [43] with ResNet50 as the backbone and trained on an annotated driving dataset. To obtain more accurate pseudo-labels, we apply non-maximum suppression with an IoU threshold of 0.3 to the detections and discard the highly overlapping bounding boxes. We only consider detections with a minimum size of 100 pixels and prediction confidence above 0.9. Furthermore, the object assignments with an IoU below 0.1 are discarded.\nFor the temporal data, we generate optical flow at 5 FPS using the RAFT [49] model trained on the KITTI dataset [20]. 
For the stereo data, we generate the disparity using the method from Yang et al. [63]. We generate the disparity maps from multiple stereo pairs at multiple scales and merge them to obtain a single high-resolution disparity map. Examples of the generated pseudo-labels used in the pre-training step are shown in Figure 8 and Figure 9." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we evaluate the proposed method. We describe the datasets and evaluation metrics, and assess S 3 Track and relevant baselines on four autonomous driving datasets. We confirm our architecture choices with several ablation experiments." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b9", "b50", "b11" ], "table_ref": [], "text": "Wide-baseline Stereo Pre-training Data. We capture a training dataset with four 8MP HDR sensors placed at 3m height on a test vehicle, with baseline distances (measured from the reference camera, cam0) of 0.7m, 1.3m, and 2m for cam01, cam02, and cam03, respectively. The primary configuration during data capture uses four front-facing 8MP 20-bit HDR sensors (AR0820) with 30-degree horizontal field of view lenses, mounted at a height of approximately 3m from the ground and distributed over a 2m baseline, as shown in Figure 7. During snow and rain captures, the sensors are mounted behind the windshield at a height of around 1.5m (approximately at the red arrows at the bottom of Figure 7). In all cases, the cameras are mounted using a custom-made mounting plate to ensure that the cameras are attached rigidly and that there is no significant orientation difference between each pair. Calibration for the multi-baseline stereo was performed in two phases: lab-based offline intrinsic parameter estimation and on-site calibration using charts with clearly detectable patterns. Calibration captures were done while the vehicle was static and either in neutral or with the engine turned off to reduce any artifacts due to camera vibration and rolling shutter. Data capture was performed over multiple days to collect sufficient variety in weather, illumination, and scene. A total of 52 hours of data were collected, with capture scenes including downtown and highway driving, under varying illumination conditions, including noon with the sun directly above, dusk with the sun near the horizon (direct light on the sensor), and night. Moreover, data were collected covering clear, rainy, and snowy weather conditions. We will release a subset of 2 hours of driving data, evenly distributed over the different conditions, including data captured during day, night, dusk, day+rain, day+snow, night+rain, and night+snow, in both downtown and highway traffic conditions." }, { "figure_ref": [], "heading": "KITTI 2D tracking dataset [20] consists of 21 training", "publication_ref": [ "b42", "b9", "b50" ], "table_ref": [], "text": "and 29 test videos collected at 10 FPS with sensors mounted on top of a driving car.
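Returning to the stereo pseudo-labels, the left-right disparity consistency check of Equations (6)-(7) can be sketched as below. The grid_sample-based warping, the sign convention for disparity, and the threshold value are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F


def warp_disparity_to_right(d_left: torch.Tensor, d_right: torch.Tensor) -> torch.Tensor:
    """Bi-linearly warp the left disparity map into the right view using the
    right disparity (Eq. 6). Inputs are (1, 1, H, W) tensors, disparities in pixels."""
    _, _, h, w = d_right.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    # Assumed convention: a pixel x in the right view maps to x + d_right(x) in the left view;
    # flip the sign if the disparity is defined in the opposite direction.
    xs_src = xs.float() + d_right[0, 0]
    grid = torch.stack(
        (2.0 * xs_src / (w - 1) - 1.0, 2.0 * ys.float() / (h - 1) - 1.0), dim=-1
    ).unsqueeze(0)                                   # normalized sampling grid in [-1, 1]
    return F.grid_sample(d_left, grid, mode="bilinear", align_corners=True)


def occlusion_mask_right(d_left, d_right, tau_occ: float = 3.0) -> torch.Tensor:
    """OM_r = 1 where |warped(D_l) - D_r| >= tau_occ (Eq. 7); tau_occ value is illustrative."""
    d_warped = warp_disparity_to_right(d_left, d_right)
    return (torch.abs(d_warped - d_right) >= tau_occ).float()
```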
We finetune our model on the train set and evaluate it on the test set using the detections obtained from PermaTrack [51].\nWaymo dataset [48] is a large-scale corpus consisting of 798 training and 202 validation sequences each with a duration of 20 seconds at 10 FPS. We use the data captured by the front camera for fine-tuning and evaluation. nuScenes [10] includes 700 training videos which are annotated for 3D MOT at 2 FPS. Due to lower annotation frequency, this dataset has a larger appearance change compared to the KITTI and Waymo datasets. We extract the 2D tracking labels from the 3D annotations using the scripts provided by the dataset authors and use 70 percent of the data for finetuning and 30 percent for validation.\nArgoverse dataset [59] also provides data for 3D tracking with a training set of 65 and a validation set of 24 videos. Using the script provided by this dataset, we extracted the 2D tracking labels at 5 FPS. We finetune on the training data and report the performance on the validation set." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b35", "b35" ], "table_ref": [], "text": "While a large set of metrics has been proposed to evaluate MOT [4,38,44], some existing metrics, including MOTA, are biased towards detection accuracy, hence not indicative in the context of evaluating association which is the focus of our work. The most relevant metrics for measuring association performance are the Association Accuracy, Precision, Recall (AssA, AssPr, AssRe) [38], and the IDF1 score [44]. For completeness, we report the conventional metrics from the KITTI tracking benchmark, including Detection Accuracy, Precision, Recall (DetA, DetPr, DetRe), and the HOTA score, which combines the detection and association performance into a single number [38]." }, { "figure_ref": [ "fig_5" ], "heading": "Experimental Tracking Results", "publication_ref": [ "b11", "b0", "b11", "b13", "b42", "b11", "b9", "b50", "b11" ], "table_ref": [ "tab_0", "tab_1", "tab_1" ], "text": "We evaluate our method on the four autonomous driving benchmarks discussed above. On KITTI [20], we compare our approach to existing supervised baselines and four unsupervised methods. Like our work, the unsupervised methods do not use video-level association annotations and assume the availability of detection bounding boxes, while the supervised methods are trained using track labels. [5,12] utilize variants of Kalman filtering for modeling the object motion, while the tracker in [7] works purely based on IoU information. In [1], the authors use a motion model based on bounding box information and an appearance model, and the self-supervised objective aims to enforce consistency between the motion and the appearance model outputs. These methods require small object motion to work well, an assumption often violated in driving scenarios, especially with low capture frame rate.\nTable 1 reports tracking evaluations for the 'Car' class on the KITTI [20] test set. Together with OC SORT [12], our method outperforms other unsupervised baselines and even multiple supervised methods such as CenterTrack [69], Ea-gerMOT [30], and DEFT [14] -which have access to track labels -and achieves comparable results with Perma-Track [51] without using any video-level association labels. We highlight that [12] is a purely motion-based approach and can be complementary to our proposed appearancebased method. 
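For readers who want to see how the learned pairwise associations can be chained into track IDs at test time, the sketch below combines appearance and IoU costs as in Equation (11) and assigns detections to existing tracks with the Hungarian algorithm, reusing the iou_matrix helper from the pseudo-label sketch above. The cost gating (max_cost) and the track-management wrapper are plausible assumptions and are not specified by the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate_frame(track_embs, track_boxes, det_embs, det_boxes, sigma=0.7, max_cost=0.8):
    """One inference step: match current detections to existing tracks.

    sigma = 0.7 follows the paper's setting for higher frame rates (sigma = 1 at low rates);
    max_cost gating and new-track spawning are illustrative additions."""
    emb_t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    emb_d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    c_sim = 1.0 - emb_t @ emb_d.T                        # appearance cost
    c_iou = 1.0 - iou_matrix(track_boxes, det_boxes)     # location prior
    cost = sigma * c_sim + (1.0 - sigma) * c_iou         # Eq. (11)

    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    matched_cols = {c for _, c in matches}
    unmatched_dets = [c for c in range(len(det_boxes)) if c not in matched_cols]
    return matches, unmatched_dets   # unmatched detections would start new track IDs
```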
In Figure 6, qualitative examples show that S 3 Track outperforms PermaTrack in complex occlusion scenarios.
In Table 2, we report MOT evaluations for the 'Car' category on several recent automotive datasets, namely Waymo [48], nuScenes [10], and Argoverse [59]. The results for the other baselines in Table 2 are obtained using the published code from the respective authors. The evaluations validate that our S 3 Track performs well across all datasets and scales well to larger datasets with varying data characteristics such as weather conditions and frame rate. This is in contrast with motion-based models [5,7,12], where the performance considerably drops at lower frame rates." }, { "figure_ref": [], "heading": "Ablation Experiments", "publication_ref": [ "b27", "b27" ], "table_ref": [ "tab_2", "tab_3", "tab_5", "tab_6" ], "text": "We conduct ablation experiments that validate the effectiveness of different components of our method. For all experiments, we train on the proposed wide-baseline stereo driving data and evaluate on the (now unseen) KITTI training set, where the detections are available. Table 3 shows the contribution of the main components of S 3 Track. In the first row, we assess the importance of the RoI pooling mechanism for extracting context-aware object features by first cropping the object patches and then extracting the features using a ResNet50 network. Next, we evaluate the effect of the RoI enhancer module. In this experiment, we directly perform average pooling on the extracted RoI features to obtain the final embeddings ($x_i \in \mathbb{R}^{256}$). In the third ablation experiment, we inspect the role of the soft assignment, which enables the end-to-end training of our model. Here, we compute the embeddings similarly to the previous experiment, without further training, and use pretrained object detection weights on the COCO dataset [36]. Impact of the Distance Function. For our experiments, we use the cosine distance as the measure of closeness between object embeddings. In Table 4, we provide experimental results when training the model with an alternative $\ell_2$ distance and when using a matching network for predicting the similarity score (instead of using a pre-defined similarity/distance function), as sketched below. The architecture of the matching network is an MLP consisting of 3 linear layers with 1024, 256, and 1 output channels, respectively (the output of the last layer is the similarity score). We use a ReLU non-linearity between the linear layers. The input to the matching network is the concatenation of different object-pair embeddings; this network is expected to learn the function measuring the embedding similarity. We observe that using the learnable function in the matching network underperforms the $\ell_2$ and cosine distance functions. In Table 6, we study the impact of frame rate when using temporal pre-training cues. We observe that a high frame rate achieves sub-optimal performance as there is not enough change in object appearance. A very low frame rate also decreases the accuracy due to extreme appearance changes which differ from the testing data. These findings also transfer to the stereo configuration. In Table 7, we study the effect of the pre-training step on the final association performance on the KITTI [20] test set. In S 3 Track + , we first pre-train the model on our driving dataset and then finetune it on the KITTI training set. In S 3 Track -, we initialize the model with pre-trained weights on the COCO dataset [36] and directly train the model on the KITTI training set.
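The three cost constructions compared in the distance-function ablation can be written as interchangeable functions, as in the sketch below. The MLP layer sizes follow the ablation description; turning the learned similarity score into a cost by negation is an assumption for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def cost_cosine(x_r, x_t):
    """Default: cosine-distance cost between embeddings (as in Eq. 3)."""
    return 1.0 - F.normalize(x_r, dim=1) @ F.normalize(x_t, dim=1).T


def cost_l2(x_r, x_t):
    """Alternative: Euclidean (l2) distance between embeddings."""
    return torch.cdist(x_r, x_t, p=2)


class MatchingNet(nn.Module):
    """Alternative: learned similarity on concatenated object-pair embeddings
    (3 linear layers with 1024, 256, and 1 output channels, ReLU in between)."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 1024), nn.ReLU(),
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, x_r, x_t):
        n, m = x_r.shape[0], x_t.shape[0]
        pairs = torch.cat(
            (x_r[:, None, :].expand(n, m, -1), x_t[None, :, :].expand(n, m, -1)), dim=-1)
        sim = self.mlp(pairs).squeeze(-1)   # predicted similarity score per object pair
        return -sim                          # negate so higher similarity gives lower cost
```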
" }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1", "fig_1", "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "Additional Qualitative Evaluations on Different Autonomous Driving Datasets", "publication_ref": [ "b9", "b50", "b42", "b9", "b50" ], "table_ref": [], "text": "Next, we present and qualitative results on the KITTI [20] (Figure 10), Waymo [48] (Figure 12), nuScenes [10] (Figure 11), and Argoverse [59] (Figure 13) datasets. For the visualizations on the KITTI [20] test set, we use the detections obtained from PermaTrack [51]. For visual examples on the Waymo [48], nuScenes [10], and Argoverse [59] validation splits, we use the ground-truth detections available in the respective dataset. On nuScenes, we compare our results with CenterTrack [69]. The results for Center-Track [69] are generated using the code base and the trained weights published by the authors.\nWe illustrate challenging tracking scenarios in the qualitative results. In Figure 10, we see that our model is robust to missing detections (first example) and occlusion (examples 2,3) while PermaTrack undergoes multiple fragmentations and ID switches. Indeed across all datasets, we observe robustness to occlusions with stable track IDs, see nuScenes (Figure 11 samples 1 to 4), Waymo (Figure 12 rows 1,2,3,4,5), and Argoverse (Figure 13 rows 1 to 4). The experiments in Figure 12 vaidate that our model generalizes well to different weather and lighting conditions (rows 2,6,7,8), and can properly track smaller objects even when occluded (6th row)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose S 3 Track -a self-supervised method for multiple object tracking that operates without any video-level track labels aiming at alleviating the expensive process of data annotation for MOT. With object bounding boxes from an accurate object detector in hand, our model performs MOT by learning the object associations over the video frames. To this end, we propose a soft differentiable assignment approach, which allows us to train our model end-to-end using the association pseudo-labels acquired from motion information in temporal and multi-view video data. The differentiable assignment makes it possible to learn context-aware object features that are specialized for the association step. We validate our method on four autonomous driving benchmarks and demonstrate favorable performance across different datasets achieving on-par or better performance than other unsupervised methods. Future directions include jointly learning association and motion trajectory and exploring memory-based approaches for merging object appearance over multiple frames. " }, { "figure_ref": [], "heading": "Time", "publication_ref": [ "b12", "b50" ], "table_ref": [], "text": "Figure 13. Additional qualitative evaluation of S 3 Track on Argoverse [59]. The proposed method can successfully perform tracking in occluded scenes. " } ]
In this work, we study self-supervised multiple object tracking without using any video-level association labels. We propose to cast the problem of multiple object tracking as learning the frame-wise associations between detections in consecutive frames. To this end, we propose differentiable soft object assignment for object association, making it possible to learn features tailored to object association with differentiable end-to-end training. With this training approach in hand, we develop an appearance-based model for learning instance-aware object features used to construct a cost matrix based on the pairwise distances between the object features. We train our model using temporal and multi-view data, where we obtain association pseudo-labels using optical flow and disparity information. Unlike most self-supervised tracking methods that rely on pretext tasks for learning the feature correspondences, our method is directly optimized for cross-object association in complex scenarios. As such, the proposed method offers a reidentification-based MOT approach that is robust to training hyperparameters and does not suffer from local minima, which are a challenge in self-supervised methods. We evaluate our proposed model on the KITTI, Waymo, nuScenes, and Argoverse datasets, consistently improving over other unsupervised methods (7.8% improvement in association accuracy on nuScenes).
S 3 Track: Self-supervised Tracking with Soft Assignment Flow
[ { "figure_caption": "1arXiv:2305.09981v1 [cs.CV] 17 May 2023 Time Occlusion + Fast Motion Rainy, Appearance Change", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 .1Figure1. We propose S3 Track, a self-supervised method for learning the object associations throughout a video by learning a robust appearance model. We use optimal transport for computing the soft object assignments, enabling end-to-end training of our model with association pseudo-labels. Our method shows strong performance in challenging scenarios such as occlusion and fast motion in the top row, severe weather conditions, and appearance change in the bottom row (see the objects pointed at with the white arrow). The track IDs are visualized by the bounding box color and the number inside. Data samples are from the nuScenes dataset[10] validation split using the provided detection bounding boxes.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure4. Object association pseudo-label generation process. We align the detection bounding boxes using motion information between a reference and a target frame. We compute a cost matrix based on the IoU between the aligned objects and employ the Hungarian algorithm to find the corresponding objects with maximum bounding box overlap.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Proposed occlusion masks for the stereo data. We generate OM l and OMr based on the consistency assumption that for non-occluded regions, the result of warping the left disparity D l to the right view should match Dr and vice versa.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Qualitative Tracking on KITTI [20]. We compare unsupervised S 3 Track and supervised PermaTrack [51] on unseen sequences.The track IDs are visualized with color coding and the unique number inside each bounding box. Our method shows robust performance under heavy occlusion (see zoom-ins on the occluded regions). In both scenes, S 3 Track correctly handles the heavy occlusion maintaining the track IDs, while PermaTrack[51] suffers from several ID switches and fragmentation.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 Figure 8 .78Figure 7. 4x AR0820 (8MP 20-bit HDR) cameras (top red arrows) with 30 FOV lens mounted on an 80×20 bar to create a large multi-view setup for data capture. We used two different vehicles (a) and (b) for data capture. For vehicle (a), data were captured at two different heights, 3m from ground and behind the windshield with a smaller baseline and about 1.5m from ground.", "figure_data": "", "figure_id": "fig_6", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Multi-view cues (cam0&1) used during pre-training. When using stereo data, we use disparity (D l ) to align the bounding boxes from the right image (Ir) to the left view (I l ). Additionally, occlusion masks OM l and OMr are utilized to discard objects that are less than 50% visible in one of the views. 
In the first two rows, we see the left and right RGB images with the association pseudo-labels visualized with bounding box color.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "S", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 . 1 [101Figure 10. Additional qualitative comparisons between S 3 Track and PermaTrack[51] on KITTI [20] test set. S 3 Track can properly track objects throughout occlusion and is robust to missing detections. In the first row, we see that PermaTrack suffers from an ID switch due to a missing detection, while our method preserves the correct object ID.", "figure_data": "", "figure_id": "fig_9", "figure_label": "101", "figure_type": "figure" }, { "figure_caption": "Figure 11 .Figure 12 .1112Figure 11. Additional qualitative results on nuScenes [10], where we compare S 3 Track and CenterTrack [69]. All three examples show robustness to occlusion and large orientation change of the vehicles crossing the intersection.", "figure_data": "", "figure_id": "fig_10", "figure_label": "1112", "figure_type": "figure" }, { "figure_caption": "Tracking Evaluation on the KITTI test set [20]. In bold, we only show the metrics relevant for measuring the association", "figure_data": "Unsup.SORT [5] IOU [7] UNS20regress [1]71.2 74.0 62.571.6 77.4 61.171.8 71.5 65.374.8 81.2 67.783.5 85.8 73.874.4 74.0 69.188.2 88.5 83.184.8 86.9 80.3OC SORT [12]76.577.376.480.686.480.387.287.0S 3 Track (ours)76.677.576.581.385.979.688.486.9FAMNet [15]52.661.045.564.478.748.777.481.5CenterTrack [69]73.075.671.280.184.673.889.086.5SupervisedmmMOT [67] LGM [54] EagerMOT [30] DEFT [14]62.1 73.1 74.4 74.272.3 74.6 75.3 75.354.0 72.3 74.2 73.876.2 80.5 78.8 80.084.9 82.1 86.4 84.059.0 76.4 76.2 78.382.4 84.7 91.1 85.286.6 85.9 87.2 86.1PermaTrack [51]78.078.378.481.786.581.189.587.1RAM [50]79.578.880.982.586.384.288.787.1", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation on Waymo[48]", "figure_data": "Waymo [48]nuScenes [10]Argoverse [59]MethodAssA IDF1 AssRe AssPr AssA IDF1 AssRe AssPr AssA IDF1 AssRe AssPrSORT [5]62.271.963.993.156.566.159.282.163.275.563.993.1IOU [7]72.179.473.294.560.871.569.372.670.180.273.294.5OC SORT [12]72.479.474.193.565.672.371.281.574.182.874.193.5S 3 Track (ours)77.883.778.597.773.481.979.087.777.883.778.593.5", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation Experiments. We confirm the effectiveness of the RoI-based feature extraction, RoI enhancing module, and soft object assignment with optimal transport. To quantify the relevance of each component, we run an experiment without each component and report the change in the association performance.", "figure_data": "MethodAssA AssRe AssPr IDF1S 3 Track96.197.597.797.6w/o RoI Pooling92.094.295.894.6w/o RoI Enhancer92.995.196.295.3w/o soft assignment83.588.290.189.1", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation experiments evaluating the choice of the distance function. An embedding distance using 2 and cosine distance performs better than using a matching network (MLP) for learning the object similarity score. The experiments are conducted using temporal data at 5 FPS.Influence of Pre-training Cues. We investigate the influence of different training cues on the proposed method using the unseen KITTI [20] training set. 
Table5assesses the method when training with different data types including temporal video data, stereo data, and a combination of both. To evaluate the influence of stereo cues, we tested with data from the three different camera pairs with varying baseline sizes, see the previous paragraph. Training with data from cam0&1 achieves better accuracy. The larger baseline in cam0&2 and cam0&3 results in a higher object appearance shift between the two views, which, in this case, is detrimental to the accuracy due to the domain gap with the KITTI data used for evaluation. Moreover, we find that the combination of temporal and stereo data is beneficial for training a better appearance model.", "figure_data": "Distance Function AssA AssRe AssPr IDF1Cosine94.896.596.896.9294.496.296.696.7MLP90.792.495.793.9", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation experiments for temporal and stereo pre-training cues. Here, the temporal pre-training data is sampled at 5 FPS.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation experiments that investigate the impact of frame rate for temporal pre-training data.", "figure_data": "FPS AssA AssRe AssPr IDF11583.285.394.988.5594.896.596.896.9193.494.897.395.7", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Ablation on the impact of pre-training on our driving dataset, evaluating on KITTI [20] test set.", "figure_data": "MethodPre-training HOTA AssA AssRe AssPrS 3 Track +Yes76.676.579.688.4S 3 Track -No75.273.976.388.0", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "3 ", "figure_data": "t 0 t 0 rack S 3 Track PermaTrack Zoom-in S Track t 0 0 t 1 S 3 t 1 t 1 Track PermaTrack Track PermaTrack Zoom-in t 0 S 3 Track PermaTrack t 0 Zoom-in PermaTrack Zoom-in Track rack t 2 Zoom-in 0 t 1 S 3 Track t 0 Zoom-in t 1 0 t 1 S Track aTrack t 1 PermaTrackt 1 t 1 S 3 Track S 3 Track t 0 t 2 t 0 Track t 1 S 3 Track t 1 S 3 Track S 3 PermaTrack t 1 t 2 t 1 PermaTrack t 2 S 3 Track t 1 t 1 S 3 Track t 0 t 1 S 3 Track PermaTrack t 0 t 1 t 2 t 1 S 3 Track Zoom-in S 3 Track PermaTrack t 0 t 1 Track t 2 S 3 Track S 3 PermaTrack PermaTrack t 2 t 2 PermaTrack PermaTrack S 3 Track t 1 t 2 S 3 Track PermaTrack t 1 t 2 PermaTrack t 2 PermaTrack t 2 t 2 PermaTrack t 1 S 3 Track t 2 t PermaTrack PermaTrack t 1 t 2 t t 1 S 3 Track PermaTrack t 2 PermaT t 1 t 2 PermaTrack t 1 t 2 t 2 PermaTrack t 2 Zoom-in S 3 Track S 3 Track PermaTrack", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" } ]
Fatemeh Azimi; Fahim Mannan; Felix Heide
[ { "authors": "Favyen Bastani; Songtao He; Samuel Madden", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Selfsupervised multi-object tracking with cross-input consistency", "year": "2021" }, { "authors": "Philipp Bergmann; Tim Meinhardt; Laura Leal-Taixé", "journal": "", "ref_id": "b1", "title": "Tracking without bells and whistles", "year": "2019-10" }, { "authors": "Philipp Bergmann; Tim Meinhardt; Laura Leal-Taixe", "journal": "", "ref_id": "b2", "title": "Tracking without bells and whistles", "year": "2019" }, { "authors": "Keni Bernardin; Rainer Stiefelhagen", "journal": "EURASIP Journal on Image and Video Processing", "ref_id": "b3", "title": "Evaluating multiple object tracking performance: the clear mot metrics", "year": "2008" }, { "authors": "Alex Bewley; Zongyuan Ge; Lionel Ott; Fabio Ramos; Ben Upcroft", "journal": "IEEE", "ref_id": "b4", "title": "Simple online and realtime tracking", "year": "2016" }, { "authors": "Zhangxing Bian; Allan Jabri; Alexei A Efros; Andrew Owens", "journal": "", "ref_id": "b5", "title": "Learning pixel trajectories with multiscale contrastive random walks", "year": "2022" }, { "authors": "Erik Bochinski; Thomas Volker Eiselein; Sikora", "journal": "IEEE", "ref_id": "b6", "title": "Highspeed tracking-by-detection without using image information", "year": "2017" }, { "authors": "Guillem Brasó; Laura Leal-Taixé", "journal": "", "ref_id": "b7", "title": "Learning a neural solver for multiple object tracking", "year": "2020" }, { "authors": "Fabian Michael D Breitenstein; Bastian Reichlin; Esther Leibe; Luc Koller-Meier; Van Gool", "journal": "IEEE", "ref_id": "b8", "title": "Robust tracking-bydetection using a detector confidence particle filter", "year": "2009" }, { "authors": "Holger Caesar; Varun Bankiti; Alex H Lang; Sourabh Vora; Venice Erin Liong; Qiang Xu; Anush Krishnan; Yu Pan; Giancarlo Baldan; Oscar Beijbom", "journal": "", "ref_id": "b9", "title": "nuscenes: A multimodal dataset for autonomous driving", "year": "2020" }, { "authors": "Jiarui Cai; Mingze Xu; Wei Li; Yuanjun Xiong; Wei Xia; Zhuowen Tu; Stefano Soatto", "journal": "", "ref_id": "b10", "title": "Memot: Multi-object tracking with memory", "year": "2022" }, { "authors": "Jinkun Cao; Xinshuo Weng; Rawal Khirodkar; Jiangmiao Pang; Kris Kitani", "journal": "", "ref_id": "b11", "title": "Observation-centric sort: Rethinking sort for robust multi-object tracking", "year": "2022" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b12", "title": "End-toend object detection with transformers", "year": "2020" }, { "authors": "Mohamed Chaabane; Peter Zhang; Ross Beveridge; Stephen O' Hara", "journal": "", "ref_id": "b13", "title": "Deft: Detection embeddings for tracking", "year": "2021" }, { "authors": "Peng Chu; Haibin Ling", "journal": "", "ref_id": "b14", "title": "Famnet: Joint learning of feature, affinity and multi-dimensional assignment for online multiple object tracking", "year": "2019" }, { "authors": "Gioele Ciaparrone; Francisco Luque Sánchez; Siham Tabik; Luigi Troiano; Roberto Tagliaferri; Francisco Herrera", "journal": "Neurocomputing", "ref_id": "b15", "title": "Deep learning in video multi-object tracking: A survey", "year": "2020" }, { "authors": "Marco Cuturi", "journal": "Advances in neural information processing systems", "ref_id": "b16", "title": "Sinkhorn distances: Lightspeed computation of optimal transport", 
"year": "2013" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b17", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Zhihong Fu; Qingjie Liu; Zehua Fu; Yunhong Wang", "journal": "", "ref_id": "b18", "title": "Stmtrack: Template-free visual tracking with space-time memory networks", "year": "2021" }, { "authors": "Andreas Geiger; Philip Lenz; Raquel Urtasun", "journal": "", "ref_id": "b19", "title": "Are we ready for autonomous driving? the kitti vision benchmark suite", "year": "2012" }, { "authors": "Song Guo; Jingya Wang; Xinchao Wang; Dacheng Tao", "journal": "", "ref_id": "b20", "title": "Online multiple object tracking with cross-task synergy", "year": "2021" }, { "authors": "Jiawei He; Zehao Huang; Naiyan Wang; Zhaoxiang Zhang", "journal": "", "ref_id": "b21", "title": "Learnable graph matching: Incorporating graph partitioning with deep feature learning for multiple object tracking", "year": "" }, { "authors": "Zihang Lai; Erika Lu; Weidi Xie", "journal": "", "ref_id": "b22", "title": "Mast: A memoryaugmented self-supervised tracker", "year": "2020" }, { "authors": "Z Lai; W Xie", "journal": "", "ref_id": "b23", "title": "Self-supervised learning for video correspondence flow", "year": "2019" }, { "authors": "Jiahe Li; Xu Gao; Tingting Jiang", "journal": "", "ref_id": "b24", "title": "Graph networks for multiple object tracking", "year": "2020" }, { "authors": "Shuai Li; Yu Kong; Hamid Rezatofighi", "journal": "", "ref_id": "b25", "title": "Learning of global objective for network flow in multi-object tracking", "year": "2022" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b26", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b27", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b28", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Jonathon Luiten; Aljosa Osep; Patrick Dendorfer; Philip Torr; Andreas Geiger; Laura Leal-Taixé; Bastian Leibe", "journal": "International journal of computer vision", "ref_id": "b29", "title": "Hota: A higher order metric for evaluating multiobject tracking", "year": "2021" }, { "authors": "Fan Ma; Mike Zheng Shou; Linchao Zhu; Haoqi Fan; Yilei Xu; Yi Yang; Zhicheng Yan", "journal": "", "ref_id": "b30", "title": "Unified transformer tracker for object tracking", "year": "2022" }, { "authors": "Tim Meinhardt; Alexander Kirillov; Laura Leal-Taixe; Christoph Feichtenhofer", "journal": "", "ref_id": "b31", "title": "Trackformer: Multi-object tracking with transformers", "year": "2022" }, { "authors": "James Munkres", "journal": "Journal of the society for industrial and applied mathematics", "ref_id": "b32", "title": "Algorithms for the assignment and transportation problems", "year": "1957" }, { "authors": "Lionel Rakai; Huansheng Song; Shijie Sun; Wentao Zhang; Yanni Yang", "journal": "Expert Systems with Applications", 
"ref_id": "b33", "title": "Data association in multiple object tracking: A survey of recent techniques", "year": "2021" }, { "authors": "Kaiming Shaoqing Ren; Ross He; Jian Girshick; Sun", "journal": "Advances in neural information processing systems", "ref_id": "b34", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "Ergys Ristani; Francesco Solera; Roger Zou; Rita Cucchiara; Carlo Tomasi", "journal": "Springer", "ref_id": "b35", "title": "Performance measures and a data set for multi-target, multi-camera tracking", "year": "2016" }, { "authors": "Paul-Edouard Sarlin; Daniel Detone; Tomasz Malisiewicz; Andrew Rabinovich", "journal": "", "ref_id": "b36", "title": "Superglue: Learning feature matching with graph neural networks", "year": "2020" }, { "authors": "Richard Sinkhorn; Paul Knopp", "journal": "Pacific Journal of Mathematics", "ref_id": "b37", "title": "Concerning nonnegative matrices and doubly stochastic matrices", "year": "1967" }, { "authors": "Peize Sun; Jinkun Cao; Yi Jiang; Rufeng Zhang; Enze Xie; Zehuan Yuan; Changhu Wang; Ping Luo", "journal": "", "ref_id": "b38", "title": "Transtrack: Multiple object tracking with transformer", "year": "2020" }, { "authors": "Pei Sun; Henrik Kretzschmar; Xerxes Dotiwalla; Aurelien Chouard; Vijaysai Patnaik; Paul Tsui; James Guo; Yin Zhou; Yuning Chai; Benjamin Caine", "journal": "", "ref_id": "b39", "title": "Scalability in perception for autonomous driving: Waymo open dataset", "year": "2020" }, { "authors": "Zachary Teed; Jia Deng", "journal": "Springer", "ref_id": "b40", "title": "Raft: Recurrent all-pairs field transforms for optical flow", "year": "2020" }, { "authors": "Pavel Tokmakov; Allan Jabri; Jie Li; Adrien Gaidon", "journal": "", "ref_id": "b41", "title": "Object permanence emerges in a random walk along memory", "year": "2022" }, { "authors": "Pavel Tokmakov; Jie Li; Wolfram Burgard; Adrien Gaidon", "journal": "", "ref_id": "b42", "title": "Learning to track with object permanence", "year": "2021" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b43", "title": "Attention is all you need", "year": "2017" }, { "authors": "Carl Vondrick; Abhinav Shrivastava; Alireza Fathi; Sergio Guadarrama; Kevin Murphy", "journal": "", "ref_id": "b44", "title": "Tracking emerges by colorizing videos", "year": "2018" }, { "authors": "Gaoang Wang; Renshu Gu; Zuozhu Liu; Weijie Hu; Mingli Song; Jenq-Neng Hwang", "journal": "", "ref_id": "b45", "title": "Track without appearance: Learn box and tracklet embedding with local and global motion patterns for vehicle tracking", "year": "2021" }, { "authors": "Gaoang Wang; Mingli Song; Jenq-Neng Hwang", "journal": "", "ref_id": "b46", "title": "Recent advances in embedding methods for multi-object tracking: A survey", "year": "2022" }, { "authors": "Jianren Wang; Siddharth Ancha; Yi-Ting Chen; David Held", "journal": "IEEE", "ref_id": "b47", "title": "Uncertainty-aware self-supervised 3d data association", "year": "2020" }, { "authors": "Xiaolong Wang; Allan Jabri; Alexei A Efros", "journal": "", "ref_id": "b48", "title": "Learning correspondence from the cycle-consistency of time", "year": "2019" }, { "authors": "Yongxin Wang; Kris Kitani; Xinshuo Weng", "journal": "IEEE", "ref_id": "b49", "title": "Joint object detection and multi-object tracking with graph neural 
networks", "year": "2021" }, { "authors": "Benjamin Wilson; William Qi; Tanmay Agarwal; John Lambert; Jagjeet Singh; Siddhesh Khandelwal; Ratnesh Bowen Pan; Andrew Kumar; Jhony Hartnett; Deva Kaesemodel Pontes; Peter Ramanan; James Carr; Hays", "journal": "", "ref_id": "b50", "title": "Argoverse 2: Next generation datasets for self-driving perception and forecasting", "year": "2021" }, { "authors": "Nicolai Wojke; Alex Bewley", "journal": "IEEE", "ref_id": "b51", "title": "Deep cosine metric learning for person re-identification", "year": "2018" }, { "authors": "Nicolai Wojke; Alex Bewley; Dietrich Paulus", "journal": "IEEE", "ref_id": "b52", "title": "Simple online and realtime tracking with a deep association metric", "year": "2017" }, { "authors": "Charig Yang; Hala Lamdouar; Erika Lu; Andrew Zisserman; Weidi Xie", "journal": "", "ref_id": "b53", "title": "Self-supervised video object segmentation by motion grouping", "year": "2021" }, { "authors": "Gengshan Yang; Joshua Manela; Michael Happold; Deva Ramanan", "journal": "", "ref_id": "b54", "title": "Hierarchical deep stereo matching on highresolution images", "year": "2019" }, { "authors": "Tianyu Yang; Antoni B Chan", "journal": "", "ref_id": "b55", "title": "Learning dynamic memory networks for object tracking", "year": "2018" }, { "authors": "Syed Sahil; Abbas Zaidi; Mohammad Samar Ansari; Asra Aslam; Nadia Kanwal; Mamoona Asghar; Brian Lee", "journal": "Digital Signal Processing", "ref_id": "b56", "title": "A survey of modern deep learning based object detection models", "year": "2022" }, { "authors": "Fangao Zeng; Bin Dong; Yuang Zhang; Tiancai Wang; Xiangyu Zhang; Yichen Wei", "journal": "", "ref_id": "b57", "title": "Motr: End-to-end multipleobject tracking with transformer", "year": "2022" }, { "authors": "Wenwei Zhang; Hui Zhou; Shuyang Sun; Zhe Wang; Jianping Shi; Chen Change Loy", "journal": "", "ref_id": "b58", "title": "Robust multi-modality multi-object tracking", "year": "2019" }, { "authors": "Zelin Zhao; Ze Wu; Yueqing Zhuang; Boxun Li; Jiaya Jia", "journal": "Springer", "ref_id": "b59", "title": "Tracking objects as pixel-wise distributions", "year": "2022" }, { "authors": "Xingyi Zhou; Vladlen Koltun; Philipp Krähenbühl", "journal": "Springer", "ref_id": "b60", "title": "Tracking objects as points", "year": "2020" }, { "authors": "Xingyi Zhou; Tianwei Yin; Vladlen Koltun; Philipp Krähenbühl", "journal": "", "ref_id": "b61", "title": "Global tracking transformers", "year": "2022" }, { "authors": "Xizhou Zhu; Weijie Su; Lewei Lu; Bin Li; Xiaogang Wang; Jifeng Dai", "journal": "", "ref_id": "b62", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 308.86, 74.45, 217.46, 264.97 ], "formula_id": "formula_0", "formula_text": "t 0 t 1 a) t 0 t 1 * ✓ ✓ ✗ ✓ ✓ ✗ ✗ b) t 0 t 1 * ✓ ✓ ✓ ✓ ✓ ✓ ✓ Figure 2" }, { "formula_coordinates": [ 4, 50.11, 76.7, 460.84, 163.42 ], "formula_id": "formula_1", "formula_text": "Backbone I r , B r I t , B t f r f t RoI Pooling RoI Enhancer Module MLP f r,RoI : N × 21 × 21 × 256 f t,RoI : M × 21 × 21 × 256 C sim x ∈ R 256 N M itr = 1 itr = T Row Normalization Col Normalization L train Figure 3" }, { "formula_coordinates": [ 4, 368.64, 314.8, 176.47, 16.06 ], "formula_id": "formula_2", "formula_text": "d C (a, b) := min P ∈U (a,b) P, C ,(1)" }, { "formula_coordinates": [ 4, 340.27, 356.64, 204.84, 18.44 ], "formula_id": "formula_3", "formula_text": "U (a, b) := {P ∈ R + | P 1 = a, P T 1 = b}.(2)" }, { "formula_coordinates": [ 4, 356.65, 482.92, 188.46, 23.89 ], "formula_id": "formula_4", "formula_text": "C sim,ij = 1 - x 1,i x 1,i , x 2,j x 2,j ,(3)" }, { "formula_coordinates": [ 5, 133.4, 373.81, 152.96, 12.69 ], "formula_id": "formula_5", "formula_text": "r,i = b r,i + M t bi,r ,(4)" }, { "formula_coordinates": [ 5, 113.34, 475.1, 109.79, 18.44 ], "formula_id": "formula_6", "formula_text": "C IoU ij = 1 -IoU(b r,i , b t,j )." }, { "formula_coordinates": [ 5, 392.94, 333.05, 152.17, 19.55 ], "formula_id": "formula_7", "formula_text": "Dr = W(D l , D r )(6)" }, { "formula_coordinates": [ 5, 336.15, 358.08, 208.97, 24.56 ], "formula_id": "formula_8", "formula_text": "∆ r = | Dr -D r |, OM r = 1 ∆ r ≥ τ occ 0 otherwise ,(7)" }, { "formula_coordinates": [ 6, 110.36, 220.64, 176, 21.44 ], "formula_id": "formula_9", "formula_text": "L N LL = - (i,j)∈A log(P i,j ),(8)" }, { "formula_coordinates": [ 6, 56.52, 347.64, 229.85, 17.29 ], "formula_id": "formula_10", "formula_text": "L trip (a, p, n) = max{d(a i , p i ) -d(a i , n i ) + m, 0},(9)" }, { "formula_coordinates": [ 6, 108.6, 439.69, 177.76, 17.29 ], "formula_id": "formula_11", "formula_text": "L train = α L trip + β L N LL ,(10)" }, { "formula_coordinates": [ 6, 363.55, 130.85, 181.56, 17.29 ], "formula_id": "formula_12", "formula_text": "C inf = σC sim + (1 -σ)C IoU .(11)" } ]
2023-05-17
[ { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Inroduction", "publication_ref": [ "b2", "b50", "b49", "b51", "b73", "b70", "b48", "b37", "b35", "b26", "b80", "b73", "b79", "b26", "b77", "b48", "b50", "b49", "b74", "b51", "b69", "b4", "b20", "b40", "b42", "b52", "b18", "b61", "b66", "b82" ], "table_ref": [], "text": "Unmanned system enjoys the merits and presents conduciveness to applications in both military and civilian areas, e.g., unmanned aerial vehicle (UAV), unmanned combat vehicle (UCV), and autonomous driving [3]. However, singlemodality vision technologies equipped by some unmanned systems struggle to cope with challenging scenarios in the wild, resulting in the failure to find hidden objects and low-precise localization. To this end, multimodal sensors are introduced, among which infrared sensors are the most widely-used. Infrared sensors image by thermal radiation emitted from objects and have properties of anti-interference and anti-occlusion, so infrared images can highlight salient objects. In contrast, visible images captured by RGB sensors embrace rich textures and details due to reflective light information. Therefore, infrared and visible image fusion (IVIF) that extracts the complementary information from them to generate a fused image, enables hard-to-find objects apparent and benefits object localization, which is promising to unmanned systems.\nPrevailing IVIF methods [51,50,52,74,71,49,38,36,27,81] perform extremely well on both highlighting salient objects and manifesting abundant textures. Typically, those fusion methods [74,80,27,78] based on the proportional preservation of texture and intensity of source images, spatial-guided methods [49] from annotated salient object masks, and generative adversarial network (GAN)based methods [51,50,75,52] are included. However, these methods are one-sided efforts to obtain fusion images with better visual quality, and seldom discuss the adaptability and linkage with downstream high-level vision tasks in practical applications. Coupled with existing IVIF methods struggle to transmit precise semantic information into downstream high-level tasks, resulting in a severe performance decline in downstream applications (see Figure 1(a)). Meanwhile, there are some low-level vision methods [70,5,21,41,43,53] on which high-level semantic information acts. But, they only roughly embed semantic probability maps as a condition into some specific layers, rather than an adaptive bridge between two tasks.\nMotivated by the above discussion, we tend to design a joint paradigm to bridge infrared and visible image fusion with a downstream visual task, acting as an adapter. Considering the common characteristic between infraredvisible salient object detection (SOD) and fusion tasks that seek complementary cues from two source images to predict final results, we make a preliminary attempt to explore the collaborative relationship between them. There are three main obstacles to cut through: (i) designing a fusion network that can effectively transmit saliency-related semantic features to cater to saliency object detection. Because infraredvisible SOD [19,62,67,83] aims to extract and fuse hierarchically complementary cues from two source images, thus predicting an accurate binary location map for the most distinctive objects. This relies on semantic information such as the salient structures of objects. (ii) developing a seamless and efficient bridging manner to push the image fusion to play facilitation in the downstream SOD task. 
Typical multimodal SOD methods often adopt separate feature extractors for each source image and then aggregate the extracted modality-specific features through a complementary fusion module. If we follow this mode, it will inevitably cause a heavyweight model, unexploited modality-shared features, and complex feature aggregation. (iii) devising a collaborative learning strategy and making the two tasks tightly coupled and mutually reinforced. Most of the pioneer multitask methods either follow the low-to-high paradigm or the high-to-low paradigm, often unilaterally ascending one or the other.\nIn this work, we construct an interactively reinforced paradigm to bridge infrared and visible image fusion and saliency object detection, termed IRFS (see Figure 1(b)). Its overall framework consists of a bidirectional optimization stream between image fusion and SOD. For the low-tohigh stream, we specifically design a feature screening-based image fusion network termed FSFNet to screen out interfering features and maintain saliency-related and textureinformative features. And then, to bridge the two tasks in a more efficient way, the fused image generated by FSFNet is treated as the third modality to guide the downstream SOD. More specifically, we introduce a fusion-guided saliencyenhanced (FGSE) module to perform the reweighting and cross-complementation of source features guided by the fused image. We then embed the FGSE module into each scale of the shared backbone and build a Fusion-guided Cross-Complementary SOD network termed FGC 2 Net, used to keep the guidance effect of the fused image throughout the whole SOD process, thus realizing a seamless bridge between the two tasks. For the high-to-low stream, we use the labeled saliency map to supervise the saliency object detector and establish a semantic loss, which is then backpropagated to the fusion sub-network, thereby forcing the generation of fused images with rich semantic information. In addition, we also develop an interactive loop learning strategy to interactively optimize image fusion and SOD tasks, which finally achieves the optimal results of both with a shorter training period and fewer network parameters. Our major contributions are concluded as follows:\n• An interactively reinforced paradigm for joint infraredvisible image fusion and saliency object detection is constructed, in which the collaborative relationship between the two tasks is explored for the first time.\n• A feature screening-based image fusion network is proposed to highlight the saliency-related semantic features catering to SOD. Meanwhile, a fusion-guided saliency-enhanced module is introduced to transmit the guidance of the upstream fused results throughout the downstream SOD task. Thus, the seamless bridge between the two tasks is achieved.\n• We devise an interactive loop learning strategy to tightly couple the fusion and SOD tasks on infraredvisible images, attaching an optimal balance of them with the least training costs.\nExperiment results show that the proposed IRFS can seamlessly bridge infrared-visible image fusion and SOD tasks, and both of them benefit from each other and perform superior capabilities." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "Multimodal image fusion and multimodal salient object detection are the supporting technologies of this work. In this section, We survey adequately existing technologies and review their development." 
}, { "figure_ref": [], "heading": "Multimodal Image Fusion", "publication_ref": [], "table_ref": [], "text": "In the past few decades, multimodal image fusion has made significant progress, which roughly falls into two categories, i.e., traditional and deep learning-based methods." }, { "figure_ref": [], "heading": "Traditional image fusion methods", "publication_ref": [ "b3", "b24", "b5", "b11", "b3", "b72", "b68", "b14", "b45", "b76", "b1", "b44", "b22", "b0", "b5", "b54", "b31", "b56", "b7" ], "table_ref": [], "text": "Typical traditional methods contain multi-scale transform (MST)-based [4], sparse representation-based [25], subspace-based [6], and saliency-based [12] fusion methods.\nMST-based image fusion methods comprise three steps, i.e., the decomposition of multi-scale features, the fusion of multi-scale features, and the inversion of fused features. Chen et al. [4] followed the pipeline and leveraged laplacian pyramid transformation to conduct multi-scale feature decomposition. In addition to the laplacian pyramid, wavelet transform [73], contourlet transform [69], and edgepreserving filters [15] are also widely used in MST-based image fusion.\nSparse representation-based image fusion methods [46,77] usually rely on an over-complete dictionary learned from numerous high-quality natural images. Concretely, sparse representations are encoded from each source image through the learned over-complete dictionary, and then the encoded fused sparse coefficients are transformed into a fused image via this over-complete dictionary. Moreover, sparse coding patterns are also various. For instance, Bin et al. [2] proposed an approximate sparse representation using a multi-selection strategy to obtain sparse coefficients from source images. Liu et al. [45] utilized convolutional sparse representation to complete the acquisition of sparse coefficients.\nSubspace-based image fusion methods are to capture the native structures of the source image by mapping higherdimensional inputs into a lower-dimensional space or subspace, which involves different dimensionality reduction manners, e.g., Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Non-negative Matrix Factorization (NMF). For instance, Li et al. [23] leveraged PCA to fuse decomposed low-frequency images, while Bavirisetti et al. [1] adopted PCA to fuse high-frequency detail images. Cvejic et al. [6] proposed to segment source images into different regions and then obtain ICA coefficients for each region through a set of bases pretrained from natural images. Mou et al. [55] proposed to extract features from infrared and visible images by NMF, which is capable of preserving textures of visible images and high-contrast structures of infrared images while removing noise.\nSaliency-based image fusion methods are inspired by human visual attention, and they are conducive to highlighting salient objects in fused images. There are two types of saliency-based fusion methods, i.e., salient object extraction and weight calculation. The first type of methods, extracting the salient regions from source images and mapping them to the fused images, are capable of preserving dominant information, such as [32] and [57]. 
The other type of methods need to first obtain salient weight maps for the base and detail image layers, respectively, and then obtain base and detail images through the weighted combination between image layers with their weight maps, such as [8].\nAlthough visually favorable fused images are generated, these traditional image fusion methods are still inferior to deep learning-based methods." }, { "figure_ref": [], "heading": "Deep-learning based image fusion methods", "publication_ref": [ "b23", "b73", "b37", "b50", "b49", "b51", "b23", "b25", "b79", "b73", "b70", "b34", "b41", "b38", "b39", "b50", "b50", "b51", "b49", "b74", "b36", "b6", "b47", "b27", "b6", "b27", "b46", "b47", "b46" ], "table_ref": [], "text": "Deep learning-based methods are divided into four categories: autoencoder (AE)-based [24], deep CNN-based [74,38], generative adversarial network (GAN)-based [51,50,52], and transformer-based image fusion methods.\nAE-based image fusion methods follow a common Encoder-Decoder paradigm to accomplish the extraction of multimodal features and the reconstruction of the fused image. As a pioneer, Li et al. [24] proposed dense blocks as the fundamental components of the auto-encoder to perform feature extraction, then fuse extracted multimodal features via rough addition fusion and 1 -norm fusion rules. The fused features are reconstructed as a fused image by a plain decoder. Since successive downsampling operations in auto-encoders leads to loss of effective information, Li et al. [26] introduced residual connections into the Encoder-Decoder paradigm to alleviate this problem. To explore the interpretability of the auto-encoder, Zhao et al. [80] proposed to disentangle background and detail features by implementing two-scale decomposition and extracting lowand high-frequency features in an encoder.\nDeep CNN-based image fusion methods mainly focus on the design of a variety of network architectures and fusion strategies. Zhang et al. [74] introduced a dual-path dense network to learn intensity and gradient features from multimodal images separately, and then devised specific losses to maintain the balance of intensity and gradient information in fused images. Still following dense network structures, Xu et al. [71] combined a dense network and an information measurement in a unified framework, which is capable of estimating the importance of different image sources adaptively and can be used to solve various fusion problems. However, simple dense networks do not perform well in preserving high-quality structures and details. Subsequently, Liu et al. [35] applied a coarse-to-fine network structure to extract multi-scale features, which has a stronger feature representation ability than the plain dense network. Liu et al. also designed an edge-guided attention module to push the network to highlight prominent structures and preserve abundant details better. The above manual networks exhibit a lack of flexibility when faced with different types of image data. To this end, Liu et al. [42] first utilized Neural Architecture Search (NAS) methodology to build a hierarchically aggregated fusion architecture, which is more flexible and effective for different fusion demands. 
Some relevant follow-up studies, e.g., [39] and [40] applied NAS to search a Modality-orient network and a lightweight targetaware network for infrared and visible image fusion.\nGAN-based image fusion methods aim to build constraints from the perspective of a probability distribution, so as to achieve sharp targets and rich textures of fused images while reaching a balance of information transmission from source images. Ma et al. [51] first introduced the generative adversarial network into the field of image fusion, modeling this task in an adversarial manner between a generator and a discriminator. However, such single-discriminator models [51,52] force the generator to equally treat different modalities, resulting in over-smoothed fused images. Thus, several dual-discriminator image fusion networks are proposed, such as [50,75,37], which are capable of highlighting high-contrast structures and fine-grained textures.\nTransformer-based image fusion methods [7,48,28] have emerged in the past two years since the transformer embraces a global receptive field and is able to model longrange dependencies of neighboring pixels. Due to the fact that existing transformers neglect the local spatial correlation between pixels, Fu et al. [7] first proposed a Patch Pyramid Transformer (PPT) framework, in which the patch transformer is used to model local feature representations and pyramid transformer is used to model non-local feature representations. Subsequently, Li et al. [28] proposed a convolution-guided transformer aimed to first leverage a convolutional feature extractor to learn local features and then use them to guide the transformer-based feature extractor to capture long-range interdependencies of features. With the prevalence of Swin Transformer [47], Ma et al. [48] designed a unified multi-task fusion framework based on the shifted windows mechanism from [47] and self-and cross-attention mechanisms. Such methods perform well in preserving image structures and details.\nUnfortunately, the aforementioned methods pay onesided attention to the visual quality while failing to establish a connection with high-level vision tasks. Considering the comprehensive understanding ability of the unmanned system in the wild, it is imperative to exploit a joint framework for infrared-visible image fusion collaborated with the highlevel vision task." }, { "figure_ref": [], "heading": "Multimodal Salient Object Detection", "publication_ref": [ "b62", "b63", "b65", "b75", "b17", "b61", "b66", "b65", "b63", "b62", "b75", "b61", "b30" ], "table_ref": [], "text": "Recent years have witnessed great progress in thermal infrared and visible saliency object detection (SOD) with the popularity of thermal infrared sensors in the field of multimodal SOD, including traditional [63,64,66] and deep learning-based methods [76,18,62,67].\nTraditional thermal infrared and visible SOD methods comprise ranking-based and graph learning-based methods. Wang et al. [66] first applied a ranking algorithm to the multimodal SOD task and proposed a multi-task manifold ranking pattern. And, they built a thermal infrared and visible benchmark termed VT821. Graph learning-based SOD methods, representatively, Tu et al. [64] proposed a collaborative graph learning model, in which source images are segmented into superpixels as graph nodes and then learn graph affinity and node saliency. Subsequently, Tu et al. 
[63] also proposed a graph-based manifold ranking model over a set of multi-scale superpixels by combining graph learning and ranking theory, and optimized the model with the ADMM optimizer.\nOwing to the representation ability of CNNs, a basic idea of deep learning-based thermal infrared-visible SOD methods is to extract complementary information from the source images to predict accurate saliency maps of conspicuous objects. Tu et al. [76] built a large-scale thermal infrared-visible benchmark termed VT5000, and proposed a dual-encoder framework to extract complementary features from different modalities. To better explore interactions between multimodal features, Tu et al. [62] proposed a dual-decoder framework to perform interactions of multi-scale features, which is more favorable for challenging scenarios. Seeing that such dual-encoder and dual-decoder methods have larger model sizes, Liao et al. [31] devised a single-encoder paradigm that uses a shared encoder to extract complementary features from thermal infrared and visible images.\nHowever, both traditional and deep infrared-visible SOD methods only strive for cross-modal interaction and fusion in feature space, and never perform pixel-level fusion. In practice, the fused image can highlight the structures of objects, which also play a critical role in distinguishing salient objects. Therefore, it is natural to consider the combination of the image fusion and SOD tasks in a single framework to achieve mutual benefit." }, { "figure_ref": [], "heading": "Joint Frameworks of Multiple Vision Tasks", "publication_ref": [ "b10", "b32", "b58", "b71", "b28", "b57", "b69", "b81", "b21", "b43", "b53", "b4", "b16", "b67", "b60", "b33", "b33", "b60", "b59" ], "table_ref": [], "text": "Recently, some practical demands have promoted the incorporation of low- and high-level vision tasks. One route is to establish a low-to-high cascaded pipeline that allows the low-level task to facilitate the high-level task [11,33,59,72,29,58]. Another route is to embed semantic probability maps as a condition into some specific layers to provide high-to-low guidance for low-level restoration tasks, e.g., image super-resolution [70,82], image enhancement [22,44,54], and image HDR [5]. A third route is to build a parallel framework that treats the low- and high-level vision tasks equally [17,68]. Unfortunately, none of them discusses an adaptive bridge between the tasks, which results in overfitting to one of the tasks and deviating from the optimal balance between them. Two recent studies [61,34], in fact, explored the relationship between image fusion and high-level tasks. However, [34] only considers high-level task-oriented joint training, and [61] only focuses on the trade-off between low- and high-level losses. The latest study, termed SuperFusion [60], integrated image registration, image fusion, and semantic segmentation into a unified framework. In this framework, image registration and fusion are jointly optimized in a symmetric scheme, enabling the two tasks to mutually promote each other. A pretrained semantic segmentation model was then deployed to guide the fusion network to focus more on semantic-related features. However, these methods ignore the intrinsic relation between pixel-level fused results and multimodal high-level vision tasks, which becomes the focus of this work."
}, { "figure_ref": [ "fig_2" ], "heading": "The Proposed Method", "publication_ref": [], "table_ref": [], "text": "The overview of the proposed interactively reinforced paradigm for joint infrared and visible image fusion and saliency object detection is shown in Figure 2. This paradigm contains two sub-tasks, where image fusion is regarded as the dominant task, while the multimodal saliency object detection task is treated as a downstream task of image fusion and, as an auxiliary tool, facilitates saliency-oriented image fusion. The overall network structure contains a feature screening-based image fusion subnetwork (FSFNet) and a fusion-guided cross-complementary SOD subnetwork (FGC 2 Net)." }, { "figure_ref": [ "fig_2" ], "heading": "Feature Screening-based Image Fusion", "publication_ref": [ "b15" ], "table_ref": [], "text": "Aiming at the goal that fused images promote salient object detection in our IRFS framework, we design a specific fusion network, as shown in Figure 2(a), which can not only generate high-quality fused images but also keep the semantic information in the fused images. Given a pair of visible image I_vis ∈ ℝ^{H×W×3} and infrared image I_ir ∈ ℝ^{H×W×1}, we first utilize a coarse feature extractor Φ(⋅), consisting of two convolution layers and a Leaky ReLU activation function, to extract coarse features F = {F^{vis}, F^{ir}}. Note that the visible image is first converted to the YCbCr color space, and the Y-channel image is taken as the input of the visible branch. Next, it is necessary to study how to screen out interference features and preserve saliency-related features from F to facilitate the subsequent SOD task. Since attention mechanisms [16] can model feature correlations in both the channel and spatial dimensions and are conducive to capturing fine-grained texture features, we deploy a dual attention-guided feature screening module (DAFS) to screen out useless features and preserve saliency-related and texture-informative precise features, thus catering to the requirements of the SOD task. These precise features can be formulated as\nF_p^{vis} = Conv_{1×1}([DAFS(F^{vis}), F^{vis}]),\nF_p^{ir} = Conv_{1×1}([DAFS(F^{ir}), F^{ir}]), (1)\nwhere [⋅] denotes channel-wise concatenation. Then, we fuse the preserved features from the source images by\nF_u = F_p^{vis} ⊕ F_p^{ir}, (2)\nwhere ⊕ denotes the element-wise summation operation. We adopt serial residual blocks to reconstruct the fused Y-channel image I_f^{Y}. We then convert I_f^{Y} to the RGB image I_f ∈ ℝ^{H×W×3} so that it serves the subsequent SOD task."
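As a concrete illustration of Eqs. (1)-(2), the following is a minimal PyTorch-style sketch of the screening-and-fusion pipeline of FSFNet. The internal structure of the DAFS module, the channel width, the number of residual blocks, and whether the screening and reduction weights are shared between the two modality branches are not specified above, so the choices below (a simple channel-plus-spatial attention gate, shared screening weights) are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class DualAttentionScreen(nn.Module):
    """Stand-in for the DAFS module: channel attention followed by spatial attention (assumed design)."""
    def __init__(self, ch):
        super().__init__()
        self.channel_gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, f):
        f = f * self.channel_gate(f)        # reweight channels
        return f * self.spatial_gate(f)     # reweight spatial positions


class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                                  nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)


class FSFNetSketch(nn.Module):
    """Coarse extraction -> dual-attention screening (Eq. 1) -> element-wise fusion (Eq. 2) -> reconstruction."""
    def __init__(self, ch=32, num_res_blocks=3):
        super().__init__()
        def coarse():  # two convolutions with a Leaky ReLU, as described for the coarse extractor
            return nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.LeakyReLU(0.1, inplace=True),
                                 nn.Conv2d(ch, ch, 3, padding=1))
        self.extract_vis, self.extract_ir = coarse(), coarse()   # Y channel of the visible image, raw IR image
        self.screen = DualAttentionScreen(ch)                    # shared between branches (assumption)
        self.reduce = nn.Conv2d(2 * ch, ch, 1)                   # the 1x1 convolution in Eq. (1)
        self.reconstruct = nn.Sequential(*[ResBlock(ch) for _ in range(num_res_blocks)],
                                         nn.Conv2d(ch, 1, 3, padding=1))  # fused Y-channel image

    def forward(self, y_vis, ir):
        f_vis, f_ir = self.extract_vis(y_vis), self.extract_ir(ir)
        # Eq. (1): concatenate screened and raw features, then reduce with a 1x1 convolution
        fp_vis = self.reduce(torch.cat([self.screen(f_vis), f_vis], dim=1))
        fp_ir = self.reduce(torch.cat([self.screen(f_ir), f_ir], dim=1))
        return self.reconstruct(fp_vis + fp_ir)   # Eq. (2): element-wise summation, then residual blocks


# Shape check: a 352x352 Y-channel / infrared pair yields a 352x352 fused Y-channel image.
if __name__ == "__main__":
    net = FSFNetSketch()
    print(net(torch.rand(1, 1, 352, 352), torch.rand(1, 1, 352, 352)).shape)
```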
}, { "figure_ref": [ "fig_2", "fig_3", "fig_4" ], "heading": "Fusion-Guided Cross-Complementary SOD", "publication_ref": [ "b61", "b8" ], "table_ref": [], "text": "Benefiting from fused images with sharp objects and high contrast between objects and their surroundings, we treat the fused image as a third modality to guide the infrared-visible SOD task. This is the first attempt at breaking out of the standard multi-modality SOD configurations. As shown in Figure 2(b), we propose a fusion-guided cross-complementary SOD network, termed FGC 2 Net, which takes a group of images {I_ir, I_vis, I_f} as input and is supposed to predict a precise saliency map M for the most conspicuous objects.\nDifferent from some impressive works [62,9] that assign an individual backbone to the infrared and visible images, respectively, to extract cross-modal hierarchical features and then fuse them step-by-step in an extra branch, our FGC 2 Net employs a siamese encoder to alternately perform feature extraction and cross-modality feature aggregation. In particular, we introduce a fusion-guided saliency-enhanced (FGSE) module to conduct cross-modality feature aggregation and embed it behind each feature scale of the backbone. The purpose is to reweight the infrared and visible features using the high contrast between objects and background in the fused image, thereby further enhancing saliency-related features and suppressing surrounding interference features. As shown in Figure 3, the FGSE module is divided into three steps. Given the feature set {F_{i-1}^{ir}, F_{i-1}^{vis}, F_{i-1}^{f}} from the (i-1)-th scale of the backbone, saliency-enhanced infrared and visible features are first obtained under the guidance of the fused feature and formulated as\nF_i^{ir} = F_{i-1}^{ir} ⊕ (Φ_r(F_{i-1}^{ir}) ⊗ Φ_s(F_{i-1}^{f})),\nF_i^{vis} = F_{i-1}^{vis} ⊕ (Φ_r(F_{i-1}^{vis}) ⊗ Φ_s(F_{i-1}^{f})), (3)\nwhere Φ_r(⋅) denotes a convolution followed by a ReLU function, and Φ_s(⋅) denotes a convolution followed by a sigmoid function. The ⊕ and ⊗ represent element-wise summation and multiplication, respectively. Based on the self-attention mechanism, we then introduce a cross-complementary feature transformer layer (C 2 FTL) to learn cross-modality structure-sharp and texture-informative features from F_i^{ir} and F_i^{vis}. Before that, F_i^{ir} is transformed into Q^{ir}, K^{ir}, and V^{ir}, and the same is done for F_i^{vis}. Then, we adopt the C 2 FTL to generate complementary cross-modal features by\nF̃_i^{ir} = σ(Q^{ir} ⊗ K^{vis}) ⊗ V^{vis} ⊕ F_{i-1}^{ir},\nF̃_i^{vis} = σ(Q^{vis} ⊗ K^{ir}) ⊗ V^{ir} ⊕ F_{i-1}^{vis}, (4)\nwhere σ(⋅) is the sigmoid function.\nTo prevent further forward propagation of interfering information in F̃_i^{ir} and F̃_i^{vis}, we deploy a learnable feature selector (LFS) to suppress saliency-irrelevant features. The core of the feature selection is to generate a weight vector based on a global average pooling operation and a feature squeeze-and-excitation (SE) operation, and then generate two learnable parameters by the softmax function. This process can be formulated as\n(w^{ir}, w^{vis}) = Softmax(SE([GAP(F̃_i^{ir}), GAP(F̃_i^{vis})])), (5)\nwhere GAP(⋅) refers to global average pooling and [⋅] refers to concatenation along the channel dimension.\nImmediately, we utilize w^{ir} and w^{vis} to reweight F̃_i^{ir} and F̃_i^{vis} and reconstruct the saliency-enhanced cross-modality features as\nF̂_i^{ir} = [w^{ir} ⊗ F̃_i^{ir}, w^{vis} ⊗ F̃_i^{vis}] ⊕ F̃_i^{ir},\nF̂_i^{vis} = [w^{vis} ⊗ F̃_i^{vis}, w^{ir} ⊗ F̃_i^{ir}] ⊕ F̃_i^{vis}. (6)\nNote that, after the concatenation, a 1×1 convolution is applied to reduce the dimension of the features so that they match the inputs (i.e., F̃_i^{ir} and F̃_i^{vis}). Through the siamese encoder, a hierarchical feature set {F_i^{ir}, F_i^{vis}, F_i^{f} | i ∈ {1, 2, …, 5}} is learned. Next, we introduce a modality-specific group decoder (MSGD) to predict saliency maps, as shown in Figure 4. To reduce the computational burden, we only input the features {F_i^{ir}, F_i^{vis}, F_i^{f} | i ∈ {3, 4, 5}} into the MSGD. The group decoder consists of three modality-specific decoding branches, i.e., infrared-, visible-, and fusion-modality decoders. The fusion-modality decoding branch only predicts a coarse saliency map M_c^{f} via a Conv+BN+ReLU (CBR) layer, while the infrared- and visible-modality decoding branches simultaneously predict both coarse maps (i.e., M_c^{ir}, M_c^{vis}) and precise maps (i.e., M_p^{ir}, M_p^{vis}). Finally, the precise modality-specific saliency maps (i.e., M_p^{ir}, M_p^{vis}) are aggregated to generate the final precise saliency map M. Note that all of the features to be decoded go through a global context module and cascaded CBR layers to generate saliency maps in the MSGD module."
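Before turning to the losses, the three FGSE steps in Eqs. (3)-(6) can be summarized in a compact PyTorch-style sketch. The exact attention arrangement (which modality supplies the query), the scaling inside the sigmoid gate, the shape over which attention is computed, and the sharing of the Φ_r/Φ_s convolutions across modalities are not fully determined by the description above, so they are assumptions made only for illustration.

```python
import torch
import torch.nn as nn


class FGSESketch(nn.Module):
    """Illustrative FGSE block: saliency enhancement (Eq. 3), cross-complementary
    attention (Eq. 4), and learnable feature selection (Eqs. 5-6)."""
    def __init__(self, ch):
        super().__init__()
        self.phi_r = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.phi_s = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid())
        self.to_qkv_ir = nn.Conv2d(ch, 3 * ch, 1)     # Q/K/V projections for the C2FTL
        self.to_qkv_vis = nn.Conv2d(ch, 3 * ch, 1)
        # LFS: global average pooling -> squeeze-and-excitation -> softmax over the two modalities
        self.se = nn.Sequential(nn.Linear(2 * ch, ch // 2), nn.ReLU(inplace=True), nn.Linear(ch // 2, 2))
        self.fuse_ir = nn.Conv2d(2 * ch, ch, 1)       # 1x1 convolutions after the concatenation in Eq. (6)
        self.fuse_vis = nn.Conv2d(2 * ch, ch, 1)

    @staticmethod
    def cross_attn(q, k, v, skip):
        # Eq. (4): sigmoid-gated channel attention (an assumed form of the C2FTL)
        b, c, h, w = q.shape
        q, k, v = (t.flatten(2) for t in (q, k, v))                # (B, C, HW)
        attn = torch.sigmoid(q @ k.transpose(1, 2) / c ** 0.5)     # (B, C, C)
        return (attn @ v).view(b, c, h, w) + skip

    def forward(self, f_ir_prev, f_vis_prev, f_fused_prev):
        # Eq. (3): reweight each modality under the guidance of the fused feature
        f_ir = f_ir_prev + self.phi_r(f_ir_prev) * self.phi_s(f_fused_prev)
        f_vis = f_vis_prev + self.phi_r(f_vis_prev) * self.phi_s(f_fused_prev)
        q_i, k_i, v_i = self.to_qkv_ir(f_ir).chunk(3, dim=1)
        q_v, k_v, v_v = self.to_qkv_vis(f_vis).chunk(3, dim=1)
        t_ir = self.cross_attn(q_i, k_v, v_v, f_ir_prev)           # Eq. (4), infrared branch
        t_vis = self.cross_attn(q_v, k_i, v_i, f_vis_prev)         # Eq. (4), visible branch
        # Eq. (5): two selection weights from pooled, concatenated features
        pooled = torch.cat([t_ir.mean(dim=(2, 3)), t_vis.mean(dim=(2, 3))], dim=1)
        w = torch.softmax(self.se(pooled), dim=1)
        w_ir, w_vis = w[:, 0:1, None, None], w[:, 1:2, None, None]
        # Eq. (6): reweight, concatenate, reduce with a 1x1 convolution, and add the skip
        out_ir = self.fuse_ir(torch.cat([w_ir * t_ir, w_vis * t_vis], dim=1)) + t_ir
        out_vis = self.fuse_vis(torch.cat([w_vis * t_vis, w_ir * t_ir], dim=1)) + t_vis
        return out_ir, out_vis
```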
}, { "figure_ref": [], "heading": "Loss Functions", "publication_ref": [ "b38", "b33", "b9" ], "table_ref": [], "text": "Fusion Losses. In the fusion phase, we improve the visual quality of the fused results from the intensity and gradient perspectives, and the fusion loss is defined as\nℒ_fusion = ℒ_int + α ℒ_grad, (7)\nwhere α is a trade-off parameter. To retain the salient objects of the RGB and thermal images, we build a saliency-related intensity loss inspired by [39,34]. The intensity loss is defined as\nℒ_int = ‖I_f - (W_vis ⊗ I_vis + W_ir ⊗ I_ir)‖_1 + β(1 - MS-SSIM(I_f, W_vis ⊗ I_vis + W_ir ⊗ I_ir)), (8)\nwhere β is a trade-off weight. It relies on the ℓ1 norm and MS-SSIM to measure the pixel-level similarity between the fused image and the source images. Here, W_vis and W_ir are the weight maps of the RGB and thermal images, which are obtained through W_vis = S / (S_max - S_min) and W_ir = 1 - W_vis, respectively, where S stands for the saliency matrix computed by [10].\nTo preserve the fine-grained textures of the fused images, we impose a constraint on the gradient distributions of the fused image and the source images as\nℒ_grad = ‖∇I_f - max(∇I_vis, ∇I_ir)‖_1, (9)\nwhere ∇ refers to the Laplacian gradient operator and max(⋅) denotes the maximum aggregation of the fine-grained textures of the two source images.\nSOD Losses. In the SOD phase, we use the weighted binary cross-entropy (wBCE) loss and the weighted IoU (wIoU) loss to supervise the FGC 2 Net. Given the coarse saliency maps M_c = {M_c^{ir}, M_c^{vis}, M_c^{f}} from the top layer of the siamese encoder, the coarse loss is computed by\nℒ_coarse = ℒ_wBCE(M_c, G) + ℒ_wIoU(M_c, G), (10)\nwhere G denotes the ground-truth saliency map. Then, given the precise saliency maps M_p = {M_p^{ir}, M_p^{vis}} from the end of the MSGD, the precise loss is computed by\nℒ_precise = ℒ_wBCE(M_p, G) + ℒ_wIoU(M_p, G). (11)\nThe SOD loss can be formulated as\nℒ_sod = ℒ_coarse + ℒ_precise. (12)\nTherefore, the overall loss is defined as\nℒ_overall = λ_1 ℒ_fusion + λ_2 ℒ_sod, (13)\nwhere λ_1 and λ_2 are trade-off weights. λ_1 is initially set to 1 and increases with the loops of interactive learning, while λ_2 is identically set to 1. We discuss the influence of different values of λ_1 in Subsection 4.6." }, { "figure_ref": [], "heading": "Interactive Loop Learning Strategy", "publication_ref": [], "table_ref": [], "text": "Multi-task frameworks are generally accustomed to adopting a one-stage training manner to find optimal results. For instance, the multimodal images are fed into the fusion network, the generated fused image is then passed through the SOD network, and a joint loss is calculated to update the two parts of the framework at the same time. However, this training manner often struggles to achieve a balance between the tasks. To solve this problem, we devise an interactive loop learning strategy. Specifically, when optimizing the fusion part, in order to reinforce the semantic information of the fused image and thus cater to the subsequent SOD task, we force the fused image to pass through FGC 2 Net and use Eq. (13) to update the parameters of the FSFNet. In this case, the SOD network is frozen and its parameters are not updated. Interactively, when optimizing the SOD part, we use the generated fused image to guide the extraction of saliency-related multimodal features from the source images, and the parameters of FGC 2 Net are updated under the constraint of Eq. (12). In the meantime, the gradients are truncated at the end of the fusion part to prevent gradient back-propagation from interfering with the optimization of the fusion part. The interactive loop training process is performed N times in total. In each loop, the fusion network goes through N_F epochs and the SOD network goes through N_S epochs. In this way, the performance of the image fusion and SOD tasks can reach an optimal balance in the shortest possible training period."
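A schematic of this alternating optimization is sketched below, assuming PyTorch. The loss-function signatures, the data-loader format, and the exact schedule by which λ_1 grows are simplifications (a linear increase of 1 per loop is assumed, matching the range reported later in the implementation details); learning-rate scheduling is omitted for brevity.

```python
import torch


def interactive_loop_training(fsfnet, fgc2net, loader, fusion_loss, sod_loss,
                              num_loops=10, fusion_epochs=3, sod_epochs=10):
    """Alternately optimize the fusion and SOD subnetworks, freezing one while updating the other."""
    opt_fusion = torch.optim.Adam(fsfnet.parameters(), lr=1e-3)
    opt_sod = torch.optim.Adam(fgc2net.parameters(), lr=5e-5)

    for loop in range(num_loops):
        lambda_1 = 1.0 + loop                    # weight on the fusion loss grows with the loops
        # Stage 1: update FSFNet with the overall loss of Eq. (13); the SOD part is frozen.
        fgc2net.requires_grad_(False)
        fsfnet.requires_grad_(True)
        for _ in range(fusion_epochs):
            for ir, vis, gt in loader:
                fused = fsfnet(vis, ir)
                saliency = fgc2net(ir, vis, fused)   # frozen, but still provides semantic feedback
                loss = lambda_1 * fusion_loss(fused, ir, vis) + sod_loss(saliency, gt)
                opt_fusion.zero_grad(); loss.backward(); opt_fusion.step()
        # Stage 2: update FGC2Net with Eq. (12); gradients are truncated at the fused image.
        fsfnet.requires_grad_(False)
        fgc2net.requires_grad_(True)
        for _ in range(sod_epochs):
            for ir, vis, gt in loader:
                fused = fsfnet(vis, ir).detach()     # cut back-propagation into the fusion part
                saliency = fgc2net(ir, vis, fused)
                loss = sod_loss(saliency, gt)
                opt_sod.zero_grad(); loss.backward(); opt_sod.step()
```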
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "This chapter first provides detailed descriptions of the implementation details, datasets, and evaluation metrics. Then, quantitative and qualitative evaluations of our IRFS on the joint multimodal image fusion and SOD task are performed. Additionally, we evaluate the generalization ability of IRFS on the aforementioned two sub-tasks. Lastly, we undertake ablation studies on each key component of our IRFS." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b75", "b33", "b63", "b65" ], "table_ref": [], "text": "To validate the effectiveness of the proposed IRFS framework, we jointly evaluate fusion and SOD results on the VT5000 [76] dataset. The VT5000 dataset collects 5,000 pairs of thermal infrared and visible images, as well as corresponding binary labels dedicated to SOD, of which 2,500 pairs are used for training and the other 2,500 pairs are used for testing. Therefore, it is convincing to apply the VT5000 dataset to the evaluation of the joint multimodal image fusion and SOD task. In addition, we also intend to separately validate the generalization of the proposed IRFS on the thermal infrared and visible image fusion and SOD tasks. For the image fusion task, we use three public fusion datasets, including TNO, RoadScene, and M 3 FD [34], to directly generate fused images with the pretrained fusion subnetwork (i.e., FSFNet) of our IRFS framework, instead of finetuning it on the aforementioned datasets. Similarly, for the SOD task, we use two benchmarks (i.e., VT1000 [64] and VT821 [66]) in addition to VT5000 to directly predict final saliency maps, taking the fused images generated by FSFNet as the third input." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b13" ], "table_ref": [], "text": "The proposed IRFS scheme is implemented on an NVIDIA 1080Ti GPU using the PyTorch framework, in which FSFNet and FGC 2 Net are interactively trained. In the training phase, we randomly select 8 images and resize them to 352 × 352 to form an input batch. Random horizontal flipping is used to reduce over-fitting. The parameters of the overall IRFS are updated with the Adam optimizer. In each loop of interactive training of FSFNet and FGC 2 Net, the learning rate of the former is initially set to 1e-3 and kept unchanged, while that of the latter is initially set to 5e-5 and gradually decreases to 1e-6 following the cosine annealing strategy. In addition, the siamese encoder of FGC 2 Net is built on the pretrained ResNet-34 [14] backbone. Regarding the setting of hyper-parameters, α in Eq. (7) is set to 0.5, and β in Eq. (8) is set to 20.0. During the interactive loop learning, N is set to 10, and N_F and N_S are set to 3 and 10, respectively. Accordingly, λ_1 in Eq. (13) increases from 1 to 10."
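Complementing the loop sketch above, the training setup just described could be configured as follows. This is a minimal sketch assuming PyTorch and torchvision; the dataset class passed in is a hypothetical placeholder returning (infrared, visible, ground-truth) triplets, and the per-image torchvision transform is shown only for brevity (in practice the random flip must be applied jointly to all three images of a pair).

```python
import torch
from torch.utils.data import DataLoader
from torch.optim.lr_scheduler import CosineAnnealingLR
from torchvision import transforms


def build_training_tools(fsfnet, fgc2net, dataset_cls, root="VT5000/Train",
                         num_loops=10, sod_epochs_per_loop=10):
    """Data pipeline, optimizers, and learning-rate schedule matching the settings above."""
    # Inputs are resized to 352x352 with random horizontal flipping; eight images form a batch.
    base_tf = transforms.Compose([transforms.Resize((352, 352)),
                                  transforms.RandomHorizontalFlip(),
                                  transforms.ToTensor()])
    loader = DataLoader(dataset_cls(root, transform=base_tf), batch_size=8,
                        shuffle=True, num_workers=4)

    # A fixed 1e-3 learning rate for FSFNet; 5e-5 decayed to 1e-6 for FGC2Net
    # via cosine annealing over its total number of SOD training epochs (N x N_S).
    opt_fusion = torch.optim.Adam(fsfnet.parameters(), lr=1e-3)
    opt_sod = torch.optim.Adam(fgc2net.parameters(), lr=5e-5)
    sched_sod = CosineAnnealingLR(opt_sod, T_max=num_loops * sod_epochs_per_loop, eta_min=1e-6)
    return loader, opt_fusion, opt_sod, sched_sod
```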
}, { "figure_ref": [], "heading": "Evaluation metrics", "publication_ref": [ "b55", "b12" ], "table_ref": [], "text": "We adopt three common metrics to evaluate the quality of the fused image, including MI [56], VIF [13], and CC.\nMI. Mutual Information (MI), derived from information theory, measures the amount of information transmitted from the input images to the final fused image, which is calculated by\nMI = MI_{ir,f} + MI_{vis,f}, (14)\nwhere MI_{ir,f} and MI_{vis,f} denote the amount of information transmitted from the source infrared image I_ir and the visible image I_vis to the fused image I_f, respectively. MI_{ir/vis,f} is calculated with the Kullback-Leibler measure as follows:\nMI_{ir/vis,f} = Σ p(x_{ir/vis}, x_f) log [ p(x_{ir/vis}, x_f) / (p(x_{ir/vis}) p(x_f)) ], (15)\nwhere p(x_{ir/vis}, x_f) refers to the joint histogram of the source image I_{ir/vis} and the fused image I_f, and p(x_{ir/vis}) and p(x_f) are the marginal histograms of the source images and the fused image. Typically, the larger the MI value, the greater the amount of information transferred from the source images to the fused image, and the better the fusion performance.\nVIF. Visual Information Fidelity (VIF) measures the information fidelity of the fused image with respect to the source images. Its purpose is to calculate the distortion between the fused image and the source images, which is consistent with the perception of the human visual system.\nCC. The Correlation Coefficient (CC) measures the degree of linear correlation between the fused image and the source images. It is mathematically defined as\nCC = (r(I_ir, I_f) + r(I_vis, I_f)) / 2, (16)\nin which r(I_ir, I_f) and r(I_vis, I_f) can be calculated by\nr(I_x, I_f) = E[(I_x - μ(I_x)) ⊙ (I_f - μ(I_f))] / ( sqrt(E[(I_x - μ(I_x))^2]) · sqrt(E[(I_f - μ(I_f))^2]) ). (17)\nHere, E[⋅] denotes the expected value of an image, μ(I_x) and μ(I_f) are the mean values of the source image I_x and the fused image I_f, and ⊙ is the Hadamard product. A higher CC means that the two images are highly similar.\nAlthough image fusion can be evaluated by more than a dozen metrics, it is reasonable to adopt the above three metrics to evaluate the fused images generated by our IRFS, considering the total information transmission, the similarity, and the information fidelity consistent with human vision between the fused image and the source images."
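For reference, the MI and CC metrics in Eqs. (14)-(17) can be computed as in the following NumPy sketch (VIF is omitted because its multi-scale perceptual model is considerably more involved). The histogram bin count and the logarithm base are not specified above and vary across implementations, so they are assumptions here.

```python
import numpy as np


def mutual_information(src, fused, bins=256):
    """MI between one source image and the fused image via joint histograms (Eqs. 14-15)."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_src = p_joint.sum(axis=1, keepdims=True)      # marginal histogram of the source image
    p_fused = p_joint.sum(axis=0, keepdims=True)    # marginal histogram of the fused image
    nz = p_joint > 0                                # skip empty bins to avoid log(0)
    return float((p_joint[nz] * np.log(p_joint[nz] / (p_src @ p_fused)[nz])).sum())


def correlation_coefficient(src, fused):
    """Pearson correlation between one source image and the fused image (Eq. 17)."""
    s = src.astype(np.float64).ravel() - src.mean()
    f = fused.astype(np.float64).ravel() - fused.mean()
    return float((s * f).mean() / (np.sqrt((s ** 2).mean()) * np.sqrt((f ** 2).mean())))


def fusion_metrics(ir, vis, fused):
    mi = mutual_information(ir, fused) + mutual_information(vis, fused)                    # Eq. (14)
    cc = 0.5 * (correlation_coefficient(ir, fused) + correlation_coefficient(vis, fused))  # Eq. (16)
    return {"MI": mi, "CC": cc}
```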
}, { "figure_ref": [ "fig_5" ], "heading": "Joint Image Fusion and SOD Evaluation", "publication_ref": [ "b50", "b79", "b73", "b34", "b25", "b70", "b49", "b51", "b64", "b78", "b78", "b78", "b50", "b79", "b73", "b34", "b25", "b70", "b49", "b51", "b64", "b50", "b79", "b73", "b34", "b25", "b70", "b49", "b51", "b64" ], "table_ref": [ "tab_0" ], "text": "We evaluate the joint multimodal image fusion and SOD performance of our IRFS on the VT5000 dataset. The state-of-the-art image fusion methods FGAN [51], DIDFuse [80], PMGI [74], MFEIF [35], RFN [26], U2F [71], DDcGAN [50], GANMcC [52], and UMF [65] are employed for comparison. To comprehensively evaluate the effectiveness of our IRFS, we combine the aforementioned fusion models with the proposed FGC 2 Net and with a recent SOD method, CTDNet [79], to form several temporary multi-task frameworks. CTDNet is a mono-modal SOD method aimed at RGB images. Accordingly, we exclusively take the fusion result of each fusion method as the input of CTDNet to perform joint fusion and SOD learning on the VT5000 dataset. To be fair, we keep the original settings of CTDNet unchanged throughout the training process.\nFor quantitative comparisons, Table 1 reports the intermediate fusion results and the final SOD results on the VT5000 dataset, which are obtained through the temporary multi-task frameworks consisting of the aforementioned fusion models with CTDNet and with our FGC 2 Net. By comparison, our IRFS consistently performs more favorably than the existing SOTA methods both on the common image fusion metrics (i.e., MI, VIF, and CC) and on the SOD metrics (i.e., the S-measure, F-measure, E-measure, and MAE). Concretely, our IRFS outperforms the second-best method by 45.8% on the VIF metric. On the MI and CC metrics, our IRFS also gains 5.2% and 2.18% improvements over the second place, respectively. In contrast to the temporary multi-task frameworks formed by the fusion models and CTDNet, the proposed IRFS performs well, and its final predicted saliency maps rank first on the four commonly-used SOD metrics. In contrast to the other temporary multi-task frameworks formed with the FGC 2 Net, our IRFS is still able to rank first; compared with the second-best method, the gains reach 2.08% on one of the mean saliency metrics and 5.56% on the MAE score. In addition, we observe that the temporary multi-task frameworks formed with the FGC 2 Net perform better than those formed with the recent CTDNet [79]. Analysis of the above results quantitatively illustrates the superiority of the proposed IRFS paradigm.\nFor qualitative results, we show two examples in Figure 5. In each example, a group of fused images generated by the existing fusion models, as well as two groups of corresponding saliency maps derived from CTDNet [79] and from the FGC 2 Net of our IRFS framework, are exhibited. By comparing these results, we can find that the contrast between the object and the background is more pronounced in the fused images generated by our IRFS, and overexposure is suppressed, which contributes to more accurate predictions of the saliency maps than the other temporary multi-task frameworks, as shown in Figure 5.\nThe quantitative and qualitative results indicate that, on the one hand, a tightly cooperative relationship exists between thermal infrared-visible image fusion and SOD, and, on the other hand, they support the effectiveness of our interactively reinforced paradigm for joint image fusion and SOD." }, { "figure_ref": [ "fig_7", "fig_6", "fig_8" ], "heading": "Generalization Analysis on Image Fusion", "publication_ref": [ "b50", "b79", "b73", "b34", "b25", "b70", "b49", "b51", "b64", "b50", "b25", "b49", "b51", "b79", "b70" ], "table_ref": [ "tab_1" ], "text": "We evaluate the intermediate fusion results of our IRFS through comparison with existing fusion methods, including FGAN [51], DIDFuse [80], PMGI [74], MFEIF [35], RFN [26], U2F [71], DDcGAN [50], GANMcC [52], and UMF [65].\nAs reported in Table 2, our IRFS numerically outperforms existing infrared-visible image fusion methods by large margins on the VIF and CC metrics. This indicates that, under the reverse push of the SOD task, the fused image generated by our IRFS can better preserve the information transferred from the source images. Although the quantitative results on the TNO dataset fail to rank first on the MI metric, they are still favorable considering that this is a direct generalization evaluation of our IRFS. To give an intuitive evaluation of our IRFS, we exhibit the fused results of all discussed methods on three fusion datasets (i.e., TNO, RoadScene, and M 3 FD), which are shown in Figure 7, Figure 6, and Figure 8, respectively. Observing the locally enlarged regions of each image, we can find that the results of FGAN [51], RFN [26], DDcGAN [50], and GANMcC [52] all suffer from seriously blurred objects and over-smoothed backgrounds. DIDFuse [80] and U2F [71] bring some observable noise and artifacts into their fused results. In contrast, the fused image generated by IRFS shows a more salient and sharper object, which is beneficial to the subsequent SOD task, and it also exhibits a cleaner background without extra interference."
}, { "figure_ref": [ "fig_9" ], "heading": "Generalization Analysis on SOD", "publication_ref": [ "b65", "b62", "b63", "b75", "b61", "b17", "b8", "b19", "b82", "b29" ], "table_ref": [ "tab_2" ], "text": "We conduct a generalization analysis on the SOD task through comparison with 10 state-of-the-art thermal infrared and visible SOD methods, including MTMR [66], M3S-NIR [63], SGDL [64], ADF [76], MIDD [62], CSRN [18], MMNet [9], OSRNet [20], ECFFN [83], and MIA [30]. Table 3 objectively shows the quantitative results measured by four common metrics. It can be seen that the proposed IRFS ranks either first or second on these three datasets. For instance, compared with the second-best method OSRNet on the VT5000 dataset, the gains reach 2.1% and 1.6% on two of the mean saliency metrics and 13.16% on the MAE score, while on the remaining metric our result is only 0.11% lower than that of OSRNet. To better reflect the superiority of our IRFS, as shown in Figure 9, we visualize the saliency maps predicted by all the aforementioned methods. It is clearly observed that our IRFS obtains more accurate saliency maps with fewer false detections compared with OSRNet, the latest study. Taking the second image as an example, although the other SOD methods also localize the ring in the image and predict an accurate circular outline, they have difficulty identifying the small object inside the ring. In contrast, our IRFS not only highlights the structural integrity of salient objects but also guarantees internal consistency within the object contours. This indicates that our IRFS is more robust to some challenging scenarios, such as thermal crossover, small objects, and low contrast." }, { "figure_ref": [], "heading": "Analysis of Model Efficiency", "publication_ref": [ "b65", "b63", "b50", "b79", "b73", "b34", "b25", "b70", "b49", "b51", "b64", "b62", "b17" ], "table_ref": [ "tab_2" ], "text": "For a multi-task framework, the efficiency of the model is crucial due to the requirement of real-time application. We analyze the model size (Mb) and inference time (s) of all comparison methods, including several state-of-the-art (SOTA) fusion methods and multimodal SOD methods based on thermal infrared and visible images. Due to the multi-task setting of our IRFS framework, we ensure a fair comparison by evaluating the fusion module, FSFNet, against SOTA image fusion methods, and concurrently evaluating the SOD module, FGC 2 Net, against SOTA thermal infrared-visible SOD methods. Note that each model is run on a single NVIDIA 1080Ti GPU with the input resized to 640 × 480. According to Table 4, our FSFNet ranks second in terms of model size, exhibiting a relatively low parameter count of 0.06 Mb. Our FGC 2 Net has 39.71 Mb parameters and ranks third. Note that MTMR [66], SGDL [64], and M3S-NIR [63] are traditional SOD methods, so we cannot provide their model sizes. Combined with Table 3, CSRN [18] has the smallest model size, but this comes at the cost of performance. Although not the most compact models, our FSFNet and FGC 2 Net have the fastest inference speed. The core reasons are twofold: first, the dual attention-guided feature screening module of FSFNet replaces the frequently-used dense network; second, a siamese encoder based on the lightweight ResNet-34 model is used as the backbone of FGC 2 Net. Overall, the proposed IRFS framework reaches a better balance between performance and efficiency."
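The efficiency protocol described above can be reproduced with a small utility such as the sketch below, assuming PyTorch and a CUDA device; the "Mb" column is assumed to mean millions of parameters, and the warm-up and repeat counts are arbitrary choices rather than the authors' settings.

```python
import time
import torch


def model_size_mb(model):
    """Number of parameters, reported in millions (assumed meaning of the 'Mb' column)."""
    return sum(p.numel() for p in model.parameters()) / 1e6


@torch.no_grad()
def inference_time(model, inputs, warmup=10, runs=50, device="cuda"):
    model = model.to(device).eval()
    inputs = [x.to(device) for x in inputs]
    for _ in range(warmup):            # warm-up passes to exclude initialization cost
        model(*inputs)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(runs):
        model(*inputs)
    torch.cuda.synchronize()
    return (time.time() - start) / runs   # average seconds per forward pass


# Example with a 640x480 input pair, matching the protocol above:
# ir = torch.rand(1, 1, 480, 640); vis = torch.rand(1, 1, 480, 640)
# print(model_size_mb(fsfnet), inference_time(fsfnet, [vis, ir]))
```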
}, { "figure_ref": [], "heading": "Discussion for Weight of Fusion Loss", "publication_ref": [], "table_ref": [ "tab_4", "tab_4" ], "text": "To examine the influence of different trade-off weights λ_1 of the fusion loss ℒ_fusion on the image fusion and SOD tasks, we set λ_1 to 0.1, 0.5, 1.0, 5.0, and 10 to train our IRFS framework on the VT5000 dataset, respectively. Table 5 reports the quantitative results of both tasks. The first two rows show the fused results evaluated on the CC and VIF metrics. The last two rows show the SOD accuracy measured by two of the saliency metrics. According to the analysis of Table 5, we can find that, when λ_1 = 1.0, the fusion results exhibit a noticeable superiority compared to those achieved with the other alternative values. For the evaluation of the SOD task, although one of the saliency metrics reaches its peak when λ_1 = 0.5, it only improves by 0.004 compared to λ_1 = 1.0. Therefore, we empirically set λ_1 to 1.0 in Eq. (13)." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_11", "fig_11", "fig_11" ], "heading": "Interactive training vs. One-stage training", "publication_ref": [], "table_ref": [ "tab_5", "tab_6", "tab_5", "tab_6" ], "text": "To illustrate the effectiveness of the interactive loop learning strategy, we compare our interactive training strategy with the one-stage training strategy from the two perspectives of fusion and SOD, respectively. Since our IRFS interactively trains FSFNet and FGC 2 Net for 10 loops, denoted from 0 to 9, we present the quantitative results of fusion and SOD at three intervals, i.e., the 1st, 5th, and 9th loops, as shown in Table 6 and Table 7. (Table 4 caption: Efficiency analysis of our IRFS framework. To be fair, the efficiency evaluations against state-of-the-art thermal infrared-visible fusion methods and SOD models are implemented, respectively. * indicates that the current model is part of our IRFS framework.) We can find that the proposed interactive loop learning strategy is indeed more conducive to the mutual promotion of the infrared-visible image fusion and SOD tasks. Moreover, with the increase of loops, the respective performance of both fusion and SOD keeps improving. This suggests that there is a collaborative relationship between the infrared-visible image fusion and SOD tasks, and that both of them can be deployed in a single framework to reach the purpose of locating salient objects and returning a high-quality fused image. To explicitly evaluate the effectiveness of the proposed interactive loop learning strategy, we exhibit the fusion and SOD results of different interactive loops. In our implementation, the interactive learning process lasts for 10 loops, from the 0-th to the 9-th. We show the fusion and SOD results of the 1st, 5th, and 9th loops in Figure 10, corresponding to Table 6 and Table 7. Observing the locally enlarged regions of the fusion results in Figure 10(a), as the interactive loop training deepens, the salient objects are highlighted and the background interference is progressively weakened, which is conducive to localizing these salient objects. This conclusion is supported by the SOD results in Figure 10(b)." }, { "figure_ref": [ "fig_1" ], "heading": "Effectiveness of pixel-level fusion for SOD", "publication_ref": [], "table_ref": [ "tab_7", "tab_7" ], "text": "One core motivation is to explore the relationship between pixel-level fusion and SOD tasks.
To demonstrate whether the fused image has a boosting effect on SOD performance, we utilize the source images I_ir and I_vis, as well as their weighted-fusion variant (I_ir + I_vis)/2, in place of the fused image I_f generated by our FSFNet in the IRFS framework, respectively, and treat them as the third input of the FGC 2 Net to perform the SOD task. As shown in Table 8, when the rough fused image (I_ir + I_vis)/2 of the source RGB and thermal infrared images is used as the guidance of the SOD task, the quantitative values of the four metrics, including the S-measure, F-measure, E-measure, and MAE, outperform those of the implementations that take the RGB or thermal infrared image alone as the guidance of FGC 2 Net. Furthermore, when the fused image generated by our FSFNet is adopted to guide the subsequent FGC 2 Net, these quantitative results achieve more desirable improvements. Therefore, we believe that image fusion is beneficial to facilitating SOD networks to predict more accurate saliency maps. Figure 11 shows the qualitative results corresponding to Table 8 to explicitly investigate the effectiveness of the pixel-level fusion for the subsequent SOD task. It is easy to observe that using our fused image as a third modality to guide the SOD task leads to more precise saliency maps with coherent edges and complete objects. Coupled with the quantitative results, we can argue that pixel-level fusion results are beneficial to facilitating SOD." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Effectiveness of FGSE", "publication_ref": [], "table_ref": [ "tab_8", "tab_8" ], "text": "We mainly focus on the investigation of the key components of the FGSE module (shown in Figure 12). To verify the effectiveness of the C 2 FTL and LFS, we successively remove them from the FGSE module, denoted as #2 and #1 in Table 9. It can be seen that using the C 2 FTL brings an improvement of 12.1% in the MAE score and using the LFS brings an improvement of 1.9% in one of the mean saliency metrics on the VT5000 dataset. When both the C 2 FTL and LFS are absent, denoted as #0, the performance drops by more than 2.5% on that metric and 9.0% on MAE. The results comprehensively reveal the contribution of our FGSE module to the enhancement of saliency-related features and the suppression of interference information in the SOD task. Figure 12 shows the qualitative results corresponding to Table 9 to explicitly examine the effectiveness of the key components of the FGSE module in the SOD task. We can observe that the removal of the C 2 FTL or LFS leads to the adhesion of salient objects in the predicted saliency maps. By contrast, our IRFS predicts more precise saliency maps with sharp object outlines. The results reveal the effectiveness of the proposed FGSE module." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper proposed the first interactively reinforced paradigm for joint infrared-visible image fusion and saliency object detection, specifically designed for unmanned systems to search for and parse objects in the wild. In this paradigm, image fusion focuses on highlighting saliency-related features to suit the requirements of the infrared-visible SOD task, while the SOD task propagates a semantic loss back to the fusion part and provides supervision that prevents the generated fused images from losing semantic information. The comprehensive experimental results revealed that the infrared-visible image fusion and SOD tasks can maintain a collaborative relationship in a single framework.\nThe proposed paradigm has two notable strengths.
Firstly, the resulting interactive reinforcement between the two tasks leads to improved performance in both infrared-visible image fusion and SOD tasks. Secondly, our paradigm represents a significant contribution to the visual perception of unmanned systems. Nevertheless, there is still room for improvement. Specifically, the paradigm has yet to be adapted to dynamically adjust image fusion to meet the practical requirements of SOD in challenging scenarios such as low-light or adverse weather conditions. Addressing this limitation is an important area for future research, as it would significantly enhance the utility of the framework in realworld applications." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work is partially supported by the National Key R&D Program of China (2020YFB1313503 and 2022YFA1-004101), the National Natural Science Foundation of China (Nos. U22B2052 and 61922019)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "//github.com/wdhudiekou" } ]
Keywords: interactively reinforced paradigm; interactive loop learning strategy. Abstract: This research focuses on the discovery and localization of hidden objects in the wild and serves unmanned systems. Through empirical analysis, infrared and visible image fusion (IVIF) makes hard-to-find objects apparent, whereas multimodal salient object detection (SOD) accurately delineates the precise spatial location of objects within the picture. Their common characteristic of seeking complementary cues from different source images motivates us to explore the collaborative relationship between Fusion and Salient object detection tasks on infrared and visible images via an Interactively Reinforced multi-task paradigm for the first time, termed IRFS. To seamlessly bridge the multimodal image fusion and SOD tasks, we specifically develop a Feature Screening-based Fusion subnetwork (FSFNet) to screen out interfering features from the source images, thereby preserving saliency-related features. After the fused image is generated through FSFNet, it is fed into the subsequent Fusion-Guided Cross-Complementary SOD subnetwork (FGC 2 Net) as a third modality to drive the precise prediction of the saliency map by leveraging the complementary information derived from the fused image. In addition, we develop an interactive loop learning strategy to achieve mutual reinforcement of the IVIF and SOD tasks with a shorter training period and fewer network parameters. Comprehensive experimental results demonstrate that seamlessly bridging IVIF and SOD mutually enhances their performance and highlights the superiority of the proposed paradigm.
An Interactively Reinforced Paradigm for Joint Infrared-Visible Image Fusion and Saliency Object Detection
[ { "figure_caption": "Figure 1 :1Figure 1: Comparison between existing fusion methods and the proposed interactively reinforced paradigms for joint infraredvisible image fusion and saliency object detection (IRFS).", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of the proposed interactively reinforced paradigm for joint infrared-visible image fusion and saliency object detection called IRFS. The paradigm is a cascaded framework that consists of a feature screening-based image fusion subnetwork termed FSFNet (as shown in (a)) and a fusion-guided cross-complementary SOD subnetwork termed FGC 2 Net (as shown in (b)).In this framework, the image fusion facilitates the SOD task from the bottom up, and conversely, SOD facilitates the image fusion task from the top down.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The details of the proposed FGSE module. This module consists of three components including a saliency-enhanced module (SEM), a cross-complementary feature transformer layer (C 2 FTL), and a learnable feature selector (LFS).", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Schematic diagram of the modality-specific group decoder (MSGD).", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Qualitative evaluations of joint infrared-visible image fusion and SOD on VT5000 dataset. The first row refers to the fusion results, while the second and last rows refer to the saliency maps predicted by our FGC 2 Net and CTDNet [79], respectively.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Qualitative fusion results of our IRFS versus state-of-the-art infrared-visible image fusion methods on TNO dataset.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Qualitative fusion results of our IRFS versus state-of-the-art infrared-visible image fusion methods on RoadScene dataset.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Qualitative fusion results of our IRFS versus state-of-the-art infrared-visible image fusion methods on M 3 FD dataset.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Visual comparisons versus state-of-the-art SOD methods in some challenging cases: thermal crossover and low contrast.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "<short title of the paper for running head> (a) Fusion results of different interactive loops (b) SOD results of different interactive loops", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Qualitative analysis of the interactive loop learning strategy from fusion and SOD perspectives on the VT5000 dataset, respectively.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :Figure 12 :1112Figure 11: Qualitative analysis of the effectiveness of the fused image 
for the SOD task on VT5000 dataset.", "figure_data": "", "figure_id": "fig_12", "figure_label": "1112", "figure_type": "figure" }, { "figure_caption": "Quantitative evaluations of joint thermal infrared-visible image fusion and SOD on VT5000 dataset. ↑/↓ for a metric denotes that a larger/smaller value is better. The best results are bolded and the second-best results are highlighted in underline.", "figure_data": "MethodMetricFGANDIDFusePMGIMFEIFRFNU2FDDcGANGANMcCUMFIRFSFusionMI↑ VIF↑1.799 0.6481.692 0.6341.893 0.6481.769 1.0461.962 0.7281.780 0.7441.798 0.9301.743 0.7551.741 0.9812.064 1.525CC↑1.2331.3781.3111.3681.3991.3831.4191.3771.3911.450↑0.8470.8530.8520.8570.8570.8530.8460.8540.8240.871↑0.7850.8020.7970.8050.8030.7990.7890.8000.7890.835+ CTDNet↑0.8870.8980.8930.8980.8980.8930.8870.8970.8990.927 ↓0.0510.0460.0480.0460.0460.0490.0510.0470.0490.036↑0.8740.8700.8650.8700.8660.8720.8590.8670.8730.877↑0.8160.7980.8080.8180.8130.8150.7920.8010.8310.835+ FGC 2 Net↑0.9130.9020.9090.9150.9140.9090.9040.9040.9170.922 ↓0.0360.0390.0370.0360.0370.0380.0410.0380.0350.034CC. Correlation Coefficient (CC) measures the linearcorrelation degree between the fused image and sourceimages. It is mathematically defined as", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative fusion results. ↑/↓ for a metric denotes that a larger/smaller value is better. The best results are bolded and the second-best results are highlighted in underline.", "figure_data": "MetricFGANDIDFusePMGIMFEIFRFNU2FusionDDcGANGANMcCUMFIRFSParams. (Mb)↓0.0740.2610.0420.15810.9360.6591.0981.8640.800.06MI ↑1.6211.8881.9201.9681.6711.6181.4681.7961.8181.843TNOVIF ↑0.7980.7780.7791.1260.8360.8250.5670.9141.0781.230CC ↑1.2131.3061.3721.3491.4271.4301.2581.4131.3831.456MI ↑2.0471.9252.2692.1701.9991.8121.7231.8762.0192.472RoadVIF ↑0.7030.8810.8741.1641.0430.7680.6100.9521.1411.173CC ↑1.4381.5331.4821.5951.6131.6021.4141.5731.6051.616MI ↑1.5891.8031.9481.5741.8771.6761.3331.6201.9601.999M 3 FDVIF ↑0.8000.6950.7831.1400.9070.8190.4440.9441.1001.158CC ↑0.7740.7940.9200.8340.8510.8930.8310.9360.9390.967FusionCTDNetNetFGC 2FGANDIDFusePMGIMFEIFRFNU2FDDcGAN GANMcCUMFIRFSGTFusionCTDNetNetFGC 2FGANDIDFusePMGIMFEIFRFNU2FDDcGAN GANMcCUMFIRFSGT", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Quantitative SOD results. ↑/↓ for a metric denotes that a larger/smaller value is better. 
The best results are bolded and the second-best results are highlighted in underline.", "figure_data": "MTMR M3S-NIR SGDL ADF MIDD CSRN MMNet OSRNet ECFFN MIA IRFS↑ 0.7250.7230.765 0.810 0.871 0.8790.8730.8750.877 0.844 0.879↑ 0.6620.7340.730 0.716 0.804 0.8300.7940.8130.810 0.740 0.833VT821↑ 0.8150.8590.847 0.842 0.895 0.9080.8920.8960.902 0.850 0.917 ↓ 0.1080.1400.085 0.077 0.045 0.0380.0400.0430.034 0.070 0.029↑ 0.7060.7260.787 0.910 0.907 0.9180.9140.9260.923 0.924 0.924↑ 0.7150.7170.764 0.847 0.871 0.8770.8610.8920.876 0.868 0.901VT1000↑ 0.8360.8270.856 0.921 0.928 0.9250.9230.9350.930 0.926 0.943 ↓ 0.1190.1450.090 0.034 0.029 0.0240.0270.0220.021 0.025 0.019↑ 0.6800.6520.750 0.863 0.856 0.8680.8620.8750.874 0.878 0.877↑ 0.5950.5750.672 0.778 0.789 0.8100.7800.8230.806 0.793 0.835VT5000↑ 0.7950.7800.824 0.891 0.891 0.9050.8870.9080.906 0.893 0.922 ↓ 0.1140.1680.089 0.048 0.046 0.0420.0430.0400.038 0.040 0.034M3S-NIR", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Discussion", "figure_data": "Metric= 0.1= 0.5= 1.0= 2.0= 5.0= 10CC ↑1.4311.3761.4501.4171.3711.369VIF ↑0.9771.0101.6840.9740.8920.940↑0.8710.8760.8770.8730.8750.872↑0.8370.8390.8350.8340.8300.821", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Investigation of the interactive loop learning strategy from the fusion perspective on RoadScene dataset.", "figure_data": "MetricOne-stage1-Interaction 5-ℎ9-ℎMI ↑2.1492.1782.1932.301VIF ↑1.0471.1411.1751.223CC ↑1.1361.1581.1621.164Table", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Investigation of the interactive loop learning strategy from the SOD perspective on VT5000 dataset.", "figure_data": "MetricOne-stage1-Interaction 5-ℎ9-ℎ↑0.8630.8690.8740.877↑0.8030.8240.8300.835↑0.9070.9130.9190.922 ↓0.0380.0370.0350.034", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Effectiveness analysis of the fused image for the SOD task on VT5000 dataset.", "figure_data": "Metric+∕2↑0.8700.8690.8720.877↑0.7990.8080.8300.835↑0.9030.8020.9200.922 ↓0.0380.0360.0360.034", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Ablation studies of key components of the proposed FGSE module.", "figure_data": "# SEM C 2 FTL LFS↑VT5000 ↑↑ ↓0✓0.868 0.819 0.915 0.0361✓✓0.869 0.826 0.914 0.0372✓✓0.872 0.825 0.918 0.0353✓✓✓0.877 0.835 0.922 0.034", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" } ]
Di Wang; Jinyuan Liu; Risheng Liu; Xin Fan
[ { "authors": "D P Bavirisetti; G Xiao; G Liu", "journal": "IEEE FUSION", "ref_id": "b0", "title": "Multi-sensor image fusion based on fourth order partial differential equations", "year": "2017" }, { "authors": "Y Bin; Y Chao; H Guoyu", "journal": "Int. J. Wavelets Multiresolution Inf. Process", "ref_id": "b1", "title": "Efficient image fusion with approximate sparse representation", "year": "2016" }, { "authors": "D Bogdoll; M Nitsche; J M Zöllner", "journal": "", "ref_id": "b2", "title": "Anomaly detection in autonomous driving: A survey", "year": "2022" }, { "authors": "J Chen; X Li; L Luo; X Mei; J Ma", "journal": "Inf. Sci", "ref_id": "b3", "title": "Infrared and visible image fusion based on target-enhanced multiscale transform decomposition", "year": "2020" }, { "authors": "X Chen; Y Liu; Z Zhang; Y Qiao; C Dong", "journal": "", "ref_id": "b4", "title": "Hdrunet: Single image hdr reconstruction with denoising and dequantization", "year": "2021" }, { "authors": "N Cvejic; J J Lewis; D R Bull; C N Canagarajah", "journal": "", "ref_id": "b5", "title": "Regionbased multimodal image fusion using ICA bases", "year": "2006" }, { "authors": "Y Fu; T Xu; X Wu; J Kittler", "journal": "", "ref_id": "b6", "title": "PPT fusion: Pyramid patch transformerfor a case study in image fusion", "year": "2021" }, { "authors": "W Gan; X Wu; W Wu; X Yang; C Ren; X He; K Liu", "journal": "Infrared Physics & Technology", "ref_id": "b7", "title": "Infrared and visible image fusion with the use of multi-scale edgepreserving decomposition and guided image filter", "year": "2015" }, { "authors": "W Gao; G Liao; S Ma; G Li; Y Liang; W Lin", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b8", "title": "Unified information fusion network for multi-modal rgb-d and rgb-t salient object detection", "year": "2021" }, { "authors": "S Ghosh; R G Gavaskar; K N Chaudhury", "journal": "", "ref_id": "b9", "title": "Saliency guided image detail enhancement", "year": "2019" }, { "authors": "M Guo; M Chen; C Ma; Y Li; X Li; X Xie", "journal": "", "ref_id": "b10", "title": "High-level task-driven single image deraining: Segmentation in rainy days", "year": "2020" }, { "authors": "J Han; E J Pauwels; P M De Zeeuw", "journal": "Neurocomputing", "ref_id": "b11", "title": "Fast saliency-aware multi-modality image fusion", "year": "2013" }, { "authors": "Y Han; Y Cai; Y Cao; X Xu", "journal": "Inf. Fusion", "ref_id": "b12", "title": "A new image fusion performance metric based on visual information fidelity", "year": "2013" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b13", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "H Hu; J Wu; B Li; Q Guo; J Zheng", "journal": "IEEE Trans. Multim", "ref_id": "b14", "title": "An adaptive fusion algorithm for visible and infrared videos based on entropy and the cumulative distribution of gray levels", "year": "2017" }, { "authors": "J Hu; L Shen; G Sun", "journal": "", "ref_id": "b15", "title": "Squeeze-and-excitation networks", "year": "2018" }, { "authors": "S Huang; T Le; D Jaw", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b16", "title": "Dsnet: Joint semantic learning for object detection in inclement weather conditions", "year": "2021" }, { "authors": "F Huo; X Zhu; L Zhang; Q Liu; Y Shu", "journal": "IEEE Trans. Circuits Syst. 
Video Technol", "ref_id": "b17", "title": "Efficient contextguided stacked refinement network for rgb-t salient object detection", "year": "2021" }, { "authors": "F Huo; X Zhu; L Zhang; Q Liu; Y Shu", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b18", "title": "Efficient context-guided stacked refinement network for RGB-T salient object detection", "year": "2022" }, { "authors": "F Huo; X Zhu; Q Zhang; Z Liu; W Yu", "journal": "IEEE Trans. Instrum. Meas", "ref_id": "b19", "title": "Real-time onestream semantic-guided refinement network for rgb-thermal salient object detection", "year": "2022" }, { "authors": "Z Jiang; Z Li; S Yang; X Fan; R Liu", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b20", "title": "Target oriented perceptual adversarial fusion network for underwater image enhancement", "year": "2022" }, { "authors": "Z Jiang; Z Zhang; X Fan; R Liu", "journal": "", "ref_id": "b21", "title": "Towards all weather and unobstructed multi-spectral image stitching: Algorithm and benchmark", "year": "2022" }, { "authors": "H Li; L Liu; W Huang; C Yue", "journal": "Infrared Physics & Technology", "ref_id": "b22", "title": "An improved fusion algorithm for infrared and visible images based on multi-scale transform", "year": "2016" }, { "authors": "H Li; X Wu", "journal": "IEEE Trans. Image Process", "ref_id": "b23", "title": "Densefuse: A fusion approach to infrared and visible images", "year": "2019" }, { "authors": "H Li; X Wu; J Kittler", "journal": "IEEE Trans. Image Process", "ref_id": "b24", "title": "Mdlatlrr: A novel decomposition method for infrared and visible image fusion", "year": "2020" }, { "authors": "H Li; X Wu; J Kittler", "journal": "Inf. Fusion", "ref_id": "b25", "title": "Rfn-nest: An end-to-end residual fusion network for infrared and visible images", "year": "2021" }, { "authors": "J Li; J Liu; S Zhou; Q Zhang; N K Kasabov", "journal": "Infrared Physics & Technology", "ref_id": "b26", "title": "Infrared and visible image fusion based on residual dense network and gradient loss", "year": "2023" }, { "authors": "J Li; J Zhu; C Li; X Chen; B Yang", "journal": "IEEE Trans. Instrum. Meas", "ref_id": "b27", "title": "CGTF: convolutionguided transformer for infrared and visible image fusion", "year": "2022" }, { "authors": "Z Li; H Tang; Z Peng; G J Qi; J Tang", "journal": "IEEE Trans. Neural Networks Learn. Syst", "ref_id": "b28", "title": "Knowledgeguided semantic transfer network for few-shot image recognition", "year": "2023" }, { "authors": "Y Liang; G Qin; M Sun; J Qin; J Yan; Z Zhang", "journal": "Neurocomputing", "ref_id": "b29", "title": "Multimodal interactive attention and dual progressive decoding network for rgb-d/t salient object detection", "year": "2022" }, { "authors": "G Liao; W Gao; G Li; J Wang; S Kwong", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b30", "title": "Crosscollaborative fusion-encoder network for robust rgb-thermal salient object detection", "year": "2022" }, { "authors": "C Liu; Y Qi; W Ding", "journal": "Infrared Physics & Technology", "ref_id": "b31", "title": "Infrared and visible image fusion method based on saliency detection in sparse domain", "year": "2017" }, { "authors": "D Liu; B Wen; J Jiao; X Liu; Z Wang; T S Huang", "journal": "IEEE Trans. 
Image Process", "ref_id": "b32", "title": "Connecting image denoising and high-level vision tasks via deep learning", "year": "2020" }, { "authors": "J Liu; X Fan; Z Huang; G Wu; R Liu; W Zhong; Z Luo", "journal": "", "ref_id": "b33", "title": "Target-aware dual adversarial learning and a multi-scenario multimodality benchmark to fuse infrared and visible for object detection", "year": "2022" }, { "authors": "J Liu; X Fan; J Jiang; R Liu; Z Luo", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b34", "title": "Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion", "year": "2022" }, { "authors": "J Liu; R Lin; G Wu; R Liu; Z Luo; X Fan", "journal": "", "ref_id": "b35", "title": "Coconet: Coupled contrastive learning network with multi-level feature ensemble for multi-modality image fusion", "year": "2022" }, { "authors": "J Liu; J Shang; R Liu; X Fan", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b36", "title": "Attention-guided globallocal adversarial learning for detail-preserving multi-exposure image fusion", "year": "2022" }, { "authors": "J Liu; G Wu; J Luan; Z Jiang; R Liu; X Fan", "journal": "Inf. Fusion", "ref_id": "b37", "title": "Holoco: Holistic and local contrastive learning network for multi-exposure image fusion", "year": "2023" }, { "authors": "J Liu; Y Wu; Z Huang; R Liu; X Fan", "journal": "IEEE Signal Process. Lett", "ref_id": "b38", "title": "Smoa: Searching a modality-oriented architecture for infrared and visible image fusion", "year": "2021" }, { "authors": "J Liu; Y Wu; G Wu; R Liu; X Fan", "journal": "IEEE Signal Process. Lett", "ref_id": "b39", "title": "Learn to search a lightweight architecture for target-aware infrared and visible image fusion", "year": "2022" }, { "authors": "R Liu; Z Jiang; S Yang; X Fan", "journal": "IEEE Trans. Image Process", "ref_id": "b40", "title": "Twin adversarial contrastive learning for underwater image enhancement and beyond", "year": "2022" }, { "authors": "R Liu; Z Liu; J Liu; X Fan", "journal": "", "ref_id": "b41", "title": "Searching a hierarchically aggregated fusion architecture for fast multi-modality image fusion", "year": "2021" }, { "authors": "R Liu; L Ma; T Ma; X Fan; Z Luo", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b42", "title": "Learning with nested scene modeling and cooperative architecture search for lowlight vision", "year": "2022" }, { "authors": "R Liu; L Ma; J Zhang; X Fan; Z Luo", "journal": "", "ref_id": "b43", "title": "Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement", "year": "2021" }, { "authors": "Y Liu; X Chen; R K Ward; Z J Wang", "journal": "IEEE Signal Process. Lett", "ref_id": "b44", "title": "Image fusion with convolutional sparse representation", "year": "2016" }, { "authors": "Y Liu; S Liu; Z Wang", "journal": "Inf. Fusion", "ref_id": "b45", "title": "A general framework for image fusion based on multi-scale transform and sparse representation", "year": "2015" }, { "authors": "Z Liu; Y Lin; Y Cao; H Hu; Y Wei; Z Zhang; S Lin; B Guo", "journal": "", "ref_id": "b46", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "J Ma; L Tang; F Fan; J Huang; X Mei; Y Ma", "journal": "IEEE CAA J. Autom. 
Sinica", "ref_id": "b47", "title": "Swinfusion: Cross-domain long-range learning for general image fusion via swin transformer", "year": "2022" }, { "authors": "J Ma; L Tang; M Xu; H Zhang; G Xiao", "journal": "IEEE Trans. Instrum. Meas", "ref_id": "b48", "title": "Stdfusionnet: An infrared and visible image fusion network based on salient target detection", "year": "2021" }, { "authors": "J Ma; H Xu; J Jiang; X Mei; X P Zhang", "journal": "IEEE Trans. Image Process", "ref_id": "b49", "title": "Ddcgan: A dualdiscriminator conditional generative adversarial network for multiresolution image fusion", "year": "2020" }, { "authors": "J Ma; W Yu; P Liang; C Li; J Jiang", "journal": "Inf. Fusion", "ref_id": "b50", "title": "Fusiongan: A generative adversarial network for infrared and visible image fusion", "year": "2019" }, { "authors": "J Ma; H Zhang; Z Shao; P Liang; H Xu", "journal": "IEEE Trans. Instrum. Meas", "ref_id": "b51", "title": "Ganmcc: A generative adversarial network with multi-classification constraints for infrared and visible image fusion", "year": "2021" }, { "authors": "L Ma; R Liu; J Zhang; X Fan; Z Luo", "journal": "IEEE Trans. Neural Networks Learn. Syst", "ref_id": "b52", "title": "Learning deep context-sensitive decomposition for low-light image enhancement", "year": "2021" }, { "authors": "L Ma; T Ma; R Liu; X Fan; Z Luo", "journal": "", "ref_id": "b53", "title": "Toward fast, flexible, and robust low-light image enhancement", "year": "2022" }, { "authors": "J Mou; W Gao; Z Song", "journal": "", "ref_id": "b54", "title": "Image fusion based on non-negative matrix factorization and infrared feature extraction", "year": "2013" }, { "authors": "G Qu; D Zhang; P Yan", "journal": "Electronics letters", "ref_id": "b55", "title": "Information measure for performance of image fusion", "year": "2002" }, { "authors": "X Qu; J Yan; H Xiao; Z Zhu", "journal": "Acta Automatica Sinica", "ref_id": "b56", "title": "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain", "year": "2008" }, { "authors": "H Tang; Z Li; Z Peng; J Tang", "journal": "", "ref_id": "b57", "title": "Blockmix: Meta regularization and self-calibrated inference for metric-based meta-learning", "year": "2020" }, { "authors": "H Tang; C Yuan; Z Li; J Tang", "journal": "Pattern Recognit", "ref_id": "b58", "title": "Learning attention-guided pyramidal features for few-shot fine-grained recognition", "year": "2022" }, { "authors": "L Tang; Y Deng; Y Ma; J Huang; J Ma", "journal": "IEEE CAA J. Autom. Sinica", "ref_id": "b59", "title": "Superfusion: A versatile image registration and fusion network with semantic awareness", "year": "2022" }, { "authors": "L Tang; J Yuan; J Ma", "journal": "Inf. Fusion", "ref_id": "b60", "title": "Image fusion in the loop of highlevel vision tasks: A semantic-aware real-time infrared and visible image fusion network", "year": "2022" }, { "authors": "Z Tu; Z Li; C Li; Y Lang; J Tang", "journal": "IEEE Trans. Image Process", "ref_id": "b61", "title": "Multi-interactive dualdecoder for rgb-thermal salient object detection", "year": "2021" }, { "authors": "Z Tu; T Xia; C Li; Y Lu; J Tang", "journal": "", "ref_id": "b62", "title": "M3S-NIR: multi-modal multi-scale noise-insensitive ranking for RGB-T saliency detection", "year": "2019" }, { "authors": "Z Tu; T Xia; C Li; X Wang; Y Ma; J Tang", "journal": "IEEE Trans. 
Multim", "ref_id": "b63", "title": "RGB-T image saliency detection via collaborative graph learning", "year": "2020" }, { "authors": "D Wang; J Liu; X Fan; R Liu", "journal": "", "ref_id": "b64", "title": "Unsupervised misaligned infrared and visible image fusion via cross-modality image generation and registration", "year": "2022" }, { "authors": "G Wang; C Li; Y Ma; A Zheng; J Tang; B Luo", "journal": "IGTA", "ref_id": "b65", "title": "RGB-T saliency detection benchmark: Dataset, baselines, analysis and a novel approach", "year": "2018" }, { "authors": "J Wang; K Song; Y Bao; L Huang; Y Yan", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b66", "title": "Cgfnet: Crossguided fusion network for RGB-T salient object detection", "year": "2022" }, { "authors": "L Wang; D Li; Y Zhu; L Tian; Y Shan", "journal": "", "ref_id": "b67", "title": "Dual superresolution learning for semantic segmentation", "year": "2020" }, { "authors": "X Wang; L Yao; R Song; H Xie", "journal": "", "ref_id": "b68", "title": "A new infrared and visible image fusion algorithm in NSCT domain", "year": "" }, { "authors": "X Wang; K Yu; C Dong; C C Loy", "journal": "", "ref_id": "b69", "title": "Recovering realistic texture in image super-resolution by deep spatial feature transform", "year": "2018" }, { "authors": "H Xu; J Ma; J Jiang; X Guo; H Ling", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b70", "title": "U2fusion: A unified unsupervised image fusion network", "year": "2022" }, { "authors": "Z Zha; H Tang; Y Sun; J Tang", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b71", "title": "Boosting few-shot finegrained recognition with background suppression and foreground alignment", "year": "2023" }, { "authors": "L Zhan; Z Yi; L Huang", "journal": "Journal of Computers", "ref_id": "b72", "title": "Infrared and visible images fusion method based on discrete wavelet transform", "year": "2017" }, { "authors": "H Zhang; H Xu; Y Xiao; X Guo; J Ma", "journal": "", "ref_id": "b73", "title": "Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity", "year": "2020" }, { "authors": "H Zhang; J Yuan; X Tian; J Ma", "journal": "IEEE Trans. Computational Imaging", "ref_id": "b74", "title": "GAN-FM: infrared and visible image fusion using GAN with full-scale skip connection and dual markovian discriminators", "year": "2021" }, { "authors": "Q Zhang; N Huang; L Yao; D Zhang; C Shan; J Han", "journal": "IEEE Trans. Image Process", "ref_id": "b75", "title": "RGB-T salient object detection via fusing multi-level CNN features", "year": "2020" }, { "authors": "Q Zhang; Y Liu; R S Blum; J Han; D Tao", "journal": "Inf. 
Fusion", "ref_id": "b76", "title": "Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review", "year": "2018" }, { "authors": "Z Zhao; H Bai; J Zhang; Y Zhang; S Xu; Z Lin; R Timofte; L V Gool", "journal": "", "ref_id": "b77", "title": "Cddfuse: Correlation-driven dual-branch feature decomposition for multi-modality image fusion", "year": "2022" }, { "authors": "Z Zhao; C Xia; C Xie; J Li", "journal": "", "ref_id": "b78", "title": "Complementary trilateral decoder for fast and accurate salient object detection", "year": "2021" }, { "authors": "Z Zhao; S Xu; C Zhang; J Liu; J Zhang; P Li", "journal": "", "ref_id": "b79", "title": "Didfuse: Deep image decomposition for infrared and visible image fusion", "year": "2020" }, { "authors": "Z Zhao; S Xu; J Zhang; C Liang; C Zhang; J Liu", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b80", "title": "Efficient and model-based infrared and visible image fusion via algorithm unrolling", "year": "2022" }, { "authors": "Z Zhao; J Zhang; S Xu; Z Lin; H Pfister", "journal": "", "ref_id": "b81", "title": "Discrete cosine transform network for guided depth map super-resolution", "year": "2022" }, { "authors": "W Zhou; Q Guo; J Lei; L Yu; J Hwang", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b82", "title": "Ecffnet: Effective and consistent feature fusion network for RGB-T salient object detection", "year": "2022" } ]
[ { "formula_coordinates": [ 5, 76.21, 440.94, 157.08, 12.24 ], "formula_id": "formula_0", "formula_text": "F p = 1×1 F , F," }, { "formula_coordinates": [ 5, 76.21, 449.94, 212.46, 20.95 ], "formula_id": "formula_1", "formula_text": "F p = 1×1 F , F .(1)" }, { "formula_coordinates": [ 5, 76.21, 502.53, 212.46, 12.24 ], "formula_id": "formula_2", "formula_text": "F u = F p ⊕ F p ,(2)" }, { "formula_coordinates": [ 5, 345.43, 681.45, 198.54, 30.74 ], "formula_id": "formula_3", "formula_text": "= -1 ⊕  ⊗  , = -1 ⊕  ⊗  ,(3)" }, { "formula_coordinates": [ 6, 78.31, 168.75, 206.49, 28.56 ], "formula_id": "formula_4", "formula_text": "̃ =  Q ⊗ K ⊗ V ⊕ -1 , ̃ =  Q ⊗ K ⊗ V ⊕ -1 , (4" }, { "formula_coordinates": [ 6, 284.8, 178.3, 3.87, 9.96 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 6, 86.55, 327.55, 202.13, 11.32 ], "formula_id": "formula_6", "formula_text": ", = ( ̃ ), ( ̃ ) ,(5)" }, { "formula_coordinates": [ 6, 89.4, 412.44, 195.4, 33.42 ], "formula_id": "formula_7", "formula_text": "= ⊗ ̃ , ⊗ ̃ ⊕ ̃ , = ⊗ ̃ , ⊗ ̃ ⊕ ̃ . (6" }, { "formula_coordinates": [ 6, 284.8, 425, 3.87, 9.96 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 6, 331.51, 195.4, 208.59, 10.96 ], "formula_id": "formula_9", "formula_text": " fusion =  int +  grad , (7" }, { "formula_coordinates": [ 6, 540.1, 195.4, 3.87, 9.96 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 6, 342.9, 268.99, 201.07, 25.37 ], "formula_id": "formula_11", "formula_text": " int = ‖ ⊗ + ⊗ , ‖ 1 + 1 -SSIM ⊗ + ⊗ , .(8)" }, { "formula_coordinates": [ 6, 331.51, 425.19, 212.46, 12.47 ], "formula_id": "formula_12", "formula_text": " grad = ‖∇ , ∇ , ∇ ‖ 1 ,(9)" }, { "formula_coordinates": [ 6, 331.51, 548.09, 212.46, 11.07 ], "formula_id": "formula_13", "formula_text": " coarse =  M , +  M , .(10)" }, { "formula_coordinates": [ 6, 331.51, 600.72, 212.46, 11.07 ], "formula_id": "formula_14", "formula_text": " precise =  M , +  M , .(11)" }, { "formula_coordinates": [ 6, 331.51, 639.94, 208.31, 11.07 ], "formula_id": "formula_15", "formula_text": " sod =  coarse +  precise . (12" }, { "formula_coordinates": [ 6, 539.82, 639.94, 4.15, 9.96 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 6, 331.51, 679.16, 208.31, 11.07 ], "formula_id": "formula_17", "formula_text": " overall =  fusion +  sod , (13" }, { "formula_coordinates": [ 6, 539.82, 679.16, 4.15, 9.96 ], "formula_id": "formula_18", "formula_text": ")" }, { "formula_coordinates": [ 7, 331.51, 469.38, 212.46, 11.34 ], "formula_id": "formula_19", "formula_text": "MI = MI , + MI , ,(14)" }, { "formula_coordinates": [ 7, 325.66, 561.65, 214.16, 26.67 ], "formula_id": "formula_20", "formula_text": "MI ∕ , = ∑ ( ∕ , ) log ( ∕ , ) ( ∕ ) ( ) , (15" }, { "formula_coordinates": [ 7, 539.82, 569.88, 4.15, 9.96 ], "formula_id": "formula_21", "formula_text": ")" }, { "formula_coordinates": [ 8, 76.21, 299.18, 208.31, 24.47 ], "formula_id": "formula_22", "formula_text": "CC = ( , ) + ( , ) 2 , (16" }, { "formula_coordinates": [ 8, 284.52, 305.22, 4.15, 9.96 ], "formula_id": "formula_23", "formula_text": ")" }, { "formula_coordinates": [ 8, 71.64, 359.11, 217.03, 53.49 ], "formula_id": "formula_24", "formula_text": "( , ) = -( ) ⊙ -( ) √ -( ) 2 √ -( ) 2 . (17" }, { "formula_coordinates": [ 8, 284.52, 402.64, 4.15, 9.96 ], "formula_id": "formula_25", "formula_text": ")" } ]
2024-01-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b6", "b8", "b10", "b11", "b12", "b4", "b13", "b14", "b15" ], "table_ref": [], "text": "Self-supervised speech representation learning techniques have been a game changer in recent years. Learning from unlabeled data has been shown to be effective for many downstream tasks such as speech recognition, translation, and language modeling [1,2]. Among the flourishing self-supervised learning techniques, we are particularly interested in three methods: masked language modeling, self-distillation, and clustering.\nMasked language modeling (MLM; [3]) predicts the masked part of a sentence based on the unmasked context and was first developed for training language models with bidirectional self-attention models [4]. The strong performance in various natural language processing tasks has enabled representation learning with MLM to quickly succeed in the field. Unsurprisingly, the MLM concept also applies to speech [5,6] as it shares a similar structure to text in a more complex form.\nSelf-distillation representation learning has recently come into the spotlight with outstanding results for computer vision [7,8] and speech tasks [9]. In contrast to the conventional supervised knowledge distillation method [10], self-supervised distillation does not require labeled data to train a teacher model to guide the student model. Instead, both models are trained with unlabeled data using paired relations augmented by data augmentation [7] or masking [9].\nClustering algorithms like K-means have been well-known unsupervised techniques long before deep learning methods arose. In the deep learning era, researchers have found clustering mechanisms beneficial to self-supervised models in a differentiable form known as vector quantization [11]. Driven by the nature of speech, which is a continuous signal containing a spoken form of discrete text, vector quantization is an ideal match for representation learning as many studies [12,13,5] have discovered. Besides serving as an information bottleneck that filters out unnecessary content in high-dimensional spaces and improves performance, clustering also provides a glimpse of the characteristic of the latent embedding produced by the model by categorizing them [14].\nIn this paper, we introduce self-distillation and online clustering for self-supervised speech representation learning (DinoSR) which leverages the positive aspects of the aforementioned methods. We show that these concepts complement each other and result in a strong representation learning model for speech. In brief, DinoSR first extracts contextualized embeddings from the input audio with a teacher network, then runs an online clustering system on the embeddings to yield a machinediscovered phone inventory, and finally uses the discretized tokens to guide the student network. Quantitatively, DinoSR surpasses the state-of-the-art in speech recognition with limited resources on LibriSpeech [15] and unsupervised acoustic unit discovery [16]. Moreover, DinoSR demonstrates strong interpretability by discretizing the high-dimensional embedding space into clusters closely aligned to human-defined phonetic units." 
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b10", "b16", "b17", "b18", "b4", "b12", "b19", "b20", "b21", "b4", "b12", "b19", "b22", "b6", "b7", "b8", "b23", "b24", "b25", "b26" ], "table_ref": [], "text": "Self-supervised speech representation learning with deep neural networks first emerged in the form of autoregressive models [11,[17][18][19] where the goal is to predict the future based on past observations. Subsequently, bidirectional models [5,13,[20][21][22] relaxed the unidirectional limitation to achieve better results. A common learning paradigm for bidirectional models is MLM -masking part of the input and training the model to recover the missing information using unmasked targets. These targets can be derived from the audio signal using different strategies, such as surface features [5] or contrastive learning [13].\nFollowing the MLM training scheme, HuBERT [20] proposed targeting discrete units generated by vanilla acoustic unit discovery systems. Such a system can be as simple as K-means clustering over MFCC features, or even random linear projections over spectrograms [23]. Interestingly, HuBERT found that the acoustic unit discovery system can be iteratively refined by running offline K-Means clustering on the output of a specific layer of the pre-trained model. However, several important hyper-parameters are required to obtain the best performance, such as the number of updates, the layer whose output is to be clustered, and the number of clusters for each iteration. While the proposed method is conceptually similar to HuBERT -MLM with discovered acoustic units, our method can be trained end-to-end with fewer heuristics by leveraging the self-distillation framework and online clustering.\nOur method is also closely related to self-distillation methods for representation learning. These methods originated from image representation learning [7,8], training a pair of identical models named student and teacher networks. The key to this framework is to provide different views of the same input by image augmentation to each model, and also to update them in different policies -gradient descent for the student model and exponential moving average for the teacher model. Following the self-distillation framework, Baevski et al. [9] generalized the method to speech processing by replacing image augmentation with the MLM masking strategy and found it effective.\nThe key difference between this work and prior work is the online clustering mechanism that derives discrete targets instead of using continuous embeddings from the teacher model as targets. We also note that our method differs from studies in knowledge distillation from pre-trained speech representation models [24][25][26][27] which focus on inference efficiency and model compression." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Self-distillation Paradigm", "publication_ref": [ "b7", "b3", "b8" ], "table_ref": [], "text": "As illustrated in Figure 1, our method shares the same framework as recent self-supervised learning methods with self-distillation such as DINO [8]. The goal is to train a student network θ student guided by a teacher network θ teacher where both models share the same architecture, which, in our work, is a K-layer transformer encoder [4]. 
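To make the self-distillation setup just described concrete, here is a minimal PyTorch sketch of the student-teacher pair: the teacher is a gradient-free copy of the student and is updated only by the exponential moving average of Eq. 1 below. The encoder constructor and the function names are illustrative assumptions, not the released DinoSR implementation.

```python
import copy
import torch
import torch.nn as nn

def build_encoder(dim: int = 768, n_layers: int = 12) -> nn.Module:
    # Stand-in for the K-layer transformer encoder (BASE: K = 12, D = 768).
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=n_layers)

student = build_encoder()
# The teacher starts as an exact copy of the randomly initialized student
# and is never updated by gradient descent.
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, lam: float) -> None:
    # Eq. 1: theta_teacher <- lam * theta_teacher + (1 - lam) * theta_student
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(lam).add_(p_s, alpha=1.0 - lam)

ema_update(teacher, student, lam=0.999)
```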
The teacher network in the self-distillation framework is simply a copy of the randomly initialized student network at the beginning of training.\nFigure 1: An overview of DinoSR: the teacher network is an exponential moving average of the student network and takes unmasked speech as input to extract target features. Online clustering is applied to multiple layers of the teacher, each with a separate codebook. The student network is trained to predict the corresponding clusters of masked input. Both the teacher network and online clustering (shadowed regions) do not require gradients.\nTo train the framework, we need to generate different views of the same input data for each model to avoid a trivial solution (θ_student = θ_teacher). While this is often done by data augmentation in computer vision, we followed Baevski et al. [9] to use input masking as an alternative for speech. The input speech is partially masked for the student model to generate the masked representation $z^K_t \in \mathbb{R}^D$, where t = 1, ..., T is the sequence length. For the teacher model, the input is unmasked, and we denote the output representation $\tilde{z}^K_t$. Besides the different views of the same input, the parameter update policies of the two models are also different. While the student network is updated with gradient descent (with an objective function detailed later in §3.2), the teacher network parameter is updated via tracking the student network parameter with an exponential moving average (EMA):\n$\theta_{\mathrm{teacher}} \leftarrow \lambda\, \theta_{\mathrm{teacher}} + (1-\lambda)\, \theta_{\mathrm{student}}$, (1)\nwhere λ is the decay rate of the teacher model in each training step." }, { "figure_ref": [], "heading": "Self-supervised Learning with DinoSR", "publication_ref": [ "b19", "b12", "b10", "b27", "b17", "b13", "b4", "b11", "b28", "b17", "b11", "b12", "b28", "b29" ], "table_ref": [], "text": "Acoustic Unit Discovery with Online Clustering. Under the self-distillation framework, our key contribution is to derive a good target from the teacher network to guide the student network.\nPrior work on self-supervised speech representation investigated acoustic unit discovery by either performing offline clustering of contextualized representations [20] or online clustering of non-contextualized representations [13]. DinoSR uses an online acoustic unit discovery system on top of the teacher network, providing contextualized discrete units. Unlike prior work using K-means clustering over MFCC features or pre-trained representations, our model's unit discovery system cannot be fixed since the teacher model evolves with the student model. As a solution, we propose performing online clustering at multiple layers of the teacher network.\nFor the k-th layer of the teacher model within the top N layers (i.e., $k \in (K-N, K]$), we introduce a codebook (set of centroids) $E^k = \{e^k_1, \dots, e^k_V\}$ with V codewords (centroids) $e^k_i \in \mathbb{R}^D$. We update the codebook as follows: for each codebook entry v, we first create a set $\tilde{Z}^k_v$ of the teacher output frames closest to the current representation of v as per the codebook\n$\tilde{Z}^k_v = \big\{ \tilde{z}^k_t \,\big|\, v = \operatorname{argmin}_{i \in V} \| \tilde{z}^k_t - e^k_i \|_2 \big\}$, (2)\nwhere the set index v will be used as a pseudo label to train the student model. Each codeword is then updated using a weighted sum of the embeddings in this set using EMA:\n$s^k_v \leftarrow \tau\, s^k_v + (1-\tau) \sum_{\tilde{z}^k_t \in \tilde{Z}^k_v} \tilde{z}^k_t, \qquad n^k_v \leftarrow \tau\, n^k_v + (1-\tau)\, |\tilde{Z}^k_v|, \qquad e^k_v \leftarrow \frac{s^k_v}{n^k_v}$. (3)\nFor each codeword $e^k_v$, the first term $s^k_v$ tracks the sum of all neighboring teacher representations (i.e., $\tilde{Z}^k_v$ from Eq. 2), and the second term $n^k_v$ tracks the number of neighbors. With both terms approximated by EMA using the decay rate τ, we have the codeword $e^k_v$, which is the moving average of its neighbor set. In practice, we found performing online clustering on the subset of frames where t ∈ M is effective while reducing computation. For initialization, we set $s^k_v$ to $e^k_v$ and $n^k_v$ to 1. More details and discussions on online clustering are available in §A.2.\nSince we define codewords by their neighboring representations, we can treat codewords as acoustic units discovered from the teacher model in an unsupervised manner and use them for training the student network. The clustering process creates discrete labels for frames based on their context in an end-to-end fashion. In §4.6, we show that these codewords possess similarities to human-defined acoustic units.\nOnline Clustering vs. Vector Quantization. Van Den Oord et al. [11] first introduced vector quantization (VQ) to speech representation learning, encoding input audio signals into a sequence of discrete units. Later studies [28,18,14,5] found that discretizing embedding spaces not only reduced the dimensionality of the model but also led to performance improvements in downstream tasks. Another benefit of VQ to speech representation learning is better model interpretability. Previous work [12,29] showed that the discretized representation could be viewed as model-discovered acoustic units which often aligned with human-defined units such as phonemes.\nWhile there are similarities between VQ and the online clustering mechanism introduced here, they are also conceptually different. Prior works [18,12,13,29] adopted a VQ layer to serve as an efficacious discrete information bottleneck in the forward pass of the model; DinoSR leverages online clustering on the gradient-free embedding space of the teacher model to mine acoustic units that can be treated as pseudo-labels. The most significant advantages of our method are 1) reducing computational costs; 2) bypassing estimations that are required by the non-differentiable nature of VQ, e.g., approximating the gradient with the straight-through gradient estimator [30]; 3) mitigating problems in practice such as code collapse as shown in §4.6." }, { "figure_ref": [], "heading": "Self-supervised Learning via Cluster Prediction", "publication_ref": [], "table_ref": [], "text": "For each output frame of the student model $z^K_t$, the training objective is to predict the codeword index v of the corresponding frame from the teacher model (i.e., $\tilde{z}^k_t \in \tilde{Z}^k_v$) across all targeted layers,\n$\sum_{t \in M} \sum_{k \in (K-N, K]} \log p_{\phi^k}(v \mid z^K_t)$, (4)\nwhere M denotes the set of all masked timesteps and $\phi^k$ is the prediction head composed of a linear projection $\mathbb{R}^{D \times V}$ followed by a softmax activation for each target layer k. Note that the prediction head is at the last layer K of the student model regardless of the target layer k. In §A.3, we summarize the pre-training of DinoSR with pseudo-code to provide a complete view of our method." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Pre-training", "publication_ref": [ "b19", "b8", "b14", "b3", "b12", "b4", "b11", "b30" ], "table_ref": [], "text": "Following Hsu et al. [20] and Baevski et al. [9], we use 960 hours of speech from the LibriSpeech [15] corpus to pre-train our model.
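The online codebook update of Eqs. 2-3 and the cluster-prediction objective of Eq. 4 above can be sketched for a single target layer k as follows. The tensor shapes and variable names are assumptions for illustration; the released code may organize the computation differently.

```python
import torch
import torch.nn.functional as F

V, D, tau = 256, 768, 0.9          # codewords, embedding dim, codebook decay

# Running statistics for one target layer k.
codebook = torch.randn(V, D)       # e_v^k
code_sum = codebook.clone()        # s_v^k, initialized to e_v^k
code_cnt = torch.ones(V)           # n_v^k, initialized to 1

@torch.no_grad()
def update_codebook(z_teacher: torch.Tensor) -> torch.Tensor:
    """z_teacher: (M, D) teacher outputs at masked positions; returns (M,) targets."""
    # Eq. 2: assign every frame to its nearest codeword.
    dist = torch.cdist(z_teacher, codebook)          # (M, V) pairwise L2 distances
    targets = dist.argmin(dim=-1)                    # pseudo labels v
    onehot = F.one_hot(targets, V).float()           # (M, V)
    # Eq. 3: EMA over the sum of neighbours and the neighbour count.
    code_sum.mul_(tau).add_(onehot.T @ z_teacher, alpha=1 - tau)
    code_cnt.mul_(tau).add_(onehot.sum(dim=0), alpha=1 - tau)
    codebook.copy_(code_sum / code_cnt.unsqueeze(-1))
    return targets

# Student side (Eq. 4): a linear head predicts the teacher-derived cluster index.
head = torch.nn.Linear(D, V)                         # phi^k
z_student = torch.randn(50, D, requires_grad=True)   # masked-frame outputs z_t^K
targets = update_codebook(torch.randn(50, D))        # stand-in teacher outputs
loss = F.cross_entropy(head(z_student), targets)
loss.backward()
```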
We focus on the BASE sized transformer [4] with K = 12 layers and embedding dimension D = 768 due to resource constraints, with the batch size of 63 minutes of audio in total across 16 GPUs. The 16 kHz input waveform is first downsampled to 50Hz with a convolutional feature encoder [13]. For the student model, we randomly masked M = 80% of the 50Hz input features before feeding them into the transformer, with each masked span no shorter than 10 frames. For the teacher model, the input feature is not masked, and online clustering is performed at the top N = 8 layers (i.e., k ∈ [5,12]), each with a codebook with V = 256 codewords. The codebook decay rate τ is fixed at 0.9.\nThe student model is trained for 400k steps with the Adam optimizer [31] with a learning rate ramped up linearly to 0.0005 within the first 12k steps, held for the following 188k steps, and exponentially decayed to 0.00005 for the final 200k steps. The teacher model decay rate λ increases linearly from 0.999 to 0.9999 within the first 30k updates, held for the next 200k steps, and increased to 1.0 for the remaining steps. Pre-training the model takes about 180 hours on 16 Nvidia V100 GPUs. After pre-training, the student model is evaluated on different downstream tasks." }, { "figure_ref": [], "heading": "Acoustic Unit Discovery", "publication_ref": [ "b15", "b32", "b31" ], "table_ref": [ "tab_0" ], "text": "To examine the effectiveness of the online clustering mechanism used in DinoSR, we consider the acoustic unit discovery benchmark introduced in the Zero Resource Speech Challenge 2021 [16].\nIn this task, the speech representation extracted from a frozen pre-trained model is used for unit discovery. The task is an ABX discrimination test: given a pair of spoken triphones (A and B, e.g., 'aba' and 'apa'), the model must decide which triphone a new input (X, e.g., 'apa') corresponds to.\nThe new triphone can be spoken by the same speaker as A and B in the same-speaker setup, or by a different speaker in a more challenging cross-speaker setup. The evaluation metric is the decision error rate on the dev set.\nTo measure the similarity between two sequences of a speech representation, the task introduced a pseudo-distance defined as the average framewise distance over the dynamic time warping path.\nA common choice of framewise distance is the cosine distance between two embedding vectors. Different from cosine similarity, we define the framewise distance as the JS-divergence between framewise probability over the codebook as defined in Eq. 4 to take advantage of the learned discrete units.\nResults are shown in Table 1 with three important observations. First, it can be shown that previous self-supervised methods do not surpass methods specialized for acoustic unit discovery [33,32]. DinoSR, however, outperforms all other methods by a margin except in the easiest same-speaker cleanspeech setup. Second, DinoSR performs better than HuBERT, which also leverages representation clustering for training. Finally, in this task, the continuous self-distillation method data2vec lags both DinoSR and HuBERT. With these observations, we conclude that the codebook design in DinoSR is effective for audio clustering, leading to its superior performance in acoustic unit discovery." 
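A sketch of the framewise distance used in the ABX evaluation above: the Jensen-Shannon divergence between two frames' probability distributions over the codebook (the softmax outputs of Eq. 4). In the benchmark this framewise distance is averaged along the dynamic-time-warping path between two utterances; the helper below only covers the per-frame term, and its name is ours.

```python
import torch

def js_divergence(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Jensen-Shannon divergence between two (V,) probability vectors over the codebook."""
    m = 0.5 * (p + q)
    kl_pm = (p * ((p + eps) / (m + eps)).log()).sum()
    kl_qm = (q * ((q + eps) / (m + eps)).log()).sum()
    return 0.5 * kl_pm + 0.5 * kl_qm

# Example: two frames' softmax outputs over V = 256 codewords.
p = torch.softmax(torch.randn(256), dim=-1)
q = torch.softmax(torch.randn(256), dim=-1)
print(js_divergence(p, q))   # 0 for identical frames, larger for dissimilar ones
```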
}, { "figure_ref": [ "fig_4" ], "heading": "Fine-tuning DinoSR for Speech Recognition", "publication_ref": [ "b12", "b19", "b21", "b8", "b34", "b35", "b14", "b12", "b19", "b21", "b8" ], "table_ref": [ "tab_1", "tab_1" ], "text": "Following the protocol proposed by Baevski et al. [13] and adopted by prior work [20,22,9], we fine-tune the student model using CTC [35] using labeled speech data under four different setups, using 10 minutes / 1 hour / 10 hours from LibriLight [36] or 100 hours from LibriSpeech [15]. After fine-tuning, we measure the word error rate (WER) on LibriSpeech by decoding test sets using the We compare DinoSR to four recent works that all adopted MLM with the BASE sized transformer, and followed the same fine-tuning regime: 1) wav2vec 2.0 [13], a method relying on contrastive learning with VQ over local target representations; 2) HuBERT [20], an iterative method with offline clustering over global target representations; 3) WavLM [22], an iterative method guided by 1st iteration HuBERT and an auxiliary denoising task; and 4) data2vec [9], a self-distillation method with regression loss over contextualized target representations. In Table 2, we summarize the results and compare them to prior work using the same setup. We also list the total pre-training steps and batch size used for each method to indicate the computation needed.\nCompared to other methods that rely on discrete units, our method is significantly stronger while reducing the batch size (vs. contrastive method wav2vec 2.0) and the training steps (vs. iterative offline clustering methods HuBERT and WavLM). This demonstrates the advantage of learning discrete units with online clustering instead of contrastive learning or offline clustering. An improvement over data2vec, the previous state-of-the-art method, is observed in most setups. This result shows that using discrete units as a learning target benefits speech representation learning. Despite being on par or slightly worse in a few setups, this benchmark has been thoroughly studied; thus, progress is not easily attained. Moreover, we show that DinoSR consistently outperforms data2vec in other benchmarks later in this section.\nBeyond recognition performance, we examined the data efficiency of each method, as shown in Figure 4.3. We introduce the metric hours of speech processed that reflects the amount of speech one model needs to \"hear\" during pre-training. The metric is defined as the number of updates required to train the model × batch size in hours, using attributes available in Table 2. By comparing DinoSR against prior work, we see the advantage of being more data efficient, requiring less training yet performing better." }, { "figure_ref": [], "heading": "Downstream Evaluation", "publication_ref": [ "b36", "b38", "b21", "b21" ], "table_ref": [ "tab_2" ], "text": "We further evaluate the effectiveness of DinoSR representations using the Speech Processing Universal PERformance Benchmark (SUPERB) [37,39]. SUPERB is a benchmark consisting of ten speechprocessing tasks spanning content, semantics, speaker, and paralinguistics tasks. To better understand the capabilities of modeling content and semantics, we report the results of our model on phoneme recognition (PR), automatic speech recognition (ASR), keyword spotting (KS), intent classification (IC), slot filling (SF), and speech translation (ST).\nIn SUPERB, each pre-trained SSL model is frozen and serves as a feature extractor. 
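A sketch of the CTC fine-tuning step described in this subsection: a linear head is placed on top of the student encoder outputs and trained with PyTorch's CTCLoss. The vocabulary size, sequence lengths, and dummy labels are illustrative assumptions.

```python
import torch
import torch.nn as nn

D, vocab = 768, 32                      # characters + blank, illustrative
ctc_head = nn.Linear(D, vocab)
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

feats = torch.randn(4, 200, D)          # (batch, frames, D) student encoder outputs
log_probs = ctc_head(feats).log_softmax(dim=-1).transpose(0, 1)  # (T, B, vocab)

labels = torch.randint(1, vocab, (4, 30))          # dummy transcripts (blank excluded)
input_lens = torch.full((4,), 200, dtype=torch.long)
label_lens = torch.full((4,), 30, dtype=torch.long)

loss = ctc_loss(log_probs, labels, input_lens, label_lens)
loss.backward()
```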
In each task, a set of learnable weights are used for weighted-summing all layers' features. Then, the weighted-summed features are fed into a lightweight prediction head to generate outputs. Thus, only the learnable weights and the prediction head are fine-tuned with labeled data.\nThe SUPERB results are shown in Table 3. In content tasks, the DinoSR surpasses prior art on PR and ASR, showing its capability of capturing better phonetic information. For semantic tasks like IC and SF, DinoSR has similar performance as WavLM [22] and HuBERT.\nThough DinoSR falls slightly behind the state-of-the-art model WavLM on SUPERB, it is worth pointing out that WavLM is a second iteration model based on HuBERT with a large batch size, requiring significantly more computational resources for pre-training. Moreover, WavLM has done a hyper-parameter search for each task in SUPERB (see Appendix A in Chen et al. [22]) whereas DinoSR is tested with no more than five runs in each downstream task due to resource limitations." }, { "figure_ref": [ "fig_3" ], "heading": "Impact of Codebook Hyper-parameters", "publication_ref": [ "b12" ], "table_ref": [ "tab_3" ], "text": "To study the impact of several hyper-parameters used by DinoSR, we vary different options, including the codebook size V (default 8), the top N layers to apply online clustering (default 8), and the codebook decay rate of τ (default 0.9). To reduce computation, we use the 10-hour subset to fine-tune the teacher network after 200k steps of pre-training. WERs are reported by decoding the dev-other subset with a fixed language model weight of 2, and word insertion penalty of -1, following Baevski et al. [13]. Results are presented in Figure 3 and Table 4. Surprisingly, varying the codebook size V from 64 to 2048 only changed the resulting WER by a small margin. Compared to codebook size V , the choice of the top N layers to cluster has a larger impact on the results, with the best choices ranging from 6 to 10. For the codebook decay rate τ , we found values between 0.5 to 0.99 worked well in general. Since the teacher network decay λ anneals throughout the training, we also tested and found annealing the codebook decay τ to 0.99 or 0.999 is unnecessary. We suspect the stability originates from the slow-changing property of the teacher network updated via EMA." }, { "figure_ref": [ "fig_4" ], "heading": "Analysis", "publication_ref": [ "b19", "b39", "b11", "b28", "b12" ], "table_ref": [ "tab_4" ], "text": "In this section, we took a closer look at the properties of the discrete units. We focused on the fifth layer of DinoSR and leave more analysis and comparisons against prior works in the appendix §A.4.\nCluster quality. To measure the quality of the discrete units learned by DinoSR, we adopt the three metrics proposed in HuBERT [20] as well as codebook perplexity [40]:\n• Cluster purity (Cls Pur.) measures the purity of the set of associated codewords of each phone.\n• Phone purity (Phn Pur.) measures the purity of the set of associated phones of each codeword.\n• Phone-normalized mutual information (PNMI) measures the uncertainty reduction for the underlying phone when observing the codeword of a frame.\n• Codebook perplexity (Code Ppl.) 2 -V p(v) log 2 p(v) measures the diversity of codewords being used by the model with p(v) being the frequency distribution over the dataset. 
For example, code ppl.= codebook size indicates all codewords are being used equally.\nTo compute these metrics, forced alignment is used to acquire the ground truth phone of each feature frame on LibriSpeech dev-clean and dev-other sets. The maximum cluster size for all methods is fixed to 256 for a fair comparison except VQ-APC [12]. Note that for online clustering methods, the number of active clusters might be lower due to the defect of vector quantization, and we report the number of active clusters. VQ-APC suffers from code collapse with vanilla VQ which leads to lower code usage and code ppl., so we use the model with a larger 512 codewords instead. Co-training APC [29] can be viewed as an improved version of VQ-APC which solved the problem by penalizing low codebook perplexity during training. Wav2vec 2.0 [13] is not applicable to this test since it used multiple codebooks that partitioned feature dimensions into 16 groups. Results are listed in Table 5.\nThe MFCC clusters, which are used to train the first iteration HuBERT, provided a baseline for purity and PNMI. The first and second iterations of HuBERT, which served as the teacher in HuBERT's iterative pre-training procedure, show a significant improvement over MFCCs. The results show performing K-means clustering on DinoSR, which does not require an iterative process, produces slightly better quality clusters. DinoSR makes better use of codewords compared to prior VQ works, having 217 active clusters out of 256 despite running online clustering. Better codebook usage results in a notable improvement in cluster quality since each cluster can be finer-grained. DinoSR achieved a comparable phone purity and PNMI compared to offline methods while being more efficient. Interestingly, the codebook's cluster purity surpasses offline clustering methods, which further supports the effectiveness of the proposed method. phones to codewords. To demonstrate the quality of the learned codebook, we visualize the conditional probability P (phone|code) accumulated over the LibriSpeech dev sets in Figure 4.\nWe highlight two interesting findings: 1) Each codeword is typically concentrated on one phone, reflecting the high phone purity obtained in the quality test. In the case where two phones shared high usage of the same codeword, we observed the sharing phones are acoustically similar such as /sp/ (short pause) and /sil/ (silence) in the upper left corner. 2) The overall usage of codewords captures the long-tail nature of phone distribution. The more frequent phones (upper part in figure) occupied significantly more codewords. The top 10 most frequent phones (/sp/ to /L/) held over 50% of the active codewords. This phenomenon, again, supports our claim that the proposed online clustering method is a good acoustic unit discovery system. As a reference, using the mapping for classification (by assigning each codeword to the dominant phone and treating all other phones assigned to the codeword as error) in the figure results in a frame-wised phone error rate of 58.2%. Additional visualization of the discrete embedding can be found in Section A.5." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduced DinoSR -a new self-supervised method motivated by the continuousto-discrete nature of speech understanding, leveraging recent advances in representation learning. 
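Referring back to the cluster-quality metrics defined in the Analysis subsection above (codebook perplexity, cluster purity, phone purity, and PNMI), a small helper that computes them from paired frame-level codeword and phone ids might look as follows; the helper itself is ours and assumes forced-aligned phone labels are already available as integer ids.

```python
import numpy as np

def cluster_metrics(codes: np.ndarray, phones: np.ndarray):
    """codes, phones: 1-D integer arrays with one entry per frame."""
    joint = np.zeros((codes.max() + 1, phones.max() + 1))
    np.add.at(joint, (codes, phones), 1)
    joint /= joint.sum()
    p_code, p_phone = joint.sum(1), joint.sum(0)

    # Codebook perplexity: 2 ** H(code); equals the codebook size if usage is uniform.
    nz = p_code[p_code > 0]
    code_ppl = 2 ** (-(nz * np.log2(nz)).sum())
    # Phone purity: frame-average probability of the dominant phone per codeword.
    phn_pur = joint.max(axis=1).sum()
    # Cluster purity: frame-average probability of the dominant codeword per phone.
    cls_pur = joint.max(axis=0).sum()
    # PNMI: mutual information normalized by the phone entropy.
    outer = np.outer(p_code, p_phone)
    mask = joint > 0
    mi = (joint[mask] * np.log2(joint[mask] / outer[mask])).sum()
    h_phone = -(p_phone[p_phone > 0] * np.log2(p_phone[p_phone > 0])).sum()
    return code_ppl, cls_pur, phn_pur, mi / h_phone

codes = np.random.randint(0, 256, 10_000)
phones = np.random.randint(0, 40, 10_000)
print(cluster_metrics(codes, phones))
```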
The key innovation of DinoSR is to introduce a gradient-free online clustering method that leads to meaningful acoustic units. Our main contributions include advancing the state-of-the-art in different benchmarks with end-to-end training and providing a closer look at embeddings from speech transformers via the discrete unit. Future work includes structural learning with the codebook, scaling to larger models, and extending the model to different modalities.\nSimplifying codeword update policy. A simplified version of online clustering described in Eq.3 is to only track the averaged embedding without the size\ne k v ←-τ e k v + (1 -τ ) Zk v Zk v .(6)\nThe simplified version enforces equal momentum for each step regardless of the size of the neighbor set Zv . In practice, we found using Eq. 3 more stable and results in slightly better performance with a negligible cost. " }, { "figure_ref": [ "fig_4" ], "heading": "A.4 Additional results and analysis", "publication_ref": [ "b28", "b11" ], "table_ref": [], "text": "Visualizing phone-code correlation. To further demonstrate the difference between DinoSR and prior works with online codebook learning, we visualized the conditional probability P (phone|code) computed using Co-training APC3 [29] and Vector-Quantized Autoregressive Predictive Coding (VQ-APC4 ; [12]) following the exact same setup used in Figure 4. Clearly, DinoSR is able to capture the long-tail distribution better, and the codewords tend to be more concentrated to the most correlated phone when compared against the prior works. Finding the best codebook. By examining phone-normalized mutual information, code perplexity, and ABX score in acoustic unit discovery in each layer, we can see that the 5th layer of the teacher model consistently performed the best. However, this is only considering phones as the ideal discrete unit. A more throughout study on the content of each layer is left as future work. " }, { "figure_ref": [ "fig_13" ], "heading": "A.5 t-SNE visualization of codewords", "publication_ref": [ "b40" ], "table_ref": [], "text": "Besides quantitative evaluations, we provide a qualitative result in Figure 11 by using t-SNE [41] to visualize the codebook in 2-dimensional space. By labeling each codeword using articulation manner classes in English, we revealed the fact that some of the acoustic attributes are embedded in" }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Broader Impact and Limitations", "publication_ref": [], "table_ref": [], "text": "This work, alone with the related prior works in self-supervised speech representation learning discussed, focused on English speech, thereby implying the potential risks associated with neglecting other languages, especially the low-resource languages. Nevertheless, we provide a preliminary result on other languages in §A.6 to mitigate the problem as much as possible. In addition, the design of DinoSR focused on learning acoustic units to models the phonetic contents in speech. Consequently, other contents (e.g., speaker/paralinguistic information) might be neglected by the model." }, { "figure_ref": [], "heading": "A.2 Discussion on online clustering", "publication_ref": [ "b10", "b8" ], "table_ref": [], "text": "Codebook initialization. The initialization of codebook E k ∈ R V ×D can be critical in vector quantization methods [11]. 
We tried different initializations including\nIn practice, we found different initialization leads to different codebook perplexity at the beginning of training. Nevertheless, the methods all lead to similar codebook perplexity at the end of training and downstream performance. This also demonstrated the stability of gradient-free online VQ in oppose to standard VQ.\nInput for quantization. We found normalizing the teacher model representation zk t is necessary for stable clustering. We briefly summarized the results using different normalization methods:\n• Instance normalization (IN; default): this can be interpreted as a parameter-free utterancewise normalization for each channel, we found it stable.\n• Batch normalization (BN): this can be viewed as a dataset-wise version of IN which yields a similar result but introduces additional parameters tracking the running stats.\n• L2 normalization: a frame-wise normalization along the feature dimension, results in a more unstable codebook perplexity and code collapse occasionally.\nIn addition, we also tried combining targets from different layers (i.e, k zk t with a single codebook across all layers) before clustering following Baevski et al. [9]. This model performed significantly worse in the downstream fine-tuning task.\nDealing with inactive codewords. Note that the codebook update policy\nupdates each codeword e k v regardless of whether the codeword activate (i.e., having at least one neighbor Zk v ≥ 1) or not. In the extreme case where a codeword remains inactive for a long period, we have s k v → ⃗ 0 and n k v → 0 which results in numerical instability or code collapse. In practice, we found freezing the inactivate codewords with\nleads to a slightly better codebook perplexity but the improvement diminishes as the batch size increases.\nthe high-dimensional space. For example, both vowels and slilences demonstrated a high degree of concentration." }, { "figure_ref": [], "heading": "A.6 Preliminary Result on Multi-lingual Speech", "publication_ref": [ "b41", "b42", "b43", "b44" ], "table_ref": [], "text": "We conducted a preliminary experiment under limited computing budget to showcase that our method can be generalized to other languages. We followed the setting in multi-lingual speech representation learning [42,43] to pre-train DinoSR on 10 different languages (Bengali, Cantonese, Georgian, Haitian, Kurmanji, Pashto, Tamil, Turkish, Tokpisin, Vietnamese) on the BABEL dataset [44] and fine-tune on 4 different unseen languages (Assamese, Tagalog, Swahili, Lao). In this setup we trained our model for 200k steps on 4 GPUs with a total batch size of 16 minutes. We report Character Error Rate (CER) on the fine-tuning languages in Table 6. Inspired by the International Phonetic Alphabet [45] which defined a universal set of acoustic units across different languages, we take a look into how DinoSR acoustic units derived from the 10 pre-training languages are distributed. Interestingly, we found the acoustic units are more likely to be shared across different languages as shown in Table 7. 0.0% 0.0% 1.9% 0.6% 0.0% 0.6% 0.0% 2.5% 1.9% 92.5% 12th 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.6% 1.2% 0.6% 97.6%\nIn conclusion, preliminary results on BABEL show that DinoSR can be applied to languages other than English. Moreover, learning acoustic units from different languages is possible even with a shared vocabulary." } ]
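A sketch of the default target normalization discussed in §A.2 above: parameter-free instance normalization of the teacher frames (per utterance, per channel) before the nearest-codeword search. The tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

z_teacher = torch.randn(1, 200, 768)       # (batch, frames, channels) from layer k

# Instance normalization: normalize each channel over the time axis of each
# utterance, with no learnable parameters, before clustering.
z_norm = F.instance_norm(z_teacher.transpose(1, 2)).transpose(1, 2)

# Equivalent explicit form (biased variance, same default eps as instance_norm):
mean = z_teacher.mean(dim=1, keepdim=True)
var = z_teacher.var(dim=1, keepdim=True, unbiased=False)
assert torch.allclose(z_norm, (z_teacher - mean) / (var + 1e-5).sqrt(), atol=1e-4)
```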
In this paper, we introduce self-distillation and online clustering for self-supervised speech representation learning (DinoSR) which combines masked language modeling, self-distillation, and online clustering. We show that these concepts complement each other and result in a strong representation learning model for speech. DinoSR first extracts contextualized embeddings from the input audio with a teacher network, then runs an online clustering system on the embeddings to yield a machine-discovered phone inventory, and finally uses the discretized tokens to guide a student network. We show that DinoSR surpasses previous state-of-the-art performance in several downstream tasks, and provide a detailed analysis of the model and the learned discrete units. Code available at https://github.com/
DinoSR: Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning
[ { "figure_caption": "Figure 2 :2Figure 2: The trade-off between performance (WER on LibriSpeech dev-other) and data efficiency (hours of speech the model processed in total during pretraining) for different methods.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Varying codebook size V and the number of codebooks N .", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The conditional probability P (phone|code) on LibriSpeech dev set visualized. The y-axis is the phone set sorted by the number of occurrences, the x-axis is the 217 active codewords sorted by the most correlated phone. A larger figure for clarity is provided in §A.4.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "A. 3 131Pseudo-code for DinoSR training Algorithm PyTorch pseudocode for DinoSR # teacher , student : student and teacher networks # phi [ k ]: DxV cluster prediction matrix for k -th layer codebook # codebook [ k ]: VxD codebook matrix for k -th layer # code_sum [ k ]: VxD unnormalized codebook matrix for k -th layer # code_cnt [ k ]: Vx1 codeword counter for k -th layer # lbd , tau : decay rates of teacher network , codebook teacher . weight = student . weight for x in dataset : # mini audio batch BxT # Eq . 1 : teacher EMA teacher . weight = lbd * teacher . weight + \\ ( 1 -lbd ) * student . weight z = student ( mask ( x ) ) # BxTxD , last layer only z = z [ masked_position ] # MxD with torch . no_grad () : # gradient -free syntax z_tilde = teacher ( x ) # KxBxTxD , all K layers z_tilde = z_tilde [ : , masked_position ] # KxMxD loss = 0 for k in range ( K -N , K ) : with torch . no_grad () : # Eq . 2 : online clustering d = -framewiseL2 ( z_tilde [ k ] , codebook [ k ] ) target_cls = hardmax (d , dim = -1 ) # MxV # Eq . 3 : codebook learning code_sum [ k ] = tau * code_sum [ k ] + \\ ( 1 -tau ) * matmul ( target_cls .T , z_tilde ) code_cnt [ k ] = tau * code_cnt [ k ] + \\ ( 1 -tau ) * target_cls . sum ( dim = 0 ) codebook [ k ] = code_sum [ k ] / code_cnt [ k ] # Eq . 4 : cluster prediction p_v = phi [ k ] ( z ) # MxV loss + = cross_entropy ( p_v , target_cls ) loss . backward () student . step ()", "figure_data": "", "figure_id": "fig_5", "figure_label": "31", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: P (phone|code) from DinoSR with 217 codewords activated out of 256.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: P (phone|code) from Co-training APC [29] with 164 codewords activated out of 256.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: P (phone|code) from VQ-APC [12] with 98 codewords activated out of 512.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 88Figure8and figure9provided another view of the codeword distribution over the phone set. Each codeword is assigned to a single phone based on the most correlated phone (i.e., argmax phone P (phone|code)). We derive the learned phone distribution by accumulating the occurrence of all codewords and compare to the ground truth. 
Results show the distribution of codewords from DinoSR is very close to the ground truth, while other methods failed to capture the underlying distribution by over-assigning codewords to the more frequent phones and dropping the less frequent phones.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 8 :Figure 9 :89Figure 8: Histogram of phones and codewords.", "figure_data": "", "figure_id": "fig_10", "figure_label": "89", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Layer-wise phone-normalized mutual information, code perplexity, and ABX score in acoustic unit discovery.", "figure_data": "", "figure_id": "fig_12", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Visualizing codebook using t-SNE [41]. Each codeword is categorized into an articulation manner class by the most correlated phone.", "figure_data": "", "figure_id": "fig_13", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Acoustic unit discovery results on ZeroSpeech 2021 challenge[16] in ABX error rate.", "figure_data": "MethodTarget same-speaker cross-speaker Average layer clean other clean otherBest challenge participants 1Nguyen et al. [32]-3.263.814.005.914.25Chorowski et al. [33]-2.953.544.507.054.51Self-supervised speech representation models 2wav2vec 2.0 [13]64.155.224.827.385.39HuBERT [20]113.073.903.716.194.22data2vec [9]44.035.094.726.975.20ContentVec [34]122.983.703.445.173.82DinoSR53.083.433.424.423.591 Results from https://zerospeech.com/tasks/task_1/results/2 Evaluating official model released by the authors.", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Word Error Rate (WER) on LibriSpeech standard dev/test sets. All models are BASE size (12-layer) transformer encoders pre-trained on the full LibriSpeech dataset (960 hours) and decoded with 4-gram language model. The best result in each setup is bolded and the second best is underlined.", "figure_data": "ModelPre-training Batch size steps (minutes) clean other clean other dev test10 minutes labeled datawav2vec 2.0 [13]400k968.915.79.115.6HuBERT [20]250k + 400k479.115.09.715.3data2vec [9]400k637.311.67.912.3DinoSR400k636.610.87.311.81 hr labeled datawav2vec 2.0 [13]400k965.010.85.511.3HuBERT [20]250k + 400k475.610.96.111.3WavLM [22]250k + 400k187--5.710.8data2vec [9]400k634.08.54.69.1DinoSR400k634.18.14.68.710 hr labeled datawav2vec 2.0 [13]400k963.89.14.39.5HuBERT [20]250k + 400k473.99.04.39.4WavLM [22]250k + 400k187--4.39.2data2vec [9]400k633.37.53.98.1DinoSR400k633.17.03.67.6100 hr labeled datawav2vec 2.0 [13]400k962.77.93.48.0HuBERT [20]250k + 400k472.77.83.48.1WavLM [22]250k + 400k187--3.47.7data2vec [9]400k632.26.42.86.8DinoSR400k632.36.42.96.7official 4-gram language model. The decoding hyper-parameter is searched with Ax 2 following theprior works.18DinoSRdata2vec16HuBERT(iter1+iter2)wav2vec 2.014WER12108100k200k300k400k500k600khours of speech processed", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results on Speech Processing Universal PERformance Benchmark[37] (SUPERB). The tasks include phoneme recognition (PR), automatic speech recognition (ASR), keyword spotting (KS), intent classification (IC), slot filling (SF), and speech translation (ST). 
Metrics include accuracy (Acc%), phoneme error rate (PER%), word error rate (WER%), F1 score (F1%), concept error rate (CER%), and bilingual evaluation understudy score (BLEU). The best result in each task is bolded and the second best is underlined.", "figure_data": "ContentSemanticModel 1PRASRKSICSFSTPER↓ WER↓ Acc↑ Acc↑F1↑CER↓ BLEU↑wav2vec 2.0 [13]5.746.4396.23 92.35 88.30 24.7714.81CCC-wav2vec 2.0 [38] 5.956.3096.72 96.47 88.08 24.3416.20HuBERT 2 [20]5.416.4296.30 98.34 88.53 25.2015.53WavLM 2,3 [22]4.846.3196.79 98.63 89.38 22.8620.74data2vec [9]4.694.9496.56 97.63 88.59 25.2717.42DinoSR3.214.7196.69 98.02 88.83 23.5717.68", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Varying codebook decay τ .", "figure_data": "τWER0.58.570.68.300.78.540.88.880.98.400.998.730.9999.430.9 -→ 0.998.710.9 -→ 0.9998.60", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Discrete unit quality on LibriSpeech dev set measured by Codebook Perplexity (Code Ppl.), Cluster purity (Cls Pur.), Phone purity (Phn Pur.), and Phone-normalized mutual information (PNMI). Results are compared to HuBERT[20], VQ-APC[12], and co-training APC[29] using code and models released by the authors.", "figure_data": "MethodActive Code Cls cluster Ppl. Pur. Pur. Phn PNMIK-means (offline clustering)MFCC256228.2 0.06 0.300.28HuBERT-iter1 L6256231.8 0.15 0.600.60HuBERT-iter2 L9256228.6 0.15 0.610.61DinoSR L5256242.4 0.17 0.630.62Codebook (online clustering)VQ-APC9872.1 0.08 0.240.19Co-training APC164135.0 0.09 0.310.29DinoSR L5217179.2 0.19 0.580.57spahsilstnihiydlraeerzayehmkhheyfwdhpowaoaavuwbngawshgspnchthyjhuhoyzh", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" } ]
Alexander H Liu; Heng-Jui Chang; Michael Auli; Wei-Ning Hsu; James Glass
[ { "authors": "Abdelrahman Mohamed; Hung-Yi Lee; Lasse Borgholt; Jakob D Havtorn; Joakim Edin; Christian Igel; Katrin Kirchhoff; Shang-Wen; Karen Li; Lars Livescu; Maaløe", "journal": "IEEE Journal of Selected Topics in Signal Processing", "ref_id": "b0", "title": "Self-supervised speech representation learning: A review", "year": "2022" }, { "authors": "Shuo Liu; Adria Mallol-Ragolta; Emilia Parada-Cabaleiro; Kun Qian; Xin Jing; Alexander Kathan; Bin Hu; Björn W Schuller", "journal": "Patterns", "ref_id": "b1", "title": "Audio self-supervised learning: A survey", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b2", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b3", "title": "Attention is all you need", "year": "2017" }, { "authors": "Shaoshi Ling; Yuzong Liu", "journal": "", "ref_id": "b4", "title": "Decoar 2.0: Deep contextualized acoustic representations with vector quantization", "year": "2020" }, { "authors": "Andy T Liu; Shu-Wen Yang; Po-Han Chi; Po-Chun Hsu; Hung-Yi Lee", "journal": "", "ref_id": "b5", "title": "Mockingjay: Unsupervised speech representation learning with deep bidirectional transformer encoders", "year": "2020" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altché; Corentin Tallec; Pierre Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Avila Pires; Zhaohan Guo; Mohammad Gheshlaghi Azar", "journal": "", "ref_id": "b6", "title": "Bootstrap your own latent-a new approach to self-supervised learning", "year": "2020" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b7", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Alexei Baevski; Wei-Ning Hsu; Qiantong Xu; Arun Babu; Jiatao Gu; Michael Auli", "journal": "", "ref_id": "b8", "title": "data2vec: A general framework for self-supervised learning in speech, vision and language", "year": "2022" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b9", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Aaron Van Den; Oriol Oord; Vinyals", "journal": "", "ref_id": "b10", "title": "Neural discrete representation learning", "year": "2017" }, { "authors": "Yu-An Chung; Hao Tang; James Glass", "journal": "", "ref_id": "b11", "title": "Vector-Quantized Autoregressive Predictive Coding", "year": "2020" }, { "authors": "Alexei Baevski; Yuhao Zhou; Abdelrahman Mohamed; Michael Auli", "journal": "", "ref_id": "b12", "title": "wav2vec 2.0: A framework for self-supervised learning of speech representations", "year": "2020" }, { "authors": "Tao Alexander H Liu; Hung-Yi Tu; Lin-Shan Lee; Lee", "journal": "", "ref_id": "b13", "title": "Towards unsupervised speech recognition and synthesis with quantized speech representation learning", "year": "2020" }, { "authors": "Vassil Panayotov; Guoguo Chen; Daniel Povey; Sanjeev Khudanpur", "journal": "", "ref_id": "b14", "title": "Librispeech: an asr corpus based on public domain audio books", "year": "2015" }, { "authors": "Anh Tu; Maureen Nguyen; Patricia De Seyssel; Morgane Rozé; Evgeny Rivière; Alexei Kharitonov; Ewan Baevski; Emmanuel Dunbar; Dupoux", 
"journal": "", "ref_id": "b15", "title": "The zero resource speech benchmark 2021: Metrics and baselines for unsupervised spoken language modeling", "year": "2020" }, { "authors": "Aaron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b16", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Alexei Baevski; Steffen Schneider; Michael Auli", "journal": "", "ref_id": "b17", "title": "vq-wav2vec: Self-supervised learning of discrete speech representations", "year": "2020" }, { "authors": "Yu-An Chung; Wei-Ning Hsu; Hao Tang; James Glass", "journal": "", "ref_id": "b18", "title": "An Unsupervised Autoregressive Model for Speech Representation Learning", "year": "2019" }, { "authors": "Wei-Ning Hsu; Benjamin Bolte; Hubert Yao-Hung; Kushal Tsai; Ruslan Lakhotia; Abdelrahman Salakhutdinov; Mohamed", "journal": "IEEE/ACM TASLP", "ref_id": "b19", "title": "Hubert: Self-supervised speech representation learning by masked prediction of hidden units", "year": "2021" }, { "authors": "Yu-An Chung; Yu Zhang; Wei Han; Chung-Cheng Chiu; James Qin; Ruoming Pang; Yonghui Wu", "journal": "IEEE", "ref_id": "b20", "title": "W2v-bert: Combining contrastive learning and masked language modeling for self-supervised speech pre-training", "year": "2021" }, { "authors": "Sanyuan Chen; Chengyi Wang; Zhengyang Chen; Yu Wu; Shujie Liu; Zhuo Chen; Jinyu Li; Naoyuki Kanda; Takuya Yoshioka; Xiong Xiao", "journal": "IEEE Journal of Selected Topics in Signal Processing", "ref_id": "b21", "title": "Wavlm: Large-scale self-supervised pretraining for full stack speech processing", "year": "2022" }, { "authors": "Chung-Cheng Chiu; James Qin; Yu Zhang; Jiahui Yu; Yonghui Wu", "journal": "", "ref_id": "b22", "title": "Self-supervised learning with random-projection quantizer for speech recognition", "year": "2022" }, { "authors": "Heng-Jui Chang; Shu-Wen Yang; Hung-Yi Lee", "journal": "", "ref_id": "b23", "title": "Distilhubert: Speech representation learning by layer-wise distillation of hidden-unit bert", "year": "2022" }, { "authors": "Rui Wang; Qibing Bai; Junyi Ao; Long Zhou; Zhixiang Xiong; Zhihua Wei; Yu Zhang; Tom Ko; Haizhou Li", "journal": "", "ref_id": "b24", "title": "LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT", "year": "2022" }, { "authors": "Yeonghyeon Lee; Kangwook Jang; Jahyun Goo; Youngmoon Jung; Hoi Rin Kim", "journal": "", "ref_id": "b25", "title": "FitHu-BERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Models", "year": "2022" }, { "authors": "Takanori Ashihara; Takafumi Moriya; Kohei Matsuura; Tomohiro Tanaka", "journal": "", "ref_id": "b26", "title": "Deep versus Wide: An Analysis of Student Architectures for Task-Agnostic Knowledge Distillation of Self-Supervised Speech Models", "year": "2022" }, { "authors": "Jan Chorowski; Ron J Weiss; Samy Bengio; Aäron Van Den; Oord", "journal": "IEEE/ACM TASLP", "ref_id": "b27", "title": "Unsupervised speech representation learning using wavenet autoencoders", "year": "2019" }, { "authors": "Sung-Lin Yeh; Hao Tang", "journal": "", "ref_id": "b28", "title": "Autoregressive Co-Training for Learning Discrete Speech Representation", "year": "2022" }, { "authors": "Yoshua Bengio; Nicholas Léonard; Aaron Courville", "journal": "", "ref_id": "b29", "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "year": "2013" }, { "authors": "P Diederik; Jimmy Kingma; Ba", 
"journal": "", "ref_id": "b30", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Anh Tu; Benoit Nguyen; Emmanuel Sagot; Dupoux", "journal": "IEEE Journal of Selected Topics in Signal Processing", "ref_id": "b31", "title": "Are discrete units necessary for spoken language modeling", "year": "2022" }, { "authors": "Jan Chorowski; Grzegorz Ciesielski; Jarosław Dzikowski; Adrian Łańcucki; Ricard Marxer; Mateusz Opala; Piotr Pusz; Paweł Rychlikowski; Michał Stypułkowski", "journal": "Interspeech", "ref_id": "b32", "title": "Information Retrieval for ZeroSpeech", "year": "2021" }, { "authors": "Kaizhi Qian; Yang Zhang; Heting Gao; Junrui Ni; Cheng-I Lai; David Cox; Mark Hasegawa-Johnson; Shiyu Chang", "journal": "", "ref_id": "b33", "title": "Contentvec: An improved self-supervised speech representation by disentangling speakers", "year": "2022" }, { "authors": "Alex Graves; Santiago Fernández; Faustino Gomez; Jürgen Schmidhuber", "journal": "", "ref_id": "b34", "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", "year": "2006" }, { "authors": "J Kahn; M Rivière; W Zheng; E Kharitonov; Q Xu; P E Mazaré; J Karadayi; V Liptchinsky; R Collobert; C Fuegen; T Likhomanenko; G Synnaeve; A Joulin; A Mohamed; E Dupoux", "journal": "", "ref_id": "b35", "title": "Libri-light: A benchmark for asr with limited or no supervision", "year": "2020" }, { "authors": "Shu-Wen Yang; Po-Han Chi; Yung-Sung Chuang; Cheng-I Jeff Lai; Kushal Lakhotia; Yist Y Lin; Andy T Liu; Jiatong Shi; Xuankai Chang; Guan-Ting Lin; Tzu-Hsien Huang; Wei-Cheng Tseng; Da-Rong Ko Tik Lee; Zili Liu; Shuyan Huang; Shang-Wen Dong; Shinji Li; Abdelrahman Watanabe; Hung-Yi Mohamed; Lee", "journal": "", "ref_id": "b36", "title": "SUPERB: Speech Processing Universal PERformance Benchmark", "year": "2021" }, { "authors": "Sreyan Vasista Sai Lodagala; Ghosh; Umesh", "journal": "", "ref_id": "b37", "title": "CCC-wav2vec 2.0: Clustering aided cross contrastive self-supervised learning of speech representations", "year": "2022" }, { "authors": "Hsiang-Sheng Tsai; Heng-Jui Chang; Wen-Chin Huang; Zili Huang; Kushal Lakhotia; Shu-Wen Yang; Shuyan Dong; Andy Liu; Cheng-I Lai; Jiatong Shi; Xuankai Chang; Phil Hall; Hsuan-Jui Chen; Shang-Wen Li; Shinji Watanabe; Abdelrahman Mohamed; Hung-Yi Lee", "journal": "", "ref_id": "b38", "title": "SUPERB-SG: Enhanced speech processing universal PERformance benchmark for semantic and generative capabilities", "year": "2022" }, { "authors": "Aaron Sander Dieleman; Karen Van Den Oord; Simonyan", "journal": "", "ref_id": "b39", "title": "The challenge of realistic music generation: modelling raw audio at scale", "year": "2018" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "JMLR", "ref_id": "b40", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "Jaejin Cho; Murali Karthick Baskar; Ruizhi Li; Matthew Wiesner; Harish Sri; Nelson Mallidi; Martin Yalta; Shinji Karafiat; Takaaki Watanabe; Hori", "journal": "IEEE", "ref_id": "b41", "title": "Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling", "year": "2018" }, { "authors": "Alexis Conneau; Alexei Baevski; Ronan Collobert; Abdelrahman Mohamed; Michael Auli", "journal": "", "ref_id": "b42", "title": "Unsupervised cross-lingual representation learning for speech recognition", "year": "2020" }, { "authors": "J F Mark; Kate M Gales; Anton Knill; Ragni; P Shakti; Rath", 
"journal": "International Speech Communication Association (ISCA)", "ref_id": "b43", "title": "Speech recognition and keyword spotting for low-resource languages: Babel project research at cued", "year": "2014" }, { "authors": "", "journal": "Cambridge University Press", "ref_id": "b44", "title": "Handbook of the International Phonetic Association: A guide to the use of the International Phonetic Alphabet", "year": "1999" } ]
[ { "formula_coordinates": [ 3, 229.01, 449.84, 275.66, 10.47 ], "formula_id": "formula_0", "formula_text": "θ teacher ←-λ θ teacher + (1 -λ) θ student ,(1)" }, { "formula_coordinates": [ 3, 223.67, 649.41, 236.07, 12.48 ], "formula_id": "formula_1", "formula_text": "E k = {e k 1 , ..., e k V } with V codewords (centroids) e k i ∈ R D ." }, { "formula_coordinates": [ 3, 223.49, 705.22, 281.18, 19.59 ], "formula_id": "formula_2", "formula_text": "Zk v = zk t v = argmin i∈V zk t -e k i 2 ,(2)" }, { "formula_coordinates": [ 4, 242.35, 103.35, 262.32, 67.11 ], "formula_id": "formula_3", "formula_text": "s k v ←-τ s k v + (1 -τ ) Zk v , n k v ←-τ n k v + (1 -τ ) Zk v , e k v ←- s k v n k v .(3)" }, { "formula_coordinates": [ 4, 243.4, 535.55, 261.27, 22.6 ], "formula_id": "formula_4", "formula_text": "t∈M k∈(K-N,K] log p ϕ k (v|z K t ),(4)" }, { "formula_coordinates": [ 14, 245.17, 102.45, 259.5, 30.8 ], "formula_id": "formula_5", "formula_text": "e k v ←-τ e k v + (1 -τ ) Zk v Zk v .(6)" } ]
10.18653/v1/2020.emnlp-main.263
2023-05-17
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b10", "b24", "b16", "b13", "b40", "b36", "b23", "b27", "b22", "b39", "b10", "b42" ], "table_ref": [], "text": "Transformer-based pre-trained language models (PLMs), such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), have aroused widespread interest among Natural Language Processing (NLP) researchers in recent years. These language models are first pre-trained on large-scale unlabeled corpora to learn the general representation of language, and then fine-tuned on specific downstream tasks to effectively transfer the general knowledge to target domains. This pre-training and fine-tuning paradigm leads to state-of-the-art performances in various NLP tasks such as natural language understanding. However, with the rapid growth of the model scale, the deployment of large-scale PLMs becomes challenging, especially in low-resource scenarios. To this end, a variety of model compression techniques have been developed. Among them, knowledge distillation (KD) (Hinton et al., 2015) is a newly emerging technology that aims to obtain a small student model by distilling knowledge from a large teacher model and achieve comparable performance.\nExisting knowledge distillation methods can be divided into three categories, namely responsebased, feature-based, and relation-based (Gou et al., 2021). While response-based methods (Turc et al., 2019) directly distill the final output, e.g. probability distribution, from the top of the teacher, feature-based (Sun et al., 2019) and relation-based methods (Liu et al., 2022) try to align the features from intermediate layers of teacher and student models and minimize the difference. To transfer comprehensive knowledge from the teacher, a common practice is to combine response-based methods with the other two (Park et al., 2021). However, due to the capacity gap between the teacher and the student, feature-based and relation-based methods may not necessarily bring improvement to response-based methods (Liang et al., 2022). To sum up, existing knowledge distillation methods have two limitations. First, they mainly focus on understanding what the teacher's behavior is, instead of why the teacher behaves like this, hindering the reasoning and generalization ability of the student model. Second, they pay more attention to distilling sophisticated model-specific knowledge from intermediate layers but neglect data-specific knowledge, which may contain valuable rationale information to understand how the teacher model arrives at a prediction.\nTo address the above limitations, in this paper we propose a novel Attribution-Driven Knowledge Distillation (AD-KD) approach that transfers attribution-based knowledge from the teacher to the student. As shown in Figure 1, the attribution information reflects the importance of different tokens towards the prediction, which contains reasoning knowledge of the model and can be complementary to the soft-label knowledge. By transferring such attribution knowledge, the student is allowed to learn the token-level rationale behind the teacher's behavior and thus generalizes better. Specifically, we utilize Integrated Gradients (IG) (Sundararajan et al., 2017), a well-established gradient-based attribution method, to calculate the importance score of each input token. To reduce the influence of trivial dimensions in the teacher's input embeddings, we further adopt the top-K strategy to filter out dimensions with low attribution scores. 
The remaining attribution scores are aggregated and normalized to denote the importance of individual tokens. Moreover, we extract the attribution knowledge for all possible predictions rather than just the prediction with the highest probability. By transferring the multi-view attribution knowledge, the student learns a more comprehensive understanding of the teacher's soft-label distribution.\nExtensive experiments are conducted with BERT (Devlin et al., 2019) on the GLUE benchmark (Wang et al., 2018). The experimental results demonstrate the effectiveness and superiority of our approach over several state-of-the-art baselines. Furthermore, we show that attribution knowledge from different layers contains different information, while the input layer contains the most prominent attribution knowledge for distillation. To summarize, the main contributions are threefold. First, we propose a novel attribution-driven knowledge distillation framework for language model compression that effectively transfers attribution knowledge from the teacher to the student. Second, we extract multi-view attribution knowledge based on model predictions to learn comprehensive reason-ing knowledge. Third, we systematically validate AD-KD on the GLUE benchmark and show its superior performance over state-of-the-art baselines." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Knowledge Distillation", "publication_ref": [ "b13", "b16", "b30", "b40", "b36", "b19", "b27", "b23", "b44", "b43" ], "table_ref": [], "text": "Knowledge distillation methods can be divided into three categories, namely response-based, featurebased and relation-based KD (Gou et al., 2021). Response-based KD was first proposed by Hinton et al. (2015), where the final output is adopted to transfer the label knowledge. Sanh et al. (2019) and Turc et al. (2019) applied this idea to BERT and yielded smaller models with minor performance drops. Recently, feature-based and relation-based distillation methods have drawn a lot of attention, which transfer knowledge contained in the intermediate layers to the student. For feature-based methods, Sun et al. (2019) first regarded the hidden representations of the [CLS] token as hints to extract sentence-level features from the teacher. Jiao et al. (2020) and Sun et al. (2020b) further matched the hidden representations of all tokens between teacher and student models. Sun et al. (2020a) proposed contrastive distillation on intermediate representations. As for relation-based methods, Park et al. (2021) proposed CKD which adopts pair-wise distance and triple-wise angle to model the sophisticated relations among token representations from both horizontal and vertical directions. Based on CKD, Liu et al. (2022) further extracted structural relations from multi-granularity representations and distilled this kind of well-organized multi-granularity structural knowledge hierarchically across layers. Wang et al. (2020Wang et al. ( , 2021) ) generalized the conventional query-key attention to query-query attention, key-key attention, and valuevalue attention. Different from these methods, we investigate knowledge distillation from the attribution perspective, which reveals the teacher's reasoning behavior and can be used to transfer comprehensive data-specific knowledge. More details about the differences between existing methods and ours are discussed in Appendix B." 
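The response-based distillation recalled above (Hinton et al., 2015), which AD-KD later reuses as its logit-distillation term, amounts to a KL divergence between temperature-softened teacher and student distributions. A minimal PyTorch sketch under stated assumptions (toy tensor shapes; the t² scaling is a common convention, not something claimed by the paper):

```python
import torch
import torch.nn.functional as F

def response_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Vanilla response-based KD: KL(teacher || student) on temperature-softened logits."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # "batchmean" matches the mathematical KL definition; the t**2 factor keeps the
    # gradient scale comparable across temperatures (a common convention).
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t ** 2)

# toy usage: batch of 8 examples, 3 classes
student_out = torch.randn(8, 3)
teacher_out = torch.randn(8, 3)
loss = response_kd_loss(student_out, teacher_out)
```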
}, { "figure_ref": [], "heading": "Attribution", "publication_ref": [ "b3", "b0", "b51", "b21", "b11", "b6", "b39", "b2", "b32", "b14", "b31", "b9", "b25", "b26" ], "table_ref": [], "text": "Attribution analysis (Baehrens et al., 2010;Ancona et al., 2018) aims at assigning importance scores to intermediate or input features of a network. Occlusion-based methods (Zeiler and Fergus, 2014) compute the importance score of each feature by erasing that feature and measuring the difference between new output and the original output. However, occlusion-based methods need to forward pass the model once for each feature, leading to low computational efficiency. To address this issue, gradient-based methods (Li et al., 2016;Ding et al., 2019;Brunner et al., 2020;Sundararajan et al., 2017) exploit the gradient information of features to approximate occlusionbased methods, which only require a single forward process. Similarly, propagation-based methods (Bach et al., 2015;Shrikumar et al., 2017) modify the back-propagation rules to redistribute the model output among the target features along the back-propagation path. Perturbation-based methods (Guan et al., 2019;Schulz et al., 2020;De Cao et al., 2020) add noise to features to examine their importance for model predictions. Attribution has been adopted in model compression techniques such as pruning (Michel et al., 2019) and adaptive inference (Modarressi et al., 2022) but has not been explored in knowledge distillation. In this work, we take the initiative to investigate the effect of attribution in knowledge distillation." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [ "b39" ], "table_ref": [], "text": "Integrated Gradients (Sundararajan et al., 2017) is a theoretically tenable method to attribute the prediction of a deep network to its input or intermediate features. Formally, given a feature x = [x 1 , x 2 , ..., x n ] ∈ R n with a baseline feature x = [x 1 , x 2 , ..., x n ] ∈ R n , and the model function F (.), IG leverages integral to represent the difference between F (x) and F (x ) by selecting a straight line path from x to x as the integral path:\nF (x) -F (x ) = n i=1 IGi(F, x) = n i=1 [(xi -x i ) × 1 α=0 ∂F (x + α × (x -x )) ∂xi dα].(1)\nIn practice, continual integral can be approximated by discrete summation:\nIG approx i (F, x) = (xi -x i ) × m k=1 ∂F (x + k m × (x -x )) ∂xi × 1 m , (2\n)\nwhere m is the number of summation steps (a bigger m usually results in better approximation). Intuitively, the magnitude of integrated gradient indicates its importance while its sign illustrates the positive or negative effect on the target output.\nIn this paper, we focus on Transformer-based architecture and attribute the model prediction to input features. With slight abuse of notation, we denote the input sequence as\nx = [x 1 , x 2 , ..., x n ],\nwhere n is the sequence length and each x i represents a token. Transformer first converts the token sequence to d-dimensional embedding sequence E = [e 1 , e 2 , ..., e n ] ∈ R n×d through the embedding layer. And then the contextualized representations H = Transformer(E) ∈ R n×d are obtained after several layers of Transformer blocks. Finally, a task-specific head is applied on H to get the final output P = [P 1 , P 2 , ..., P C ] ∈ R C , which is typically a probability distribution. Denote the mapping function E → P c as F c (.), where c represents the label of interest. 
In this case, our attribution map is computed on each individual dimension of each input embedding, which is denoted as e ij :\nIG approx ij (F c , E) = (eij -e ij ) × m k=1 ∂F c (E + k m × (E -E )) ∂eij × 1 m .(3)\nIn the implementation, we stack n [PAD] token embeddings as baseline features E since they usually have no influence on the model prediction." }, { "figure_ref": [ "fig_1" ], "heading": "AD-KD", "publication_ref": [], "table_ref": [], "text": "In this section, we elaborate on our proposed Attribution-Driven Knowledge Distillation (AD-KD), including attribution maps and attribution distillation. The overall framework of AD-KD is illustrated in Figure 2." }, { "figure_ref": [ "fig_2" ], "heading": "Attribution Maps", "publication_ref": [ "b1" ], "table_ref": [], "text": "The attribution scores of a language model reflect the importance of different tokens towards the prediction, which contains valuable data-specific reasoning knowledge. The scores are computed among different tokens at different dimensions of a given model, using IG defined in Section 3.1. In this work, we do not take the sign into consideration, since the scores at different dimensions of the same token embedding would cannibalize each other when combining them into a token-level attribution score. This observation is consistent with the findings in (Atanasova et al., 2020).\nWhen calculating the attribution scores, we observed that there exist certain dimensions whose attribution scores remain relatively low across different tokens. The attribution scores from these dimensions minimize the difference between important and unimportant tokens, which can be regarded as noises. For better illustration, Figure 3 shows an example of sentence \"seem weird and distanced\" whose annotation is negative sentiment. It is clear that \"weird\" and \"distance\" are the keywords that contribute most to the prediction, whereas a proportion of dimensions of them present low attribution scores. To alleviate the influence of noisy dimensions in the input embeddings, we simply choose the top-K dimensions with high attribution scores and filter out dimensions with low attribution scores. Formally, the attribution score of token x i with respect to the label c in the teacher model can be calculated as:\na t,c i = TopK(IG approx i (F t,c , E t )) 2,(4)\nwhere the superscript t denotes the teacher model. Therefore, the attribution map of the teacher consists of a sequence of attribution scores:\na t,c = [a t,c 1 , a t,c 2 , ..., a t,c n ].(5)\nFor the student, the extraction of attribution map is similar except that we consider all dimensions for two reasons. First, it reduces the difficulty of training. Second, the student is allowed to learn from the noiseless attribution map of the teacher.\na s,c i = IG approx i (F s,c , E s ) 2, a s,c = [a s,c 1 , a s,c 2 , ..., a s,c n ].(6)\nConsidering that the teacher can make multiple decisions, each of which is associated with a probability, we further propose to extract multi-view attribution knowledge. Specifically, we extract the attribution maps for all possible predictions of the model rather than a single prediction, e.g., the prediction with the maximum probability or the prediction corresponding to the ground-truth label. By transferring the multi-view attribution knowledge, the student can capture a more comprehensive understanding of the teacher's soft-label distribution. 
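Before those multi-view maps are written out formally below, here is a sketch of how the token-level scores of Eqs. (4)–(6) could be assembled from the per-dimension IG values, with top-K filtering applied only on the teacher side. Taking top-K over absolute values and the toy shapes are implementation assumptions for illustration, not verified against the authors' code:

```python
import torch

def token_attribution_scores(ig_per_dim, top_k=None):
    """ig_per_dim: [seq_len, hidden] per-dimension IG scores for one class.
    Returns a [seq_len] vector of token-level attribution scores (Eqs. (4)-(6))."""
    scores = ig_per_dim.abs()            # the sign is not used, as stated in Section 3.2.1
    if top_k is not None:                # teacher side: keep only the K largest dimensions
        scores, _ = scores.topk(top_k, dim=-1)
    return scores.norm(p=2, dim=-1)      # aggregate the retained dimensions per token

# e.g. the teacher keeps K = 700 of 768 dimensions, the student keeps all of them
ig_teacher = torch.randn(16, 768)        # hypothetical per-dimension attributions
ig_student = torch.randn(16, 768)
a_teacher = token_attribution_scores(ig_teacher, top_k=700)   # shape [16]
a_student = token_attribution_scores(ig_student)              # shape [16]
```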
The multi-view attribution maps are defined as:\nA t = C c=1 a t,c , A s = C c=1 a s,c ,(7)\nwhere is the concatenation operation." }, { "figure_ref": [], "heading": "Attribution Distillation", "publication_ref": [], "table_ref": [], "text": "Given the multi-view attribution maps, a straightforward strategy to transfer the knowledge is to directly minimize the difference between the two sets of maps in teacher and student models, with distance metrics like L2 distance (MSE):\nA t -A s 2. (8\n)\nHowever, one obvious shortcoming with this approach is that there may exist a magnitude gap between the attribution scores in teacher and student models at the early phase of distillation, since the teacher is already well-trained while the student has little attribution knowledge. Under this circumstance, the student is likely to fall into a local optimum. To enable smooth knowledge distillation, we normalize the attribution maps before minimizing the difference. Concretely, we first transform the single-view attribution maps into unit vectors: Then we reformulate the normalized multi-view attribution maps in Eq. ( 7) as:\na t,c = a t,c a t,c 2 , a s,c = a s,c a s,c 2 . (9\nA t = C c=1 a t,c , A s = C c=1 a s,c .(10)\nThe normalized attribution maps only preserve the information of relative importance among tokens regardless of their absolute importance, which we believe is the crucial knowledge to transfer. Finally, we define the attribution distillation loss as:\nLattr = A t -A s 2.\n(11)" }, { "figure_ref": [], "heading": "Overall Objective", "publication_ref": [ "b16" ], "table_ref": [], "text": "We combine the original cross-entropy loss between the output of the student and the groundtruth label, the response-based loss (on the logits) (Hinton et al., 2015), and the proposed attributiondriven distillation loss to train the student model. The overall objective is defined as:\nL = (1 -α)Lce + αL logit + βLattr,(12)\nwhere L ce = -logσ(z s )[y] is the cross-entropy loss and L logit =KL(σ( z t τ ) σ( z s τ )) is the loss on the output logits. And, α and β are two hyperparameters, σ is the softmax function, y is the ground-truth label, τ is the temperature, and z t and z s are the output logits of the teacher and student models, respectively. KL(•) denotes the KL-divergence." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b42", "b45", "b48", "b35", "b29", "b12", "b8", "b5", "b7", "b27", "b23" ], "table_ref": [], "text": "We evaluate our method on eight tasks of the GLUE benchmark (Wang et al., 2018), including CoLA (Warstadt et al., 2019), MNLI (Williams et al., 2018), SST-2 (Socher et al., 2013), QNLI (Rajpurkar et al., 2016), MRPC (Dolan and Brockett, 2005), QQP (Chen et al., 2018), RTE (Bentivogli et al., 2009) and STS-B (Cer et al., 2017). The details of these datasets are introduced in Appendix A.1. For evaluation metrics, we follow previous works (Park et al., 2021;Liu et al., 2022) and report accuracy on MNLI, SST-2, QNLI, QQP and RTE, F1 score on MRPC, Matthews correlation coefficient on CoLA, and Spearman's rank correlation coefficient on STS-B." }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b16", "b40", "b36", "b19", "b27", "b23", "b44", "b43", "b23" ], "table_ref": [], "text": "We compare AD-KD with response-based KD methods and several state-of-the-art feature-based and relation-based KD methods. 
Response-based baselines include Vanilla KD (Hinton et al., 2015) and PD (Turc et al., 2019). Feature-based and relation-based baselines include PKD (Sun et al., 2019) which distills the hidden representations, TinyBERT (Jiao et al., 2020) which distills the selfattention matrices, and CKD (Park et al., 2021) and MGSKD (Liu et al., 2022) which distill the relation between hidden representations. For a fair comparison, MiniLM (Wang et al., 2020(Wang et al., , 2021) ) and MobileBERT (Sun et al., 2020b) are not presented due to their two-stage distillation settings which involve both task-agnostic and task-specific distillation. Our AD-KD focuses on task-specific distillation and does not augment the training sets. Moreover, MGSKD (Liu et al., 2022) only reports results on a 4-layer BERT student model which is different from other baselines. To ensure a fair comparison, we re-implemented MGSKD using their released code to obtain a 6-layer student model. The original MGSKD approach also relies on spanlevel information that is extracted from external knowledge sources, which is not publicly available nor included in other baselines. Therefore, we did not use this external knowledge in our reimplementation of MGSKD. " }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b49", "b40", "b27" ], "table_ref": [], "text": "Our code is implemented in Pytorch with the Transformers package (Wolf et al., 2020). We finetune BERT base as the teacher model, and utilize a smaller BERT released by Turc et al. (2019) with 6 Transformer layers, 768 hidden neurons and 12 attention heads to instantiate the student model following Park et al. (2021). We search for the optimal learning rate in {2e-5, 3e-5, 4e-5, 5e-5}, α in {0.8, 0.9, 1.0} and temperature τ in {1, 2, 3, 4}. For the hyperparameter β, we tune within {1, 10, 50, 100}. For the IG steps m described in Section 3.1, we adopt m = 1 in the main results due to the huge computational overhead. Part of results with m varying from 1 to 8 are reported in Section 5.4. K is empirically searched within {384, 512, 640, 700, 734, 768}. Results with different values of K are also reported. The detailed hyperparameter settings and training cost are provided in Appendix A.2. Our code is available at https://github.com/brucewsy/AD-KD." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b52" ], "table_ref": [ "tab_2" ], "text": "The main results are presented in Table 1. It can be seen that AD-KD outperforms all baselines on most of the datasets. Specifically, AD-KD yields an average improvement of 1.0 and 1.9 points over CKD and MGSKD respectively on development sets, and another average improvement of 0.9 points over MGSKD on test sets. Note that other featurebased and relation-based KD methods even underperform vanilla KD, indicating the difficulty of aligning the teacher and the student at intermediate layers. In contrast, AD-KD distills the attribution knowledge from a global perspective which is more data-specific and shows significant improvement over vanilla KD. We provide two cases in Appendix C.3 to intuitively demonstrate the strength of AD-KD. We also observe that AD-KD does not show a satisfying performance on SST-2. 
We believe the reason is that the sentences in SST-2 are much shorter than those in other datasets, and in this case, the student is likely to already capture the attribution knowledge implicitly from the soft-labels of the teacher (Zhang et al., 2022)." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Impact of Loss Terms To analyze the impact of different loss terms, we conduct ablation experiments on three variants of AD-KD: (1) AD-KD without attribution distillation (i.e., vanilla KD),\n(2) AD-KD without the original cross-entropy loss, and (3) AD-KD without logit distillation. As reported in Table 2, again we observe an obvious performance drop after removing the attribution distillation. We also note that removing either the conventional cross-entropy loss or logit distillation loss causes noticeable performance degradation, suggesting both of them contribute to the improvement of AD-KD. Nevertheless, our attribution distillation contributes most to the performance of AD-KD, showing that data-specific reasoning information is crucial in knowledge distillation." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "Multi-view Attribution", "publication_ref": [ "b40" ], "table_ref": [], "text": "In AD-KD, the student learns the attribution knowledge from a variety of possible outputs to get a better understanding of the teacher. Here we study how the number of attribution views affects the final results. Experiments are conducted on MNLI which is a multi-classification task including three labels: entailment, contradiction, and neutral. We make a comparison between multi-view attribution and single-view attribution w.r.t. each candidate label respectively. The results are shown in Figure 4, from which we note that each of the single-view attributions plays a positive role and is superior to vanilla KD. Moreover, combining all attribution views yields further performance improvement, demonstrating that multiview attribution is more preferable for distillation.\nStudent Model Size To investigate whether AD-KD can boost the performance across different sizes of student, we further compare AD-KD with vanilla KD on MRPC and QNLI under various student scales provided by Turc et al. (2019). As observed in Figure 5, AD-KD consistently outperforms vanilla KD, which validates the effectiveness and stability of our approach. " }, { "figure_ref": [], "heading": "Impact of Top-K", "publication_ref": [], "table_ref": [], "text": "Recall that in order to eliminate the interference of noisy dimension, AD-KD adopts the top-K approach on the input embeddings of the teacher to filter out the dimensions with relatively low attribution scores. In this section, we conduct in-depth analysis on the impact of K. We conduct experiments on STS-B and QNLI, and plot the results with different values of K in Figure 6. As illustrated in the figure, the performance on the small dataset STS-B (7k) first improves as K increases and then slightly degrades after K exceeds 600. However, the performance on the larger dataset QNLI (108k) improves almost monotonically with the increasing of K. We conjecture that choosing a suitable K is beneficial on small datasets since there are probably more noisy dimensions in the input embeddings of the teacher, while preserving all dimensions may be preferable on larger datasets." 
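Putting together the terms whose ablation and K-sensitivity are discussed above, the sketch below instantiates the normalized attribution-distillation loss (Eqs. (9)–(11)) and the overall objective (Eq. (12)). The small epsilon, the default α, β, τ values (picked from the reported search ranges), and the toy shapes are assumptions for illustration only:

```python
import torch
import torch.nn.functional as F

def attribution_distillation_loss(maps_teacher, maps_student, eps=1e-12):
    """Eqs. (9)-(11): L2-normalize each single-view map, concatenate the views,
    and take the L2 distance between the teacher and student results."""
    a_t = torch.cat([m / (m.norm(p=2) + eps) for m in maps_teacher])
    a_s = torch.cat([m / (m.norm(p=2) + eps) for m in maps_student])
    return (a_t - a_s).norm(p=2)

def adkd_objective(student_logits, teacher_logits, labels, l_attr,
                   alpha=0.9, beta=10.0, tau=2.0):
    """Eq. (12): (1 - alpha) * L_ce + alpha * L_logit + beta * L_attr."""
    l_ce = F.cross_entropy(student_logits, labels)
    l_logit = F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                       F.softmax(teacher_logits / tau, dim=-1),
                       reduction="batchmean")
    return (1 - alpha) * l_ce + alpha * l_logit + beta * l_attr

# toy usage: 2 classes => 2 attribution views over a 16-token sequence
maps_t = [torch.rand(16) for _ in range(2)]
maps_s = [torch.rand(16) for _ in range(2)]
l_attr = attribution_distillation_loss(maps_t, maps_s)
loss = adkd_objective(torch.randn(8, 2), torch.randn(8, 2),
                      torch.randint(0, 2, (8,)), l_attr)
```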
}, { "figure_ref": [], "heading": "Impact of IG Steps", "publication_ref": [], "table_ref": [], "text": "In our experiments, the IG steps m are set to 1 by default when extracting the attribution maps. In this section, we provide more results with different values of m in Figure 7 to understand its impact on distillation. We observe that as m increases, the performance of AD-KD fluctuates in a certain range. Although it is possible to find a point that surpasses our default setting and even the teacher, identifying the optimal value of m for each task is costly since a large m causes huge computational overhead. In contrast, m=1 achieves better tradeoff between performance and computational cost. " }, { "figure_ref": [], "heading": "Attribution Distillation Layer", "publication_ref": [ "b19", "b27", "b23" ], "table_ref": [ "tab_5" ], "text": "Apart from the attribution knowledge of input layer, the attribution knowledge of intermediate layers can also be transferred during distillation. To confirm the motivation that the former is better than the latter, we conduct experiments on MRPC and QNLI with different attribution layers. Specifically, we choose the first layer and the penultimate layer for comparison. Besides, we also try a uniform strategy which is widely adopted as the mapping function between the teacher and the student layers ( Jiao et al., 2020;Park et al., 2021;Liu et al., 2022).\nFrom the results shown in Table 3, we see that uniform mapping strategy performs best among intermediate layer methods. However, neither of these intermediate layers outperforms input layer, indicating that the attribution knowledge of intermediate layers is more model-specific and difficult to transfer. In addition, distilling the knowledge jointly from the input and the intermediate layers does not improve the performance." }, { "figure_ref": [ "fig_6" ], "heading": "Impact of α and β", "publication_ref": [], "table_ref": [], "text": "For the training objective of AD-KD, we introduce α and β to balance the original cross-entropy loss, logit distillation loss, and attribution distillation loss. To investigate their impact on model performance, we show the results of different values of α and β on MRPC and QNLI in Figure 8, where we fix one while altering the other. We observe a unified trend across different tasks that when α is small, the student does not perform well due to the lack of response-based knowledge of the teacher, and when α is around 0.9, the student performs best. Therefore, we select α close to 1. We also observe from the figure that as β increases, the performance first keeps improving and reaches the peak, then it starts to decline. Unlike α, however, the optimal value of β varies with different tasks, indicating that β is more sensitive to the task compared to α. More discussion of β are given in Appendix C.2." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b46" ], "table_ref": [], "text": "In this paper, we propose AD-KD, a novel knowledge distillation framework for language model compression. Unlike other distillation methods, AD-KD investigates the model knowledge from the perspective of input attribution, which is vital yet easy to transfer between the teacher and the student. Moreover, top-K method is adopted to obtain noiseless attribution maps among input tokens, and multi-view attribution is conducted for a more comprehensive distillation. To our knowledge, this is the first work that incorporates attribution into knowledge distillation. 
Extensive experiments including ablation studies are carried out to show the effectiveness of AD-KD and its components. With the recent emergence of large language models (LLMs), gradient-based attribution methods are infeasible due to the unavailable parameters. However, the idea of AD-KD can still be potentially extended to these black-box models by using occlusion-based attribution or using chainof-thoughts (Wei et al., 2022) as the rationale for distillation. We will leave it to future work." }, { "figure_ref": [], "heading": "A Experimental Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Details of Datasets", "publication_ref": [ "b42", "b45", "b35", "b12", "b8", "b48", "b29", "b5", "b7" ], "table_ref": [ "tab_6" ], "text": "We evaluate AD-KD on eight tasks of GLUE benchmark (Wang et al., 2018). Specifically, there are two single-sentence tasks: CoLA (Warstadt et al., 2019) which aims to predict if the given sentence is grammatically correct, and SST-2 (Socher et al., 2013) which aims to predict the sentiment of the given sentence; two paraphrase tasks: MRPC (Dolan and Brockett, 2005) which aims to predict if two given sentences are semantically equivalent, and QQP (Chen et al., 2018) which is similar to MRPC; three inference tasks which aim to predict if the premise entails the hypothesis: MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), and RTE (Bentivogli et al., 2009); and one similarity task: STS-B (Cer et al., 2017) which aims to predict a continual score measuring the semantic similarity between a pair of sentences. The statistics of these datasets are shown in Table 4. " }, { "figure_ref": [], "heading": "A.2 Hyperparameter Settings", "publication_ref": [], "table_ref": [], "text": "We run all experiments on GeForce RTX 2080 Ti GPUs. " }, { "figure_ref": [], "heading": "B More Discussion", "publication_ref": [ "b4", "b50", "b28", "b17", "b47" ], "table_ref": [], "text": "In this section, we discuss the difference between distilling the attribution maps and distilling the attention matrices. In a sense, attention matrices are similar to attribution maps since they both reflect the contribution that each input token makes on a model prediction to some extent (Bastings and Filippova, 2020;Xu et al., 2020). However, there are several drawbacks when it comes to distillation. On one hand, attention correlates well with attribution locally in specific layers and heads but not globally, indicating that attention maps are inadequate to draw conclusions that refer to the input of the model (Pascual et al., 2021). In other words, attention matrices are more like model-specific knowledge that are probably challenging for the student to learn due to the layer mapping issue, especially when the student has much fewer parameters than the teacher. On the other hand, some works point out that by adversarial training, alternative attention weights can be found whereas the prediction remains almost the same (Jain and Wallace, 2019;Wiegreffe and Pinter, 2019). Therefore, an optimal student unnecessarily shares similar attention matrices with its teacher. Our proposed AD-KD adopts a more reliable gradient-based method to obtain the attribution maps, which is shown better than attention matrices employed by baselines." 
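For reference, the hyperparameter search spaces stated in Section 4.3 can be gathered into one place. The dictionary keys, the exhaustive-product loop, and the hypothetical evaluate_on_dev selection below are illustrative assumptions; the paper does not state how the search is organized:

```python
from itertools import product

# Search spaces restated from Section 4.3 (values as reported; structure assumed).
SEARCH_SPACE = {
    "learning_rate": [2e-5, 3e-5, 4e-5, 5e-5],
    "alpha": [0.8, 0.9, 1.0],
    "temperature": [1, 2, 3, 4],
    "beta": [1, 10, 50, 100],
    "top_k": [384, 512, 640, 700, 734, 768],
}

def configurations(space):
    """Yield every configuration in the Cartesian product of the search space."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

# e.g. pick the configuration with the best dev score (evaluate_on_dev is hypothetical):
# best = max(configurations(SEARCH_SPACE), key=evaluate_on_dev)
```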
}, { "figure_ref": [], "heading": "C More Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.1 Results on MultiRC", "publication_ref": [ "b41", "b20" ], "table_ref": [], "text": "Considering that the text in GLUE is relatively short (with Max_Seq_Length set to 128), We conduct additional experiments on SuperGLUE (Wang et al., 2019) for more comprehensive evaluation. We select a challenging QA task, MultiRC (Khashabi et al., 2018) " }, { "figure_ref": [], "heading": "C.2 Overfitting Study", "publication_ref": [], "table_ref": [], "text": "In this section, we investigate whether the overfitting problem would happen in attribution distillation. Using Eq. ( 11 " }, { "figure_ref": [ "fig_0" ], "heading": "C.3 Case Study", "publication_ref": [], "table_ref": [], "text": "In this section, we provide two examples to show how AD-KD facilitates the imitation of the teacher's reasoning and outperforms vanilla KD. As shown in Figure 10, vanilla KD makes mistakes by ignoring keyword Louisiana or emphasizing an irrelevant word billion. In contrast, the attribution maps of AD-KD are more consistent with the ones in the teacher. AD-KD learns what to and not to focus on and thus predicts the label correctly." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Natural Science Foundation of China (No. 62176270), the Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515012832), and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2017ZT07X355)." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b18", "b34" ], "table_ref": [], "text": "This work introduces the general idea of incorporating attribution into knowledge distillation, and there are three potential limitations. First, although AD-KD chooses Integrated Gradients for attribution, there are actually other attribution methods (Janizek et al., 2021;Sikdar et al., 2021) which can also be fitted in our framework. The question of whether these methods perform better than Integrated Gradients when combined with knowledge distillation is still unclear. Second, we conduct experiments on BERT of different scales and have not yet validated the effectiveness of AD-KD on other model structures. Third, while we only perform task-specific knowledge distillation in our experiments, applying AD-KD to task-agnostic knowledge distillation is also worth investigating." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "Our work will not cause ethical issues and the datasets used in this paper are publicly available. " } ]
Knowledge distillation has attracted a great deal of interest recently to compress pretrained language models. However, existing knowledge distillation methods suffer from two limitations. First, the student model simply imitates the teacher's behavior while ignoring the underlying reasoning. Second, these methods usually focus on the transfer of sophisticated model-specific knowledge but overlook data-specific knowledge. In this paper, we present a novel attribution-driven knowledge distillation approach, which explores the token-level rationale behind the teacher model based on Integrated Gradients (IG) and transfers attribution knowledge to the student model. To enhance the knowledge transfer of model reasoning and generalization, we further explore multi-view attribution distillation on all potential decisions of the teacher. Comprehensive experiments are conducted with BERT on the GLUE benchmark. The experimental results demonstrate the superior performance of our approach to several state-of-the-art methods.
AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression
[ { "figure_caption": "Figure 1 :1Figure 1: An example from the QNLI dataset (Rajpurkar et al., 2016) to illustrate different knowledge distillation techniques including the proposed attribution-driven method. Darker colors mean larger attribution scores.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of our AD-KD framework. The example in Figure 1 is taken as the input. AD-KD first extracts the attribution maps from the teacher model and then transfers the attribution-based knowledge to the student.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: An example from the SST-2 dataset(Socher et al., 2013). Given the sentence \"seem weird and distanced\" and its sentiment label negative, the distributions of absolute attribution scores among different tokens and dimensions are shown in subfigures (a)-(e). The model is a well-trained BERT base (teacher) and the IG steps m is set to 1.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Ablation study of multi-view attribution on the MNLI development set.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Results of AD-KD and vanilla KD on MRPC and QNLI development sets at different student scales.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :Figure 7 :67Figure 6: Results on STS-B and QNLI development sets as the number (K) of retained dimensions changes.", "figure_data": "", "figure_id": "fig_5", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Results on MRPC and QNLI development sets as α and β changes.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Comparison of the attribution gap between teacher and student on training set and development set.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "), we calculate the attribution gap between the teacher and the student models on the training and development sets of MRPC and QNLI respectively, and show the results in Figure9. By altering β, the tendency of attribution gap on development sets is consistent with the one on training sets, which indicates that the attribution knowledge learned from training data can be well generalized to unseen data. Therefore, overfitting tends not to happen in attribution distillation.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ")", "figure_data": "Absolute Attribution Score0.00 0.01 0.02 0.03 0.04 0.05 0.060250 500 750 Embedding DimensionAbsolute Attribution Score0.00 0.01 0.02 0.03 0.04 0.05 0.060250 500 750 Embedding DimensionAbsolute Attribution Score0.00 0.01 0.02 0.03 0.04 0.05 0.060250 500 750 Embedding DimensionAbsolute Attribution Score0.00 0.01 0.02 0.03 0.04 0.05 0.060250 500 750 Embedding DimensionAbsolute Attribution Score0.00 0.01 0.02 0.03 0.04 0.05 0.060Embedding Dimension 250 500 750(a) seem(b) weird(c) and(d) distance(e) ##d", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Overall results on the GLUE benchmark. 
The results of baselines except vanilla KD and MGSKD are imported fromPark et al. (2021). Results of development sets are averaged over 3 runs and we submit the model with the highest score to the official GLUE server to obtain the results of test sets. Average score is computed excluding the MNLI-mm accuracy. The best results of the student models are shown in bold and the second best results are shown with underline. Results are statistically significant with p-value < 0.005.", "figure_data": "Model#ParamsCoLA (Mcc)MNLI-(m/mm) (Acc)SST-2 (Acc)QNLI (Acc)MRPC (F1)QQP (Acc)RTE (Acc)STS-B (Spear)AvgDevBERT base (Teacher)110M60.384.9/84.893.791.791.491.569.789.484.1BERT 6 (Student)66M51.281.7/82.691.089.389.290.466.188.380.9Vanilla KD (Hinton et al., 2015)66M53.682.7/83.191.190.189.490.566.888.781.6PD (Turc et al., 2019)66M-82.5/83.491.189.489.490.766.7--PKD (Sun et al., 2019)66M45.581.3/-91.388.485.788.466.586.279.2TinyBERT (Jiao et al., 2020)66M53.883.1/83.492.389.988.890.566.988.381.7CKD (Park et al., 2021)66M55.183.6/ 84.193.090.589.691.267.389.082.4MGSKD (Liu et al., 2022)66M49.183.3/83.991.790.389.891.267.988.581.5AD-KD66M58.383.4/84.291.991.291.291.270.989.283.4TestBERT base (Teacher)110M51.584.5/84.194.190.987.789.267.585.581.4BERT 6 (Student)66M41.781.9/81.091.388.985.288.064.082.477.9Vanilla KD (Hinton et al., 2015)66M42.382.7/81.892.089.386.388.265.082.778.6PD (Turc et al., 2019)66M-82.8/82.291.888.986.888.965.3--PKD (Sun et al., 2019)66M43.581.5/81.092.089.085.088.965.581.678.4MGSKD (Liu et al., 2022)66M42.883.4/82.892.189.587.089.163.782.278.7AD-KD66M47.083.1/82.691.890.087.188.965.883.479.6", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation study of different loss terms. The results are based on GLUE development sets.", "figure_data": "MethodCoLA (Mcc)MNLI-(m/mm) (Acc)SST-2 (Acc)QNLI (Acc)MRPC (F1)QQP (Acc)RTE (Acc)STS-B (Spear)AD-KD58.383.4/84.291.991.291.291.270.989.2w/o L attr53.682.7/83.191.290.289.290.567.588.9w/o L ce57.883.6/84.191.390.890.891.269.388.9w/o L logit53.981.9/82.891.190.589.990.968.688.884.5 85.0Multi Single (contradiction)84.2 Single (entailment) Single (neutral)Vanilla KDAccuracy (%)83.0 83.5 84.083.482.883.282.982.783.883.783.883.182.582.0MNLI-mMNLI-mm", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results of different attribution layers on MRPC and QNLI development sets.", "figure_data": "Attribution LayerMRPC (F1)QNLI (Acc)input91.291.2first90.590.9penultimate90.490.9uniform90.691.1input & uniform90.190.6", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Statistics of the GLUE datasets.", "figure_data": "Task#Train #Dev #Test #LabelSingle-Sentence ClassificationCoLA8.5k1k1k2SST-267k8721.8k2Pairwise Text ClassificationMNLI393k20k20k3QNLI108k5.7k 5.7k2MRPC3.7k4081.7k2QQP364k40k 391k2RTE2.5k2763k2Text SimilaritySTS-B7k1.5k 1.4k1", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "presents the hyperparameter set-tings and training costs of AD-KD on GLUE tasks.Generally, AD-KD runs 1.2 to 3 times slower com-pared to vanilla KD on different tasks, due to theextra back-propagation. 
However, all students obtained by different distillation methods have the same inference speed.", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Hyperparameter settings and training cost.", "figure_data": "", "figure_id": "tab_9", "figure_label": "5", "figure_type": "table" }, { "figure_caption": ", with much longer text (with Max_Seq_Length set to 512) which requires more attribution knowledge. As shown in Table 6, AD-KD improves 0.97% over vanilla KD and 0.38% over MGSKD. Moreover, the performance of AD-KD is on par with the teacher. Results on MultiRC development set.", "figure_data": "Model #Params Acc BERT base (Teacher) 110M 68.53 Vanilla KD 66M 67.70 MGSKD 66M 68.29 AD-KD 66M 68.67", "figure_id": "tab_10", "figure_label": "6", "figure_type": "table" } ]
Siyue Wu; Hongzhan Chen; Xiaojun Quan; Qifan Wang; Rui Wang
[ { "authors": "Marco Ancona; Enea Ceolini; Cengiz Öztireli; Markus Gross", "journal": "OpenReview", "ref_id": "b0", "title": "Towards better understanding of gradient-based attribution methods for deep neural networks", "year": "2018-04-30" }, { "authors": "Pepa Atanasova; Jakob Grue Simonsen; Christina Lioma; Isabelle Augenstein", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "A diagnostic study of explainability techniques for text classification", "year": "2020" }, { "authors": "Sebastian Bach; Alexander Binder; Grégoire Montavon; Frederick Klauschen; Klaus-Robert Müller; Wojciech Samek", "journal": "PloS one", "ref_id": "b2", "title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation", "year": "2015" }, { "authors": "David Baehrens; Timon Schroeter; Stefan Harmeling; Motoaki Kawanabe; Katja Hansen; Klaus-Robert Müller", "journal": "The Journal of Machine Learning Research", "ref_id": "b3", "title": "How to explain individual classification decisions", "year": "2010" }, { "authors": "Jasmijn Bastings; Katja Filippova", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "The elephant in the interpretability room: Why use attention as explanation when we have saliency methods", "year": "2020" }, { "authors": "Luisa Bentivogli; Peter Clark; Ido Dagan; Danilo Giampiccolo", "journal": "", "ref_id": "b5", "title": "The fifth pascal recognizing textual entailment challenge", "year": "2009" }, { "authors": "Gino Brunner; Yang Liu; Damian Pascual; Oliver Richter; Massimiliano Ciaramita; Roger Wattenhofer", "journal": "", "ref_id": "b6", "title": "On identifiability in transformers", "year": "2020" }, { "authors": "Daniel Cer; Mona Diab; Eneko Agirre; Iñigo Lopez-Gazpio; Lucia Specia", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "year": "2017" }, { "authors": "Zihan Chen; Hongbo Zhang; Xiaoji Zhang; Leqi Zhao", "journal": "", "ref_id": "b8", "title": "Quora question pairs", "year": "2018" }, { "authors": "Nicola De Cao; Michael Sejr Schlichtkrull; Wilker Aziz; Ivan Titov", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "How do decisions emerge across layers in neural models? 
interpretation with differentiable masking", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 76.78, 570.24, 212.35, 56.98 ], "formula_id": "formula_0", "formula_text": "F(x) - F(x') = \sum_{i=1}^{n} IG_i(F, x) = \sum_{i=1}^{n} \left[ (x_i - x'_i) \times \int_{\alpha=0}^{1} \frac{\partial F(x' + \alpha \times (x - x'))}{\partial x_i} d\alpha \right]. (1)" }, { "formula_coordinates": [ 3, 85.43, 661.46, 200.22, 41.88 ], "formula_id": "formula_1", "formula_text": "IG_i^{approx}(F, x) = (x_i - x'_i) \times \sum_{k=1}^{m} \frac{\partial F(x' + \frac{k}{m} \times (x - x'))}{\partial x_i} \times \frac{1}{m}, (2)" }, { "formula_coordinates": [ 3, 437.32, 114.99, 88.45, 10.67 ], "formula_id": "formula_3", "formula_text": "x = [x_1, x_2, ..., x_n]," }, { "formula_coordinates": [ 3, 313.71, 323.81, 210.7, 42.38 ], "formula_id": "formula_4", "formula_text": "IG_{ij}^{approx}(F^c, E) = (e_{ij} - e'_{ij}) \times \sum_{k=1}^{m} \frac{\partial F^c(E' + \frac{k}{m} \times (E - E'))}{\partial e_{ij}} \times \frac{1}{m}. (3)" }, { "formula_coordinates": [ 4, 109.86, 515.27, 179.28, 11.88 ], "formula_id": "formula_5", "formula_text": "a_i^{t,c} = \| TopK(IG_i^{approx}(F^{t,c}, E^t)) \|_2, (4)" }, { "formula_coordinates": [ 4, 132.21, 588.13, 156.92, 11.77 ], "formula_id": "formula_6", "formula_text": "a^{t,c} = [a_1^{t,c}, a_2^{t,c}, ..., a_n^{t,c}]. (5)" }, { "formula_coordinates": [ 4, 122.58, 687.09, 166.56, 24.72 ], "formula_id": "formula_7", "formula_text": "a_i^{s,c} = \| IG_i^{approx}(F^{s,c}, E^s) \|_2, \quad a^{s,c} = [a_1^{s,c}, a_2^{s,c}, ..., a_n^{s,c}]. (6)" }, { "formula_coordinates": [ 4, 354.73, 446.77, 169.68, 11.13 ], "formula_id": "formula_8", "formula_text": "A^t = \sum_{c=1}^{C} a^{t,c}, \quad A^s = \sum_{c=1}^{C} a^{s,c}, (7)" }, { "formula_coordinates": [ 4, 394.61, 577.53, 126.31, 10.55 ], "formula_id": "formula_9", "formula_text": "\| A^t - A^s \|_2. (8)" }, { "formula_coordinates": [ 4, 353.9, 754.49, 167.02, 21.93 ], "formula_id": "formula_11", "formula_text": "a^{t,c} = a^{t,c} / \| a^{t,c} \|_2, \quad a^{s,c} = a^{s,c} / \| a^{s,c} \|_2. (9)" }, { "formula_coordinates": [ 5, 119.46, 281.07, 169.68, 11.13 ], "formula_id": "formula_12", "formula_text": "A^t = \sum_{c=1}^{C} a^{t,c}, \quad A^s = \sum_{c=1}^{C} a^{s,c}. (10)" }, { "formula_coordinates": [ 5, 138.47, 376.92, 83.07, 10.55 ], "formula_id": "formula_13", "formula_text": "L_{attr} = \| A^t - A^s \|_2." }, { "formula_coordinates": [ 5, 109.77, 504.6, 179.37, 8.35 ], "formula_id": "formula_14", "formula_text": "L = (1 - \alpha) L_{ce} + \alpha L_{logit} + \beta L_{attr}, (12)" } ]
2023-05-17
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b14", "b7", "b10", "b21", "b23", "b24", "b30", "b1", "b22", "b29", "b4", "b16", "b17", "b0", "b3", "b9", "b15", "b6", "b18", "b26", "b19" ], "table_ref": [], "text": "Colorectal cancer is one of the most preventable cancers, as early detection and through screening is highly effective. The most common screening procedure is optical colonoscopy -visually examining the surface of the colon for abnormalities such as colorectal lesions and polyps. However, performing a thorough examination of the entire colon surface is proven to be quite challenging due to unavoidable poor visibility segments of the procedure. As a consequence, improperly inspected regions may lead to a lower detection rate of polyps. Indeed, recent studies have shown that approximately 25% of polyps are routinely missed during a typical colonoscopy procedure [15].\nVarious efforts to automatically detect and mark non-inspected regions of the colon are reported in recent publications, where the common approach relies on the creation of a dense 3D reconstruction of the colon's shape [8,11,22,24,25,31]. However, such a reconstruction based on video solely is a challenging task, and especially so in colonoscopy, in which reflections, low-texture content, frequent changes in lighting conditions and erratic motion are common. As a consequence, while the above 3D approach has promise, it is limited to segments of the video exhibiting good visual quality.\nIn this work we propose a novel real-time approach for detecting deficient local coverage, complementing the 3D reconstruction methods mentioned above.\nOur proposed strategy provides a reliable, stable and robust solution for the grand challenge posed by temporal periods of poor visual content, such as camera blur, poor camera positioning, occlusions due to dirt and spayed water, and more. The proposed method consists of two main phases. During the first, we identify time segments with good visibility of the colon and gaps of poor visibility between them. For this purpose we train a binary classifier, leveraging a small set of annotated images and a self-supervised training scheme. During the second phase, we train an ML model that aims to answer the following question for each gap: Do you observe different scenes before and after the gap? (see Figure 1). If the answer is positive, we suspect a loss of coverage due to an unintentional drift of the endoscope position, and therefore alert the endoscopist accordingly in real-time to revisit the area.\nThe second phase model is designed to generate low-dimensional frame-based descriptors that are used for scene-change detection via a simple Cosine distance evaluation. This network is trained using a contrastive loss based on automatically generated positive and negative pairs of video segments. These training examples are sampled from good-visibility segments of real colonoscopy videos, where the translational speed of the endoscope can be reliably estimated.\nTo evaluate our method we introduce a dataset of 250 colonoscopy procedures (videos). Two doctors have been asked to evaluate up to 5 gaps per video and decide whether they suspect loss of coverage there. The evaluation of our method using this annotated dataset provides sensitivity of 75% with specificity of 90%.\nWe note that our task of same-scene detection in the colon is related to image retrieval [2,23,30] and geo-localization [5,17,18]. 
There is also some similarity to techniques employed for face recognition [1,4,10,16] and person reidentification [7,19,27]. In the narrower domain of colonoscopy, the only closely related work we are aware of is reported in [20]. While their technique for location recognition is related to our scene descriptor generation, their eventual tasks are markedly different, and so are the evaluation protocols. Nevertheless, for completeness of this work, we evaluate our scene descriptors on their dataset and show that our method outperforms their results.\nTo summarize, this work offers three main contributions:\n-We present a novel stable, robust and accurate method for detecting deficient local coverage in real-time for periods with poor visual content. -Our coverage solution complements the 3D reconstruction approach, covering cases beyond it's reach; -We introduce a novel self-supervised method for generating frame-based descriptors for scene change-detection in colonoscopy videos.\nThis paper is organized as follow: Section 2 describes Phase I of our method, aiming to identify time segments with good visibility of the colon and gaps between them. Phase II of our method is presented in Section 3, addressing the same-scene question by metric learning. Section 4 summarizes the results of our experiments and Section 5 concludes the paper." }, { "figure_ref": [ "fig_1" ], "heading": "Method: Phase I -Visibility Classification", "publication_ref": [ "b8" ], "table_ref": [], "text": "Our starting point is a frame-based classification of the visibility content. We characterize good visibility frames as those having a clear view of the tubular structure of the colon. In contrast, poor visibility frames may include severe occlusions due to dirt or sprayed water, a poor positioning of the camera -being dragged on the colon walls, or simply blurred content due to rapid motion.\nIn order to solve this classification task, we gather training and validation annotated datasets by experts. Operating on 85 different colonoscopy videos, 5 good visibility segments and 5 poor ones were identified in each. A naive supervised learning of a classifier leads to an unsatisfactory 84% accuracy on the validation set due to insufficient data. In an attempt to improve this result, we adopt a semi-supervised approach. First, we pre-trained an encoder on large (1e6) randomly sampled frames using simCLR [9]. This unsupervised learning embeds the frames such that similar ones (obtained by augmentations of the same frame) are close-by, while different frames (the rest of the frames in the batch) are pushed away. Given the learned encoder, we train a binary classifier on the resulting embeddings using the labeled data. Since the dimension of the embedding vectors is much smaller then the original frame sizes (512 vs. 224 2 ), this approach leads to far better accuracy of 93%. We further improve the above by smoothing the predictions based on their embeddings, as shown in Fig. 2. For each input batch of 512 frames, their cross-similarities (the cosine distance between their embedding vectors) are leveraged, such that similar frames are also encouraged to be assigned to the same class. This improves the per-frame accuracy on the validation set up to 94%. To conclude, the trained classifier provides a partitioning of the time axis into disjoint intervals of good or poor visibility. In order to further relax these intervals, we apply a median filter with window size of 10 frames." 
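A minimal sketch of this Phase I pipeline follows: a binary head trained on frozen 512-dimensional simCLR embeddings, similarity-based smoothing of the per-frame predictions, and a 10-frame median filter. The encoder interface, the softmax temperature, and the exact smoothing scheme are assumptions made for illustration, not the authors' released implementation.

```python
# Illustrative sketch of Phase I: a binary visibility head on frozen 512-d
# simCLR embeddings, similarity-weighted smoothing, and median filtering.
import torch
import torch.nn.functional as F

class VisibilityHead(torch.nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.fc = torch.nn.Linear(dim, 2)      # good / poor visibility

    def forward(self, emb):                    # emb: (N, 512) frozen features
        return self.fc(emb)

def smooth_with_similarity(logits, emb, temp=0.1):
    """Encourage frames with similar embeddings (by cosine similarity)
    to share the same class by averaging their logits."""
    emb = F.normalize(emb, dim=1)
    sim = emb @ emb.t()                        # (N, N) cosine similarities
    weights = torch.softmax(sim / temp, dim=1)
    return weights @ logits                    # similarity-weighted logits

def median_filter_1d(labels, k=10):
    """Relax the good/poor intervals with a temporal median filter."""
    padded = F.pad(labels.float()[None, None], (k // 2, (k - 1) // 2), mode="replicate")
    windows = padded.unfold(-1, k, 1)[0, 0]    # (N, k) sliding windows
    return windows.median(dim=-1).values.long()

# Usage on one batch of 512 consecutive frames:
# emb = encoder(frames)                        # frozen simCLR encoder
# logits = smooth_with_similarity(VisibilityHead()(emb), emb)
# labels = median_filter_1d(logits.argmax(dim=1))
```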
}, { "figure_ref": [ "fig_2" ], "heading": "Method: Phase II -Gaps with Loss of Coverage", "publication_ref": [ "b2", "b8", "b12", "b27", "b20", "b25", "b13", "b11", "b27", "b8", "b13", "b28" ], "table_ref": [], "text": "After partitioning the procedure timeline into periods with good visibility and gaps between them, our goal now is to identify gaps with a potential loss of coverage, defined as exhibiting a change of the scene between their ends. In order to compare scenes before and after a gap, we learn distinctive frame descriptors. These vectors are compared via a simple distance measure for addressing the same/not-same scene question. While the direct approach towards this task would be to gather a training set of many thousands of such gaps along with their human annotation, we introduce a much cheaper, faster, and easier alternative based on a self-supervised approach. In this section we describe all these ingredients in greater detail. Scene Descriptors: Assume that a training set of the form $\{F_1^k, F_2^k, c^k\}_{k=1}^{N}$ is given to us, where $F_1^k$ and $F_2^k$ are two frames on both sides of a given gap, and $c^k$ is their label, being $c^k = 1$ for the same scene and 0 otherwise. N is the size of this training data, set in this work to be N = 1e5 examples. We design a neural network $f = T_\Theta(F)$ that embeds the frame F into the low-dimensional vector $f \in \mathbb{R}^{512}$, while accommodating our desire to serve the same/not-same scene task. More specifically, our goal is to push same-scene descriptor-pairs to be close-by while forcing pairs of different scenes to be distant, being the essence of contrastive learning, which has been drawing increased attention recently [3,9,13,28]. Therefore, we train $T_\Theta(\cdot)$ to minimize the loss function\n$L(\Theta) = \sum_{k=1}^{N} (2c^k - 1)\, d\big(T_\Theta(F_1^k), T_\Theta(F_2^k)\big) = \sum_{k: c^k=1} d\big(T_\Theta(F_1^k), T_\Theta(F_2^k)\big) - \sum_{k: c^k=0} d\big(T_\Theta(F_1^k), T_\Theta(F_2^k)\big).$ (1)\nIn the above expression, $d(\cdot, \cdot)$ stands for a distance measure. In this work we use the Cosine distance $d(f_1, f_2) = 1 - f_1^\top f_2 / (\|f_1\|_2 \|f_2\|_2)$. Creating the Training Data: Constructing the training set $\{F_1^k, F_2^k, c^k\}_{k=1}^{N}$ might be a daunting challenge if annotations by experts are to be practiced. We introduce a fully automatic alternative that builds on a reliable displacement estimation of the endoscope, accessible in good-visibility video segments of any real colonoscopy. This displacement can be evaluated by estimating the optical flow between consecutive frames (see [21,26]) and estimating the amount of flow through the frame boundary [14] (see Figure 3). Given any time interval of good visibility content, the cumulative directional translational motion can be estimated rather accurately. Thus, starting with such a video segment, and randomly marking an inner part of it of a random length of 5-30 seconds as a pseudo-gap, we can define the frames on both its ends as having the same scene or not based on the accumulated displacement. Figure 4 presents the whole process of creating training examples this way, easily obtaining triplets $\{F_1^k, F_2^k, c^k\}$. Our attempts to improve the above contrastive training scheme by introducing a margin, as practiced in [12], and by employing a \"soft-max\" loss version [28], did not bring a significant improvement. A technique that delivered a benefit is to pre-train the network $T_\Theta$ in a fully unsupervised way using simCLR [9] (as in Section 2), and proceed along the above contrastive learning scheme. Implementation details can be found in the supplementary material. Fig. 3. 
Endoscope displacement estimation is based on optical-flow calculation between consecutive frames, using the amount of flow through the frame boundary (see [14]). Gap Classification: With a simple machinery of a distance evaluation of the frame descriptors on both ends of any gap, we are now equipped to answer our main questions: Is there a potential loss of coverage during this poor-visibility video segment? Has the probe drifted away from its original position? As this distance evaluation can be applied over various frames on both sides of the gap, the challenge is to find a reliable fusion of the information within these many pairs of frames. While we have experimented with various such options, the best results are achieved by calculating a single descriptor for the scenes before and after the gap, and then comparing these using a Cosine distance. This unified descriptor, $f$, is obtained by a weighted average of the individual descriptors in a segment of 2 seconds on each side, $f_i$, as follows: $f = \sum_i f_i w_i / \sum_i w_i$, where $w_i = v_i e^{-s_i}$, and $v_i$ and $s_i$ are the raw visibility score and the temporal distance to the gap, both referring to the i-th frame. While the effectiveness of employing such a simple averaging of the descriptors might seem surprising, a similar strategy was proven successful for face recognition from multiple views in [29]." }, { "figure_ref": [ "fig_3" ], "heading": "Results", "publication_ref": [ "b19", "b19", "b19", "b19", "b19", "b19", "b5" ], "table_ref": [ "tab_1" ], "text": "As explained in Section 3, first we generate per-frame scene descriptors and then employ them to detect the gaps with potential loss of coverage. This section starts by presenting the evaluation of the stand-alone scene descriptors and comparing them to SOTA. Then we describe the dataset of the annotated gaps and present the evaluation of our gap classifier on this dataset. Scene Descriptors: We evaluate our scene descriptors on the recently released dataset for colonoscopic image retrieval -Colon10K [20]. This dataset contains 20 short sequences (10,126 images), where the positive matching images were manually labeled and verified by an endoscopist. We follow the setup and the evaluation metrics described in [20]. In total, they have 620 retrieval tasks (denoted by \"all\"), while 309 tasks use the intervals that are not direct neighbor frames of their queries as positives (denoted by \"indirect\").\nTable 1. Comparison of our scene descriptor generation to [20] on the Colon10K dataset. In all the evaluated metrics our method outperforms [20].\nWe use the data from Colon10K for evaluation purposes only. Table 1 compares the results to those reported in [20]. The Rank-1 recognition rate is the percentage of tasks in which the image most similar to the query is a true positive. The Mean average precision is the area under the precision-recall curve. For both metrics our method outperforms [20] on both the \"all\" and \"indirect\" tasks. Gap Classification: In order to evaluate our gap classification we introduce a dataset of 250 colonoscopy procedures (videos) from five different hospitals. We have automatically identified between 2 to 5 true gaps in each video and presented these to doctors for their annotation -whether a loss of coverage is suspected. Each gap was evaluated by two doctors and the ones without a consensus (∼25%) were omitted. This resulted in 750 gaps having high-confidence annotations, 150 of which are marked as exhibiting a loss of coverage. 
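The direct gap classification evaluated next is exactly the rule described in Section 3: fuse the per-frame descriptors on each side of the gap with weights $w_i = v_i e^{-s_i}$ and compare the two fused vectors with a Cosine distance. The sketch below illustrates that rule; the decision threshold and tensor shapes are assumptions, not values from the paper.

```python
# Illustrative sketch of the gap decision: fuse per-frame descriptors on each
# side of a gap with weights w_i = v_i * exp(-s_i), then compare the fused
# descriptors with a Cosine distance. The threshold is an assumption.
import torch
import torch.nn.functional as F

def fuse_descriptors(descs, vis_scores, dt_to_gap):
    """descs: (N, 512) per-frame descriptors from T_theta,
    vis_scores: (N,) raw visibility scores v_i,
    dt_to_gap: (N,) temporal distance s_i (seconds) to the gap."""
    w = vis_scores * torch.exp(-dt_to_gap)          # w_i = v_i * e^{-s_i}
    return (descs * w[:, None]).sum(dim=0) / w.sum()

def cosine_distance(f1, f2):
    return 1.0 - F.cosine_similarity(f1[None], f2[None]).item()

def scene_changed(descs_before, vis_before, dt_before,
                  descs_after, vis_after, dt_after, thresh=0.5):
    f_before = fuse_descriptors(descs_before, vis_before, dt_before)
    f_after = fuse_descriptors(descs_after, vis_after, dt_after)
    return cosine_distance(f_before, f_after) > thresh   # True -> alert
```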
Figure 5 presents the ROC of our direct gap classification method evaluated on the whole dataset of 750 gaps. At the working point of 10% false alarms (alert on gap with no coverage loss) we cover 75% of gaps with real coverage loss. The area under curve (AUC) is 0.9, which usually indicates a high-accuracy classifier.\nThe above classification exploits the information before and after the gap, while completely disregarding the information about the gap itself. Having a dataset of annotated true gaps, we can improve this accuracy by a supervised learning that exploits the gap characteristics. We thus split the dataset of the annotated gaps 50:50 to training and evaluation. Since we have a very limited amount of the training examples we use a low-dimensional classifier -Gradient Boosting [6] -that operates on the following input data: (i) A 32-bin histogram of the similarity matrix' values between frames 2 seconds before and after the gap; (ii) A 32-bin histogram of the visibility scores 2 seconds before and after the gap; (iii) A 32-bin histogram of the visibility scores inside the gap; and (iv) The duration of the gap. We performed class-balancing using up-sampling with augmentations before training.\nTable 2 compares the original approach to the supervised one, summarising the contribution of different input features to the final accuracy (measured by AUC). In the supervised approach we use one half of the dataset for training, thus the evaluation is performed using the other half of the dataset for both the original and the supervised approaches. In the first approach we also explored a classificaiton based on the gap duration only, getting an AUC of 0.651, being higher than random but lower than employing frame similarities. Weighing the scene descriptors by the visibility scores (see Section 3) improves the AUC by 2%. In the supervised approach both gap duration and visibility scores inside the gap provide a substantial contribution of 2% each to the AUC." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This work presents a novel method for the detection of deficient local coverage in real-time for periods with poor visual content, complementing any 3D reconstruction alternative for coverage assessment of the colon. Our method starts with an identification of time segments with good visibility of the colon and gaps between them. For each such gap we train an ML model that tests whether the scene has changed during the gap, alerting the endoscopist in such cases to revisit a given area in real-time. Our learning constructs frame-based descriptors for same scene detection, leveraging a self-supervised approach for generating the required training set. For the evaluation of the gap classification results we have built a dataset of 250 colonoscopy videos with annotations of gaps with deficient local coverage. Our future work includes an extension of our approach to a guidance of the endoscopist to the exact place where the coverage was lost, and using our scene descriptors for bookmarking points of interest in the colon." } ]
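The supervised variant described above (32-bin histograms of the cross-similarity values and of the visibility scores inside and outside the gap, plus the gap duration, with class balancing by up-sampling and a gradient-boosting model) can be sketched as follows. The use of scikit-learn and all hyper-parameter values are assumptions for illustration only.

```python
# Illustrative sketch of the supervised gap classifier: 32-bin histogram
# features plus gap duration, minority-class up-sampling, gradient boosting.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.utils import resample

def gap_features(sim_values, vis_outside, vis_inside, duration):
    """Concatenate the histogram features described in the paper."""
    h = lambda x: np.histogram(x, bins=32, range=(0.0, 1.0))[0]
    return np.concatenate([h(sim_values), h(vis_outside), h(vis_inside), [duration]])

def train_gap_classifier(X, y):
    # Up-sample the minority class (gaps with coverage loss) before training.
    X_pos, X_neg = X[y == 1], X[y == 0]
    X_pos = resample(X_pos, replace=True, n_samples=len(X_neg), random_state=0)
    X_bal = np.vstack([X_pos, X_neg])
    y_bal = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
    clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    return clf.fit(X_bal, y_bal)
```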
Colonoscopy is the most widely used medical technique for preventing Colorectal Cancer, by detecting and removing polyps before they become malignant. Recent studies show that around 25% of the existing polyps are routinely missed. While some of these do appear in the endoscopist's field of view, others are missed due to a partial coverage of the colon. The task of detecting and marking unseen regions of the colon has been addressed in recent work, where the common approach is based on dense 3D reconstruction, which proves to be challenging due to lack of 3D ground truth and periods with poor visual content. In this paper we propose a novel and complementary method to detect deficient local coverage in real-time for video segments where a reliable 3D reconstruction is impossible. Our method aims to identify skips along the colon caused by a drifted position of the endoscope during poor visibility time intervals. The proposed solution consists of two phases. During the first, time segments with good visibility of the colon and gaps between them are identified. During the second phase, a trained model operates on each gap, answering the question: "Do you observe the same scene before and after the gap?" If the answer is negative, the endoscopist is alerted and can be directed to the appropriate area in realtime. The second phase model is trained using a contrastive loss based on an auto-generated examples. Our method evaluation on a dataset of 250 procedures annotated by trained physicians provides sensitivity of 75% with specificity of 90%.
Colonoscopy Coverage Revisited: Identifying Scanning Gaps in Real-Time
[ { "figure_caption": "Fig. 1 .1Fig. 1. Our solution starts by detecting time segments with good visibility of the colon and gaps between them. For each such gap we answer the question: Do you observe different scenes before and after the gap? If the answer is positive, the endoscopist is alerted to revisit the area in real-time.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig.2. To achieve high accuracy visibility classifier, we train an encoder in an unsupervised manner and then train a binary classifier the resulting embeddings using the labeled data. Further improvement is made by smoothing predictions based on similarity distances, resulting in 94% accuracy on the validation set.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. We simulate random artificial gaps of various duration in good-visibility video segments, estimate the endoscope motion within these simulated gaps, and get this way reliable training examples for our overall task. Gaps associated with low accumulated motion contribute a 'same-scene' training example (c k = 1), while high-motion gaps refer to a different scene pair (c k = 0).", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Direct gap classification: ROC curve evaluated on the whole dataset (750 gaps).", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Impact of various features on the AUC, evaluated on 375 test gaps.", "figure_data": "MethodFeatures FrameGapVisibilityVisibilityAUCsimilarities duration inside the gap outside the gap0.651Original0.8760.8960.881Supervised0.8980.9290.932", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
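The pseudo-gap construction summarized in the Fig. 4 caption above can be sketched in a few lines: mark a random 5-30 second pseudo-gap inside a good-visibility segment, accumulate the estimated displacement across it, and label the frame pair on its two ends accordingly. The displacement thresholds, frame rate, and helper names are illustrative assumptions, not values reported by the authors.

```python
# Illustrative sketch of self-supervised pair generation (Fig. 4): label the
# two ends of a random pseudo-gap as same-scene (c=1) or different-scene (c=0)
# based on the accumulated optical-flow displacement across the gap.
import random

def make_training_pair(frames, displacements, fps=30,
                       same_thresh=1.0, diff_thresh=5.0):
    """frames: frames of a good-visibility segment; displacements: per-frame
    displacement estimates from the optical-flow-based motion estimator."""
    gap_len = random.randint(5 * fps, 30 * fps)
    start = random.randint(0, len(frames) - gap_len - 1)
    end = start + gap_len
    moved = abs(sum(displacements[start:end]))     # accumulated translation
    if moved < same_thresh:
        return frames[start], frames[end], 1       # same scene
    if moved > diff_thresh:
        return frames[start], frames[end], 0       # different scene
    return None                                    # ambiguous -> discard
```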
G Leifman; I Kligvasser; R Goldenberg; M Elad; E Rivlin (Verily)
[ { "authors": "I Adjabi; A Ouahabi; A Benzaoui; A Taleb-Ahmed", "journal": "Electronics", "ref_id": "b0", "title": "Past, present, and future of face recognition: A review", "year": "2020" }, { "authors": "S Ali; J Rittscher", "journal": "IEEE", "ref_id": "b1", "title": "Efficient video indexing for monitoring disease activity and progression in the upper gastrointestinal tract", "year": "2019" }, { "authors": "P Bachman; R D Hjelm; W Buchwalter", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Learning representations by maximizing mutual information across views", "year": "2019" }, { "authors": "G Bae; M De La Gorce; T Baltrušaitis; C Hewitt; D Chen; J Valentin; R Cipolla; J Shen", "journal": "", "ref_id": "b3", "title": "Digiface-1m: 1 million digital face images for face recognition", "year": "2023" }, { "authors": "G Berton; C Masone; V Paolicelli; B Caputo", "journal": "", "ref_id": "b4", "title": "Viewpoint invariant dense matching for visual geolocalization", "year": "2021" }, { "authors": "J Brownlee", "journal": "Machine Learning Mastery", "ref_id": "b5", "title": "XGBoost With python: Gradient boosted trees with XGBoost and scikit-learn", "year": "2016" }, { "authors": "H Chen; Y Wang; B Lagadec; A Dantcheva; F Bremond", "journal": "", "ref_id": "b6", "title": "Joint generative and contrastive learning for unsupervised person re-identification", "year": "2021" }, { "authors": "R J Chen; T L Bobrow; T Athey; F Mahmood; N J Durr", "journal": "", "ref_id": "b7", "title": "Slam endoscopy enhanced by adversarial depth prediction", "year": "2019" }, { "authors": "T Chen; S Kornblith; M Norouzi; G Hinton", "journal": "PMLR", "ref_id": "b8", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "J Deng; J Guo; N Xue; S Zafeiriou", "journal": "", "ref_id": "b9", "title": "Arcface: Additive angular margin loss for deep face recognition", "year": "2019" }, { "authors": "D Freedman; Y Blau; L Katzir; A Aides; I Shimshoni; D Veikherman; T Golany; A Gordon; G Corrado; Y Matias", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b10", "title": "Detecting deficient coverage in colonoscopies", "year": "2020" }, { "authors": "R Hadsell; S Chopra; Y Lecun", "journal": "IEEE", "ref_id": "b11", "title": "Dimensionality reduction by learning an invariant mapping", "year": "2006" }, { "authors": "K He; H Fan; Y Wu; S Xie; R Girshick", "journal": "", "ref_id": "b12", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "O Kelner; O Weinstein; E Rivlin; R Goldenberg", "journal": "", "ref_id": "b13", "title": "Motion-based weak supervision for video parsing with application to colonoscopy", "year": "2022" }, { "authors": "N H Kim; Y S Jung; W S Jeong; H J Yang; S K Park; K Choi; D I Park", "journal": "Intestinal research", "ref_id": "b14", "title": "Miss rate of colorectal neoplastic polyps and risk factors for missed polyps in consecutive colonoscopies", "year": "2017" }, { "authors": "Y Kortli; M Jridi; A Al Falou; M Atri", "journal": "Sensors", "ref_id": "b15", "title": "Face recognition systems: A survey", "year": "2020" }, { "authors": "T Y Lin; S Belongie; J Hays", "journal": "", "ref_id": "b16", "title": "Cross-view image geolocalization", "year": "2013" }, { "authors": "T Y Lin; Y Cui; S Belongie; J Hays", "journal": "", "ref_id": "b17", "title": "Learning deep representations for groundto-aerial geolocalization", "year": "2015" 
}, { "authors": "Y Lin; L Xie; Y Wu; C Yan; Q Tian", "journal": "", "ref_id": "b18", "title": "Unsupervised person re-identification via softened similarity learning", "year": "2020" }, { "authors": "R Ma; S K Mcgill; R Wang; J Rosenman; J M Frahm; Y Zhang; S Pizer", "journal": "IEEE", "ref_id": "b19", "title": "Colon10k: A benchmark for place recognition in colonoscopy", "year": "2021" }, { "authors": "M Oliveira; H Araujo; I N Figueiredo; L Pinto; E Curto; L Perdigoto", "journal": "IEEE Access", "ref_id": "b20", "title": "Registration of consecutive frames from wireless capsule endoscopy for 3d motion estimation", "year": "2021" }, { "authors": "E Posner; A Zholkover; N Frank; M Bouhnik", "journal": "", "ref_id": "b21", "title": "C3 fusion: Consistent contrastive colon fusion, towards deep slam in colonoscopy", "year": "2022" }, { "authors": "F Radenović; G Tolias; O Chum", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b22", "title": "Fine-tuning cnn image retrieval with no human annotation", "year": "2018" }, { "authors": "A Rau; P Edwards; O F Ahmad; P Riordan; M Janatka; L B Lovat; D Stoyanov", "journal": "International journal of computer assisted radiology and surgery", "ref_id": "b23", "title": "Implicit domain adaptation with conditional generative adversarial networks for depth prediction in endoscopy", "year": "2019" }, { "authors": "S Shao; Z Pei; W Chen; W Zhu; X Wu; D Sun; B Zhang", "journal": "Medical image analysis", "ref_id": "b24", "title": "Self-supervised monocular depth and ego-motion estimation in endoscopy: appearance flow to the rescue", "year": "2022" }, { "authors": "D Sun; X Yang; M Y Liu; J Kautz", "journal": "", "ref_id": "b25", "title": "Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume", "year": "2018" }, { "authors": "D Wang; S Zhang", "journal": "", "ref_id": "b26", "title": "Unsupervised person re-identification via multi-label classification", "year": "2020" }, { "authors": "F Wang; H Liu", "journal": "", "ref_id": "b27", "title": "Understanding the behaviour of contrastive loss", "year": "2021" }, { "authors": "L Wolf; T Hassner; Y Taigman", "journal": "", "ref_id": "b28", "title": "Descriptor based methods in the wild", "year": "2008" }, { "authors": "C Yan; B Gong; Y Wei; Y Gao", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b29", "title": "Deep multi-view enhancement hashing for image retrieval", "year": "2020" }, { "authors": "S Zhang; L Zhao; S Huang; M Ye; Q Hao", "journal": "IEEE Transactions on Medical Robotics and Bionics", "ref_id": "b30", "title": "A template-based 3d reconstruction of colon structures and textures from stereo colonoscopic images", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 412.19, 594.77, 67.91, 12.55 ], "formula_id": "formula_0", "formula_text": "\{F_1^k, F_2^k, c^k\}_{k=1}^{N}" }, { "formula_coordinates": [ 5, 166.73, 174.34, 313.87, 59.04 ], "formula_id": "formula_1", "formula_text": "L(\Theta) = \sum_{k=1}^{N} (2c^k - 1)\, d\big(T_\Theta(F_1^k), T_\Theta(F_2^k)\big) = \sum_{k: c^k=1} d\big(T_\Theta(F_1^k), T_\Theta(F_2^k)\big) - \sum_{k: c^k=0} d\big(T_\Theta(F_1^k), T_\Theta(F_2^k)\big)." }, { "formula_coordinates": [ 5, 134.77, 254.32, 345.33, 24.51 ], "formula_id": "formula_2", "formula_text": "d(f_1, f_2) = 1 - f_1^\top f_2 / (\|f_1\|_2 \|f_2\|_2)." } ]
2023-05-17
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b3", "b31", "b49", "b29", "b19", "b49", "b53", "b25" ], "table_ref": [], "text": "Low-light images suffer from noise bursts, and recovering ideal normal-light images from them is a long-studied problem. Thanks to the development of deep learning, many effective methods have been proposed. LLNet [Lore et al., 2017] and SID [Chen et al., 2018] show the power of neural networks by training them on lots of paired data. According to the Retinex theory [Land, 1977], KIND [Zhang et al., 2019] and RetinexNet [Wei et al., 2018] decompose the illumination and reflectance map through a well-designed loss.\nTo handle this highly ill-posed problem, LLFLOW [Wang et al., 2022] introduces normalizing flow models [Kingma and Dhariwal, 2018] to low-light image enhancement.\nAlthough the above methods have made significant progress in low-light image enhancement, noise-covered details restored by them can be further enhanced. As shown in Fig. 1(a), previous methods often lead to blurred details and distorted colors. Diffusion models [Ho et al., 2020;Song et al., 2020b] have recently shown their talents in image generation, which can generate more realistic details through a sequence of refinements. Therefore, we introduce diffusion models to low-light image enhancement for better restoring noise-covered details, as shown in Fig. 1(a).\nWhen introducing diffusion models to low-light image enhancement, we found two problems, as demonstrated in Fig. 1(b). 1) The resolution is constant in one reverse process, which limits the speed. 2) Diffusion models result in global degradation similar to the RGB shift we analyze in Fig. 5.\nTo solve these problems, we propose a Pyramid Diffusion model (PyDiff) for low-light image enhancement. As shown in Fig. 1(c), PyDiff uses a novel pyramid diffusion method to sample images in an efficient pyramid resolution style (i.e., progressively increasing resolution in one reverse process). Performing noisier sampling at lower resolution makes the reverse process faster and provides PyDiff with a larger inception field, which benefits global information recovery. Moreover, we analyze the cause of global degradation (Fig. 5) and argue that denoising networks are hard to treat global degradation as part of the noise and correct it during denoising since the reverse process is biased to eliminate Gaussian noise. To alleviate the global degradation that denoising networks can not notice, PyDiff performs sampling with a global corrector. With little additional computational consumption, the global corrector significantly improves the performance and makes the training of diffusion models easier.\nWe conduct extensive experiments on two popular benchmarks (i.e., LOL [Wei et al., 2018] and LOLV2 [Yang et al., 2021]) to validate the effectiveness and efficiency of PyDiff. Experimental results show that PyDiff achieves superior performance quantitatively and qualitatively under various scenarios. Compared to the previous state-of-the-art (SOTA) method LLFLOW, which also requires iterative refinements, PyDiff significantly outperforms LLFLOW with a speed of nearly 2× faster. In particular, when dealing with unseen noise distributions, PyDiff significantly outperforms other SOTA competitors, e.g., 10 points (SSIM) higher than the second place (NE [Jin et al., 2022]). 
When handling unseen illumination distributions, PyDiff also gives competitive results, demonstrating our generalization ability further.\nOur contributions can be summarized below:\n• To the best of our knowledge, we are the first to introduce diffusion models to low-light image enhancement and achieve SOTA. Using a novel pyramid diffusion method, PyDiff is nearly twice as fast as the previous SOTA method LLFLOW.\n• We propose a global corrector to alleviate the global degradation that occurs in the reverse process. This significantly improves the performance and makes the training of diffusion models easier with little additional computational consumption.\n• Experiments on popular benchmarks show that PyDiff achieves new SOTA performance, and PyDiff can generalize well to unseen noise and illumination distributions.\n2 Related Work" }, { "figure_ref": [], "heading": "Low-light image enhancement", "publication_ref": [ "b3", "b31", "b49", "b57", "b59", "b23", "b15", "b25", "b13", "b6", "b27", "b47", "b9", "b29", "b17" ], "table_ref": [], "text": "Low-light image enhancement has been studied for a long time, with numerous deep learning-based approaches proposed. LLNet [Lore et al., 2017] and SID [Chen et al., 2018] collect lots of low/normal-light image pairs to train the network. For getting illumination and reflectance maps [Land, 1977], RetinexNet [Wei et al., 2018], KIND [Zhang et al., 2019], and KIND++ [Zhang et al., 2021] carefully design the loss to train a decomposition network. Enlighten- 8). For better viewing, we brighten the x low . Please zoom in for the best view.\nGAN [Jiang et al., 2021], ZeroDCE [Guo et al., 2020], and NE [Jin et al., 2022] propose effective unsupervised methods which do not require paired data. BREAD [Guo and Hu, 2022] decouples the entanglement of noise and color distortion. Some works [Fan et al., 2022a;Cui et al., 2022;Kim et al., 2021] have brought performance improvements by designing novel and efficient networks. LLFLOW [Wang et al., 2022] models this ill-posed problem via a normalizing flow model [Dinh et al., 2016;Kingma and Dhariwal, 2018]. Although the above methods have made significant progress in low-light image enhancement, noise-covered details restored by them can be further enhanced. This paper introduces diffusion models [He et al., 2020] to low-light image enhancement for better recovering the details." }, { "figure_ref": [], "heading": "Diffusion models", "publication_ref": [ "b19", "b37", "b0", "b35", "b8", "b51", "b21", "b39", "b19", "b4" ], "table_ref": [], "text": "Diffusion Models [Ho et al., 2020;Song et al., 2020b] present high-quality image synthesis results recently. However, they typically require a high number of iterations, resulting in slow performance. A number of training-free samplers [Song et al., 2020a;Nichol and Dhariwal, 2021;Bao et al., 2022;Lu et al., 2022] have been proposed that can achieve comparable results with fewer denoising iterations. 
To achieve conditional generation, Guided-Diffusion [Dhariwal and Nichol, 2021] samples with classifier guidance, while our PyDiff concatenates noisy images with source images to guide denoising like some low-level vision methods [Saharia et al., 2022b;Saharia et al., 2022a;Whang et al., 2022].\nTo generate high-resolution images more efficiently, some works [Saharia et al., 2022b;Ho et al., 2022;Fan et al., 2022b] use multiple diffusion models to achieve cascaded high-resolution image synthesis, while LDM [Rombach et al., 2022] makes reverse processes situated within the image encoder's latent space. In one reverse process, the above methods perform sampling at a constant resolution style, limiting the speed. In this paper, PyDiff uses a pyramid diffusion method to achieve faster speed and a global corrector to ensure the sample quality for low-light image enhancement.\n3 Background: Denoising Diffusion Probabilistic Models\nThe Denoising Diffusion Probabilistic Model [Ho et al., 2020;Song et al., 2020a] is a latent variable model specified by a T-step Markov chain. It starts with a given data distribution x 0 ∼ q(x 0 ) and repeatedly adds Gaussian noise according to q(x t |x t-1 ) as follows:\nq (x t |x t-1 ) := N (x t ; √ α t x t-1 , (1 -α t ) I) ,\n(1) where α t ∈ (0, 1), and α t ≥ α t+1 . Using the notation ᾱt := t i=1 α i , the marginal distribution q(x t |x 0 ) can be expressed as follows:\nq\n(x t |x 0 ) := N x t ; √ ᾱt x 0 , (1 -ᾱt ) I (2) When √\nᾱT is close to 0, the defined forward process will transform this data distribution into an isotropic Gaussian distribution.\nIn practical applications, the reverse process of diffusion models is used more often, which converts an isotropic Gaussian distribution to a target data distribution. It is worth mentioning that q(x t-1 |x t ) is hard to estimate while q(x t-1 |x t , x 0 ) is tractable. We can derive the posterior distribution of x t-1 given (x t , x 0 ) with some algebraic manipulation:\nq\n(x t-1 |x t , x 0 ) := N x t-1 ; μt (x t , x 0 ) , βt I , (3) μt (x t , x 0 ) := √ ᾱt-1 β t 1 -ᾱt x 0 + √ α t (1 -ᾱt-1 ) 1 -ᾱt x t , (4) βt := 1 -ᾱt-1 1 -ᾱt β t ,(5)\nwhere β t := 1 -α t . We have no x 0 during testing, but we can calculate its approximate value according to Eq. ( 2):\ny θ (x t ) := 1 √ ᾱt (x t - √ 1 -ᾱt θ (x t )),(6)\nwhere θ (x t ) is the predicted noise derived from the denoising network, and y θ (x t ) is an approximation of x 0 calculated according to θ (x t ). Furthermore, we update Eq. (3) as follows:\np θ (x t-1 |x t ) := N x t-1 ; μt (x t , y θ (x t )) , βt I(7)\nWhen it comes to image-to-image translation, [Saharia et al., 2022a;Saharia et al., 2022b;Choi et al., 2021] make the reverse process conditional on an input signal. Specifically, when we need to translate z to y (e.g., low-light image to normal-light image), we update Eq. ( 6) as follow:" }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "This section presents PyDiff, an effective and efficient method for low-light image enhancement. First of all, we describe the motivation for designing PyDiff. Secondly, we introduce our proposed pyramid diffusion, which significantly improves the inference speed without any performance degradation. Furthermore, we present our proposed global corrector, which alleviates the global degradation that may occur in the reverse process of the diffusion models. Finally, we describe the training and sampling procedures of PyDiff. 
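As a concrete reading of the background algebra above, one conditional reverse step combines the approximation of x_0 from Eq. (6) with the posterior of Eqs. (3)-(5) and the sampling rule of Eq. (7), concatenating the low-light image as the condition (the form the reference to Eq. (8) presumably corresponds to). The network interface, tensor shapes, and indexing conventions in this sketch are assumptions, not the authors' implementation.

```python
# Illustrative single DDPM reverse step following Eqs. (3)-(7) above, with the
# source (low-light) image concatenated as the condition.
import torch

@torch.no_grad()
def reverse_step(eps_net, x_t, cond, t, alphas_cumprod, alphas, betas):
    a_bar_t = alphas_cumprod[t]
    a_bar_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    # Conditional version of Eq. (6): predict the noise from (x_t, condition).
    eps = eps_net(torch.cat([x_t, cond], dim=1), t)
    y0 = (x_t - (1 - a_bar_t).sqrt() * eps) / a_bar_t.sqrt()   # approx of x_0
    # Eqs. (4)-(5): posterior mean and variance of q(x_{t-1} | x_t, x_0).
    mean = (a_bar_prev.sqrt() * betas[t] / (1 - a_bar_t)) * y0 \
         + (alphas[t].sqrt() * (1 - a_bar_prev) / (1 - a_bar_t)) * x_t
    var = (1 - a_bar_prev) / (1 - a_bar_t) * betas[t]
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + var.sqrt() * noise                           # Eq. (7) sample
```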
" }, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "Motivation", "publication_ref": [ "b19", "b37", "b21" ], "table_ref": [], "text": "Reason for Using Diffusion Models. Diffusion models [Ho et al., 2020;Song et al., 2020b] have recently shown their talents in image generation, which can generate more realistic details through a sequence of refinements. Therefore, we introduce diffusion models to low-light image enhancement for better restoring noise-covered details. Shortcomings of Diffusion Models. Diffusion models have often been criticized for their slow inference speed. Many methods [Song et al., 2020a;Nichol and Dhariwal, 2021;Ho et al., 2022;Saharia et al., 2022b] have been proposed to accelerate the reverse process of diffusion models. However, they still suffer from one drawback: the resolution or feature dimension is invariant in one reverse process of diffusion models, which limits the speed. Constant Resolution is Not Necessary. Taking resolution as an example, we note that constant resolution is not necessary for one reverse process. Fig. 3(a) indicates that the first half of the reverse process can be performed at a lower resolution, which does not affect the details generated at the end.\nThe Effect of Noisy Sampling. Furthermore, Fig. 3(b) shows that if global degradation (e.g., RGB shift) occurs in the first half (i.e., sampling result has more noise) of the reverse process, the second half (i.e., sampling result has less noise) will not be able to correct it. Fig. 3 demonstrates that noisy sampling (e.g., sampling in the first half of inverse processes) in diffusion models usually does not affect the final details, mainly recovering global information such as brightness and hue. Therefore, PyDiff can perform noisier sampling at a lower resolution while ensuring the global information can be recovered correctly." }, { "figure_ref": [ "fig_1" ], "heading": "Pyramid Diffusion", "publication_ref": [ "b19" ], "table_ref": [], "text": "As shown in Fig. 2, PyDiff uses a novel pyramid diffusion method to iterate in a pyramid resolution style. Performing nosier sampling at a lower resolution can make the reverse process faster and provide the network with a larger receptive field, which is beneficial for recovering global information.\nIn this section, we introduce the proposed pyramid diffusion. Downsampling Schedule. Similar to the noise schedule {α} T t=0 in diffusion models, pyramid diffusion defines a downsampling schedule {s} T t=0 , which means that the ith sampling will be performed at the resolution downsampled with a scale factor s i . While a t ≥ a t+1 to get bigger and bigger noise, s t ≤ s t+1 to get lower and lower resolution. Forward Process. For the forward process, pyramid diffusion updates the Eq. ( 1) in diffusion models as follows:\nq\n(x t |x t-1 ) := N x t ; √ α t x t-1 ↓ st/st-1 , (1 -α t ) I ,(9)\nwhere ↓ r means downsampling with a scale factor of r. The marginal distribution q(x t |x 0 ) can be expressed as follows:\nq (x t |x 0 ) := N x t ; √ ᾱt (x 0 ↓ st ) , (1 -ᾱt ) I (10) Reverse Process. In the case of s t-1 = s t , we can derive x t-1 from x t according to Eq. ( 7). However, this is no longer applicable in the case of s t-1 < s t since differences in resolution. As shown in Fig. 4(a), (x 0 ↓ r ) ↑ r can serve as x 0 at noisy sampling (i.e., Larger r matches noisier sampling), where ↑ r means upsampling with a scale factor of r. Therefore, with well-scheduled {α} T t=0 and {s} T t=0 , we can take y θ (x t ) ↑ st/st-1 as x 0 ↓ st-1 . 
According to Eq. ( 10), we can further add noise to x 0 ↓ st-1 for deriving x t-1 . Adding noise through such a method leads to inconsistency between x t and x t-1 . However, this inconsistency has little impact on noisy sampling, which is primarily concerned with recovering global information. To summarize, the posterior distribution of pyramid diffusion can be expressed as follows:\np θ (x t-1 |x t ) =              N (x t-1 ; √ ᾱt-1βt 1-ᾱt y θ (x t ) + √ αt(1-ᾱt-1) 1-ᾱt x t , 1-ᾱt-1 1-ᾱt β t I), if s t = s t-1 N (x t-1 ; √ ᾱt-1 (y θ (x t ) ↑ st/st-1 ) , (1 -ᾱt-1 )I), if s t > s t-1(\n11) Position Encoding. Pyramid diffusion requires one network to process images of multiple resolutions. As the main operator of the denoising network [Ho et al., 2020], convolution kernels cannot perceive the change of resolution. We consider using position encoding to guide the network. For an image I with a resolution of H × W , its coordinates are X, Y ∈ R H×W , where X i,j = i, and Y i,j = j. After normalizing X and Y, we apply sinusoidal positional encoding of them to guide the network. Specifically, the position encoding is expressed as: Convolution kernels have a constant receptive field. When dealing with images downsampled with a scale factor of r, the range of position encoding perceived by convolution kernels will be correspondingly expanded by r times, which may tell convolution kernels the change of resolution.\npos(I) = [sin(X), cos(X), sin(Y), cos(Y)](12)" }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Global Corrector", "publication_ref": [], "table_ref": [], "text": "As shown in Fig. 2, PyDiff uses the global corrector to alleviate global degradation in reverse processes. The global corrector can significantly improve performance with little additional computational consumption. In this section, we introduce the proposed global corrector. Global Degradation. When applying diffusion models to low-light image enhancement, we found them sometimes result in significant global degradation, as shown in Fig. 5(a). This global degradation looks like a shift in the RGB channels, similar to the RGB shift shown in Fig. 3(b). Cause of Global Degradation. Looking back on Eq. ( 6), we can rewrite it as follows:\ny θ (x t ) := 1 √ ᾱt (x t - √ 1 -ᾱt ( t -δ t ))(13)\n:= x 0 + √ 1 -ᾱt √ ᾱt δ t , (14\n)\nwhere t is the actual noise in x t , and δ t is the error between t and θ (x t ). We found that there is a coefficient\n√ 1-ᾱt √ ᾱt\nin front of the error δ t . When t is relatively large, this coefficient will also be large, as shown in Fig. 4(b). As demonstrated in Fig. 5(b), the original error δ T is small, but the coefficient\n√ 1-ᾱT √ ᾱT\nenlarges the error and leads to an obvious RGB shift (e.g., a significant gain in the R channel). As shown in Algorithm 1 Training 1: Input: noise schedule α, downsampling schedule s, correction threshold γ, denoising network θ, global corrector c, low/normal-light image pairs q(x low , y). 2: repeat 3: Sample (x low , y) ∼ q(x low , y)." }, { "figure_ref": [], "heading": "4:", "publication_ref": [], "table_ref": [], "text": "Sample t ∼ U nif orm(1, ..., T )" }, { "figure_ref": [], "heading": "5:", "publication_ref": [], "table_ref": [], "text": "Sample ∼ N (0, I) 6:\nx t = √ ᾱt (y ↓ st ) + √ 1 -ᾱt ↓ r means downsampling with a scale factor of r." 
}, { "figure_ref": [], "heading": "7:", "publication_ref": [ "b17" ], "table_ref": [], "text": "Take gradient descent step on\n∇ θ -θ (x t , (x low ↓ st )) 1 8: if √ 1-ᾱt √ ᾱt > γ then 9:\nTake gradient descent step on ∇ c (y ↓ st ) -y c (y θ (x t , (x low ↓ st ))) 1 10:\nend if 11: until converged Fig. 5(a), the denoising network treats the image under global degradation as usual and only performs its denoising duties, which can not eliminate the global degradation. Design of Global Corrector. We add a global corrector to alleviate the global degradation that denoising networks can not notice. The design of the global corrector needs to meet the following requirements: 1) The global corrector should alleviate global degradation while preserving generated edges and textures. 2) The global corrector is lightweight and fast. Inspired by CSRNet [He et al., 2020], we design an efficient global corrector that performs pixel-independent retouching based on global conditions. Please refer to the supplementary materials for the specific design of the global corrector. To sample with a global corrector, we update Eq. ( 7) as follows: will gradually decrease to 0 as t decreases. When the amplification factor is small enough, it will no longer amplify the error of the denoising network. Therefore, We set a correction threshold γ, and the global corrector is needed only when the\np θ,c (x t-1 |x t ) := N x t-1 ; μt (x t , y c (y θ (x t ))) , βt I(\n√ 1-ᾱt √ ᾱt > γ. We set γ = 1." }, { "figure_ref": [], "heading": "Training and Sampling", "publication_ref": [], "table_ref": [], "text": "To better demonstrate the overall framework of our PyDiff, we omit some details (e.g., position encoding) when describing the algorithm. \ny = y θ (x t , x low ) 5: y = y c (y ) if √ 1-ᾱt √ ᾱt > γ, else y = y 6: Sample ∼ N (0, I) if t > 1, else = 0 7:\nif s t > s t-1 then 8:\nx t-1 = √ ᾱt-1 (y ↑ st/st-1 ) + √ 1 -ᾱt-1 ↑ r means upsampling with a scale factor of r. 9: else 10:\nx t-1 = √ ᾱt-1βt 1-ᾱt y + √ αt(1-ᾱt-1) 1-ᾱt x t + 1-ᾱt-1 1-ᾱt β t 11:\nend if 12: end for 13: return x 0" }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b15", "b23", "b57", "b59", "b27", "b13", "b25", "b6", "b47", "b37" ], "table_ref": [], "text": "PSNR ↑ SSIM ↑ LPIPS ↓ Zero-DCE [Guo et al., 2020] 14.86 0.54 0.33 EnlightenGAN [Jiang et al., 2021] 17.48 0.65 0.32 KinD [Zhang et al., 2019] 20.87 0.80 0.17 KinD++ [Zhang et al., 2021] 21.30 0.82 0.16 RCTNet [Kim et al., 2021] 22.67 0.79 -Bread [Guo and Hu, 2022] 22.96 0.84 0.16 NE [Jin et al., 2022] 21.52 0.76 0.24 IAT [Cui et al., 2022] 23.38 0.81 0.26 HWMNet [Fan et al., 2022a] 24.24 0.85 0.12 LLFLOW [Wang et al., 2022] 24.99 0.92 0.11 PyDiff (ours) 27.09 0.93 0.10 bined with DDIM [Song et al., 2020a] or DDPM+ [Nichol and Dhariwal, 2021] to achieve further speedup. Training Loss. As described in algorithm 1, we use the simple L1 loss to optimize the denoising network and global corrector without additional optimization objectives." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Setup", "publication_ref": [ "b49", "b53", "b6", "b59", "b13", "b25", "b57", "b47" ], "table_ref": [], "text": "Dataset. We conduct experiments on LOL [Wei et al., 2018] and LOLV2 [Yang et al., 2021] datasets. The LOLV2 dataset contains two parts, REAL and SYNC. 
REAL PART has noise distributions not present in the LOL dataset, and SYNC PART has illumination distributions not present in the LOL dataset.\nFor supervised learning methods involved in comparison, we use their pre-trained model only trained on the LOL dataset. Correspondingly, we train PyDiff only on the LOL dataset too. For unsupervised learning methods, we use their released pre-trained models no matter how they train.\nSchedules. First of all, we set T = 2000. For the noise schedule, we decrease α t linearly from α 1 = 0.999999 to α T = 0.99. For the downsampling schedule, our default set-\nInput IAT KIND KIND++ BREAD NE HWMNet LLFLOW PyDiff Reference\nFigure 6: Qualitative comparison with state-of-the-art methods on the LOL dataset. It can be seen that IAT [Cui et al., 2022] cannot even restore the correct brightness, and the results generated by KIND++ [Zhang et al., 2021], BREAD [Guo and Hu, 2022], NE [Jin et al., 2022],\nand HWMNet [Fan et al., 2022a] have apparent artifacts, while the KIND [Zhang et al., 2019] cannot restore colors well. LLFLOW [Wang et al., 2022] gives a not-bad result, but its result can be too smooth, and the colors of some items need to be more accurately restored. PyDiff exhibits the best result, which restores the correct color and preserves the details covered by noise. Please zoom in for the best view. ting is to set {s} T /2 t=1 = 1 and {s} T t=T /2 = 2, and we will experiment with more schedules in ablation studies. Training. We set the patch size to 192 × 288 and the batch size to 16. We use the Adam optimizer with an initial learning rate of 1 × 10 -4 for 320k iterations and halve the learning rate at 50k, 75k, 100k, 150k, and 200k. The optimizer does not use weight decay. We complete training on two NVIDIA GeForce RTX 3090s. Evaluation. Combined with DDIM [Song et al., 2020a], Py-Diff requires only 4 iterations to obtain results comparable to other SOTA methods. Other Details. The reverse process is conditional on lowlight images x low , low-light images after histogram equalization hiseq(x low ), and position encoding pos(x low ). We use the method of concatenating to achieve conditional sampling [Saharia et al., 2022b;Saharia et al., 2022a]. During training, we swap the concatenating order of x low and hiseq(x low ) with a probability of 0.5. Please refer to the supplementary materials for model configuration and details of the global corrector." }, { "figure_ref": [], "heading": "Comparsion with SOTA Methods", "publication_ref": [ "b45", "b55" ], "table_ref": [], "text": "LOL Dataset. We first compare PyDiff with SOTA methods on the LOL dataset. The quantitative results are shown in Tab. 1. PyDiff outperforms other methods in all three metrics: PSNR, SSIM [Wang et al., 2004], and LPIPS [Zhang et al., 2018]. Beating second place by 2.1 points on PSNR shows that PyDiff can recover more accurate colors. Surpassing second place by 1 point on SSIM shows that PyDiff accurately preserves more high-frequency details. Exceeding second place by 1 point on LPIPS shows that PyDiff gives more eye-pleasing results. Fig. 6 shows qualitative comparisons with other methods, where PyDiff exhibits the best result." 
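To tie the schedules of this setup to the pyramid reverse process, the sketch below builds the published schedules (T = 2000, a linear alpha from 0.999999 to 0.99, scale factor 1 for the first half of the steps and 2 for the second, correction threshold gamma = 1) and runs a plain ancestral-sampling loop with the resolution jump of Eq. (11) and the global corrector applied while the amplification factor exceeds gamma. The DDIM-based 4-step sampler actually used would replace the posterior update, and the network interfaces, conditioning inputs, and interpolation mode here are assumptions.

```python
# Illustrative sketch of PyDiff-style pyramid sampling with a global corrector.
import torch
import torch.nn.functional as F

T = 2000
alphas = torch.linspace(0.999999, 0.99, T)                         # noise schedule
alphas_bar = torch.cumprod(alphas, dim=0)
scales = torch.cat([torch.ones(T // 2), 2 * torch.ones(T // 2)])   # s_t
gamma = 1.0                                                        # correction threshold

@torch.no_grad()
def pyramid_sample(eps_net, corrector, x_low, steps):
    x_t = torch.randn_like(F.avg_pool2d(x_low, int(scales[-1])))   # start at lowest res
    for i, t in enumerate(steps):                                  # steps: descending t's
        s_t, a_bar = int(scales[t]), alphas_bar[t]
        cond = F.avg_pool2d(x_low, s_t) if s_t > 1 else x_low
        eps = eps_net(torch.cat([x_t, cond], dim=1), t)
        y0 = (x_t - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()
        if (1 - a_bar).sqrt() / a_bar.sqrt() > gamma:              # amplification factor
            y0 = corrector(y0)                                     # global corrector
        t_prev = steps[i + 1] if i + 1 < len(steps) else 0
        s_prev, a_bar_prev = int(scales[t_prev]), alphas_bar[t_prev]
        if s_t > s_prev:                                           # resolution jump, Eq. (11)
            y0 = F.interpolate(y0, scale_factor=s_t // s_prev, mode="bilinear")
            noise = torch.randn_like(y0) if t_prev > 0 else torch.zeros_like(y0)
            x_t = a_bar_prev.sqrt() * y0 + (1 - a_bar_prev).sqrt() * noise
        else:                                                      # standard posterior step
            beta_t = 1 - alphas[t]
            mean = (a_bar_prev.sqrt() * beta_t / (1 - a_bar)) * y0 \
                 + (alphas[t].sqrt() * (1 - a_bar_prev) / (1 - a_bar)) * x_t
            var = (1 - a_bar_prev) / (1 - a_bar) * beta_t
            noise = torch.randn_like(x_t) if t_prev > 0 else torch.zeros_like(x_t)
            x_t = mean + var.sqrt() * noise
    return x_t
```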
}, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b15", "b57", "b59", "b25", "b6", "b13", "b47" ], "table_ref": [], "text": "PSNR ↑ SSIM ↑ LPIPS ↓ Zero-DCE [Guo et al., 2020] 13.65 0.246 0.98 KinD [Zhang et al., 2019] 20.40 0.652 0.50 KinD++ [Zhang et al., 2021] 20.15 0.678 0.47 NE [Jin et al., 2022] 21.12 0.767 0.46 IAT [Cui et al., 2022] 21.43 0.638 0.60 Bread [Guo and Hu, 2022] 22.54 0.762 0.44 HWMNet [Fan et al., 2022a] 22.40 0.622 0.56 LLFLOW [Wang et al., 2022] " }, { "figure_ref": [ "fig_1" ], "heading": "Ablation Study", "publication_ref": [ "b47", "b9", "b29" ], "table_ref": [], "text": "In this section, we conduct ablation studies on the main components of PyDiff to observe their impact on performance. about the change of resolution.\nEffectiveness of Global Corrector. Tab. 5 shows that the global corrector can bring improvements to PyDiff under various settings. Fig. 2 and Fig. 5(a)(c) show that the global corrector effectively alleviates global degradation. Furthermore, we can see from Tab. 5 that the global corrector gives little additional computational consumption to PyDiff. Robustness to Batch Size. Tab. 5 shows that vanilla diffusion models without global corrector are very dependent on large batch size, which shows a significant performance drop when the batch size decreases. As our analysis in section 4.3, we argue that this is caused by the amplification factor, which has rigorous requirements for denoising networks. This problem has been significantly improved by adding a global corrector.\nAs shown in Tab. 5, the global corrector enhances the performance of diffusion models under bs = 4(8) and outperforms the ones without global corrector under bs = 8( 16), which means that the global corrector can make diffusion models more robust to batch size and easier to train.\nComparison with LLFLOW. LLFLOW [Wang et al., 2022] used to be first place on the LOL dataset based on the normalizing flow [Dinh et al., 2016;Kingma and Dhariwal, 2018]. Both FLOWs and diffusion models are generative models that require multiple iterations. Therefore, it will be interesting to compare the speed of LLFLOW and PyDiff. According to Tab. 4, PyDiff significantly enhances performance, achieving an 87% faster speed than LLFLOW." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper proposes PyDiff, a diffusion model based method for low-light image enhancement. PyDiff uses a novel pyramid diffusion method, which makes sampling faster than vanilla diffusion models without any performance degradation. Furthermore, PyDiff uses a global corrector to alleviate global degradations that cannot be noticed by the denoising network and significantly improves performance with little additional computational consumption. Experimentally, Py-Diff shows superior effectiveness, efficiency, and generalization ability on popular benchmarks. We hope that PyDiff will serve as a strong baseline for low-light image enhancement and that the pyramid diffusion method will facilitate the application of diffusion models in more low-level vision tasks." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement. This work is partially supported by the Fundamental Research Funds for the Central Universities (No. 226-2022-00051)." } ]
Recovering noise-covered details from low-light images is challenging, and the results given by previous methods leave room for improvement. Recent diffusion models achieve realistic and detailed image generation through a sequence of denoising refinements, which motivates us to introduce them to low-light image enhancement for recovering realistic details. However, we find two problems in doing so: 1) diffusion models keep a constant resolution throughout one reverse process, which limits the sampling speed; 2) diffusion models sometimes produce global degradation (e.g., RGB shift). To address these problems, this paper proposes a Pyramid Diffusion model (PyDiff) for low-light image enhancement. PyDiff uses a novel pyramid diffusion method to perform sampling in a pyramid resolution style (i.e., progressively increasing resolution within one reverse process). Pyramid diffusion makes PyDiff much faster than vanilla diffusion models and introduces no performance degradation. Furthermore, PyDiff uses a global corrector to alleviate the global degradation that may occur in the reverse process, significantly improving performance and making the training of diffusion models easier at little additional computational cost. Extensive experiments on popular benchmarks show that PyDiff achieves superior performance and efficiency. Moreover, PyDiff can generalize well to unseen noise and illumination distributions.
Pyramid Diffusion Models For Low-light Image Enhancement
[ { "figure_caption": "Figure 1 :1Figure 1: (a) Compared with other SOTA methods, our PyDiff generates more realistic details and restores correct colors. For better viewing, we brighten the Input. (b) Vanilla diffusion models perform sampling in a constant resolution style, and they result in global degradation similar to the RGB shift we analyze in Fig. 5. (c) Our PyDiff performs sampling in a pyramid resolution style (i.e., progressively increasing resolution in one reverse process) to achieve faster speed (i.e., to sample at a lower resolution is faster). With the help of a global corrector, PyDiff shows stunning results without global degradation. Please zoom in for the best view.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of proposed PyDiff. y θ (xt, x low ) is the approximate value of x0 calculated according to the denoising network, as discussed in Eq. (8). For better viewing, we brighten the x low . Please zoom in for the best view.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: We impose various degradations (e.g., downsampling or RGB shift) on normal-light images and get noisy x T /2 according to Eq. (2). Correspondingly, we begin the reverse process of diffusion from t = T /2, conditional on low-light images. We want to know how these degradations affect the second half of the reverse process. (a) Downsampling does not affect the details of the final result. (b) RGB shift will not be corrected. Please zoom in for the best view.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5: θ (xt) is the predicted noise derived from the denoising network, and y θ (xt) is an approximation of x0 calculated based on θ (xt). (a) Diffusion models result in significant global degradation, which appears in y θ (xT ) for the first time and affects subsequent sampling. (b) The original error δT is nearly 0, but the amplification factor √ 1-ᾱT √ ᾱT enlarges the error, which leads to an obvious RGB shift. (c) With the help of the global corrector, diffusion models give promising results. yc(x) means using the global corrector to alleviate the global degradation in x.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "15) where y c (x) means using the global corrector to alleviate the global degradation in x. Fig. 5(c) shows that the global corrector can alleviate global degradation while maintaining generated edges and textures. Correction Threshold. As shown in Fig. 4(b), the amplification factor √ 1-ᾱt √ ᾱt", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Qualitative comparison with state-of-the-art methods on the REAL PART of the LOLV2 dataset. Please zoom in for the best view.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "𝑙𝑜𝑤𝑟𝑒𝑠𝑜𝑙𝑢𝑡𝑖𝑜𝑛ℎ𝑖𝑔ℎ𝑑𝑒𝑛𝑜𝑖𝑠𝑖𝑛𝑔 𝑟𝑒𝑠𝑢𝑙𝑡𝑠𝑑𝑒𝑛𝑜𝑖𝑠𝑖𝑛𝑔 𝑟𝑒𝑠𝑢𝑙𝑡𝑠𝒘𝒊𝒕𝒉𝒘𝒊𝒕𝒉𝒐𝒖𝒕𝑥 '()𝑔𝑙𝑜𝑏𝑎𝑙 𝑑𝑒𝑔𝑟𝑎𝑑𝑎𝑡𝑖𝑜𝑛𝑔𝑙𝑜𝑏𝑎𝑙 𝑑𝑒𝑔𝑟𝑎𝑑𝑎𝑡𝑖𝑜𝑛𝑥 % && 𝑔𝑙𝑜𝑏𝑎𝑙 𝑐𝑜𝑟𝑟𝑒𝑐𝑡𝑜𝑟 𝑎𝑙𝑙𝑒𝑣𝑖𝑎𝑡𝑒 𝑎𝑙𝑙𝑒𝑣𝑖𝑎𝑡𝑒 𝑥 % && 𝑞 𝑥 ! 𝑥 !\"# .𝑦 % 𝑥 ! , 𝑥 &'( 𝑓𝑎𝑠𝑡 𝑥 ! 𝑠𝑝𝑒𝑒𝑑𝑞 𝑥 !\"# 𝑥 ! , 𝑥 $ ... ... 𝑠𝑙𝑜𝑤 𝑥 %", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Training. Algorithm 1 shows the specific training procedure of the proposed PyDiff. 
The global corrector aims to alleviate the global degradation that the denoising network cannot notice, and it will not impact the denoising network. Sampling. Algorithm 2 shows the specific sampling procedure of the proposed PyDiff. PyDiff can be easily com-Input: noise schedule α, downsampling schedule s, correction threshold γ, denoising network θ, global corrector c, low-light image x low . 2: Sample x T ∼ N (0, I) 3: for t = T, ..., 1 do", "figure_data": "Algorithm 2 Sampling1: 4:", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative results on the LOL dataset in terms of PSNR, SSIM, and LPIPS. ↑ (↓) denotes that larger (smaller) values lead to better quality.", "figure_data": "", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative results on the LOLV2 REAL PART in terms of PSNR, SSIM, and LPIPS. All methods involved in the comparison were not retrained on the corresponding training set. ↑ (↓) denotes that larger (smaller) values lead to better quality.LOLV2 REAL PART. Since the test set of LOLV2 REAL PART overlaps with the training set of the LOL dataset, we combine the training set and test set of LOLV2 REAL PART and filter out the overlapping parts with the LOL training set by the ID of the images. For the filtered images, we sort them by ID and select 100 (i.e., the same size as the original test set) images with the smallest ID as the test set of LOLV2 REAL PART. Many of the selected images were taken at ISOs not included in the LOL training set, which is a good test of the model's ability to deal with unseen noise. Tab. 2 shows the quantitative comparison with other SOTA methods on LOLV2 REAL PART. As PyDiff can deal with unseen noise better, it outperforms second place by 10.9 points on SSIM and 21 points on LPIPS. As shown in the second row of Fig.7, other SOTA methods give results with significant noise, while PyDiff can remove noise well. At the same time, the first row of Fig.7also shows that PyDiff can better restore images with different exposure times.", "figure_data": "21.600.6430.53", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study on the pyramid diffusion. schedule stands for downsampling schedule, while pe means position encoding.", "figure_data": "LOLV2 SYNC PART. LOLV2 SYNC PART contains manyillumination distributions that the LOL dataset does not have,and the scenarios in it are entirely different from the LOLdataset, which can test models' generalization. Tab. 3 showsthe quantitative comparison between PyDiff and other SOTAmethods on LOLV2 SYNC PART. PyDiff shows competi-tive results and achieves first place in performance (e.g., 1.8points higher than second place on SSIM), which demon-strates the generalization of PyDiff. Supplementary materialswill show the qualitative comparison with other SOTA meth-ods on LOLV2 SYNC PART.", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The score for this section is calculated by combining the performance on the LOL dataset, LOLV2 REAL PART, and LOLV2 SYNC PART, which gives a better indication of the effectiveness of a component. FPS is measured on the LOL dataset (i.e., the resolution is 400 × 600). These findings indicate that noisier sampling can be performed at a lower resolution, while still maintaining high performance. Position Encoding. Tab. 
4 shows that the position encoding boosts PSNR and SSIM for PyDiff, which may tell networks Ablation study on the global corrector. batch stands for batch size, while gc means global corrector.", "figure_data": "Downsampling Schedules. In Tab. 4, schedule [1, 1, 1, 1]represents vanilla diffusion models, which sample at a con-stant resolution. Our default setting, schedule [1, 1, 2, 2], per-forms noisy sampling at a 1/2 resolution. Schedule [1, 1, 2, 2]is 54% faster while slightly outperforming the schedule[1, 1, 1, 1]. Furthermore, our investigation revealed that fasterschedules (e.g., [1, 1, 2, 4], [1, 2, 2, 2], and [1, 2, 4, 8]) producecomparable results to the vanilla schedule [1, 1, 1, 1].", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
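One pyramid reverse step, following the sampling procedure of Algorithm 2 summarized in the captions above and the two-case transition p_θ(x_{t-1}|x_t) listed among the formulas below, can be sketched schematically in PyTorch. This is an interpretation of those equations, not the released implementation: `denoise_fn` and `corrector_fn` are stand-ins for the denoising network and the global corrector, and bilinear upsampling is an assumption.

```python
import torch
import torch.nn.functional as F

def pyramid_reverse_step(x_t, t, alpha, alpha_bar, s, denoise_fn, corrector_fn, gamma=1.0):
    """One step x_t -> x_{t-1} of pyramid diffusion (Eq. 10 / Algorithm 2), schematically.

    alpha, alpha_bar, s are float sequences indexed so that alpha[t] = alpha_t, with
    alpha_bar[0] = 1; denoise_fn(x_t, t) returns y_theta(x_t), the estimate of x_0 at the
    current resolution (conditioning on x_low etc. is folded into denoise_fn here);
    corrector_fn is the global corrector y_c(.).
    """
    a_t, ab_t, ab_prev = alpha[t], alpha_bar[t], alpha_bar[t - 1]
    beta_t = 1.0 - a_t

    # Estimate of x_0 from the denoising network.
    y = denoise_fn(x_t, t)

    # Global corrector: applied only while the amplification factor exceeds gamma,
    # i.e. during the noisy early part of the reverse process.
    if (1.0 - ab_t) ** 0.5 / ab_t ** 0.5 > gamma:
        y = corrector_fn(y)

    if s[t] > s[t - 1]:
        # Resolution increases at this step: upsample the x_0 estimate and re-noise it
        # at noise level t-1 (second case of the pyramid transition).
        y_up = F.interpolate(y, scale_factor=s[t] / s[t - 1], mode="bilinear",
                             align_corners=False)
        return ab_prev ** 0.5 * y_up + (1.0 - ab_prev) ** 0.5 * torch.randn_like(y_up)

    # Same resolution: standard posterior with y_theta in place of x_0 (first case).
    mean = (ab_prev ** 0.5 * beta_t / (1.0 - ab_t)) * y \
         + (a_t ** 0.5 * (1.0 - ab_prev) / (1.0 - ab_t)) * x_t
    var = (1.0 - ab_prev) / (1.0 - ab_t) * beta_t
    noise = torch.randn_like(x_t) if t > 1 else torch.zeros_like(x_t)
    return mean + var ** 0.5 * noise
```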
Dewei Zhou; Zongxin Yang; Yi Yang (ReLER, CCAI)
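The ablation's explanation of why small batch sizes hurt, namely that a small systematic error in the predicted noise is blown up by the amplification factor √(1-ᾱ_t)/√ᾱ_t before it reaches the x_0 estimate, can be checked numerically under the schedule stated in the experiments section (T = 2000, α_t linear from 0.999999 to 0.99). The bias value below is invented purely for illustration and is not from the paper.

```python
import numpy as np

T = 2000
alpha = np.linspace(0.999999, 0.99, T)   # linear noise schedule from the experiments section
alpha_bar = np.cumprod(alpha)

# y_theta(x_t) = x_0 + sqrt(1 - alpha_bar_t) / sqrt(alpha_bar_t) * delta_t, so a systematic
# error delta_t in the predicted noise is scaled by this amplification factor.
amp = np.sqrt(1.0 - alpha_bar) / np.sqrt(alpha_bar)

gamma = 1.0    # correction threshold used to trigger the global corrector
delta = 0.01   # hypothetical per-channel bias in the predicted noise (illustration only)
print(f"amplification at t=T: {amp[-1]:.1f}, resulting offset for delta=0.01: {amp[-1] * delta:.2f}")
print(f"global corrector active (amp > gamma) for {int(np.count_nonzero(amp > gamma))} of {T} steps")
```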
[ { "authors": "Bao ", "journal": "", "ref_id": "b0", "title": "", "year": "2022" }, { "authors": "Fan Bao; Chongxuan Li; Jun Zhu; Bo Zhang", "journal": "", "ref_id": "b1", "title": "Analytic-dpm: an analytic estimate of the optimal reverse variance in diffusion probabilistic models", "year": "2022" }, { "authors": "Chen ", "journal": "", "ref_id": "b2", "title": "", "year": "2018" }, { "authors": "Chen Chen; Qifeng Chen; Jia Xu; Vladlen Koltun", "journal": "", "ref_id": "b3", "title": "Learning to see in the dark", "year": "2018" }, { "authors": " Choi", "journal": "", "ref_id": "b4", "title": "", "year": "2021" }, { "authors": "Jooyoung Choi; Sungwon Kim; Yonghyun Jeong; Youngjune Gwon; Sungroh Yoon", "journal": "IEEE", "ref_id": "b5", "title": "Ilvr: Conditioning method for denoising diffusion probabilistic models", "year": "2021" }, { "authors": " Cui", "journal": "", "ref_id": "b6", "title": "", "year": "2022" }, { "authors": "Ziteng Cui; Kunchang Li; Lin Gu; Shenghan Su; Peng Gao; Zhengkai Jiang; Yu Qiao; Tatsuya Harada", "journal": "", "ref_id": "b7", "title": "Illumination adaptive transformer", "year": "2022" }, { "authors": "Nichol Dhariwal", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b8", "title": "Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": " Dinh", "journal": "", "ref_id": "b9", "title": "", "year": "2016" }, { "authors": "Laurent Dinh; Jascha Sohl-Dickstein; Samy Bengio", "journal": "", "ref_id": "b10", "title": "Density estimation using real nvp", "year": "2016" }, { "authors": "Fan ", "journal": "", "ref_id": "b11", "title": "Half wavelet attention on m-net+ for low-light image enhancement", "year": "2022" }, { "authors": "Fan ", "journal": "", "ref_id": "b12", "title": "Frido: Feature pyramid diffusion for complex scene image synthesis", "year": "2022" }, { "authors": "Hu Guo", "journal": "", "ref_id": "b13", "title": "", "year": "2022" }, { "authors": "Xiaojie Guo; Qiming Hu", "journal": "ternational Journal of Computer Vision", "ref_id": "b14", "title": "Low-light image enhancement via breaking down the darkness", "year": "2022" }, { "authors": " Guo", "journal": "", "ref_id": "b15", "title": "", "year": "2020" }, { "authors": "Chunle Guo; Chongyi Li; Jichang Guo; Chen Change Loy; Junhui Hou; Sam Kwong; Runmin Cong", "journal": "", "ref_id": "b16", "title": "Zero-reference deep curve estimation for low-light image enhancement", "year": "2020" }, { "authors": " He", "journal": "", "ref_id": "b17", "title": "", "year": "2020" }, { "authors": "Jingwen He; Yihao Liu; Yu Qiao; Chao Dong", "journal": "Springer", "ref_id": "b18", "title": "Conditional sequential modulation for efficient global image retouching", "year": "2020" }, { "authors": " Ho", "journal": "", "ref_id": "b19", "title": "", "year": "2020" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b20", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": " Ho", "journal": "", "ref_id": "b21", "title": "", "year": "2022" }, { "authors": "Jonathan Ho; Chitwan Saharia; William Chan; David J Fleet; Mohammad Norouzi; Tim Salimans", "journal": "J. Mach. Learn. 
Res", "ref_id": "b22", "title": "Cascaded diffusion models for high fidelity image generation", "year": "2022" }, { "authors": " Jiang", "journal": "", "ref_id": "b23", "title": "", "year": "2021" }, { "authors": "Yifan Jiang; Xinyu Gong; Ding Liu; Yu Cheng; Chen Fang; Xiaohui Shen; Jianchao Yang; Pan Zhou; Zhangyang Wang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b24", "title": "Enlightengan: Deep light enhancement without paired supervision", "year": "2021" }, { "authors": "Jin ", "journal": "", "ref_id": "b25", "title": "", "year": "2022" }, { "authors": "Yeying Jin; Wenhan Yang; Robby T Tan", "journal": "Springer", "ref_id": "b26", "title": "Unsupervised night image enhancement: When layer decomposition meets light-effects suppression", "year": "2022" }, { "authors": " Kim", "journal": "", "ref_id": "b27", "title": "", "year": "2021" }, { "authors": "Hanul Kim; Su-Min Choi; Chang-Su Kim; Yeong Jun; Koh ", "journal": "", "ref_id": "b28", "title": "Representative color transform for image enhancement", "year": "2021" }, { "authors": "Dhariwal Kingma", "journal": "", "ref_id": "b29", "title": "", "year": "2018" }, { "authors": "P Durk; Prafulla Kingma; Dhariwal", "journal": "Advances in neural information processing systems", "ref_id": "b30", "title": "Glow: Generative flow with invertible 1x1 convolutions", "year": "2018" }, { "authors": " Land", "journal": "", "ref_id": "b31", "title": "", "year": "1977" }, { "authors": "H Edwin; Land", "journal": "Scientific american", "ref_id": "b32", "title": "The retinex theory of color vision", "year": "1977" }, { "authors": "Lore ", "journal": "", "ref_id": "b33", "title": "", "year": "2017" }, { "authors": "Kin Gwn; Lore ; Adedotun Akintayo; Soumik Sarkar", "journal": "Pattern Recognition", "ref_id": "b34", "title": "Llnet: A deep autoencoder approach to natural low-light image enhancement", "year": "2017" }, { "authors": " Lu", "journal": "", "ref_id": "b35", "title": "", "year": "2022" }, { "authors": "Cheng Lu; Yuhao Zhou; Fan Bao; Jianfei Chen; Chongxuan Li; Jun Zhu", "journal": "", "ref_id": "b36", "title": "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps", "year": "2022" }, { "authors": "Nichol ; Dhariwal ", "journal": "", "ref_id": "b37", "title": "", "year": "2021" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal", "journal": "PMLR", "ref_id": "b38", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": " Rombach", "journal": "", "ref_id": "b39", "title": "", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b40", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": " Saharia", "journal": "", "ref_id": "b41", "title": "Palette: Image-to-image diffusion models", "year": "2022" }, { "authors": " Saharia", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b42", "title": "Image super-resolution via iterative refinement", "year": "2022" }, { "authors": " Song", "journal": "", "ref_id": "b43", "title": "Jiaming Song, Chenlin Meng, and Stefano Ermon", "year": "2020" }, { "authors": " Song", "journal": "", "ref_id": "b44", "title": "Score-based generative modeling through stochastic differential equations", "year": "2020" }, { "authors": " Wang", "journal": "", "ref_id": "b45", "title": "", "year": "2004" }, { "authors": "Zhou Wang; Alan C Bovik; Hamid R 
Sheikh; Eero P Simoncelli", "journal": "IEEE Transactions on Image Processing", "ref_id": "b46", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": " Wang", "journal": "", "ref_id": "b47", "title": "", "year": "2022" }, { "authors": "Yufei Wang; Renjie Wan; Wenhan Yang; Haoliang Li; Lap-Pui Chau; Alex Kot", "journal": "", "ref_id": "b48", "title": "Low-light image enhancement with normalizing flow", "year": "2022" }, { "authors": " Wei", "journal": "", "ref_id": "b49", "title": "", "year": "2018" }, { "authors": "Chen Wei; Wenjing Wang; Wenhan Yang; Jiaying Liu", "journal": "", "ref_id": "b50", "title": "Deep retinex decomposition for low-light enhancement", "year": "2018" }, { "authors": " Whang", "journal": "", "ref_id": "b51", "title": "", "year": "2022" }, { "authors": "Jay Whang; Mauricio Delbracio; Hossein Talebi; Chitwan Saharia; Alexandros G Dimakis; Peyman Milanfar", "journal": "", "ref_id": "b52", "title": "Deblurring via stochastic refinement", "year": "2022" }, { "authors": "Yang ", "journal": "", "ref_id": "b53", "title": "", "year": "2021" }, { "authors": "Wenhan Yang; Wenjing Wang; Haofeng Huang; Shiqi Wang; Jiaying Liu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b54", "title": "Sparse gradient regularized deep retinex network for robust low-light image enhancement", "year": "2021" }, { "authors": " Zhang", "journal": "", "ref_id": "b55", "title": "", "year": "2018" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b56", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": " Zhang", "journal": "", "ref_id": "b57", "title": "", "year": "2019" }, { "authors": "Yonghua Zhang; Jiawan Zhang; Xiaojie Guo", "journal": "", "ref_id": "b58", "title": "Kindling the darkness: A practical low-light image enhancer", "year": "2019" }, { "authors": " Zhang", "journal": "", "ref_id": "b59", "title": "", "year": "2021" }, { "authors": "Yonghua Zhang; Xiaojie Guo; Jiayi Ma; Wei Liu; Jiawan Zhang", "journal": "International Journal of Computer Vision", "ref_id": "b60", "title": "Beyond brightening lowlight images", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 84.35, 135.28, 182.29, 16.57 ], "formula_id": "formula_0", "formula_text": "q (x t |x t-1 ) := N (x t ; √ α t x t-1 , (1 -α t ) I) ," }, { "formula_coordinates": [ 3, 54, 184.83, 243, 31.23 ], "formula_id": "formula_1", "formula_text": "(x t |x 0 ) := N x t ; √ ᾱt x 0 , (1 -ᾱt ) I (2) When √" }, { "formula_coordinates": [ 3, 66.77, 317.97, 230.23, 68.63 ], "formula_id": "formula_2", "formula_text": "(x t-1 |x t , x 0 ) := N x t-1 ; μt (x t , x 0 ) , βt I , (3) μt (x t , x 0 ) := √ ᾱt-1 β t 1 -ᾱt x 0 + √ α t (1 -ᾱt-1 ) 1 -ᾱt x t , (4) βt := 1 -ᾱt-1 1 -ᾱt β t ,(5)" }, { "formula_coordinates": [ 3, 96.15, 411.13, 200.85, 23.55 ], "formula_id": "formula_3", "formula_text": "y θ (x t ) := 1 √ ᾱt (x t - √ 1 -ᾱt θ (x t )),(6)" }, { "formula_coordinates": [ 3, 77.59, 484.28, 219.41, 12.28 ], "formula_id": "formula_4", "formula_text": "p θ (x t-1 |x t ) := N x t-1 ; μt (x t , y θ (x t )) , βt I(7)" }, { "formula_coordinates": [ 4, 66.28, 270.62, 230.72, 26.83 ], "formula_id": "formula_5", "formula_text": "(x t |x t-1 ) := N x t ; √ α t x t-1 ↓ st/st-1 , (1 -α t ) I ,(9)" }, { "formula_coordinates": [ 4, 54, 498.71, 243.42, 82.15 ], "formula_id": "formula_6", "formula_text": "p θ (x t-1 |x t ) =              N (x t-1 ; √ ᾱt-1βt 1-ᾱt y θ (x t ) + √ αt(1-ᾱt-1) 1-ᾱt x t , 1-ᾱt-1 1-ᾱt β t I), if s t = s t-1 N (x t-1 ; √ ᾱt-1 (y θ (x t ) ↑ st/st-1 ) , (1 -ᾱt-1 )I), if s t > s t-1(" }, { "formula_coordinates": [ 4, 76.85, 695.2, 220.15, 8.96 ], "formula_id": "formula_7", "formula_text": "pos(I) = [sin(X), cos(X), sin(Y), cos(Y)](12)" }, { "formula_coordinates": [ 4, 353.12, 558.61, 204.88, 23.55 ], "formula_id": "formula_8", "formula_text": "y θ (x t ) := 1 √ ᾱt (x t - √ 1 -ᾱt ( t -δ t ))(13)" }, { "formula_coordinates": [ 4, 383.75, 580.91, 170.11, 29.81 ], "formula_id": "formula_9", "formula_text": ":= x 0 + √ 1 -ᾱt √ ᾱt δ t , (14" }, { "formula_coordinates": [ 4, 553.85, 595.46, 4.15, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 4, 531.58, 625.5, 24.73, 19.41 ], "formula_id": "formula_11", "formula_text": "√ 1-ᾱt √ ᾱt" }, { "formula_coordinates": [ 4, 343.08, 674.3, 26.07, 19.41 ], "formula_id": "formula_12", "formula_text": "√ 1-ᾱT √ ᾱT" }, { "formula_coordinates": [ 5, 58.98, 181.77, 162.01, 36.34 ], "formula_id": "formula_13", "formula_text": "∇ θ -θ (x t , (x low ↓ st )) 1 8: if √ 1-ᾱt √ ᾱt > γ then 9:" }, { "formula_coordinates": [ 5, 65.7, 439.54, 218.85, 25.85 ], "formula_id": "formula_14", "formula_text": "p θ,c (x t-1 |x t ) := N x t-1 ; μt (x t , y c (y θ (x t ))) , βt I(" }, { "formula_coordinates": [ 5, 153.97, 565.78, 106.55, 19.41 ], "formula_id": "formula_15", "formula_text": "√ 1-ᾱt √ ᾱt > γ. We set γ = 1." }, { "formula_coordinates": [ 5, 319.98, 126.94, 185.95, 47.5 ], "formula_id": "formula_16", "formula_text": "y = y θ (x t , x low ) 5: y = y c (y ) if √ 1-ᾱt √ ᾱt > γ, else y = y 6: Sample ∼ N (0, I) if t > 1, else = 0 7:" }, { "formula_coordinates": [ 5, 315.5, 204.4, 237.19, 30.31 ], "formula_id": "formula_17", "formula_text": "x t-1 = √ ᾱt-1βt 1-ᾱt y + √ αt(1-ᾱt-1) 1-ᾱt x t + 1-ᾱt-1 1-ᾱt β t 11:" } ]
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b14", "b15", "b16", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b15", "b16", "b17", "b13" ], "table_ref": [], "text": "V ISUAL SLAM is an important technique of ego-motion estimation and scene perception, which has been widely used in navigation for drones [1], ground vehicles, self-driving cars [2], or other applications such as Augmented and Virtual Reality (AR and VR) [3]. The typical visual SLAM algorithm extracts point features [4], [5] from images for pose estimation and mapping. Recent methods [6], [7] even directly operate on pixels. However, it is well known that incorporating high-level features like lines [8], surfaces [9] or even semantic objects [10], [11] in the visual SLAM system will lead to better performance.\nOne common type of high-level feature is text objects. Scene texts play a key role in identifying locations with various forms such as road marks [12], [13], building or object signs [14], [15], room names [15], [16], [17], and other textual captions [16], [17], [18]. They help us to recognize landmarks, navigate in complex environments, and guide us to the destination. Detection and recognition of scene texts from images have been developing fast [19], [20], [21], [22], [23], [24], [25], [26] because of the boom of deep neural networks and the emergence of huge text datasets such as COCO-Text [27], DOST [28], and ICDAR [29]. As extracting scene texts from images becomes easy nowadays, one question raises whether texts can be integrated into a visual SLAM system to both yield better performance and generate high-quality 3D semantic text maps that could be useful for robot navigation and scene understanding, as well as augmented reality and human-computer interaction.\nTexts spotted in our daily life are mostly planar regions, at least for a single word or character if not the whole sentence. The rich texture and planar property of a text entity make the text object a good feature for tracking and localization. More importantly, the semantic messages that a text object delivers are invariant to appearance changes, hence text objects are also reliable features for matching even when the illumination or viewpoint changes significantly. Those characteristics of scene texts are certainly good for SLAM, while the key issue is how to integrate them into a visual SLAM system.\nThere are several attempts towards coupling SLAM with text features. A navigation system with human-computer interaction [16], [17] for blind people, assisted with text entities, is built upon the visual-inertial SLAM system shipped on Google Tango tablet. Similarly, Wang et al. proposed a method to extract text features [18], which are then used for fusion with Tango's SLAM outputs to recognize revisited places. The aforementioned works have shown the great potential of using text features with existing SLAM systems. However, they treat the SLAM system as a black box, which is unable to fully take advantage of the characteristics of scene texts that should be beneficial to SLAM.\nIn this paper, we present a novel SLAM system tightly arXiv:2305.10029v2 [cs.CV] 3 Jul 2023 coupled with semantic text features as shown in Fig. 1. Specifically, we integrate the text features into the SLAM pipeline by fully exploiting the favorable characteristics of scene texts. 
Geometrically, text features are treated as texture-rich planar patches used for camera pose estimation and back-end optimization to yield more accurate and robust estimation. Semantically, the meaning of those scene texts, invariant to appearance changes, are utilized for reliable place recognition and feature matching across scenes with large illumination or viewpoint changes. For the lack of SLAM benchmarks with rich texts, we collected a text-orientated dataset both indoor and outdoor with the ground truth carefully acquired for evaluation. We compare our SLAM system with state-of-the-art approaches. The results show that by tightly coupling with text objects, the SLAM system becomes more accurate and robust, and even works well under challenging conditions such as serious illumination changes (day and night), viewpoint changes, and occlusions, where existing SLAM systems usually fail. We also compared our text-based method with the stateof-the-art visual localization methods for loop closing. The results show that our text-based method outperforms those methods in text-rich environments with a much lower computational cost.\nThe technical contributions of our work include: 1) A novel visual SLAM framework that integrates text features in front-end pose tracking, back-end optimization, loop closing, and mapping. To our best knowledge, this is the first work that tightly integrates scene texts into the pipeline of visual SLAM. We also contribute a dataset in various text-rich environments for evaluation.\n2) We present both geometric and semantic representations of text features, as well as their observation and data association models within the SLAM pipeline.\n3) A novel loop closing technique relying on the semantic meaning of text features. With the help of semantic information, reliable loop closing can be achieved even in challenging scenarios, including serious illumination changes, occlusion, and drastically varying viewpoints.\nThis paper is extended from our previous work [14]. The major extension is incorporating the semantic information into the SLAM pipeline, especially for the semantic data association and loop closure, as well as additional experiments and analysis. Specifically, the extensions include a novel semantic representation of text objects together with its update scheme (Section 3.2), using the semantic information of text objects for loop closure (Section 4.5), and several improvements of the SLAM system (Section 4) such as text object culling, text selection in pose estimation, and feature sampling in the coarse-to-fine optimization. In addition, we present a challenging text-orientated dataset with ground truth. Additional tests in indoor, outdoor and day-night switching are also presented in Section 5." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Planar features", "publication_ref": [ "b2", "b34", "b35", "b36", "b37", "b39", "b40", "b37", "b39", "b18", "b19" ], "table_ref": [], "text": "Most scene texts can be treated as texture-rich planar features. Planar features have been studied in the visual SLAM community since the early stage. In early works [3] [34] [35], the planes in the scene were detected by RANSAC [36] among estimated 3D points and employed as novel features to replace points in the state. Since much fewer parameters are required to represent the map using planes, it reduces the computational cost significantly [37]. 
These works show that planar features improve both accuracy and robustness of a visual SLAM system. Existing methods require 3D information to discover the planar features, usually using a RGB-D camera [38] [39] [40]. However, this becomes difficult using only image sensors. An interesting idea [41] is to assume the region surrounding the detected feature point as a locally planar surface. The assumption seldom holds in realistic scenes, as feature points might be extracted from anywhere in the scene. By contrast, texts in realistic scenes are mostly located on planar surfaces. Unlike general planar features that usually require depth for detection [38] [39] [40], scene texts can be easily extracted by off-the-shelf text detectors [19], [20]." }, { "figure_ref": [], "heading": "Visual SLAM with semantics", "publication_ref": [ "b36", "b41", "b42", "b43", "b44", "b10", "b45", "b46", "b47", "b48", "b49", "b50", "b51", "b52", "b53", "b54", "b57" ], "table_ref": [], "text": "Integrating semantic information into visual SLAM systems has been receiving increasing interest in recent years [37], [42], [43], [44], [45]. One approach is to directly fuse the 2D semantic labels with the dense 3D map from RGB-D SLAM [11], [46], [47], or a dense visual SLAM [48]. Another approach is to take semantic objects as high-level features within the SLAM pipeline [49] [50], which requires prescanned 3D models to precisely fit the observation on the image. Though recent methods [51], [52], [53], [54] build the 3D representation of objects online with a depth camera, it is still difficult to be generalized to unseen objects with only video cameras. Other methods seek to use 3D bounding boxes [55] [56] [57] or quadrics [58] to represent objects, but such kind of approximation suffers from loss of accuracy. In this paper, we focus on the particular semantic object, i.e., scene texts. Unlike generic objects, text objects, such as road signs, shop signs, room numbers, and commercial boards, are texture-rich planar features and contain rich semantic information about environments. Those characteristics are more favorable to visual SLAM than those of general objects." }, { "figure_ref": [], "heading": "Text-aided navigation", "publication_ref": [ "b58", "b59", "b11", "b12", "b32", "b29", "b30", "b31", "b11", "b17", "b32", "b12", "b14", "b16", "b17", "b31", "b15", "b16" ], "table_ref": [], "text": "Scene Texts such as room numbers, road marks, route or traffic signs, and shop signs are naturally good visual landmarks to assist navigation. We summarize existing works on text-aided navigation in Tab. 1. In the early works [30] [31], indoor text labels such as room numbers or name tags were used as guidance for a robot to navigate in the lab environments. However, text-aided navigation was still in its infancy at that time, as the technique of detection and recognition of scene texts was still under early development [59], [60]. Ranganathan et al. [12] integrated standard road marks into the pre-built GPS+IMU+camera map to estimate the vehicle's ego-motion for autonomous driving. With the prior knowledge of a comprehensive geo-tagged street-level map (e.g. GoogleMaps or OpenStreetMap) and the compass information, Radwan et al. [13] extracted text information from the street signs to assist pose estimation in a 2D map. Similarly, Wang et al. [33] used the shop names for localization by taking the building's floor plan as a prior under the assumption of Manhattan world. 
Following the [30] room nameplates heuristic office map with a corridor and doors Robot navigation indoor Mata et al. 2001 [31] room nameplates heuristic office map with landmark annotation Robot navigation indoor Case et al. 2011 [32] room nameplates heuristic laser grid-based map with text annotation Robot navigation indoor Ranganathan et al. 2013 [12] road marks heuristic road surface marks map localization outdoor Wang et al. 2015 [18] artificial tags heuristic landmarker map loop closing indoor Wang et al. 2015 [33] store signs heuristic floorplan with text annotation localization indoor Radwan et al. 2016 [13] store signs heuristic geo-tagged street-level map with text annotation localization outdoor Hong et al. 2019 [15] street&store signs deep learning 2D imagery map localization indoor&outdoor Li et al. 2019 [17] room Wang et al. [18] proposed a spatial-level feature named 'junction' for the text extraction, and then used the text objects to improve loop closing performance based on Google Tango's SLAM outputs. Case et al. [32] annotated the text labels on the existing map generated by a laser SLAM system to help robots recognize named locations and understand humans' free-text queries. Rong et al. [16] presented an assistive blind navigation system with a text spotting function based on the Tango's SLAM system. Similarly, built on the SLAM system of Tango, a mobile solution [17] of assistive navigation system combined various sources, such as text recognition results and speech-audio interaction, for blind and visually impaired people to travel indoors independently. Existing text-aided navigation systems integrate text objects loosely by regarding the SLAM system as a black box. By contrast, our proposed method integrates the text objects tightly into the SLAM system to facilitate both camera tracking and semantic mapping. Moreover, the semantic information from the established map is used for loop closing and camera localization to achieve good performance even under challenging conditions." }, { "figure_ref": [], "heading": "SEMANTIC TEXT FEATURES", "publication_ref": [], "table_ref": [], "text": "A semantic text feature is represented as a planar and bounded patch with its unique semantic meaning. We describe the geometric model (including the parameterization and observation models) of a text object, as well as the way to represent and update the semantic information in the following sections." }, { "figure_ref": [], "heading": "Geometric model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Parameterization", "publication_ref": [], "table_ref": [], "text": "Text objects are regarded as planar and bounded patches. Each text patch (enclosed by a bounding box) is anchored to a camera frame, named as the host frame, which is the first frame where the text object appears as shown in Fig. 2. Within the host frame, the plane where the text patch lies is given by n T p + d = 0, where n = (n 1 , n 2 , n 3 ) T ∈ R 3 is the normal of the plane and d ∈ R is related to the distance from Fig. 2. A text object is compactly parameterized by θ. 
The inverse depth ρ of a text point p can be computed by ρ = 1/z = θ T m and its projection onto the target view Ct is a homography transform with respect to the relative pose T between the two views.\nthe plane to the origin of the host frame; p ∈ R 3 represents the 3D point on the plane.\nA straightforward parameterization of a text plane could be directly using the four parameters (n 1 , n 2 , n 3 , d) of the plane equation. However, this is an over-parameterization that leads to rank deficiency in the nonlinear least-squares optimization. We hence adopt a compact parameterization that contains only three parameters.\nθ = (θ 1 , θ 2 , θ 3 ) T = -n/d.(1)\nWe'll show that this parameterization is closely related to the inverse depth of the 3D point on the text plane. Within the host frame, each 3D point p ∈ R 3 observed on the image can be represented by its normalized image coordinates m = (u, v) T and its inverse depth ρ = 1/z. The 3D coordinates of this point are computed as p = (uz, vz, z) T = z m, where m represents the homogeneous coordinates of m. If the 3D point locates on the text plane, we have z • n T m + d = 0. The inverse depth ρ of this 3D point is then computed as\nρ = 1/z = -n T /d m = θ T m.(2)\nThat is, we can use a simple dot product to quickly infer the inverse depth of a text point from its 2D coordinates, given the text parameters θ.\nOn the other hand, if we have at least three points on the text patch (for example, three corners of the bounding box), with their inverse depth value, we can immediately obtain the text parameters by solving\n   mT 1 . . . mT n    θ =    ρ 1 . . . ρ n    , n ≥ 3.(3)\nThis allows us to quickly initialize the text parameters from the depth value of three corners of the text bounding box. Properties such as the boundary of a text object are kept in our system. Those properties are acquired from a text detector as we'll describe later." }, { "figure_ref": [ "fig_7" ], "heading": "Observation", "publication_ref": [ "b5", "b6" ], "table_ref": [], "text": "To update the parameters of a text object, an observation model should be defined. Here we choose an observation model that measures the difference between the detected text object and the projection of the estimated 3D text object on the image. In the first step, we need to project the 3D text object from the host frame onto the target image plane.\nLet T h , T t ∈ SE(3) represent the transformations of the host frame and the target frame with respect to the world frame. The relative pose between the host frame and the target frame is computed as T = T -1 t T h . We let R, t be the rotation and translation of T. Given the text parameters θ and the observed text point (with homogenous coordinates m ) in the host image, the 3D coordinates of point p in the host frame are :\np = m/ρ = m/(θ T m).(4)\nThe point is then transformed into the target frame by\np ′ = Rp + t.(5)\nLet m′ be the homogeneous coordinates of the projected point on the target image plane. We have\nm′ ∼ ρ p ′ = R m + ρt ⇒ m′ ∼ R m + t θ T m,(6)\nwhere ∼ means equivalence up to a scale. Therefore, the process of projecting a 3D text object on the target image plane is a homography mapping of the text points from the host image plane to the target image plane, namely\nm′ ∼ H m,(7)\nwhere H = R + tθ T ∈ R 3×3 is a homography matrix that relies on the relative pose R, t and the text parameters θ. 
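The parameterization and the plane-induced homography above reduce to a few lines of linear algebra: the inverse depth of a text pixel is a dot product with θ (Eq. 2), θ can be recovered from three or more pixels of known inverse depth (Eq. 3), and projection into another view uses H = R + tθᵀ (Eqs. 6-7). A minimal NumPy sketch follows; the helper names are illustrative and this is not the paper's code.

```python
import numpy as np

def inverse_depth(theta, m):
    """rho = theta^T m_tilde for a normalized host-frame point m = (u, v) (Eq. 2)."""
    return float(theta @ np.array([m[0], m[1], 1.0]))

def fit_text_plane(points, rhos):
    """Recover theta = -n/d from >= 3 text points with known inverse depth (Eq. 3)."""
    A = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])  # rows are m_tilde^T
    theta, *_ = np.linalg.lstsq(A, np.asarray(rhos, float), rcond=None)
    return theta

def project_text_point(m, R, t, theta):
    """Map a host-frame text pixel into the target view via H = R + t theta^T (Eqs. 6-7)."""
    m_proj = (R + np.outer(t, theta)) @ np.array([m[0], m[1], 1.0])
    return m_proj[:2] / m_proj[2]

# Example: initialize theta from three corners of a text bounding box with known depth z.
corners = [(0.10, 0.05), (0.30, 0.05), (0.30, 0.20)]
theta = fit_text_plane(corners, [1.0 / z for z in (2.0, 2.2, 2.1)])
R, t = np.eye(3), np.array([0.05, 0.0, 0.0])          # a small sideways camera motion
print(inverse_depth(theta, (0.20, 0.12)), project_text_point((0.20, 0.12), R, t, theta))
```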
For convenience, we write the projection process as a function:\nm ′ = h(m, T h , T t , θ),(8)\nwhere m denotes the observed text point on the host image plane, and m ′ represents the projected text point on the target image plane. θ represents the text parameters.\nWe take each text region as a single patch and align it to other frames directly by minimizing the difference between them instead of detecting the word once again. Motivated by directed approaches [6] [7], our observation model computes the photometric error between the extracted text object and the projected one on the image. As we shall see in the experiments (see Fig. 9), using direct approaches will lead to better accuracy and robustness, particularly for blurry images. The biggest issue of the direct approach is handling the illumination changes. Existing work [7] adopts an affine model to address intensity changes, but it requires extra parameters involved in optimization and sophisticated photometric calibration to guarantee performance. We choose to use zero-mean normalized cross-correlation (ZNCC) as the matching cost to handle illumination changes.\nLet Ω be the set of pixels within the text region, and m ∈ Ω be a text pixel. The normalized intensities for text pixels are:\nĨ(m) = (I(m) -ĪΩ )/σ Ω ,(9)\nwhere ĪΩ and σ Ω stand for the average intensity and the standard deviation of the pixels in the text region Ω. The text patch in the host image and the predicted one in the target image ( 8) are then compared by :\nZN CC(I h , I t ) = m∈Ω Ĩh (m) Ĩt (m ′ ).(10)\nThe ZNCC cost is between -1 and 1. The larger ZNCC cost indicates the two patches are more similar. However, it is difficult to directly use the ZNCC cost within the optimization framework of the nonlinear least-squares problem. We hence adopt a variant form of ZNCC as the cost function\nE(I h , I t ) = m∈Ω ( Ĩh (m) -Ĩt (m ′ )) 2 . (11\n)\nThough the cost function is similar to the SSD (Sum of Squared Difference) cost, it contains an additional normalization process to ensure the robustness to illumination changes. If we expand this cost function as :\nm∈Ω ( Ĩh (m) 2 + Ĩt (m ′ ) 2 ) -2 m∈Ω Ĩh (m) Ĩt (m ′ ),(12)\nwe can discover that minimizing this cost function is equivalent to maximizing the ZNCC cost, because Ĩh (m) 2 = N and Ĩt (m ′ ) 2 = N , where N is a constant number of pixels within the text region Ω. The photometric error of a text object π with respect to the target frame t is defined as :\nE π,t photo = m∈Ω π ϕ(( Ĩh (m) -Ĩt (h(m, T h , T t , θ π ))) 2 ),(13)\nwhere ϕ(•) is the Huber loss function to handle possible outliers. Here, we use Ω π to represent the text region on the host image plane. As we'll describe later, to make the computation faster, we do not use all the pixels within the text region, instead select only some of them as the reference pixels to compute the photometric error." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Semantic information management", "publication_ref": [ "b18" ], "table_ref": [], "text": "The semantic meanings of scene texts are valuable information for scene understanding and also benefit data association in SLAM because they are invariant to appearance changes. We represent the semantic information X of a text object by two parts: its meaning s and a semantic cost g sem\nX = {s, g sem } ,(14)\nwhere the text meaning s is a text string and the semantic cost g sem describes the quality of the estimated text meaning. Lower semantic costs indicate better qualities. As illustrated in Fig. 
3, the semantic information of a text object is initialized from the first observation and continuously updated when new observations arrive.\nFor the text object recognized at each frame, we extract its current semantic information, X = {ŝ, ĝsem }. Here ŝ is the text strings from the recognition results of the text extractor. The semantic cost ĝsem is defined as\nĝsem = λg mean + g geo . (15\n)\nHere g mean represents the meaning cost with respect to the confidence of the text extractor and g geo is a cost describing if the text object is well posed towards the camera. The weight λ is used to balance two sources of information, which is set as 200 in our implementation. The smaller ĝsem implies more reliable observed semantic information.\nThe meaning cost g mean in ( 15) is set as g mean = 1 -g recg , where g recg comes from the confidence of text extraction [19] and is usually located in the range of [0, 1]. The larger confidence implies a more reliable recognition result. Here we use the minus operation to keep its consistency with other components. Some cases such as image blur or occlusion (take the fourth frame K 3 in Fig. 3 for example) will lead to a larger g mean , indicating the extracted text meaning is unreliable.\nThe geometric cost g geo in ( 15) is defined as g geo = l + λ ′ (1 + o T n/(∥o∥∥n∥)), which consists of two terms. The first term measures the distance between the text object center and the camera center. The second term measures the difference between the viewing direction o and the normal direction n of the text object as visualized in Fig. 3. The weight λ ′ is set to 10 in our implementation.\nThe semantic information of a newly detected text object is initialized as the first observation information X 0 ← X . Whenever a new observation X = {ŝ, ĝsem } arrives, the semantic information of the text object is updated by\nX k ← arg min X k-1 , X (g sem k-1 , ĝsem ).(16)\nIn other words, the semantic information with the smallest semantic cost is selected. Based on this strategy, the extracted information under good conditions (legible and nonoccluded text patches in the right orientation and close to the viewpoint) is preferred. Hence the semantic information will be more accurate with more high-quality observations available." }, { "figure_ref": [ "fig_2" ], "heading": "TEXTSLAM SYSTEM", "publication_ref": [], "table_ref": [], "text": "Our SLAM system, TextSLAM, is built upon a point-based system and adopts the keyframe-based framework to integrate the text features tightly. The mixture of point features and text features allows our system to work properly even in the scenes without any text signs. Fig. 4 illustrates the flowchart of TextSLAM. We'll introduce the key components in the following sections." }, { "figure_ref": [ "fig_16", "fig_0", "fig_16" ], "heading": "Initialization of text objects", "publication_ref": [ "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b18", "b28", "b60", "b61", "b6" ], "table_ref": [], "text": "A text object is initialized only when it is correctly extracted from the image. Extracting scene texts from images is highly challenging until deep neural networks have been used.\nRecently, text detectors based on convolution neural networks [19], [20], [21], [22], [23], [24], [25], [26] have achieved impressive performances. We adopt AttentionOCR [19] as the text extractor in our implementation, which ranks at the top on ICDAR2019 Robust Reading Challenge on Arbitraryshaped Text [29] and supports multiple languages. 
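Going back to the observation model of the Observation subsection (Eqs. 9-13): the text-patch cost is a zero-mean, unit-variance normalization of the host and target intensities followed by a sum of squared differences, which is equivalent to maximizing their ZNCC. Below is a minimal sketch, assuming the target intensities have already been sampled at the projected pixel locations; a real implementation would add sub-pixel interpolation and the Huber loss of Eq. (13).

```python
import numpy as np

def znorm(values):
    """Zero-mean, unit-variance normalization of the intensities in a text region (Eq. 9)."""
    v = np.asarray(values, dtype=float)
    return (v - v.mean()) / (v.std() + 1e-8)

def text_photometric_cost(host_intensities, target_intensities):
    """Eq. (11): sum of squared differences of normalized intensities at corresponding
    host / projected target pixels; minimizing it maximizes the ZNCC of the two patches."""
    d = znorm(host_intensities) - znorm(target_intensities)
    return float(np.sum(d ** 2))

# Toy check: a global gain/offset change leaves the cost at (almost) zero, which is why
# the normalization makes the cost robust to illumination changes.
host = np.array([10.0, 40.0, 90.0, 160.0])
print(text_photometric_cost(host, 0.5 * host + 20.0))
```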
Some text extraction results are shown in Fig. 19. The outputs are arbitrary-orientation quadrilaterals enclosing the text regions. Note that our system is not limited to any particular text extractors. The text extractor can be replaced if more advanced ones are available.\nTo initialize the parameters θ of a text object newly detected in a keyframe, we track the FAST [61] feature points within the text region by Kanade-Lucas-Tomasi [62] tracker until the next keyframe. Let m i ↔ m ′ i be the corresponding points in both keyframes, and R, t be the relative pose between the two frames. From (7), we have m′ i × H mi = 0. By taking H = R + tθ T , we then obtain\n[ m′ i ] × t mT i θ = -[ m′ i ] × R mi ,(17)\nwhere mi and m′ i are the homogeneous coordinates of m i and m ′ i and [•] × denotes the skew symmetric matrix corresponding to the cross product. Note that the rank of the matrix on the left hand side is one. It requires at least three pairs of corresponding text points to solve θ.\nAfter initialization of the parameters of a text object, we also keep the four corners of the quadrilateral indicating the text region. The newly initialized text objects are kept being updated in the following frames. They are inserted into the 3D text map whenever the two rules are met: 1) the text object has been observed in at least n min (4 in our implementation) frames; 2) the text parameters converge to a relatively stable state. For the second rule, we check the normal of the text plane changes if larger than 25 • in our implementation. Once the parameters of a text object have been initialized, we also keep its semantic information being updated as described in Section 3.2. Several text map examples are visualized in Fig. 1 and Fig. 19.\nAfter successful initialization, each text is aligned to other frames directly by minimizing the photometric error. Our method only requires sparse text detection to initialize texts that newly occurred and can track them reliably in the following frames without any extra detection inputs." }, { "figure_ref": [], "heading": "Camera pose estimation with text objects", "publication_ref": [ "b12", "b60", "b62", "b17", "b10" ], "table_ref": [], "text": "Both points and text objects are involved in camera pose estimation. We select text objects observed by previous 2 keyframes for pose estimation and exclude those that are behind the camera (at least one vertice of the text quadrilateral is behind the camera) or whose orientation is perpendicular to the current viewing direction. We also exclude those text objects whose appearance in the current frame changes much compared with that in the host frame because of occlusion. We use ZNCC for comparison and exclude those text objects with ZNCC less than 0.1 in our implementation. Camera pose estimation is done by minimizing the following cost function\nE(T t ) = E point (T t ) + λ w E text (T t ),(18)\nwhere T t ∈ SE(3) represents the current camera pose at frame t. The first term E point represents the sum of reprojection errors of point features :\nE point (T t ) = i ϕ(∥m i -P(T t , X i )∥ 2 ),(19)\nwhere m i is the 2D coordinates of the observed point in the image and P(T t , X i ) represents the projection of the 3D point X i onto the image plane. Here ϕ(•) is the Huber loss function to handle outliers. The second term E text contains only photometric errors of text objects, namely,\nE text = j E πj ,t photo . (20\n)\nHere E πj ,t photo represents the photometric error of the j-th text object, which is defined in (13). 
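The initialization of θ in Eq. (17) stacks one rank-one constraint per tracked correspondence and solves the result in a least-squares sense. A sketch is given below, assuming the FAST/KLT correspondences and the relative pose R, t between the two keyframes are already available; the names are illustrative.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that [v]_x a = v x a."""
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def init_text_params(matches, R, t):
    """Initialize theta from tracked correspondences between the host keyframe and the
    next keyframe (Eq. 17): [m'_i]_x t m_i^T theta = -[m'_i]_x R m_i.

    matches: list of ((u, v), (u', v')) normalized point pairs m_i <-> m'_i.
    R, t:    relative rotation / translation between the two keyframes.
    Each pair contributes a rank-one block, so at least three pairs are required.
    """
    A_rows, b_rows = [], []
    for (u, v), (up, vp) in matches:
        m = np.array([u, v, 1.0])
        S = skew(np.array([up, vp, 1.0]))
        A_rows.append(np.outer(S @ t, m))    # [m'_i]_x t m_i^T   (3x3, rank one)
        b_rows.append(-S @ R @ m)            # -[m'_i]_x R m_i    (3,)
    theta, *_ = np.linalg.lstsq(np.vstack(A_rows), np.concatenate(b_rows), rcond=None)
    return theta
```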
Though we may use all the pixels within the text region to evaluate the photometric errors, an efficient way is to use a small part of them. Since the text region is full of textures, we adopt the FAST points [61] within the text regions as the representative pixels. We then follow [63] to use an eight-pixel pattern around each representative pixel to compute the photometric errors.\nThe trade-off between the two terms in (18) needs to be regulated by the weight λ w since they are in different units (position difference vs intensity difference). The weight λ w is computed as λ w = σ rep /σ photo . σ rep represents the standard deviation of the reprojection error of a pair of corresponding points (in both x and y directions) and σ photo represents the standard deviation of the photometric error of a text object as defined in (11). Those standard deviations can be acquired through a small set of training data (given corresponding points and text patches).\nOptimization of the cost function ( 18) is a nonlinear leastsquares problem. As the photometric cost E text is highly nonlinear, it requires a good initial guess of T t to avoid being trapped in a local minimum. We firstly use a constant velocity model to predict the camera pose and then apply a coarse-to-fine strategy for efficient optimization.\nSpecifically, we downsize the images by 1/2 recursively to build an image pyramid with three levels. Both the sampled points in the text region and detected feature points outside the text regions are down-sampled to reduce the number of variables to be optimized at coarse levels. The optimization starts from the coarsest level. The result is used to initialize the optimization process at the next level until reaching the final level. To downsample the text points in the next level, we divide the bounding box of a text object into a grid. For each cell in the grid, we select the point with the largest gradient. The number of cells for sampling is set to be N 0 /4 l + 100, where N 0 is the number of points in the original resolution. Downsampling the feature points outside text regions works in a similar way, where the whole image is divided into cells for sampling.\nDuring coarse-to-fine optimization, those points (including the text points) with large errors are marked as outliers and discarded. For each text object, when more than 99% text points are marked as outliers, this text object is marked as an outlier at this frame." }, { "figure_ref": [], "heading": "Text objects culling", "publication_ref": [], "table_ref": [], "text": "To ensure the good quality of the 3D text map, we drop those text objects from further processing which have been frequently recognized as outliers in camera pose estimation. Specifically, let #F bad , #F good be the number of frames where the text object is marked and not marked as an outlier respectively. We check if the following conditions hold for the text object after finishing each bundle adjustment in our implementation:\n1) the text object is marked as an inlier in at least two frames (#F good > 2); 2) the number of bad frames is less than the number of good frames and also less than a preset limit ( #F bad < 0.9#F good and #F bad < 40).\nIf one of those conditions is not met, the text object is set as 'bad object' and excluded from future processing." 
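The coarse-to-fine sampling of text pixels described above (keep about N_0/4^l + 100 cells at pyramid level l and, per cell, the text point with the largest gradient) might look as follows. The square-grid layout is an assumption, and the helper name is illustrative.

```python
import math

def sample_text_points(points, bbox, level, n0):
    """Down-sample the text points used in the coarse-to-fine optimization: divide the
    text bounding box into about n0 / 4**level + 100 cells and keep, per cell, the point
    with the largest image-gradient magnitude.

    points: iterable of (x, y, grad_mag) for FAST points inside the text region.
    bbox:   (x_min, y_min, x_max, y_max) of the text object at the current pyramid level.
    """
    n_cells = int(n0 / 4 ** level) + 100
    side = max(1, int(math.sqrt(n_cells)))       # approximately square grid (assumption)
    x0, y0, x1, y1 = bbox
    cw, ch = (x1 - x0) / side, (y1 - y0) / side
    best = {}
    for x, y, g in points:
        cell = (min(int((x - x0) / cw), side - 1), min(int((y - y0) / ch), side - 1))
        if cell not in best or g > best[cell][2]:
            best[cell] = (x, y, g)
    return list(best.values())
```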
}, { "figure_ref": [], "heading": "Bundle Adjustment with text objects", "publication_ref": [ "b3" ], "table_ref": [], "text": "We apply bundle adjustment from time to time in a local window of keyframes similar to existing SLAM systems [4].\nThe cost function of bundle adjustment also consists of the point part and the text part :\nE(x) = E point (x) + λ w E text (x).(21)\nThe cost function resembles that of camera pose estimation while involving more parameters to be optimized. The variable x includes the camera poses of keyframes in the local window, the inverse depth of point features, and the text parameters. We also adopt a coarse-to-fine method to optimize (21) as described in camera pose estimation." }, { "figure_ref": [], "heading": "Loop closing using scene texts", "publication_ref": [], "table_ref": [], "text": "Scene texts are reliable landmarks for place recognition because their meanings are invariant to changing illuminations or viewpoints. We present how to use scene texts to detect revisited places and also integrate them to correct the accumulated error as in our SLAM system." }, { "figure_ref": [], "heading": "Detection of loop candidates", "publication_ref": [ "b63", "b64", "b65", "b3" ], "table_ref": [], "text": "To detect possible loops, we need to compare the latest keyframe with old keyframes. Existing visual SLAM systems usually use the bag-of-visual-words vectors [64], [65] for comparison. The visual words, clustered from the feature descriptors, rely on the image appearances that may change drastically, leading to false or missing loop detection. By contrast, the meanings extracted from those text objectsthe text strings or the real words -will not change with image appearances. Hence our idea for loop closing is to use those 'real words' instead of 'visual words' for searching the similar keyframes.\nOur searching process consists of two steps. The first step is to match reliable words (have been refined by multiple covisible frames) observed in the latest keyframe to existing 3D text objects in the map. Directly matching a 3D text map is far more efficient than matching the 2D detections on the historical frames because the latter requires much more comparisons due to the repeated observations of a single word, as discussed in Section 5.3.2. The second step is to select loop candidates from the keyframes associated with those matched 3D text objects (note that the keyframes within the sliding window of bundle adjustment are excluded from selection).\nTo match a word in the query frame to a 3D text object in the map, we directly compare their meanings (text stings) s i , s j by\ns(s i , s j ) = max(|s i |, |s j |) -d(s i , s j ) max(|s i |, |s j |) ∈ (0, 1], (22\n)\nwhere |s| is the length of a string s and d(s i , s j ) is the Levenshtein distance [66] between two strings, which measures the minimum operations changing one string s i to the other string s j , including deletion, insertion and substitution. For example, changing 'seed' to 'seek' needs 1 operation:\nsubstituting 'd' with 'k'. So the distance is 1. The two strings are matched when the similarity score s(s i , s j ) is above a threshold. With such a similarity score, it allows two strings to be matched even they are not exactly the same, which may happen when the text object is partially occluded or falsely recognized. The threshold of being matched or not is selected based on the best matching result. 
If one text object in the query frame is exactly matched to a text object in the map, s(s i , s j ) = 1, we require all the text objects to be exactly matched by setting the threshold to be 1. Otherwise, we set the threshold proportional to the maximum matching score s max by max( 2 3 s max , 0.35) to address partial occlusion or false recognition of text objects empirically, where 0.35 served as the minimal threshold, dedicating the two texts are matched when at least one-third characters are same among the entirety. This adaptive threshold scheme increases the robustness of our system in different scenes.\nThe candidate keyframes (the top ten are selected) for loop closing are selected from the keyframes associated with those matched text objects where the number of matched text objects is greater than a threshold s min , which is set to be proportional to (60%) the minimum number of covisibile text objects in the keyframe connected to the latest frame in the covisibility graph [4], while being larger than three for outdoor scenes and two for indoor scenes in our experiments. " }, { "figure_ref": [ "fig_3", "fig_4", "fig_17", "fig_19" ], "heading": "Compute the relative transformations", "publication_ref": [ "b3", "b66", "b3" ], "table_ref": [], "text": "The relative transformation between the current keyframe and the loop frame is required to be estimated to close the loop. We follow [4] to compute the similarity transformation between the current keyframe and the loop keyframe and the key is to obtain point correspondences between the two frames. However, it becomes highly challenging to acquire correct point correspondences when the illumination or viewpoint varies significantly. Since text objects are matched in loop detection by their semantic meanings, they can be used as a reliable prior for searching point-level correspondences even when illumination or viewpoint changes dramatically. Specifically, we search the point correspondences based on text points within the matched text regions instead of the whole images. We find that the contrast of a text object in the image does not change as much as we expect under different illuminations (unlike the color or intensity) as shown Fig. 5. Therefore, the BRIEF descriptor [67], relying on the relative difference of a pair of pixels, works well for matching the text points within the limited regions of two matched text signs, while it leads to a lot of false correspondences if matching is conducted on the whole images as shown in Fig. 6. The text-guided point matching is robust and accurate even across the day and night as the experimental results show (Fig. 20 and Fig. 22).\nSimilar to [4], after we obtain the 3D to 3D correspondences from the matched text points, we use RANSAC to compute the similarity transformation and optimize it. Next, we perform a guided search to obtain more point correspondences outside the text regions. We then optimize the similarity transformation again and accept those loop candidates with sufficient inliers." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_12", "fig_6" ], "heading": "Data collection", "publication_ref": [], "table_ref": [], "text": "For the absence of SLAM benchmark datasets with text objects, we collected image sequences with scene texts in both indoor and outdoor scenes for evaluation. We use different devices for data collection in indoor and outdoor scenes. Our device for indoor scenes is shown in Fig. 16. 
It consists of an RGB camera (Intel's RS-D455) for capturing the color images and several optical markers for obtaining ground truth via a motion capture system. Our device for outdoor scenes is shown in Fig. 8. It consists of three RGB cameras recording multiple image sequences in different viewing directions simultaneously. We'll discuss how to acquire the ground truth trajectories for outdoors in the later sections. Image sequences are resized to 640 × 480 in all the tests." }, { "figure_ref": [ "fig_12" ], "heading": "Indoor tests", "publication_ref": [ "b26", "b63", "b62" ], "table_ref": [], "text": "Indoor tests were conducted within a laboratory. A room with a motion capture system was used to obtain the ground truth trajectories of millimeter accuracy as shown in Fig. 16. The room was placed with random texts and those text strings were sampled from COCO-Text [27], which is a large-scale text-orientated natural image dataset, where the fonts and sizes are randomly selected.\nWe compare our TextSLAM system with the state-of-theart visual SLAM systems: ORB-SLAM [64] and DSO [63], where ORB-SLAM uses point features and DSO directly operates on raw pixels. By contrast, our system uses both points and text features and is not limited to text-rich scenes -if no text has been detected, our system can use only point features. To evaluate the effectiveness of integrating text objects into the SLAM pipeline, we also present the results of our system with only point features enabled (Our pointonly baseline). The middle bar '-' indicates the algorithm fails to finish the whole trajectory." }, { "figure_ref": [ "fig_7", "fig_7" ], "heading": "Evaluation of camera trajectory", "publication_ref": [ "b63", "b62", "b62" ], "table_ref": [], "text": "The bold texts indicate the best results and the underlined texts highlight the better result between TextSLAM and the point-only baseline.\nIn this test, we evaluate the performance of trajectory estimation of different systems by using the relative pose error (RPE) and the absolute pose error (APE). Loop closing was disabled for all the systems. Ten indoor sequences were used for evaluation. The results are shown in Tab. 2. We can see that our TextSLAM system performs better than our point-only baseline, demonstrating the benefit of using the high-level text features. Our system also outperforms both ORBSLAM [64] and DSO [63] in most sequences and only performs slightly worse than DSO [63] in a few sequences (Indoor 09, 10). Though the test scene with text labels is highly textured which is ideal for the point-based algorithms, our TextSLAM still performs the best among all the systems, again indicating the benefit from integrating text objects in the SLAM pipeline.\nWe also evaluate the robustness of the proposed method under fast camera motion. Considering that commercial cameras, such as GoPro, are prone to image blur under fast motion, we use GoPro to collect image sequences under rapid motion. The rapid motion causes severe image blur as shown in Fig. 9, making our point-only baseline fail in all the cases. ORB-SLAM works properly because of its well-implemented relocalization mechanism and it simply skips bad keyframes with image blur. By contrast, no relocalization is implemented in our system. Without the relocalization mechanism, DSO also fails in one test but still performs better than our point-only baseline. By contrast, our text-based method works well in those tests. The text objects are successfully tracked as shown in Fig. 
9 and the trajectories are more accurate than ORB-SLAM and DSO as shown in Tab. 3 (in the table, the bar '-' indicates the algorithm fails to finish the whole trajectory). This is largely due to tracking the text object as a whole by directly optimizing well-designed photometric errors." }, { "figure_ref": [ "fig_12", "fig_8", "fig_9", "fig_0", "fig_0" ], "heading": "Evaluation of 3D Text Maps", "publication_ref": [ "b67" ], "table_ref": [], "text": "Our TextSLAM system can directly produce a 3D text map. It would be interesting to evaluate the quality of the 3D text map. Since no other SLAM system generates text maps directly, we implement a baseline method by fitting the text planes to the 3D map points generated from ORB-SLAM and DSO within text regions using three-point RANSAC [68].
To evaluate the quality of 3D text maps, we acquire the ground truth plane equation (n gt , d gt ) by placing optical markers on the text objects as shown in Fig. 16. We use the angular error and the distance error between the estimated text plane and the ground truth for evaluation. The angular error measures the difference between the estimated normal n t of the text plane and the ground truth n gt , namely α = arccos(|n_t^T n_gt| / (∥n_t∥ ∥n_gt∥)). The distance error is measured by the distance between the 3D text points and the ground truth text planes, where the four corners of the text quadrilateral are selected as the 3D text points.
The statistical distributions of angular errors and distance errors over ten sequences are shown in Fig. 11 and Fig. 12, respectively. The visual comparison of one test sequence is presented in Fig. 10. The results show that the 3D text map produced by TextSLAM is better than that of the plane-fitting baselines based on either ORB-SLAM or DSO. This is largely because the parameters of a text object are estimated as a whole in TextSLAM, while in ORB-SLAM or DSO, text points are estimated separately without considering that they lie on the same plane. Therefore, the fitted planes are noisy as shown in Fig. 10." }, { "figure_ref": [ "fig_12", "fig_1", "fig_0", "fig_10", "fig_11", "fig_10", "fig_11", "fig_1" ], "heading": "Evaluation of loop closing", "publication_ref": [ "b68" ], "table_ref": [], "text": "Additional sequences were recorded to evaluate the loop closing performance within two indoor scenes. The first scene is in a small room equipped with a motion capture system that provides the ground truth trajectories all the time. Some printed texts were randomly placed in the room similar to the first experiment as shown in Fig. 16. The second scene spans the whole floor of a building and contains some sparse text signs or labels as shown in Fig. 13. We started and ended recording within the same room with the motion capture system to acquire the ground truth poses in the start and end parts of a trajectory. We follow [69] to compute the positional errors using the partially available ground truth.
Fig. 10. Though RANSAC was adopted, plane fitting on the point clouds from ORB-SLAM still produced noisy results (as shown in the bottom row). DSO performs better because many more points were taken into the computation. By contrast, TextSLAM avoids such problems by tracking a text object as a whole via directly optimizing the photometric errors.
The results are shown in Tab. 4 and Tab. 5, where ORB-SLAM's results are also presented for comparison. We also visualize some results in Fig. 14 and Fig. 15. 
In those tests, ORB-SLAM failed to detect most loops, while TextSLAM can detect all the loops correctly. The reason is that the viewpoint of the current frame changes significantly from that of the loop frame (see Fig. 14 and Fig. 15), making ORB features difficult to match. By contrast, our TextSLAM uses the semantic meaning of those text objects to detect loop frames. The semantic meaning of a text object remains unchanged under viewpoint changes. The results suggest that text objects can be used as reliable landmarks for loop closing even though they are distributed sparsely in the scene.
Fig. 13. The indoor datasets used for loop closing tests were collected within a laboratory environment where some sparse text signs are available. We collected the test sequences in day and night and also turned on and off the lights to make the tests more challenging. " }, { "figure_ref": [ "fig_5", "fig_13", "fig_6", "fig_6" ], "heading": "Outdoor tests", "publication_ref": [ "b69", "b70", "b71" ], "table_ref": [], "text": "In this experiment, we test our TextSLAM system in a commercial plaza during the day and night. Some pictures are shown in Fig. 7. As we can see, the environment is full of text objects with various sizes, fonts, backgrounds, and languages, as well as various challenges including complex occlusions, the reflection of the glass, and moving pedestrians.
The ground truth camera trajectories are required for evaluation. One possible solution is to use an RTK GPS receiver to obtain the camera trajectories with centimeter-level accuracy. However, it is not feasible because we found that satellite signals were occluded by the surrounding buildings. Instead, we use the structure-from-motion technique to obtain the ground truth following the idea of [70], [71]. We collected a full set of image sequences to densely cover the scene and ran COLMAP [72] to obtain the camera pose for each image. After that, the camera poses obtained from COLMAP are treated as the ground truth and used to evaluate the SLAM performance. The 3D map and camera trajectories from COLMAP of the outdoor scene are visualized in Fig. 17. We selected eight sequences among the full set of image sequences for evaluation. To cover the scene more efficiently, we use three cameras with different headings to capture the images in different viewpoints as shown in Fig. 8.
Since COLMAP produces 3D structures with an unknown scale, we need to calibrate the scale by a reference distance. As shown in Fig. 8, we placed two checkerboards in the scene. Their orientation was kept the same such that the distances between corresponding points on the board are identical. The checkerboard corners can be extracted and matched, and their 3D coordinates can be estimated from the known camera poses produced by structure-from-motion. We compared the estimated distance from structure-from-motion and the distance measured by a laser rangefinder to resolve the unknown scale.
To evaluate the accuracy of the ground truth, we chose five reference points whose real-world coordinates were precisely measured by a laser rangefinder. We then placed the camera at those reference points to capture extra images and fed them into the COLMAP pipeline. Those estimated locations were aligned with the real-world coordinates to evaluate the accuracy of the ground truth approximately. We found that the average localization error is about 8.45 cm within the test area of around 5500 m², which is sufficient for our evaluation." 
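As a rough sketch of the ground-truth preparation described above (our own illustration, not the authors' code), the snippet below resolves the unknown COLMAP scale from a reference distance measured with the laser rangefinder and then estimates the average localization error of the reference points after a rigid alignment; the helper names, the placeholder values, and the use of a Kabsch-style alignment are assumptions.

```python
import numpy as np

def scale_from_reference(p_a_sfm, p_b_sfm, measured_dist_m):
    # Metric scale = laser-measured distance / distance in the (scale-free) SfM frame.
    return measured_dist_m / np.linalg.norm(np.asarray(p_a_sfm) - np.asarray(p_b_sfm))

def mean_alignment_error(est_pts, gt_pts):
    # Rigidly align (rotation + translation) the estimated reference points to the
    # surveyed coordinates and return the mean residual as an approximate accuracy.
    est, gt = np.asarray(est_pts, float), np.asarray(gt_pts, float)
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    U, _, Vt = np.linalg.svd((gt - mu_g).T @ (est - mu_e))
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = mu_g - R @ mu_e
    return np.linalg.norm(est @ R.T + t - gt, axis=1).mean()

# Usage sketch with placeholder numbers: two corresponding checkerboard corners in the
# SfM frame with their laser-measured separation, then five reference points.
s = scale_from_reference([0.0, 0.0, 0.0], [1.3, 0.0, 0.0], measured_dist_m=4.2)
rng = np.random.default_rng(0)
gt = rng.random((5, 3))
print(mean_alignment_error(s * gt + 0.05 * rng.standard_normal((5, 3)), gt))
```

Here the scale is assumed to be fixed by the reference distance before the alignment, so the alignment itself only needs a rotation and a translation.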
}, { "figure_ref": [ "fig_15", "fig_15", "fig_16" ], "heading": "Day tests", "publication_ref": [], "table_ref": [], "text": "In this experiment, we evaluate our methods with the image sequences collected during the day. We also present the results of ORB-SLAM and DSO for comparison. The results are shown in Tab. 6. TextSLAM can correctly recognize revisited places and close the loop in all test sequences, achieving the best accuracy among all the methods. ORB-SLAM fails to detect most loops because of large viewpoint changes and performs similar to DSO that has no loop closing function. To be more clear, we visualize estimated trajectories for typical sequences in Fig. 18, as well as the loop image pairs with text objects detected by TextSLAM. As shown in Fig. 18, the viewpoints change significantly between the current keyframe and the loop keyframe, making the BoW-based method (ORB-SLAM) fail to detect those loops. By contrast, the semantic message of a text object is invariant to appearance changes, hence TextSLAM is able to detect the correct loop via using this high-level information. When the viewpoint changes only slightly, e.g. Outdoor 5 in Tab. 6, the well-implemented ORB-SLAM can correctly close the loop and produce results as accurate as ours. Extra results of TextSLAM are presented in Fig. 19, including the reconstructed 3D text map for all scene texts existing in the test scene, as well as their 2D observations. The 3D semantic text map could have potential in multiple applications, including scene understanding, navigation, augmented reality, and human-computer interaction. " }, { "figure_ref": [ "fig_17" ], "heading": "Day-night tests", "publication_ref": [ "b72", "b73", "b64", "b63", "b14", "b14", "b73", "b74", "b75", "b72", "b73", "b74", "b75", "b76", "b72", "b73", "b76", "b72", "b73", "b77", "b77", "b74", "b75", "b76", "b77", "b74", "b75", "b76", "b14", "b14", "b14" ], "table_ref": [], "text": "Illumination change is one challenge that SLAM usually encounters. An extreme case is the day night variation. For example, we already built a map in the day, while we may want to reuse it at night. To evaluate the performance towards this change, we collect night sequences in the same path as collecting those day sequences.\nTo show how texts help matching scene points across day and night, we implemented the localization-only version for both the TextSLAM (TextSLAM Loop) and ORB-SLAM systems. Based on the 3D model generated from the day sequence via each SLAM method, we test the localization performance using the night sequence. We present visual comparisons in Fig. 20. The results show that our method can correctly locate a lot of frames, while ORB-SLAM works for only a few frames, implying the robustness of our method under such large illumination changes.\nWe also compare our method with the state-of-the-art visual localization methods in the day-night tests. Because COLMAP failed to generate the ground truth trajectory of the night sequence, we follow the image retrieval evaluation protocol to use precision and recall for comparison. Additionally, the runtime is measured to show the efficiency of the approaches. Our day-night test contains 987 day keyframes as the database and 145 night keyframes as the queries. 
To obtain the ground truth of the night queries, we manually label each image pair among all 143115 (987×145) pairs by checking if the two images belong to the same location.\nWe compared the following methods from image retrieval, place recognition, and visual localization. Some of them are the top-ranked open source implementations (3rd, 4th, 34th ranked) in the Long-Term Visual Localization benchmark [73].\n• NetVLAD [74], the state-of-the-art deep learningbased image retrieval method.\n• DBoW2 [65], which is widely used in existing visual SLAM systems [64].\n• TextPlace [15], the place recognition method which uses 2D text extractions as the localization cue. Because TextPlace does not open its source code, we reimplement it for comparison, named as TextPlace [15] ReImplement in results.\n• NetVLAD [74]+SuperPoint [75]+SuperGlue [76] (abbr. NetVLAD+SP+SG). 3rd ranked method in [73].\n• NetVLAD [74]+SuperPoint [75]+SuperGlue [76]+Patch2Pix [77] (abbr. NetVLAD+SP+SG+PP). 4th ranked method in [73].\n• NetVLAD [74]+Patch2Pix [77] (abbr. NetVLAD+PP). 34th ranked method in [73].\n• NetVLAD [74]+SIFT [78] (abbr. NetVLAD+SIFT).\nHere, SIFT [78], SuperPoint [75], SuperGlue [76], and Patch2Pix [77] are used to refine the retrieval results from NetVLAD. SIFT [78] is a classic feature detector and descriptor. SuperPoint [75] is a learned interest point detector and descriptor. SuperGlue [76] finds pointwise correspondences using a graph neural network with an attention mechanism. Patch2Pix [77] searches correspondences in a detect-to-refine manner (patch-level to pixel match). The results from TextSLAM localization-version are displayed as The results show that TextSLAM (with and without geometric validation) outperforms the state-of-the-art NetVLAD-based methods on this day-night test, while the latter was particularly trained to address significant illumination changes. Interestingly, another text-based approach (Textplace [15]) also performs better than NetVLAD-based methods. It implies that feature matching can greatly benefit from text semantics in text-rich environments. TextSLAM performs better than Textplace [15] because Textplace [15] uses 2D-2D text matching for image retrieval which is prone to false text detection and recognition. By contrast, TextSLAM matches the 2D text objects directly with the 3D text map where the semantics of 3D text objects are derived from multiple observations and hence are more reliable. The 2D-3D matching strategy also makes TextSLAM highly efficient as shown in Tab. 7 because much fewer text objects in the 3D map are required to be matched." }, { "figure_ref": [], "heading": "Runtime Analysis", "publication_ref": [ "b18", "b76", "b74", "b75" ], "table_ref": [], "text": "We selected four typical sequences from above experiments to test the runtime of TextSLAM. We ran TextSLAM in a single thread on an Intel Core i7-9700K desktop computer with 32-GB RAM. The text extractor [19] using a neural The abbreviations 'PP', 'SP', 'SG' represent Patch2Pix [77], SuperPoint [75],\nand SuperGlue [76], respectively.\nnetwork ran on the NVIDIA GeForce RTX 2080 Ti. The runtimes of major components in the system are present in Tab. 8. We can see that text extraction requires much more time than other front-end components (point extraction and pose estimation). It is therefore the bottleneck of TextSLAM's efficiency. But this problem can be solved if a highly efficient text extractor appears. We also present the running time of our text-based loop closing in Tab. 
9, which includes the average single-threaded runtime for loop detection (Section 4.5.1), relative transformations calculation (Section 4.5.2), and loop correction." }, { "figure_ref": [], "heading": "LIMITATION", "publication_ref": [], "table_ref": [], "text": "Here, we discuss the limitations of our method. Firstly, TextSLAM relies on the text objects in the scene. When no text object exists, TextSLAM will switch into the point-only mode. So no high-level information can be used for improving SLAM in this case. Fortunately, many daily scenes, " }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we fully explore the text objects both geometrically and semantically and propose a novel visual SLAM approach tightly coupled with the semantic text features, where text objects are regarded as local planar patches with rich textual and semantic meaning. We tested our method in various indoor and outdoor environments involving challenges such as fast camera motions, viewpoint changes, and day-night illumination shifts. The results show that with the help of scene texts, our method outperforms the state-ofthe-art methods including SLAM and visual localization in terms of accuracy and robustness, indicating the benefits of integrating semantic text objects into visual SLAM. The 3D text map produced by our system can serve as an important medium to bridge humans and robots. We hope our work could inspire more explorations to the semantic texts in various applications in robotics, navigation, humancomputer interaction, AR and VR, etc." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "The authors would like to thank Shanghai Alpha square for the support of data collection." } ]
We propose a novel visual SLAM method that tightly integrates text objects by treating them as semantic features, fully exploring their geometric and semantic priors. A text object is modeled as a texture-rich planar patch whose semantic meaning is extracted and updated on the fly for better data association. With the full exploration of the locally planar characteristics and semantic meaning of text objects, the SLAM system becomes more accurate and robust even under challenging conditions such as image blurring, large viewpoint changes, and significant illumination variations (day and night). We tested our method in various scenes with ground truth data. The results show that integrating text features leads to a superior SLAM system that can match images across day and night. The reconstructed semantic 3D text map could be useful for navigation and scene understanding in robotic and mixed reality applications. Our project page: https://github.com/SJTU-ViSYS/TextSLAM.
TextSLAM: Visual SLAM with Semantic Planar Text Features
[ { "figure_caption": "Fig. 1 .1Fig. 1. TextSLAM can produce 3D text maps and match text objects correctly despite significant illumination changes. Left column: semantically matched text objects and text-guided point correspondences (green lines) between a night query image and a day image in the day-and-night test. The detected texts are shown in yellow rectangles. Middle column: 3D text maps and camera trajectory in the bird-eye view. The text objects are illustrated in gray boxes and their normal directions are shown in red. Right column: tracked texts in the image (in yellow rectangles) and the zoomed-in view of the 3D text map.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig.3. The semantic information of a text object is updated whenever a new observation comes. The top row shows the text object and the camera trajectory as well as four keyframes. Note that in the fourth keyframe K 3 , the text object is occluded by the tree and is therefore partially observed. The second row shows the observed semantic information extracted from each frame, which consists of the meaning of the text object (represented by a string) and the semantic costs (including the meaning and geometric parts, the smaller the better). The third row demonstrates the semantic information of the text object is updated from the observed one at each frame. The semantic information with the smallest semantic cost is kept when updating.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. An overview of our TextSLAM system.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. The first two columns show the images captured at the same location in both day and night. The text sign with red rectangles is enlarged to show the dramatic appearance changes between day and night. The third column shows the text images after histogram equalization, where the top and the bottom are night and day images. Note that the contrast of the text patches does not change as much as we expect.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Top row: The point correspondences by matching BRIEF descriptors. Bottom row: Result of our text-guided point matching.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Our outdoor datasets were collected in a commercial center, which is full of text signs with different sizes, fonts, and languages. The datasets consist of test sequences collected during both the day and night.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. The outdoor test scene is shown on the left. The data collection device equipped with three RS-D455 cameras is presented on the right.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. TextSLAM is robust to blurry images caused by rapid camera motions. The estimated 3D text map and camera trajectory of TextSLAM are shown on the left. 
By contrast, the point-only method failed to track feature points on severely blurry images as shown on the right.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 11 .11Fig. 11. The statistic distribution of the angular errors. The results of TextSLAM, plane-fitting baselines based on DSO and ORB-SLAM are illustrated in red, blue, and green respectively.", "figure_data": "", "figure_id": "fig_8", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 12 .12Fig. 12. The statistic distribution of the distance errors. The results of TextSLAM, plane-fitting baselines based on DSO and ORB-SLAM are illustrated in red, blue, and green respectively.", "figure_data": "", "figure_id": "fig_9", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Fig. 14 .14Fig. 14. Visualization of TextSLAM results in a small indoor scene. The trajectories of the two methods are shown on the left. The results of TextSLAM, ORB-SLAM, and the ground truth are illustrated in red, green, and gray respectively. The query frame B and the detected loop frame A are shown on the right, where the viewpoint changes significantly.", "figure_data": "", "figure_id": "fig_10", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Fig. 15 .15Fig. 15. Visualization of TextSLAM results in a large indoor scene. Top row: The camera trajectories of TextSLAM, ORB-SLAM, and the ground truth are visualized in red, green, and orange respectively. The ground truth (orange line) is at the start and end parts of each trajectory. Bottom row: The query frame B and the detected loop frame A in TextSLAM are visualized, where the matched text objects are highlighted in yellow boxes. We can see that loops are correctly detected in TextSLAM despite large viewpoint changes.", "figure_data": "", "figure_id": "fig_11", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Fig. 16 .16Fig. 16. The indoor test scene is shown on the left. The data collection device equipped with an RS-D455 camera is presented on the right.", "figure_data": "", "figure_id": "fig_12", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Fig. 1717Fig. 17. The structure-from-motion model that we used as ground truth in the outdoor tests. Three surround-view cameras were used for data collection as illustrated in the enlarged area.", "figure_data": "", "figure_id": "fig_13", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Fig. 17. The structure-from-motion model that we used as ground truth in the outdoor tests. Three surround-view cameras were used for data collection as illustrated in the enlarged area.", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 18 .18Fig. 18. Results of outdoor tests.Top row: The camera trajectories of TextSLAM, ORB-SLAM, DSO, and the ground truth are visualized in red, green, blue, and gray respectively. The loop frames detected by TextSLAM are also visualized (B represents the query frame and A is the detected loop frame). Bottom row: The semantic meanings of those matched text objects between the loop frame and the query frame are presented, where the matched pair are indicated by the same number. Our method allows those strings to be exactly matched (in black) or partially matched (in red). 
Some false matching results are shown in brown, which are excluded from the geometric verification during loop closing.", "figure_data": "", "figure_id": "fig_15", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Fig. 19 .19Fig. 19. Extra results of outdoor tests. First column: The full mapping and localization results are shown. Second column: The text detection results of three numbered locations are visualized. Third to fifth columns: The 3D text map in marked locations are zoomed in for more details.", "figure_data": "", "figure_id": "fig_16", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Fig. 20 .20Fig. 20. Top row: The blue trajectories are estimated by TextSLAM and ORB-SLAM respectively using day sequences, while the magenta trajectories are the localization results by registering the night images to the 3D map built during the day. We shift the night trajectories and connect the loop frames by green lines for better illustration. Bottom row: We also visualized the matched points by TextSLAM in six different places. We can see that text-guided point matching correctly matched most of the text points despite large illumination changes.", "figure_data": "", "figure_id": "fig_17", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "Fig. 21 .21Fig. 21. The image retrieval results of DBoW2, NetVLAD, TextPlace ReImplement and TextSLAM Loop w/o Check, respectively. The correct and wrong results are shown in green and red boxes, respectively.", "figure_data": "", "figure_id": "fig_18", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "Fig. 22 .22Fig. 22. The point correspondence results of TextSLAM Loop, NetVLAD+SP+SG+PP, NetVLAD+SP+SG, NetVLAD+PP, and NetVLAD+SIFT, respectively. We can see that TextSLAM obtains correct point correspondence based on the invariant semantic text meaning. The deep-learningbased methods also work well during day-night change, as shown in the second and third columns. One failure case of the deep-learning-based method is the fourth column, where the similar blue boards (in red boxes) confuse the approach. SIFT also failed because of two similar boards (in red boxes) in different locations.", "figure_data": "", "figure_id": "fig_19", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "Fig. 23 .23Fig. 23. Recall and precision of the top-10 results. We normalize the recalls by dividing their maximum possible value: 10/{average number of ground-truth pairs per query} for better visualization.", "figure_data": "", "figure_id": "fig_20", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "list of existing text-aided navigation approaches", "figure_data": "MethodText objectsText extractionMapTaskSceneTomono et al. 2000", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of indoor tests. 
RPE (1 m) and APE (m)", "figure_data": "Seq.ORB-SLAM APE RPEDSO APE RPEOur point-only APE RPETextSLAM APE RPEIndoor 01 0.068 0.062 0.069 0.066 0.0860.0760.0670.055Indoor 02 0.092 0.075 0.070 0.060 0.0670.0550.0680.055Indoor 03 0.094 0.140 0.083 0.068 0.0900.0600.0760.111Indoor 04 0.078 0.061 0.075 0.062 0.0710.0470.0760.060Indoor 05 0.084 0.071 0.079 0.052 0.0720.0480.0580.037Indoor 06 0.089 0.070 0.074 0.054 0.0840.0590.073 0.050Indoor 07 0.069 0.051 0.069 0.055 0.0450.0310.0320.035Indoor 08 0.081 0.077 0.068 0.055 0.0510.0460.0490.041Indoor 09 0.101 0.070 0.076 0.055--0.0960.059Indoor 10 0.075 0.062 0.071 0.047 0.0940.0650.076 0.053", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results of rapid motion tests. RPE (0.1 m) and APE (m)", "figure_data": "Seq.ORB-SLAM APE RPEDSO APE RPEPoint-only APE RPETextSLAM APE RPERapid 010.0610.122----0.0600.104Rapid 020.0360.0800.0270.041--0.0200.056Rapid 030.0850.1420.1130.083--0.0580.107", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Loop tests in a small indoor scene. RPE (1 m) and APE (m)", "figure_data": "Seq. AIndoorLoop 01 AIndoorLoop 02 AIndoorLoop 03 AIndoorLoop 04 AIndoorLoop 05 AIndoorLoop 06 AIndoorLoop 07ORB-SLAM LOOP APE √ 0.005 0.046 RPE × 0.075 0.077 × 0.076 0.090 √ 0.028 0.035 √ 0.008 0.034 × 0.068 0.069 × 0.083 0.071TextSLAM LOOP APE √ 0.010 0.031 RPE √ 0.018 0.030 √ 0.007 0.042 √ 0.026 0.027 √ 0.017 0.040 √ 0.010 0.027 √ 0.017 0.032AIndoorLoop 08×0.082 0.086", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Loop test in a large indoor scene. RPE (1 m) and APE (m)", "figure_data": "Seq. LIndoorLoop 01 LIndoorLoop 02 LIndoorLoop 03 LIndoorLoop 04 LIndoorLoop 05 LIndoorLoop 06 LIndoorLoop 07 LIndoorLoop 08ORB-SLAM LOOP APE × 0.994 0.770 RPE × 1.669 1.177 × 2.102 0.595 × 0.192 0.202 × 0.251 0.127 × 0.179 0.163 × 0.206 0.328 × 0.291 0.155TextSLAM LOOP APE √ 0.062 0.111 RPE √ 0.057 0.071 √ 0.192 0.374 √ 0.023 0.075 √ 0.047 0.065 √ 0.032 0.041 √ 0.031 0.230 √ 0.031 0.042LIndoorLoop 09×0.377 0.202", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Results in an outdoor commercial center during the day. RPE (10 m)and APE (m)Seq. Outdoor 1 Outdoor 2 Outdoor 3 Outdoor 4 Outdoor 5 Outdoor 6 Outdoor 7 Outdoor 8ORB-SLAM LOOP APE RPE APE RPE LOOP APE RPE DSO TextSLAM × 1.159 0.379 1.175 0.393 √ 0.688 0.389 × 1.340 0.511 1.280 0.457 √ 0.561 0.470 × 1.213 0.347 1.423 0.337 √ 0.807 0.317 √ 0.116 0.108 1.511 0.462 √ 1.624 0.759 √ 0.175 0.094 1.410 0.523 √ 0.219 0.173 × 1.491 0.462 1.450 0.299 √ 0.412 0.238 × 1.279 0.457 1.572 0.369 √ 0.563 0.307 × 1.358 0.529 1.642 0.620 √ 0.446 0.249The tick '√' indicates a success loop closing and '×' indicates no loop has beenfound. The smallest errors are in bold texts.", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Average runtimes of localization methods. 
(s)", "figure_data": "MethodsImage retrieval Point matching Geometric checkNetVLAD [74]0.055 ± 0.001--NetVLAD+PP0.055 ± 0.001 1.427 ± 0.157 7.999 ± 0.508NetVLAD+SP+SG0.055 ± 0.001 1.625 ± 0.034 5.992 ± 0.683NetVLAD+SP+SG+PP0.055 ± 0.001 2.183 ± 0.168 6.332 ± 0.674TextPlace [15] ReImplement 0.358 ± 0.077--TextSLAM Loop0.005 ± 0.002 0.037 ± 0.011 0.083 ± 0.018", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Runtime analysis of the loop of our method.The superscript 1 and 2 indicate the first loop and the second loop of LIndoorLoop 01, respectively.", "figure_data": "Seq.Loop detection Sim3 calculation Loop correctionLIndoorLoop 070.002s0.095s23.761sLIndoorLoop 01 10.009s0.195s33.626sLIndoorLoop 01 20.008s0.127s64.380sOutdoor 10.008s0.360s85.855s", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" } ]
Boying Li; Danping Zou; Yuan Huang; Xinghan Niu; Ling Pei; Wenxian Yu
[ { "authors": "L Heng; D Honegger; G H Lee; L Meier; P Tanskanen; F Fraundorfer; M Pollefeys", "journal": "Journal of Field Robotics", "ref_id": "b0", "title": "Autonomous visual mapping and exploration with a micro aerial vehicle", "year": "2014" }, { "authors": "H Lategahn; A Geiger; B Kitt", "journal": "IEEE", "ref_id": "b1", "title": "Visual slam for autonomous ground vehicles", "year": "2011" }, { "authors": "D Chekhlov; A P Gee; A Calway; W Mayol-Cuevas", "journal": "IEEE Computer Society", "ref_id": "b2", "title": "Ninja on a plane: Automatic discovery of physical planes for augmented reality using visual slam", "year": "2007" }, { "authors": "R Mur-Artal; J M M Montiel; J D Tardos", "journal": "IEEE Transactions on Robotics", "ref_id": "b3", "title": "Orb-slam: a versatile and accurate monocular slam system", "year": "2015" }, { "authors": "A J Davison", "journal": "IEEE", "ref_id": "b4", "title": "Real-time simultaneous localisation and mapping with a single camera", "year": "2003" }, { "authors": "C Forster; M Pizzoli; D Scaramuzza", "journal": "IEEE", "ref_id": "b5", "title": "Svo: Fast semi-direct monocular visual odometry", "year": "2014" }, { "authors": "J Engel; V Koltun; D Cremers", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b6", "title": "Direct sparse odometry", "year": "2018" }, { "authors": "H Zhou; D Zou; L Pei; R Ying; P Liu; W Yu", "journal": "IEEE Transactions on Vehicular Technology", "ref_id": "b7", "title": "Structslam: Visual slam with building structure lines", "year": "2015" }, { "authors": "A J Trevor; J G Rogers; H I Christensen", "journal": "IEEE", "ref_id": "b8", "title": "Planar surface slam with 3d and 2d sensors", "year": "2012" }, { "authors": "K.-N Lianos; J L Schonberger; M Pollefeys; T Sattler", "journal": "", "ref_id": "b9", "title": "Vso: Visual semantic odometry", "year": "2018" }, { "authors": "J Mccormac; A Handa; A Davison; S Leutenegger", "journal": "IEEE", "ref_id": "b10", "title": "Semanticfusion: Dense 3d semantic mapping with convolutional neural networks", "year": "2017" }, { "authors": "A Ranganathan; D Ilstrup; T Wu", "journal": "IEEE", "ref_id": "b11", "title": "Light-weight localization for vehicles using road markings", "year": "2013" }, { "authors": "N Radwan; G D Tipaldi; L Spinello; W Burgard", "journal": "IEEE", "ref_id": "b12", "title": "Do you see the bakery? 
leveraging geo-referenced texts for global localization in public maps", "year": "2016" }, { "authors": "B Li; D Zou; D Sartori; L Pei; W Yu", "journal": "IEEE", "ref_id": "b13", "title": "Textslam: Visual slam with planar text features", "year": "2020" }, { "authors": "Z Hong; Y Petillot; D Lane; Y Miao; S Wang", "journal": "", "ref_id": "b14", "title": "Textplace: Visual place recognition and topological localization through reading scene texts", "year": "2019" }, { "authors": "X Rong; B Li; J P Mu Ñoz; J Xiao; A Arditi; Y Tian", "journal": "Springer", "ref_id": "b15", "title": "Guided text spotting for assistive blind navigation in unfamiliar indoor environments", "year": "2016" }, { "authors": "B Li; J P Munoz; X Rong; Q Chen; J Xiao; Y Tian; A Arditi; M Yousuf", "journal": "IEEE Transactions on Mobile Computing", "ref_id": "b16", "title": "Vision-based mobile indoor assistive navigation aid for blind people", "year": "2019" }, { "authors": "H.-C Wang; C Finn; L Paull; M Kaess; R Rosenholtz; S Teller; J Leonard", "journal": "IEEE", "ref_id": "b17", "title": "Bridging text spotting and slam with junction features", "year": "2015" }, { "authors": "J Zhang; W Wang; D Huang; Q Liu; Y Wang", "journal": "", "ref_id": "b18", "title": "A feasible framework for arbitrary-shaped scene text recognition", "year": "2019" }, { "authors": "X Zhou; C Yao; H Wen; Y Wang; S Zhou; W He; J Liang", "journal": "IEEE", "ref_id": "b19", "title": "East: an efficient and accurate scene text detector", "year": "2017" }, { "authors": "P Lyu; M Liao; C Yao; W Wu; X Bai", "journal": "", "ref_id": "b20", "title": "Mask textspotter: An end-to-end trainable neural network for spotting text with arbitrary shapes", "year": "2018" }, { "authors": "H Wang; P Lu; H Zhang; M Yang; X Bai; Y Xu; M He; Y Wang; W Liu", "journal": "", "ref_id": "b21", "title": "All you need is boundary: Toward arbitrary-shaped text spotting", "year": "2020" }, { "authors": "M Liao; Z Wan; C Yao; K Chen; X Bai", "journal": "", "ref_id": "b22", "title": "Real-time scene text detection with differentiable binarization", "year": "2020" }, { "authors": "M He; M Liao; Z Yang; H Zhong; J Tang; W Cheng; C Yao; Y Wang; X Bai", "journal": "IEEE", "ref_id": "b23", "title": "Most: A multi-oriented scene text detector with localization refinement", "year": "2021" }, { "authors": "Y Zhu; C Yao; X Bai", "journal": "Frontiers of Computer Science", "ref_id": "b24", "title": "Scene text detection and recognition: Recent advances and future trends", "year": "2016" }, { "authors": "X.-C Yin; Z.-Y Zuo; S Tian; C.-L Liu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b25", "title": "Text detection, tracking and recognition in video: a comprehensive survey", "year": "2016" }, { "authors": "A Veit; T Matera; L Neumann; J Matas; S Belongie", "journal": "", "ref_id": "b26", "title": "Cocotext: Dataset and benchmark for text detection and recognition in natural images", "year": "2016" }, { "authors": "M Iwamura; T Matsuda; N Morimoto; H Sato; Y Ikeda; K Kise", "journal": "Springer", "ref_id": "b27", "title": "Downtown osaka scene text dataset", "year": "2016" }, { "authors": "C K Chng; Y Liu; Y Sun; C C Ng; C Luo; Z Ni; C Fang; S Zhang; J Han; E Ding", "journal": "IEEE", "ref_id": "b28", "title": "Icdar2019 robust reading challenge on arbitrary-shaped text-rrc-art", "year": "2019" }, { "authors": "M Tomono; S Yuta", "journal": "IEEE", "ref_id": "b29", "title": "Mobile robot navigation in indoor environments using object and character recognition", "year": 
"2000" }, { "authors": "M Mata; J M Armingol; A De La Escalera; M A Salichs", "journal": "IEEE", "ref_id": "b30", "title": "A visual landmark recognition system for topological navigation of mobile robots", "year": "2001" }, { "authors": "C Case; B Suresh; A Coates; A Y Ng", "journal": "IEEE", "ref_id": "b31", "title": "Autonomous sign reading for semantic mapping", "year": "2011" }, { "authors": "S Wang; S Fidler; R Urtasun", "journal": "", "ref_id": "b32", "title": "Lost shopping! monocular localization in large indoor spaces", "year": "2015" }, { "authors": "A P Gee; D Chekhlov; W W Mayol-Cuevas; A Calway", "journal": "", "ref_id": "b33", "title": "Discovering planes and collapsing the state space in visual slam", "year": "2007" }, { "authors": "A P Gee; D Chekhlov; A Calway; W Mayol-Cuevas", "journal": "IEEE Transactions on Robotics", "ref_id": "b34", "title": "Discovering higher level structure in visual slam", "year": "2008" }, { "authors": "M Y Yang; W F Örstner", "journal": "", "ref_id": "b35", "title": "Plane detection in point cloud data", "year": "2010" }, { "authors": "A J Davison; J Ortiz", "journal": "", "ref_id": "b36", "title": "Futuremapping 2: Gaussian belief propagation for spatial ai", "year": "2019" }, { "authors": "J Sturm; N Engelhard; F Endres; W Burgard; D Cremers", "journal": "IEEE", "ref_id": "b37", "title": "A benchmark for the evaluation of rgb-d slam systems", "year": "2012" }, { "authors": "L Ma; C Kerl; J St; D Cremers", "journal": "IEEE", "ref_id": "b38", "title": "Cpa-slam: Consistent plane-model alignment for direct rgb-d slam", "year": "2016" }, { "authors": "P Kim; B Coltin; H ; Jin Kim", "journal": "", "ref_id": "b39", "title": "Linear rgb-d slam for planar environments", "year": "2018" }, { "authors": "N Molton; A J Davison; I D Reid", "journal": "Citeseer", "ref_id": "b40", "title": "Locally planar patch features for real-time structure from motion", "year": "2004" }, { "authors": "M Sualeh; G.-W Kim", "journal": "International Journal of Control, Automation and Systems", "ref_id": "b41", "title": "Simultaneous localization and mapping in the epoch of semantics: a survey", "year": "2019" }, { "authors": "A J Davison", "journal": "", "ref_id": "b42", "title": "Futuremapping: The computational structure of spatial ai systems", "year": "2018" }, { "authors": "A Rosinol; M Abate; Y Chang; L Carlone", "journal": "IEEE", "ref_id": "b43", "title": "Kimera: an opensource library for real-time metric-semantic localization and mapping", "year": "2020" }, { "authors": "Y Chang; Y Tian; J P How; L Carlone", "journal": "IEEE", "ref_id": "b44", "title": "Kimera-multi: a system for distributed multi-robot metric-semantic simultaneous localization and mapping", "year": "2021" }, { "authors": "M Runz; M Buffier; L Agapito", "journal": "IEEE", "ref_id": "b45", "title": "Maskfusion: Real-time recognition, tracking and reconstruction of multiple moving objects", "year": "2018" }, { "authors": "M Grinvald; F Furrer; T Novkovic; J J Chung; C Cadena; R Siegwart; J Nieto", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b46", "title": "Volumetric instance-aware semantic mapping and 3d object discovery", "year": "2019" }, { "authors": "S Zhi; M Bloesch; S Leutenegger; A J Davison", "journal": "", "ref_id": "b47", "title": "Scenecode: Monocular dense semantic reconstruction using learned encoded scene representations", "year": "2019" }, { "authors": "D Gálvez-L Ópez; M Salas; J D Tard; J Montiel", "journal": "Robotics and Autonomous Systems", "ref_id": "b48", 
"title": "Real-time monocular object slam", "year": "2016" }, { "authors": "R F Salas-Moreno; R A Newcombe; H Strasdat; P H Kelly; A J Davison", "journal": "", "ref_id": "b49", "title": "Slam++: Simultaneous localisation and mapping at the level of objects", "year": "2013" }, { "authors": "J Mccormac; R Clark; M Bloesch; A Davison; S Leutenegger", "journal": "IEEE", "ref_id": "b50", "title": "Fusion++: Volumetric object-level slam", "year": "2018" }, { "authors": "T Laidlow; A J Davison", "journal": "", "ref_id": "b51", "title": "Simultaneous localisation and mapping with quadric surfaces", "year": "2022" }, { "authors": "K Mazur; E Sucar; A J Davison", "journal": "", "ref_id": "b52", "title": "Feature-realistic neural fusion for real-time, open set scene understanding", "year": "2022" }, { "authors": "B Xu; A J Davison; S Leutenegger", "journal": "", "ref_id": "b53", "title": "Learning to complete object shapes for object-level mapping in dynamic scenes", "year": "2022" }, { "authors": "S Yang; S Scherer", "journal": "IEEE Transactions on Robotics", "ref_id": "b54", "title": "Cubeslam: Monocular 3-d object slam", "year": "2019" }, { "authors": "S Yang; S Schere", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b55", "title": "Monocular object and plane slam in structured environments", "year": "2019" }, { "authors": "J Dong; X Fei; S Soatto", "journal": "", "ref_id": "b56", "title": "Visual-inertial-semantic scene representation for 3d object detection", "year": "2017" }, { "authors": "L Nicholson; M Milford; N S Ünderhauf", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b57", "title": "Quadricslam: Dual quadrics from object detections as landmarks in object-oriented slam", "year": "2018" }, { "authors": "D Létourneau; F Michaud; J.-M Valin", "journal": "EURASIP Journal on Advances in Signal Processing", "ref_id": "b58", "title": "Autonomous mobile robot that can read", "year": "2004" }, { "authors": "X Liu; J Samarabandu", "journal": "IEEE", "ref_id": "b59", "title": "An edge-based text region extraction algorithm for indoor mobile robot navigation", "year": "2005" }, { "authors": "E Rublee; V Rabaud; K Konolige; G Bradski", "journal": "IEEE", "ref_id": "b60", "title": "Orb: An efficient alternative to sift or surf", "year": "2011" }, { "authors": "J Shi; C Tomasi", "journal": "", "ref_id": "b61", "title": "Good features to track", "year": "1994" }, { "authors": "J Engel; V Koltun; D Cremers", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b62", "title": "Direct sparse odometry", "year": "2017" }, { "authors": "R Mur-Artal; J D Tard Ós", "journal": "IEEE Transactions on Robotics", "ref_id": "b63", "title": "Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras", "year": "2017" }, { "authors": "D Gálvez-L Ópez; J D Tardos", "journal": "IEEE Transactions on Robotics", "ref_id": "b64", "title": "Bags of binary words for fast place recognition in image sequences", "year": "2012" }, { "authors": "V I Levenshtein", "journal": "Soviet physics doklady", "ref_id": "b65", "title": "Binary codes capable of correcting deletions, insertions, and reversals", "year": "1966" }, { "authors": "M Calonder; V Lepetit; C Strecha; P Fua", "journal": "Springer", "ref_id": "b66", "title": "Brief: Binary robust independent elementary features", "year": "2010" }, { "authors": "M A Fischler; R C Bolles", "journal": "Communications of the ACM", "ref_id": "b67", "title": "Random sample consensus: a paradigm for model 
fitting with applications to image analysis and automated cartography", "year": "1981" }, { "authors": "D Zou; Y Wu; L Pei; H Ling; W Yu", "journal": "IEEE Transactions on Robotics", "ref_id": "b68", "title": "Structvio: Visualinertial odometry with structural regularity of man-made environments", "year": "2019-08" }, { "authors": "T Sattler; A Torii; J Sivic; M Pollefeys; H Taira; M Okutomi; T Pajdla", "journal": "", "ref_id": "b69", "title": "Are large-scale 3d models really necessary for accurate visual localization?", "year": "2017-07" }, { "authors": "T Sattler; W Maddern; C Toft; A Torii; L Hammarstrand; E Stenborg; D Safari; M Okutomi; M Pollefeys; J Sivic; F Kahl; T Pajdla", "journal": "", "ref_id": "b70", "title": "Benchmarking 6dof outdoor visual localization in changing conditions", "year": "2018-06" }, { "authors": "J L Schonberger; J.-M Frahm", "journal": "", "ref_id": "b71", "title": "Structure-from-motion revisited", "year": "2016" }, { "authors": "C V R G Chalmers", "journal": "", "ref_id": "b72", "title": "Longterm visual localization benchmark", "year": "2019" }, { "authors": "R Arandjelovic; P Gronat; A Torii; T Pajdla; J Sivic", "journal": "", "ref_id": "b73", "title": "Netvlad: Cnn architecture for weakly supervised place recognition", "year": "2016-06" }, { "authors": "D Detone; T Malisiewicz; A Rabinovich", "journal": "", "ref_id": "b74", "title": "Superpoint: Selfsupervised interest point detection and description", "year": "2018" }, { "authors": "P.-E Sarlin; D Detone; T Malisiewicz; A Rabinovich", "journal": "", "ref_id": "b75", "title": "Superglue: Learning feature matching with graph neural networks", "year": "2020" }, { "authors": "Q Zhou; T Sattler; L Leal-Taixe", "journal": "", "ref_id": "b76", "title": "Patch2pix: Epipolar-guided pixel-level correspondences", "year": "2021" }, { "authors": "P C Ng; S Henikoff", "journal": "Nucleic acids research", "ref_id": "b77", "title": "Sift: Predicting amino acid changes that affect protein function", "year": "2003" } ]
[ { "formula_coordinates": [ 3, 382.75, 520.48, 181.25, 11.72 ], "formula_id": "formula_0", "formula_text": "θ = (θ 1 , θ 2 , θ 3 ) T = -n/d.(1)" }, { "formula_coordinates": [ 3, 373, 667.33, 191, 11.56 ], "formula_id": "formula_1", "formula_text": "ρ = 1/z = -n T /d m = θ T m.(2)" }, { "formula_coordinates": [ 4, 120.83, 73.49, 179.17, 41.41 ], "formula_id": "formula_2", "formula_text": "   mT 1 . . . mT n    θ =    ρ 1 . . . ρ n    , n ≥ 3.(3)" }, { "formula_coordinates": [ 4, 123.43, 368.37, 176.57, 11.56 ], "formula_id": "formula_3", "formula_text": "p = m/ρ = m/(θ T m).(4)" }, { "formula_coordinates": [ 4, 146.14, 404.13, 153.86, 11.56 ], "formula_id": "formula_4", "formula_text": "p ′ = Rp + t.(5)" }, { "formula_coordinates": [ 4, 123.45, 451.14, 176.55, 22.15 ], "formula_id": "formula_5", "formula_text": "m′ ∼ ρ p ′ = R m + ρt ⇒ m′ ∼ R m + t θ T m,(6)" }, { "formula_coordinates": [ 4, 152.46, 533.65, 147.54, 9.62 ], "formula_id": "formula_6", "formula_text": "m′ ∼ H m,(7)" }, { "formula_coordinates": [ 4, 125.19, 590.55, 174.81, 11.72 ], "formula_id": "formula_7", "formula_text": "m ′ = h(m, T h , T t , θ),(8)" }, { "formula_coordinates": [ 4, 384.3, 158.11, 179.7, 12.17 ], "formula_id": "formula_8", "formula_text": "Ĩ(m) = (I(m) -ĪΩ )/σ Ω ,(9)" }, { "formula_coordinates": [ 4, 360.43, 229.91, 203.58, 22.13 ], "formula_id": "formula_9", "formula_text": "ZN CC(I h , I t ) = m∈Ω Ĩh (m) Ĩt (m ′ ).(10)" }, { "formula_coordinates": [ 4, 361.23, 321.56, 198.82, 22.13 ], "formula_id": "formula_10", "formula_text": "E(I h , I t ) = m∈Ω ( Ĩh (m) -Ĩt (m ′ )) 2 . (11" }, { "formula_coordinates": [ 4, 560.04, 324.43, 3.96, 9.14 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 4, 328.47, 401.68, 235.53, 22.13 ], "formula_id": "formula_12", "formula_text": "m∈Ω ( Ĩh (m) 2 + Ĩt (m ′ ) 2 ) -2 m∈Ω Ĩh (m) Ĩt (m ′ ),(12)" }, { "formula_coordinates": [ 4, 317.84, 504.04, 246.16, 22.36 ], "formula_id": "formula_13", "formula_text": "E π,t photo = m∈Ω π ϕ(( Ĩh (m) -Ĩt (h(m, T h , T t , θ π ))) 2 ),(13)" }, { "formula_coordinates": [ 4, 404.91, 694.24, 159.09, 11.56 ], "formula_id": "formula_14", "formula_text": "X = {s, g sem } ,(14)" }, { "formula_coordinates": [ 5, 125.63, 448.28, 170.41, 11.56 ], "formula_id": "formula_15", "formula_text": "ĝsem = λg mean + g geo . (15" }, { "formula_coordinates": [ 5, 296.04, 450.7, 3.96, 9.14 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 5, 380.42, 331.39, 183.58, 21.03 ], "formula_id": "formula_17", "formula_text": "X k ← arg min X k-1 , X (g sem k-1 , ĝsem ).(16)" }, { "formula_coordinates": [ 6, 111.88, 131.05, 188.12, 10.76 ], "formula_id": "formula_18", "formula_text": "[ m′ i ] × t mT i θ = -[ m′ i ] × R mi ,(17)" }, { "formula_coordinates": [ 6, 96.55, 582.36, 203.45, 9.65 ], "formula_id": "formula_19", "formula_text": "E(T t ) = E point (T t ) + λ w E text (T t ),(18)" }, { "formula_coordinates": [ 6, 86.32, 638.51, 213.68, 21.54 ], "formula_id": "formula_20", "formula_text": "E point (T t ) = i ϕ(∥m i -P(T t , X i )∥ 2 ),(19)" }, { "formula_coordinates": [ 6, 133.45, 726.17, 162.59, 22.82 ], "formula_id": "formula_21", "formula_text": "E text = j E πj ,t photo . 
(20" }, { "formula_coordinates": [ 6, 296.04, 729.87, 3.96, 9.14 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 7, 103.91, 152.28, 196.09, 9.65 ], "formula_id": "formula_23", "formula_text": "E(x) = E point (x) + λ w E text (x).(21)" }, { "formula_coordinates": [ 7, 69.03, 660.29, 227.02, 23.26 ], "formula_id": "formula_24", "formula_text": "s(s i , s j ) = max(|s i |, |s j |) -d(s i , s j ) max(|s i |, |s j |) ∈ (0, 1], (22" }, { "formula_coordinates": [ 7, 296.04, 667.41, 3.96, 9.14 ], "formula_id": "formula_25", "formula_text": ")" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction 1.A General Overview", "publication_ref": [ "b37", "b69", "b32", "b72", "b26", "b31", "b100", "b7", "b27", "b31" ], "table_ref": [], "text": "One of the mantras that is repeated in every statistical course is that correlation does not imply causation. This is also observed in several disciplines, such as economics [38], biology [68], computer science [33,71] and philosophy [27]. Following [32], the main goal of a research study is often to assess the effect, if any, of an action on some outcome and not measuring a mere correlation. For example, this is true when it comes to decision making, since deciding which intervention must be taken is not straightforward and must be addressed properly to avoid any potential side effects. In order to identify and quantify a causal effect, the set of tools provided by causal discovery must be used accordingly. Here, the final goal is to decompose the total effect of an action into causal and non-causal effect, removing the bias that is introduced during the estimation process.\nCausal inference itself relies heavily on a formal description on the interactions between the observed variables, i.e. a casual graph. Such graphical representation is naïve in its concept, yet so effective when it comes to explainability. Following [99], it boils down to connect a cause to an effect (outcome) by drawing arrows from the former to the latter, to obtain a qualitative description of the system under study. This is in stark contrast with black-box techniques, where predictions about an outcome are made with a pure data-driven approach. Indeed, these methods fall short both in terms of explainability and decision making, as stated in [8,28,32]. Therefore, when causality is empowered through the instrument of graphical models, it is possible to overcome the current limitations of machine learning and deep learning tools, enabling the researcher to reach a higher level of understanding.\nWhen the causal graph is unknown, one may recover the cause-effect pairs by combining available data together with prior knowledge, whenever possible. The process of learning graphical structures with a causal interpretation is known as causal discovery. Recently, causal discovery has gained significant traction, especially when experimental data are available. However, this growth fragmented the landscape into multiple fields that differ for assumptions, problems and solutions, while aiming to the same goal. For this reason, this work summarizes the current status of causal discovery from both a theoretical and practical point of view, unifying shared concepts and addressing differences in the algorithms made available by the specialized scientific literature.\nThis survey is structured as follows. In Section 1, the reader is provided a general introduction to the causal discovery problem, along with an overview of previous works on the same topic. Section 2 is devoted to provide concepts, definitions and problems that are common across different approaches presented in the following pages. Section 3 explores the first set of algorithms in the observational setting, while Section 4 relaxes the acyclicity assumption. In Section 5, the scope is extended to cover the experimental scenario, where multiple interactions with the system of interest are taken into account. Section 6 and 7 report respectively on evaluation techniques and on practical applications of the discussed methodologies. 
Finally, Section 8 draws conclusions about the current landscape of causal discovery." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b68", "b26", "b64", "b67", "b64", "b68", "b67", "b68", "b28", "b64", "b26", "b67", "b87", "b68" ], "table_ref": [], "text": "To the best of our knowledge, six different surveys on causal discovery were published from 2019 to 2022. In particular, [67] acted as a meta-survey by checking the contents covered by the others against five topics, namely: theory, data, software, metrics and examples. A modified version of this checklist can be found in Table 1, which was adapted for a direct comparison with the structure of our survey.
While every contribution provided adequate background knowledge and theoretical definitions covering the fundamental aspects of causal discovery, only a few of them [27,63,66] reported evaluation data sets or metrics, and just two of them listed both [63,67].
Table 1: Comparison of recent surveys on causal discovery in terms of covered contents.
The landscape is even more fragmented when
}, { "figure_ref": [], "heading": "Notation", "publication_ref": [], "table_ref": [], "text": "We denote mathematical objects with capital letters, such as random variable X, and collections of objects with capital boldface letters, such as set X. Definition 2.1 (Graph). A graph G = (V, E) is a mathematical object represented by a tuple of two sets: a finite set of vertices V and a finite set of edges E ⊆ V × V. Definition 2.2 (Directed Graph). A directed graph (DG) G is a graph where the edge (X, Y ) is distinct from the edge (Y, X).\nIn particular, a directed edge (X, Y) is graphically represented by an arrow as X → Y , and induces a set of relationships between the vertices of the graph G. Given a vertex X, we denote its parents, i.e., the set of vertices that have an arrow into X, by P a(X), while we denote its children, i.e., the set of vertices that have an arrow out of X, by Ch(X). Recursively, any parent and parent of a parent (child and child of a child) of X is an ancestor An(X) (descendant De(X)) of X.\nThe vertices connected to X are said to be adjacent to X and denoted as Adj(X), while the vertices connected with an undirected edge to X are the neighbors N e(X). These two sets of vertices are identical in undirected graphs, but may be different in graphs with other mixed orientations.\nDefinition 2.3 (Path). A path π = (X -• • • -Y ) is a tuple of non repeating vertices, where each vertex is connected to the next in the sequence with an edge. Definition 2.4 (Directed Path). A directed path π = (X → • • • → Y ) is a tuple of non repeating vertices, where each vertex is connected to the next in the sequence with a directed edge." }, { "figure_ref": [], "heading": "Definition 2.5 (Cycle).", "publication_ref": [], "table_ref": [], "text": "A cycle is a path that starts and ends at the same vertex.\nDefinition 2.6 (Directed Acyclic Graph). A directed acyclic graph (DAG) is a directed graph G that has no cycles." }, { "figure_ref": [], "heading": "Causal Model", "publication_ref": [ "b7" ], "table_ref": [], "text": "Definition 2.7 (Causal Graph). A causal graph G [8] is a graphical description of a system in terms of cause-effect relationships, i.e. the causal mechanism." }, { "figure_ref": [], "heading": "Definition 2.8 (Direct and Indirect Cause", "publication_ref": [ "b27", "b7", "b57" ], "table_ref": [], "text": "). For each directed edge (X, Y ) ∈ E, X is a direct cause of Y and Y is a direct effect of X. Recursively, every cause of X that is not a direct cause of Y , is an indirect cause of Y .\nThis definition is formally enforced by the causal edge assumption [28], where: Definition 2.9 (Causal Edge Assumption). The value assigned to each variable X is completely determined by the function f given its parents:\nX i := f (P a(X i )) ∀X i ∈ V. (2.1)\nAs natural consequence of such definitions, we can define models that entail both the structural representation and the set of functions that regulate the underlying causal mechanism. Definition 2.10 (Structural Causal Model). A structural causal model (SCM) [8,56] is defined by the tuple M = (V, U, F, P ), where:\n• V is a set of endogenous variables, i.e. observable variables,\n• U is a set of exogenous variables, i.e. unobservable variables, where\nV ∩ U = ∅,\n• F is a set of functions, where each function f i ∈ F is defined as f i : (V ∪ U) p → V, with p the ariety of f i , so that f i determines completely the value of V i ,\n• P is a joint probability distribution over the exogenous variables P (U) = i P (U i ). 
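To make Definition 2.10 operational, the following minimal Python sketch (an illustration written around the toy model reported in Figure 2.1 below, not code from any specific package) samples from an SCM: the exogenous terms are drawn from P and the functional set F deterministically assigns each endogenous variable given its parents, with U_XY acting as a shared unobservable term.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n):
    """Draw n joint samples from the toy SCM of Figure 2.1."""
    # P: independent noise distributions over the exogenous variables U
    u_xy = rng.normal(0.0, 1.0, size=n)
    u_z = rng.normal(0.0, 1.0, size=n)
    # F: each endogenous variable is fully determined by its parents
    # (causal edge assumption, Definition 2.9)
    x = 2.0 * u_xy            # f_X: X := 2 U_XY
    y = x + u_xy              # f_Y: Y := X + U_XY
    z = 3.0 * y + u_z         # f_Z: Z := 3 Y + U_Z
    # only the endogenous variables V = {X, Y, Z} are observed
    return np.column_stack([x, y, z])

data = sample_scm(1_000)
```

A data set generated in this way is exactly the kind of observational input assumed by the discovery algorithms discussed in the following sections.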
Structural Causal Models are also known as Structural Equation Models (SEMs).

The joint exogenous distribution P is responsible for the non-deterministic nature of the model, adding a layer of uncertainty through a set of independent noise distributions. The unobserved terms U are represented in Figure 2.1 as dashed vertices with dashed edges.

Figure 2.1: (a) a causal graph over V = {X, Y, Z} with exogenous terms U = {U_XY, U_Z}; (b) an associated SCM M = (V, U, F, P), where the functional set F = {f_X : X := 2U_XY, f_Y : Y := X + U_XY, f_Z : Z := 3Y + U_Z} follows the causal edge assumption and P is given by U_XY ∼ N(0, 1), U_Z ∼ N(0, 1). In (a), X is a direct cause of Y and an indirect cause of Z, while Y is a (direct) effect of X." }, { "figure_ref": [], "heading": "The Causal Discovery Problem", "publication_ref": [ "b101" ], "table_ref": [], "text": "The causal discovery problem [100] consists in selecting a causal graph as a possible explanation for a given data set.

Formally, let G be the set of graphs defined over the variables V of a data set D and G* ∈ G be the true but unknown graph from which D has been generated." }, { "figure_ref": [], "heading": "Definition 2.11 (Causal Discovery Problem)", "publication_ref": [ "b100", "b26", "b100", "b27", "b11", "b84", "b92", "b95" ], "table_ref": [], "text": "The causal discovery problem [99] consists in recovering the true graph G* from the given data set D.

A causal discovery algorithm is said to solve the causal discovery problem if and only if it converges to the true graph G* in the limit of the sample size. Definition 2.12 (Soundness and Completeness). A causal discovery algorithm is sound if it is able to solve the causal discovery problem, while it is complete if it outputs the most informative causal graph G that can be recovered from the input data set D, without making further assumptions. Definition 2.13 (Consistency of a Causal Graph). A causal discovery algorithm is consistent [27,99] if it outputs a graph G that induces a probability distribution consistent with the input data set D. Definition 2.14 (Identifiability of a Causal Graph). A causal discovery algorithm is said to identify [28] a graph G if it is able to determine the direction of any edge in G.

In the following pages we will see that some algorithms are able to identify the causal graph only up to its equivalence class, meaning that setting the direction of any of the remaining undirected edges would not induce a different probability distribution, i.e. it is not possible to choose a specific direction for such an edge without further assumptions.

Moreover, some of these methods are able to exploit only observational distributions, i.e. probability distributions that are induced by observational data sets, while others are capable of taking advantage of interventional distributions, i.e. probability distributions that are generated by experimental data, where we intervene on the system of interest.

Finally, even though the general formulation of the discovery problem is focused on the causal graph only, causal discovery algorithms are usually designed to find a solution w.r.t. a specific set of functions [12,83,91,94], e.g. non-linear equations."
}, { "figure_ref": [], "heading": "Acyclicity and Faithfulness", "publication_ref": [ "b10", "b61", "b27" ], "table_ref": [], "text": "A graphical model is said to satisfy the Markov property if the associated joint probability distribution P (V) can be decomposed recursively as:\nP (V) = Xi∈V P (X i |P a(X i )) (2.2)\nThe probability factorization expressed in Equation 2.2 relies on the assumption that the relationships encoded by the graph match exactly the underlying conditional probability independencies:\nX ⊥ ⊥ P Y | Z =⇒ X ⊥ ⊥ G Y | Z (2.3)\nEssentially, it is assumed that probability independence (⊥ ⊥ P ) implies graphical independence (⊥ ⊥ G ), as stated in Equation 2. 3. This assumption is known as d-faithfulness or \"directed faithfulness\". In fact, the graphical model is required to rely on a DAG in order to satisfy the Markov property. More recently, extensions of the faithfulness assumption to the cyclic setting have been taken into consideration, e.g. σ-faithfulness [11,60], enabling the discovery of general non-acyclic DGs.\nIn order to test whether a variable X is conditionally independent from Y given a set Z in any probability distribution P faithful to G, one can use the d-separation criterion which is based on the concept of blocked path.\nIn particular, when Z blocks every path between X and Y , we say that X and Y are d-separated by Z. A path π is blocked depending on the presence of specific graphical patterns in it, as given in the following two definitions. Definition 2.15 (Fork, Chain & Collider). Let G be a DG and π be a path on G. Then, given three vertices X, Y and Z in π, we have the following:\n• X ← Y → Z is a fork on π, • X → Y → Z is a chain on π, and • X → Y ← Z is a collider on π.\nDefinition 2.16 (d-separation). Let G be a DG, π be a path on G and Z a subset of V. The path π is blocked [28] by Z if and only if π contains:\n• a fork A ← B → C or a chain A → B → C such that the middle vertex B is in Z, or • a collider A → B ← C such that middle vertex B, or any descendant of it, is not in Z.\nThe set Z d-separates X from Y if it blocks every path between X and Y1 . " }, { "figure_ref": [], "heading": "Equivalence Classes", "publication_ref": [ "b100", "b113", "b113", "b61", "b62", "b114", "b100", "b4", "b59" ], "table_ref": [], "text": "In the previous paragraphs we introduced the concept of causal graph as natural consequence of the causal edge assumption, where the functional set F is mapped to a directed graph G.\nThe naïve representation of a DAG does not allow to convey the (lack of) knowledge that typically arise during a discovery procedure. Here, we define formally other graphical representations, along with their interpretations. Definition 2.17 (Partially DAG). The graph G is a partially-directed acyclic graph (PDAG) if it can contain both undirected (-) and directed (→) edges. This alternative representation allows to distinguish a cause-effect pair (X → Y ) from a yet unknown relationship (X -Y ), where there is still uncertainty about the direction of the edge. PDAGs are also called patterns [99]. Definition 2.18 (Skeleton). Let G be a PDAG. The skeleton of G is the undirected graph resulting from changing any directed edge of G to undirected. Definition 2.19 (V-structure). Let G be a PDAG. A v-structure in G is a triple X → Y ← Z where X and Z are not adjacent. 
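As a small illustration of Definitions 2.18 and 2.19, the sketch below (hypothetical helper functions, assuming a DAG encoded as a mapping from each vertex to its parent set) extracts the skeleton and the v-structures of a DAG; as discussed next, these two features are exactly what characterizes observational Markov equivalence.

```python
from itertools import combinations

def skeleton(parents):
    """Undirected edges of a DAG given as {vertex: set of parents}."""
    return {frozenset((x, y)) for y, pa in parents.items() for x in pa}

def v_structures(parents):
    """Triples (x, y, z) such that x -> y <- z with x and z not adjacent."""
    skel = skeleton(parents)
    found = set()
    for y, pa in parents.items():
        for x, z in combinations(sorted(pa), 2):
            if frozenset((x, z)) not in skel:   # the collider is unshielded
                found.add((x, y, z))
    return found

# Same skeleton, different orientations: only the second graph has a v-structure.
chain = {"X": set(), "Y": {"X"}, "Z": {"Y"}}           # X -> Y -> Z
collider = {"X": set(), "Y": {"X", "Z"}, "Z": set()}   # X -> Y <- Z
print(v_structures(chain))     # set()
print(v_structures(collider))  # {('X', 'Y', 'Z')}
```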
V-structures are also called unshielded colliders [111].\nIn the context of PDAGs, v-structures encode the conditional independencies that shape the associated probability distribution. Any edge that would change, by either adding or removing, any v-structure when reversed is said to be a compelled edge, as in Figure 2.3. Any compelled edge, along with the underlying skeleton, is a constraint for the set of observational distributions compatible with the given PDAG. Any non-compelled edge is called reversible. Definition 2.20 (Observational Equivalence). Two DAGs G and H are observationally Markov equivalent [111] if they have the same skeleton and the same v-structures, denoted as G ≡ H.\nHenceforth, the definition of equivalence stems from an observational point of view, where graphs are compared in terms of the observational probability that is faithful to the given structure. In fact, changing the orientation of an reversible edge leads to a different structure with an equivalent factorization of the associated probability distribution. Definition 2.21 (Observational Equivalence Class). Two DAGs G and H belong to the same observational Markov equivalence class (MEC) [60,61,112] if they are Markov equivalent. As generalization, the MEC of a graph G, denoted by [G], represents the set of possible DAGs that are observationally equivalent.\nSince MECs are defined in terms of skeletons and v-structures only, edges that are not part of any v-structure remain undirected, meaning that, given the limited knowledge, it is not possible to disentangle the relationship between the two variables. Definition 2.22 (Completed PDAG). A PDAG G is said to be completed [99] if any directed edge is compelled and any undirected edge is reversible w.r.t. MEC [G].\nThe usual representation of a MEC is a complete partially-directed acyclic graph (CPDAG), also called essential graphs [5] or maximally oriented graphs [58]. Although the discovery problem is focused on recovering the true graph G * from a data set D, it is not always possible to retrieve a specific instance, but rather its MEC [G * ]." }, { "figure_ref": [], "heading": "Sufficiency vs. Insufficiency", "publication_ref": [ "b47", "b10", "b24", "b80", "b115", "b21" ], "table_ref": [], "text": "In many applications, the collected variables are assumed to be sufficient to find the causes of a system of interest. This condition rarely holds true in real world scenarios [46]. Definition 2.23 (Causally Sufficient Set). The set of variables V is said to be causally sufficient if and only if every cause of any subset of V is contained in V itself.\nThat is, there are no unobserved variables U that affect the behaviour of the causal mechanism generating the data set D. If at least one latent cause exists, then V is causally insufficient, which means that there exists a non-empty set of unobserved variables U that contains at least a cause of V. In this case, G is only a sub-graph of the augmented graph G a [11,25] defined over V ∪ U, as depicted in Figure 2.1a.\nThe equivalence class related to constraint-based causal insufficient methods relies on the concept of mixed graph and its properties. Definition 2.24 (Mixed Graph). The graph G is a mixed graph (MG) [79,113] if it can contain undirected (-), directed (→) and bidirected (↔) edges.\nIn mixed graphs the focus is on the edge endpoints, also called marks, rather than on the edge itself. 
For example, the directed edge X → Y is decomposed in two marks: the one insisting on X-••• and the one insisting on •••→Y . For this reason, we refer to the former as the tail mark (-) and the latter as the arrowhead mark (>). Therefore, a bidirected edge is an edge with both marks set to arrowheads.\nIn a bidirected edge X ↔ Y , X is a spouse of Y and vice versa. Therefore, the set of vertices connected with a bidirected edge to X is the spouse set Sp(X). The graphical relationships inherited from partially directed graphs remain the same.\nThe fork, chain and collider patterns must be revised in the context of bidirectional edges. Let G be a MG and π a path on G. The pattern X * → Y ← * Z is a collider on Y , where ' * ' stands for a generic mark. Any other pattern is a non-collider. Definition 2.25 (M-separation). Let G be a MG, π be a path on G and Z a subset of V. The path π is blocked [22] by Z if and only if π contains:\n• a non-collider such that the middle vertex is in Z, or\n• a collider such that middle vertex, or any descendant of it, is not in Z.\nThe set Z m-separates X from Y if it blocks every path between X and Y ." }, { "figure_ref": [], "heading": "Definition 2.26 (Ancestral Graph). A mixed graph G is ancestral if:", "publication_ref": [], "table_ref": [], "text": "• G has no (directed) cycle, and\n• X ∈ Sp(Y ), then X ∈ An(Y ), and\n• X ∈ N e(Y ), then P a(X) = ∅ ∧ Sp(X) = ∅.\nThese conditions allow an insightful interpretation of arrowheads in mixed graphs. In particular, in ancestral graphs, an arrowhead implies non-ancestorship, which explains why these representations are particularly useful in defining causal relationships." }, { "figure_ref": [], "heading": "Definition 2.27 (Maximal Ancestral Graph", "publication_ref": [ "b75", "b95", "b96" ], "table_ref": [], "text": "). An ancestral graph is maximal (MAG) if any pair of non adjacent vertices are graphically separated (in terms of m-separation).\nAs for the previous definition of the Markov equivalence class of DAGs using a CPDAG, the MEC of a set of MAGs is represented using a partial ancestral graph (PAG). A mark that is present in the same location in any MAG of a MEC is called invariant. Definition 2.28 (Partial Ancestral Graph). The graph G is a partial ancestral graph (PAG) if it can contain any combination of the following edge marks: tail (-), arrowhead (→) and circle (•). Moreover, let [G] be the MEC associated to G, then:\n• G has the same adjacencies of [G], and\n• any arrowhead mark in G is invariant in [G],\nand\n• any tail mark in G is invariant in [G].\nAs direct consequence of this PAG definition, any circle mark present in G represents a variant mark in [G], as for reversible edges of CPDAGs. Thus, PAGs are the most informative representation of MECs for MAGs, hence, they satisfy the same completed definition of CPDAG.\nThe interpretation of PAGs can be tricky: Depending on additional assumptions, such as homoscedasticity or nonlinearity, some algorithms are able to identify the causal graph beyond its equivalence class and recover a single graph instance [74,94,95]." }, { "figure_ref": [], "heading": "Adding Prior Knowledge", "publication_ref": [ "b59", "b62" ], "table_ref": [], "text": "Sometimes a cause-effect pair is known to exist (or to not exist) a priori, e.g. through expert's elicitation. Following the causal edge assumption, we can explicitly represent pairs as directed edges, defining a knowledge base composed of required (or forbidden) causal statements. 
Definition 2.29 (Knowledge Base). A knowledge base K is defined as an ordered pair (R, F), where R is the set of required directed edges, while F is the set of forbidden directed edges.\nThe knowledge base K is a valid representation for the given background knowledge. There exists a class of algorithms that are capable of taking advantage of this prior knowledge [58,61], either by integrating such knowledge before the actual discovery step or by checking if the resulting graph is consistent a posteriori." }, { "figure_ref": [], "heading": "Algorithm Year", "publication_ref": [ "b17", "b115", "b3", "b76", "b66", "b70", "b14", "b106", "b33", "b95", "b116", "b81", "b36", "b78", "b23", "b30", "b114", "b104", "b62", "b39", "b39", "b83", "b79", "b12" ], "table_ref": [], "text": "Category Output Non-Linear Insufficient Cyclic Intervention PC [18] 1991 Constraint CPDAG FCI [113] 2008 Constraint PAG GES [4] 2013 Score CPDAG FGES [75] 2017 Score CPDAG ARGES [65] 2018 Hybrid CPDAG GFCI [69] 2016 Hybrid PAG HCR [15] 2018 Score DAG bQCD [105] 2020 Asymmetric PDAG LiNGAM [34,94] 2014 Asymmetric DAG NOTEARS [114] 2018 Score DAG CCD [80] 1996 Constraint PAG LiNG [48] 2012 Asymmetric DG dseptor [37] 2017 Exact MG bcause [77] 2020 Exact MG σ-CG [24] 2018 Constraint σ-CG GIES [31] 2012 Score CPDAG IGSP [112] 2018 Score CPDAG UT-IGSP [103] 2020 Score CPDAG FCI-JCI [61] 2020 Constraint PAG Ψ-PC [40] 2020 Constraint CPDAG Ψ-FCI [40] 2020 Constraint PAG backShift [82] 2015 Asymmetric MG bcause+ [78] 2020 Exact MG DCDI [13] 2020 Asymmetric DAG Table 2: Algorithms classified by supported () and unsupported settings." }, { "figure_ref": [], "heading": "Causal Discovery", "publication_ref": [], "table_ref": [], "text": "In this section we introduce the first class of causal discovery algorithms. Here, the hypothetical data set is represented by static observational data samples, neither interventional information nor time dependencies are taken into account. A summary of the explored algorithms can be found in Table 2." }, { "figure_ref": [], "heading": "Constraint-based Algorithms", "publication_ref": [], "table_ref": [], "text": "Constraint-based algorithms try to recover the causal graph by exploiting a set of conditional independence statements (CISs) obtained from a sequence of statistical tests. This class of methods translates conditional probability independence into graphical separation by assuming faithfulness (Subsection 2.4) of the underlying distribution." }, { "figure_ref": [], "heading": "Definition 3.1 (Perfect Map).", "publication_ref": [ "b15", "b48", "b5", "b108", "b100", "b17", "b50", "b17", "b52", "b59", "b103", "b115", "b31", "b51" ], "table_ref": [], "text": "A graph G is said to be a perfect map [16,47] for a probability distribution P if every CIS derived from G can also be derived from P and vice versa:\nX ⊥ ⊥ P Y | Z ⇐⇒ X ⊥ ⊥ G Y | Z (3.1)\nDefinition 3.2 (Conditional Independence Test). The null H 0 and alternative hypotheses H 1 defined as\nH 0 : X ⊥ ⊥ P Y | Z and H 1 : X ⊥ ⊥ P Y | Z, let I(X, Y |Z)\nto denote a conditional independence (CI) test. The null hypothesis H 0 is not rejected if and only if the resulting p-value is higher than a chosen significance level α:\nÎ(X, Y |Z) > α =⇒ X ⊥ ⊥ P Y | Z (3.2)\nWhen faithfulness is assumed, probability independence implies graphical separation2 . The main limitation of this approach is related to the exponential growth of the conditioning set Z. 
Indeed, given the pair (X, Y ), in the worst case scenario where X is dependent on Y (or vice-versa), the algorithm is required to test for 2 (V\\{X,Y }) conditioning sets.\nConstraint-based methods are generally capable of integrating prior knowledge into the learning process.\nConditional Independence with Mixed Data Constrain-based techniques are essentially agnostic of the specific conditional independence test that is being used. Indeed, it is possible to take advantage of such approaches in a wide variety of scenarios, as long as the assumptions of the said test are satisfied. While the main focus of causal discovery studies has been into either discrete or continuous settings, recent advances in conditional independence testing [6,107] extend existing tests to mixed-data.\nPeter-Clark (PC) One of the most studied algorithm that leverages the CISs is the Peter-Clark (PC) algorithm [99] with its variants [18,49].\nThe first step of the procedure consists in defining a complete undirected graph over the variables of the given data set D. Subsequently, a sequence of conditional independence (CI) tests are performed following an heuristic strategy [18], in order to minimize the number of tests needed. For instance, it is known that the power of CI test decreases when the size of the conditioning set increases [51], due to the curse of dimensionality. A common approach consists in selecting an upper limit to the size of the conditioning set, discarding computationalintensive time-wasting tests with low significance levels.\nThe obtained independence statements are then used to remove the associated edges and identify the underlying skeleton. Finally, the remaining edges are oriented according to a set of rules [58] that leverages the identified v-structures and acyclicity property.\nThe resulting equivalence class is returned as a CPDAG, where the remaining undirected edges are reversible for the given observational distribution that arises from the data.\nFast Causal Inference (FCI) A first extension of the PC algorithm to the causal insufficient setting (Subsection 2.6) is represented by the Fast Causal Inference (FCI) [102,113] algorithm. Specifically, the FCI algorithm relaxes both the assumption of no latent confounding [32] and no selection bias [50] in the observational setting, pushing the causal discovery problem a step closer to real-world scenarios. In this context, the authors leverage the definition of discriminating path to derive a new set of orientation rules." }, { "figure_ref": [], "heading": "Definition 3.3 (Discriminating Path", "publication_ref": [], "table_ref": [], "text": "). Let G be an ancestral graph, a path π = (X, . . . , W, Z, Y ) between X and Y is a discriminating path for Z if (i) π contains at least three edges, (ii) X is not adjacent to Y , (iii) Z is adjacent to Y , and (iv) every vertex between X and Z is a collider on π and parent of Y .\nDiscriminating paths are closely related to the separation sets identified by the PC algorithm: if a path π between X and Y is discriminating for Z, then Z is a collider on π iff every set that separates X and Y does not contains Z, otherwise it is a non-collider iff every set that separates X and Y contains Z." 
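To give a concrete flavour of the constraint-based recipe described in this subsection, the following sketch implements only the edge-removal (skeleton) phase of a PC-like procedure for Gaussian data, using a Fisher z-test of partial correlation as the conditional independence test of Definition 3.2. It is a simplified illustration, not a faithful reproduction of any published implementation: the orientation rules are omitted, and the cap on the conditioning set size reflects the practical considerations discussed above.

```python
import numpy as np
from itertools import combinations
from math import log, sqrt, erf

def ci_pvalue(data, i, j, cond):
    """Fisher z-test p-value for the null X_i _||_ X_j | X_cond (Gaussian model)."""
    idx = [i, j] + list(cond)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.pinv(corr)                     # partial correlation via the precision matrix
    r = -prec[0, 1] / sqrt(prec[0, 0] * prec[1, 1])
    r = min(max(r, -0.999999), 0.999999)
    stat = 0.5 * log((1 + r) / (1 - r)) * sqrt(data.shape[0] - len(cond) - 3)
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(stat) / sqrt(2.0))))

def pc_skeleton(data, alpha=0.01, max_cond=2):
    """Edge-removal phase of a PC-like algorithm (orientation rules omitted)."""
    p = data.shape[1]
    adj = {i: set(range(p)) - {i} for i in range(p)}   # start from the complete graph
    for size in range(max_cond + 1):                   # grow the conditioning set size
        for i in range(p):
            for j in sorted(adj[i]):
                if j < i:
                    continue
                for cond in combinations(adj[i] - {j}, size):
                    if ci_pvalue(data, i, j, cond) > alpha:
                        adj[i].discard(j)              # independence not rejected:
                        adj[j].discard(i)              # remove the edge and move on
                        break
    return {(i, j) for i in adj for j in adj[i] if i < j}

# Example: data generated by a chain X -> Y -> Z; the X-Z edge is removed given Y.
rng = np.random.default_rng(1)
x = rng.normal(size=5000); y = x + rng.normal(size=5000); z = y + rng.normal(size=5000)
print(pc_skeleton(np.column_stack([x, y, z])))   # typically {(0, 1), (1, 2)}
```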
}, { "figure_ref": [], "heading": "Score-based Algorithms", "publication_ref": [ "b16", "b2", "b88", "b25", "b90" ], "table_ref": [], "text": "Score-based algorithms are usually structured around the maximization of a measure of fitness of a graph G over a space of possible graphs G for the observed samples D, following a defined scoring criterion S(G, D) [17]:

G* = argmax_{G ∈ G} S(G, D) (3.3)

In the next few paragraphs, a set of properties for scoring criteria is introduced, before shifting the focus to an optimal two-step procedure for the causally sufficient scenario.

Definition 3.4 (Decomposable Score). A scoring criterion S(G, D) is decomposable if it can be defined as a sum of scores over each vertex and its parents:

S(G, D) = Σ_{X_i ∈ V} S(X_i, Pa(X_i), D) (3.4)

As a direct consequence of this property, during the discovery procedure the score computation can be simplified in terms of local differences of the causal graph. Moreover, the comparison of the scores of two DAGs G and H can be handled by taking into account only the vertices that have different parent sets.

Definition 3.5 (Contained Distribution). A graph G is said to contain a probability distribution P if there exists an independence model associated with G that represents P exactly, i.e. G is a perfect map of P.

Definition 3.6 (Consistent Score). Let D be a data set associated with a probability distribution P and let G and H be two graphs. A scoring criterion S is said to be consistent in the limit of the number of samples if and only if:

• If only G contains P, then S(G, D) > S(H, D),
• If both G and H contain P and the model associated with H has fewer parameters than the one associated with G, then S(G, D) < S(H, D).

If a scoring criterion is both decomposable and consistent, then it is locally consistent. Definition 3.7 (Locally Consistent Score). Let G be a graph and H the graph resulting from the addition of the edge X → Y to G. A scoring criterion S(G, D) is said to be locally consistent if and only if:

• If X ⊥⊥_P Y | Pa(X) does not hold, then S(G, D) < S(H, D),
• If X ⊥⊥_P Y | Pa(X) holds, then S(G, D) > S(H, D).

Explicitly, if a scoring criterion is locally consistent then the score:

• Increases when any edge that eliminates an independence constraint that does not hold in the generative distribution is added, and
• Decreases when any edge that does not eliminate such a constraint is added.

This property guarantees that any deletion of an unnecessary edge will produce a higher score value, allowing the definition of an optimal greedy search algorithm. We report a brief list of scores for reference, such as the Akaike Information Criterion (AIC) [3], the Bayesian Information Criterion (BIC) [87], the Bayesian Dirichlet equivalent uniform (BDeu) [26] and the Bayesian Dirichlet sparse (BDs) [89]." }, { "figure_ref": [], "heading": "Definition 3.8 (Optimal Equivalence Class)", "publication_ref": [ "b3", "b58", "b16", "b5", "b76" ], "table_ref": [], "text": "Let [G]* be the equivalence class that is a perfect map of the probability distribution P and D the associated data set. In the limit of the number of samples:

S([G]*, D) > S([G], D)   ∀[G] ≠ [G]* (3.5)

Greedy Equivalence Search (GES) The Greedy Equivalence Search (GES) [4,57] is optimal in the limit of the number of samples [17]. The first step of the algorithm consists in the initialization of the empty graph G. The algorithm is composed of two phases: the forward search and the backward search.
In the forward search phase, (i) G is modified by repeatedly adding the edge that has the highest delta score, until there is no such edge that increases the score. In the backward search phase, (ii) the edge that again achieves the highest delta score is repeatedly removed. The algorithm terminates once it reaches a local maximum during the backward search phase. This algorithm is designed to work under causal sufficiency. When this assumption no longer holds, the procedure is known to introduce extra edges as a compensation behaviour for the unobserved relationships. For example, when a fork (X ← Y → Z) is present and the middle vertex is indeed latent, GES will likely add an edge between the other two observed vertices of the structure, even if such edge is not present in the true graph. Any algorithm that is based on this technique and does not address the issue directly displays such pattern.\nFast GES (FGES) Score-based algorithms are as fast as the computation of the chosen scoring criterion is. Leveraging the properties of the score function, it is possible to minimize the number of computations needed by storing previous intermediate evaluations. Not only this optimizations reduce the computation time considerably, but also allow the application of these methods to highdimensional data sets [6,75]. This \"fast\" variant of GES (FGES) caches partial graph scores (i.e. delta scores), significantly increasing the memory usage, since relevant fragments of the graph may be considered. Moreover, computationally expensive sections of the algorithm can be parallelized, taking advantage of high performance computing (HPC) settings." }, { "figure_ref": [], "heading": "Hybrid Algorithms", "publication_ref": [ "b66", "b16", "b70" ], "table_ref": [], "text": "With the term \"hybrid\" algorithms we refer to the class of methods that combine constraint-based and score-based approaches to mitigate their drawbacks.\nAdaptively Restricted GES (ARGES) Consistency of constraint-and score-based algorithms is usually proved in low-dimensional use cases, where the number of samples is orders of magnitude greater than the number of variables. Hybrid approaches generally lacks a formal and rigorous proof of consistency, leading to undefined behaviour. For this reason, an adaptively restricted variant of GES (ARGES) [65] has been developed, targeting specifically the consistency weakness in both low-and high-dimensional spaces.\nThe novelty of this hybrid version of GES stems from the concept of admissible edge. Let G be a CPDAG and X and Y be a pair of non adjacent vertices on it. Adding an edge between X and Y is admissible for the graph G if (i) X and Y are adjacent in the (estimated) skeleton of G or (ii) there exists a node\nZ such that X → Z ← Y is a v-structure in G.\nFrom the definition of admissible edge, an equal admissible move consists in adding such edge to the graph and obtain a new equivalent CPDAG. This point is sufficient to prove that the resulting forward phase of ARGES is consistent when restricted to admissible moves (i.e. it is an independence map for the given observational probability distribution [17]).\nGreedy FCI (GFCI) Score-based causal discovery algorithms such as GES and FGES are asymptotically correct, but are not designed to work in a causal insufficient scenario, where unmeasured confounders are present in the true graph. 
Constraint-based causal search algorithms, such as FCI, are asymptotically correct even with unmeasured confounders, but often perform poorly on small samples. The Greedy Fast Causal Inference (GFCI) [69] algorithm combines score-based and constraint-based algorithms improving over the previous results while being asymptotically correct under causal insufficiency.\nSpecifically, the initial skeleton is obtained by un-orienting the CPDAG resulting from the execution of FGES. Then, the orientation rules of FCI are applied, with only a few slight modifications that rely on original FGES output. This approach leads to an improved accuracy over the distinct constraint-and score-based approaches. As a side effect, additional requirements arise from the union of these methods. For example, not only the conditional independence test is required to be consistent by FCI, but also the associated score must be locally consistent due to FGES. This constraint reduces the practical applications to settings where indeed such score exists." }, { "figure_ref": [], "heading": "Other Methods", "publication_ref": [ "b5", "b108", "b14", "b106", "b82", "b41", "b105", "b95", "b96", "b19", "b33" ], "table_ref": [], "text": "Hidden Compact Representation (HCR) Causal discovery methods for discrete and mixed variables have gained renovated interested in the last few years [6,107]. Although additive noise models have been widely used in the context of continuous variables, it is difficult to justify their application with categorical data, where the addition operator between the levels of variables is not well defined.\nFor this reason, authors in [15] developed a new low-dimensional embedding for discrete variables, allowing a (hidden) compact representation (HCR) of the discrete states of such variables. The method follows a two-stage procedure: at first, a discrete variable is deterministically mapped into a low-cardinality representation (e.g. binary), which acts as a proxy for the information contained in the original variable; then, a set of samples are drawn for the new proxy variable using a probabilistic mapping. The overall complexity of the model in controlled using the BIC score, balancing between total fitness and size of parameters.\nThe authors address the problem of identifiability of the model and prove that, under mild conditions, the causal graph recovered from observational data is identifiable. The method is tested against both synthetic and real-world data, providing reference values for performance metrics. In these experiments, HCR outperforms linear models in terms of accuracy and sensitivity, especially when the additive noise assumption does not hold.\nQuantile Causal Discovery (bQCD) The quantile causal discovery (bQCD) [105] technique is designed to uncover cause-effect pairs in the bivariate setting. By re-expressing independence statements in light of the minimum description length (MDL) [81], the authors build a discovery procedure by using quantile scoring.\nFollowing [41], let X and Y be two random variables with joint, marginal and conditional distributions denoted by F , F X and F X|Y respectively. The key concept here is that a lower complexity follows from a correct causal orientation of the (X, Y ) pair, since it is a more informative representation of the associated data.\nHence, the Kolmogorov complexity K(F ) is defined as the length of the shortest program that outputs F (X). 
Since K(F) measures the information contained in F, the authors in [104] state that if X causes Y, then K(F_X) + K(F_{Y|X}) ≤ K(F_Y) + K(F_{X|Y}). The problem is that K(F) cannot be computed in practice. Therefore, the authors rely on the MDL principle as a proxy for the Kolmogorov complexity. Such an approximation can be performed by estimating the population quantiles through nonparametric quantile regression.

The resulting procedure is robust to outliers and can be generalized to a wide range of distributions, although it requires that all population quantiles be computable, which could be a limiting factor in real-world applications.

Linear Non-Gaussian Acyclic (LiNGAM) In the context of linear causal models, when causal sufficiency holds, the observed variables can be expressed as a linear combination of the noise terms:

x = Bx + e (3.6)

Here, the exogenous distribution is assumed to be made of mutually independent (possibly non-Gaussian) variables. Solving for x reduces to the identification of the matrix A such that:

x = (I − B)^{-1} e = Ae (3.7)

The LiNGAM [94,95] algorithm relies on independent component analysis (ICA) [20] to identify a possible solution for A. In fact, multiple mixing matrices A are feasible solutions for the given joint probability distribution. This technique is essentially focused on discovering asymmetries in the sample distribution to determine the correct causal ordering. Once such an ordering has been discovered, the causal graph is built by recovering all and only the edges coherent with the order.

The LiNGAM method has later been extended to causally insufficient settings [34]. Let f be the vector of latent variables and Λ the matrix of the connection strengths between f and x, then:

x = Bx + Λf + e (3.8)

The proposed model can be solved with a variant of ICA, called overcomplete ICA, which takes into account the presence of unobserved effects.

The LiNGAM algorithm consistently estimates the connection matrix B. While standard ICA does not scale well in high-dimensional settings, approximated variants of ICA can be used to compute the components with a predefined fixed number of iterations with reasonable precision. This leads to an efficient solution in the presence of non-Gaussian noise and causally insufficient data sets." }, { "figure_ref": [], "heading": "Continuous Optimization (NOTEARS)", "publication_ref": [ "b116" ], "table_ref": [], "text": "In the "DAGs with NO TEARS" [114] algorithm, the causal discovery problem is reduced to a continuous optimization problem. The acyclicity constraint is expressed as an equality constraint h(W) = 0, where h is a smooth differentiable function that measures the "DAG-ness" (i.e. a quantification of the acyclicity violations) of a given adjacency matrix W. When W is binary then:

h(W) = tr(e^{W•W}) − n = 0 (3.9)

where tr is the trace operator, • is the Hadamard product, e^(·) is the matrix exponential and n is the size of W. Moreover, this function has a rather simple associated gradient:

∇h(W) = (e^{W•W})^T • 2W (3.10)

Coefficients smaller than a fixed threshold ω > 0 are set to zero, rounding the solution with an arbitrary precision. The evaluation of the matrix exponential is O(n^3), i.e. cubic in the number of vertices.
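As a concrete reading of Equations 3.9 and 3.10, the sketch below (a toy illustration relying on scipy's matrix exponential, not the reference implementation) evaluates the acyclicity measure h(W) and its gradient on a binary adjacency matrix, before and after an edge that closes a cycle is added.

```python
import numpy as np
from scipy.linalg import expm

def notears_h(W):
    """Acyclicity measure h(W) = tr(e^{W o W}) - n and its gradient (Eqs. 3.9-3.10)."""
    E = expm(W * W)                 # matrix exponential of the Hadamard square
    n = W.shape[0]
    h = np.trace(E) - n             # equals zero iff W encodes a DAG
    grad = E.T * 2.0 * W            # (e^{W o W})^T Hadamard-multiplied by 2W
    return h, grad

dag = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])   # X -> Y -> Z
cyclic = dag.copy()
cyclic[2, 0] = 1.0                  # add Z -> X, closing a directed cycle

print(notears_h(dag)[0])     # ~0: no acyclicity violation
print(notears_h(cyclic)[0])  # > 0: the cycle is penalized by h
```

In the full algorithm, h(W) = 0 is imposed as the equality constraint of the continuous program described above, and the small coefficients of the resulting solution are finally thresholded.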
Given the low computational complexity, NOTEARS outperforms existing methods when both the in-degree and the sample size are large.\n4 Causal Discovery with Cycles" }, { "figure_ref": [], "heading": "Cyclic SCM", "publication_ref": [ "b8" ], "table_ref": [], "text": "In a SCM, the causal graph induces a functional set F where equations follows the decomposition enforced by the causal edge assumption, Subsection 2.9. If the causal graph is acyclic, then the SCM itself is called acyclic, or recursive SEM. The concept of recursion is linked to the hierarchical order that arises from the topological ordering of the underlying DAG. Indeed, it is possible to define a sequence X 1 , X 2 , . . . , X n of vertices over V such that for any X i and X j where i < j, X j is not a cause of X i [9]. Therefore, in a non-recursive SEM, or cyclic SCM, some endogenous variables are connected to each other, forming cycles that do not allow a recursive decomposition. Still, the causal edge assumption is satisfied, since its definition is consistent even in the presence of cycles." }, { "figure_ref": [], "heading": "No Acyclicity Assumption", "publication_ref": [ "b81", "b65", "b102", "b23", "b24", "b23" ], "table_ref": [], "text": "Conditional independencies arising from cyclic SCMs are entailed by the cyclic graphs [80]. It can be shown that, in general, there is no DAG encoding the conditional independencies which hold in such SCM [64]. Nonetheless, cyclic SCMs are widely used to model systems with feedback, and are applied in sociology, economics and biology, making this class of models a relevant target of interest for causal discovery techniques.\nTo test for such independencies, d-separation can be adapted to the cyclic setting under the assumption of causal sufficiency [101]. In causally insufficient scenarios, d-separation can be replaced with σ-separation [24,25] applied to directed mixed graphs (DMGs), i.e. mixed graph (Subsection 2.24) without undirected edges. Definition 4.1 (Strongly Connected Component). Let G be a DG and X a vertex in G. The strongly connected component [24] of a vertex X is defined as:\nSCC(X) = An(X) ∩ De(X) (4.1)\nthat is, the set of vertices that are both ancestors and descendants of X, including X itself. The set Z σ-separates X from Y if it blocks every path between X and Y .\nThe above graphical criterion implies d-separation and reduces to it in the case of DAGs." }, { "figure_ref": [], "heading": "Cyclic Causal Discovery (CCD)", "publication_ref": [ "b81", "b23", "b35", "b54", "b79", "b78" ], "table_ref": [], "text": "The Cyclic Causal Discovery (CCD) algorithm [80] has been the only provably sound (Subsection 2.12) approach to general directed graphs until the development of LiNG [48]. CCD is a constraintbased algorithm that follows the same initial procedure as the one of the PC algorthm, with five different orientation rules. CCD outputs a PAG G which differ from the output of FCI for a couple of additional patterns:\n• underlining triples (X * - * Y * - * Z), where Y is an ancestor of at least one of X or Z in every graph in [G], and\n• dotted underlining triples (X * → Y . . . ← * Z), where Y is not a descendant of a common child of X and Z.\nThese additional patterns arise from a fundamental problem: the algorithms is not complete, and, therefore, there may be features common to all graphs in the same equivalence class that are not present in the output PAG (i.e. it is not the most informative PAG). 
While not being complete in the same sense as the previous algorithms, CCD is d-separation complete, meaning that the resulting PAG represents an equivalence class with a single graph, i.e. it encodes all the needed conditional independencies. Therefore, CCD is useful when one is interested in querying the resulting graph about dependencies, but it lacks, by definition, the capability to represent every causal edge, in contrast to other algorithms. This limitation makes it less suitable for the definition of SCMs, especially when one is interested in the form of the functional set.

Linear Non-Gaussian (LiNG) The LiNGAM algorithm can be adapted to the cyclic setting by weakening the acyclicity assumption. Specifically, instead of targeting a DAG, LiNG (or the LiNG-D family) [48] tries to recover a simple graph (i.e. without self-loops) by forcing all entries on the diagonal of the B matrix to be zero.

While the LiNGAM output could be seen as a set of admissible models that contains a single model (i.e. the model is identifiable), the cyclic variant usually admits more than one causal graph at a time. In fact, the acyclicity assumption that allowed finding the row-permutation of B that best fits the given data set is missing. The authors then suggest limiting the discovery procedure to the k-th best assignment, following the intuition that permutations associated with inadmissible models would asymptotically score poorly. This approach selects one single model from the equivalence class (i.e. the returned set).

LiNG inherits both the limits and the strengths of the original method: approximate (or sparse) ICA can be a valid alternative if running the full ICA is computationally expensive for the considered task.

σ-Connection Graphs From the concept of σ-separation, one can derive a MG where conditional independencies are expressed in the presence of cycles and latent variables, namely a σ-Connection Graph (σ-CG). An algorithm to learn these structures from data has been developed [24] as a natural extension of the work presented in [36]. The causal discovery problem is recast as a continuous optimization problem based on the following loss function:

L(G, S) = Σ_i λ_i (1_{λ_i > 0} − 1_{X_i ⊥⊥_G Y_i | Z_i}) (4.2)

where S is a set of conditional independence statements expressed as S = (X_i, Y_i, Z_i, λ_i)_{i=1}^n, where X_i, Y_i and Z_i are variables in V and λ_i ∈ R ∪ {−∞, +∞} encodes the confidence in the probabilistic conditional independence X_i ⊥⊥_P Y_i | Z_i as a constraint. The λ_i weights are evaluated using the indicator function 1 to constrain the conditional dependence between variables. Therefore, Equation 4.2 quantifies the amount of evidence against the proposed causal graph based on the observed data. During the experimental evaluation, the authors relied on the weights proposed in [53]:

λ_i = log p_i − log α (4.3)

with p_i representing the p-value of a statistical test for conditional independence and α being a significance level.

Minimizing the loss function may lead to multiple optimal solutions, where each solution G is an instance of the actual equivalence class [G]. Indeed, as for d-separation and CPDAGs, the σ-separation criterion and the associated σ-CGs take into account possible undirected edges that are invariant for any causal graph belonging to the same equivalence class.

This algorithm has been benchmarked against synthetic data in a low-dimensional setting.
While the recovery metrics show consistent performances across the experiments, especially when increasing the number of interventions, it is clear that the main limitation of this approach is linked with the σ-separation encoding, as noted by [78]. Indeed, the separation checks are preformed using Answer Set Programming (ASP), a declarative logic programming language, which slows down the learning procedure.\nbcause The procedures described so far are essentially approximate algorithms that reduce the search space (i.e. the number of conditionally independence tests) by using previously computed test results. In fact, edges that are tested in later phases rely on adjacent vertices that are selected in earlier steps of the algorithm. During the last few years, exact search approaches have been developed in a branch-and-bound fashion.\nThe bcause algorithm [77] explores the search space in a tree-like visit guided by an objective function that determines the weight of a potential solution. During the discovery phase, any edge of an intermediate result G is either absent, present or undecided. Before the actual branching step, the lower bound of the given objective function for the current partial solution G is computed. If such bound is higher than the weight obtained by the previous solution G, the branch can be closed and the algorithm backtracks. Otherwise, if G contains at least one undecided edge, the procedure branches recursively in two directions: one in which said edge is set as present and the other marked as absent. Finally, if the branch cannot be closed and G has no undecided edge, then the current solution G is updated if and only if the evaluation of the objective function results in a lower weight. The search procedure will return G as a globally optimal solution.\nSince the causal discovery problem is inherently exponential, an exact search algorithm is unfeasible in the general setting. However, if both the objective function and its lower bound can be efficiently evaluated, a constrained space for a low dimensional problem can be effectively explored. For example, the authors benchmark their method under different conditions, showing that assuming acyclicity result in a lower execution time. Moreover, the algorithm maintains a set of constraints satisfied by the local solution and updates them incrementally. Therefore, any incompatible extension of the current solution is ruled out by leveraging a linear programming solver, reducing the total number of evaluation needed. " }, { "figure_ref": [], "heading": "Causal Discovery with Interventions", "publication_ref": [], "table_ref": [], "text": "This section is focused on the difference between learning causal models using either observational or interventional data. While the former setting has been explored extensively in the past decades, only recently solutions for properly handling experimental data have been proposed." }, { "figure_ref": [], "heading": "Observational vs. 
Interventional", "publication_ref": [ "b7", "b73", "b71" ], "table_ref": [ "tab_1" ], "text": "In order to grasp the added value of experimental data, we will introduce the concept of ladder of causality [8,72] as a reference framework.\nThe Ladder of Causation The ladder of causation, also called the causal hierarchy, is an ordered structure of composed by three layers, where each layer is mapped to a cognition level: observational, interventional and counterfactual.\nA level inherently defines the set of potential queries that can be answered with the given information associated to it.\nIn practice, the observational layer is composed by associational or factual data, while the interventional layer is related to data that are generated by an intervention on the system, i.e. an experiment. Interacting with the system itself is the reason why these two levels are different. The counterfactual layer is the highest level of cognition, where one may ask what would have happened if a different intervention had been performed, opposed to the one that factually altered the system. This hypothetical scenario is strongly opposed to the observational one, being in the counter-factual space.\nEven if the three layers represent different information levels, they are not distinct. In fact, each layer is a generalization of the previous one, e.g. the observational setting can be seen as a special case of the interventional scenario, where no intervention is performed. Therefore, the interventional layer subsumes the observational one. The same happens with the counterfactual layer w.r.t. the interventional one, provided that the former allows to define hypothetical actions that were not present in the latter, as expressed in Table 3.\nAt this point, one may ask how to formally represent the concepts expressed by this hierarchy, to operatively exploit the informative gap between the layers.\nThe answer is provided by do-calculus [70].\ndo-calculus Queries that are usually expressed in natural language can be rephrased in terms of probability distribution by introducing the do operator, whenever possible3 . Definition 5.1 (Rules of do-calculus). Let G be a causal graph and P the probability distribution induced by G. For any disjoint subset of variables X, Y, Z and W, the following three rules apply:\n1. Insertion and deletion of observations:\nP (Y | do(X), Z, W) = P (Y | do(X), W) (5.1) if (Y ⊥ ⊥ Z | X, W) holds true in G X ," }, { "figure_ref": [], "heading": "Exchange of observations and interventions:", "publication_ref": [], "table_ref": [], "text": "P (Y | do(X), do(Z), W) = P (Y | do(X), Z, W) (5.2) if (Y ⊥ ⊥ Z | X, W) holds true in G X,Z ," }, { "figure_ref": [], "heading": "Insertion and deletion of interventions:", "publication_ref": [], "table_ref": [], "text": "P (Y | do(X), do(Z), W) = P (Y | do(X), W) (5.3) if (Y ⊥ ⊥ Z | X, W) holds true in G X,Z(W) ,\nwhere G X is the subgraph of G where the incoming edges into X are removed, G Z is the analogous for the outgoing edges from Z, and finally Z(W) is Z \\ An(W) w.r.t. the subgraph G X .\nWith these rules, which are correct and complete, a causal effect can be identified if there exists a finite sequence of applications of such rules leading to a do-free expression of the considered probability distribution." }, { "figure_ref": [], "heading": "Types of Interventions", "publication_ref": [], "table_ref": [], "text": "Definition 5.2 (Perfect Intervention). 
An intervention is said to be perfect (or hard ) if it removes the causal dependencies (i.e. the incoming causal edges, as in Subsection 2.9) that affect the intervention target. Indeed, do-calculus enables us to express perfect interventions in a operative framework, but there are other types of interventions that cannot be expressed using this notation." }, { "figure_ref": [], "heading": "Definition 5.3 (Imperfect Intervention", "publication_ref": [ "b56", "b107" ], "table_ref": [], "text": "). An intervention is said to be imperfect (or parametric, soft) [55] if it does not remove the causal dependence that affects the intervention target, but alters the functions that represents such dependence.\nFor instance, an imperfect intervention on an SCM could be a change in the parameters that quantify the strength of the causal relationships, while a perfect intervention would result in hard setting them to zero. In this sense, perfect interventions are a subset of imperfect interventions, where some variables are removed from the equations of the functional set as a special case.\nMechanism Change Imperfect interventions itself are a formal definition of a broader concept called mechanism change [106]. For a SCM M with a causal graph G and a set of parameters Θ associated to the function set F. A mechanism change is a mapping from M to M , where the new set of parameters is defined as Θ = Ψ ∪(Θ \\ Ψ), with the new subset Ψ that differs from the original subset Ψ. The change affects the behaviour of the function set F, inducing a set F ." }, { "figure_ref": [], "heading": "Defining the Intervention Target", "publication_ref": [ "b30", "b71", "b39", "b45" ], "table_ref": [], "text": "We can rephrase perfect and imperfect interventions under a single unified framework through the concept of intervention target [31].\nDefinition 5.4 (Intervention Target). Let G be a causal graph. A subset I ⊂ V is said to be an intervention target if it contains all and only the variables associated to an intervention over G.\nTherefore, a single-variable intervention is an intervention target that contains only one variable, while in a multi-variable intervention it contains more than one. As a special case, when I = ∅ the intervention target represents the observational case. A set of multiple intervention targets {I 0 , I 1 , . . . , I n } is called an intervention family and it is denoted with the calligraphic letter I. Definition 5.5 (Conservative Family). A family of targets I is conservative if for each vertex X in V there exists at least one intervention target in I that does not contain X:\n∃I : X ∈ I ∈ I, ∀X ∈ V (5.4)\nEssentially, a conservative family is a family that allows the existence of at least one intervention target which does not intervene on an specific variable. This property guarantees that there is at least one experiment in the family which does not alter the behaviour of such variable if performed.\nIn this settings, a conservative family allows to observe the influence of a (known) set of targets on at least one unaffected variable, enabling the possibility of disentangling such effect, especially when compared to the other experiments in the whole family. Definition 5.6 (Intervention Graph). Let G be a causal graph and I be an intervention target defined over G. 
The intervention graph G^(I) = (V, E^(I)) is the causal graph obtained by removing from G any directed edge that points to a vertex in I:

E^(I) = {(X, Y) | (X, Y) ∈ E ∧ Y ∉ I} (5.5)

This definition of intervention graph is coherent with the intervened graph resulting from a do-intervention [70], also known as graph surgery or graph manipulation.

We can now formally express the interventional distribution associated to an intervention graph. Definition 5.7 (Interventional Distribution). Let G be a causal graph and I be an intervention target. The interventional distribution P^(I) can be expressed using the factorization formula:

P^(I) = Π_{X_i ∈ I} P^(I)(X_i | Pa(X_i)) · Π_{X_i ∉ I} P^(∅)(X_i | Pa(X_i)) (5.6)

where P^(∅) is the observational distribution of the variables which were not included in the intervention target, if any.

In the case of perfect interventions, the interventional distribution can also be expressed using the do-notation:

P^(I) = Π_{X_i ∈ I} P^(I)(X_i | do(I)) · Π_{X_i ∉ I} P^(∅)(X_i | Pa(X_i)) (5.7)

Definition 5.8 (Interventional Equivalence). Two causal graphs G and H are interventionally Markov equivalent w.r.t. the intervention family I (I-equivalent, denoted G ≡_I H) if, for every target I ∈ I, they induce the same set of interventional distributions; in particular, this implies the observational equivalence of the corresponding intervention graphs:

G ≡_I H =⇒ G^(I) ≡ H^(I), ∀I ∈ I (5.8)

In other terms, interventional equivalence can be decomposed into a set of equivalence statements over intervention graphs, where each observational equivalence statement is formulated against a single intervention target contained in the given family. Definition 5.9 (Interventional Equivalence Class). Two causal graphs G and H belong to the same interventional Markov equivalence class w.r.t. the intervention family I (I-MEC) [40,45] if they are I-equivalent. As for the observational setting, the I-MEC of a graph G, denoted by [G]_I, represents the set of possible causal graphs that are interventionally equivalent.

An intervention family I induces a classification of the edges of the intervention graphs depending on their effect on the underlying interventional distribution." }, { "figure_ref": [], "heading": "Definition 5.10 (I-covered edge)", "publication_ref": [], "table_ref": [], "text": "An edge (X → Y) in G is I-covered if:

Pa(X) = Pa(Y) \ {X} ∧ P^({X})(Y) = P^(∅)(Y)

when the intervention target {X} is in I. Definition 5.11 (I-contradictory edge). An edge (X → Y) in G is I-contradictory if at least one of the following conditions holds:

• ∃S ⊂ Ne(Y) \ {X} such that ∀I ∈ I_{X\Y} we observe P^(I)(Y | S) = P^(∅)(Y | S), or
• ∀S ⊂ Ne(X) \ {Y} such that ∃I ∈ I_{Y\X} we observe P^(I)(X | S) = P^(∅)(X | S).

I-contradictory edges are particularly of interest since they differ among interventional equivalence classes, i.e. they violate the I-Markov property, highlighting the possibility for a consistent exploitation during the discovery procedure." }, { "figure_ref": [], "heading": "Learning with Interventions", "publication_ref": [], "table_ref": [], "text": "Sometimes researchers want to observe the effect of an intervention on one single variable at a time, but there are settings in which this is not possible or is inconvenient. Therefore, multi-variable interventions must be addressed as a special case of a generic intervention target." }, { "figure_ref": [], "heading": "Single vs. Multi-Variable Interventions", "publication_ref": [ "b22", "b22", "b34" ], "table_ref": [], "text": "When each intervention target contains a single variable at a time, the number of experiments needed to collect enough evidence to identify the causal graph is n − 1, with n the number of variables [23].
}, { "figure_ref": [], "heading": "Learning with Interventions", "publication_ref": [], "table_ref": [], "text": "Sometimes researchers want to observe the effect of an intervention on one single variable at a time, but there are settings in which this is not possible or not convenient. Therefore, multi-variable interventions must be addressed as a special case of a generic intervention target." }, { "figure_ref": [], "heading": "Single vs. Multi-Variable Interventions", "publication_ref": [ "b22", "b22", "b34" ], "table_ref": [], "text": "When each intervention target contains a single variable at a time, the number of experiments needed to collect enough evidence to identify the causal graph is n - 1, with n the number of variables [23]. Indeed, since one intervention is enough to identify the causal edges incoming into the only variable contained in the intervention target, the n-th intervention would be redundant.
In the case of intervention targets with more than one variable, only log(n) + 1 interventions are necessary and sufficient in the worst-case scenario [23], where the causal graph is the complete graph. Since this worst case is improbable, O(log log(n)) can be achieved as a lower bound with high probability in the multi-variable setting with a randomized intervention scheme [35], that is, it is possible to plan the experimental design in advance so as to minimize the number of interventions." }, { "figure_ref": [], "heading": "Unknown Intervention Targets", "publication_ref": [], "table_ref": [], "text": "Another problem that one may face during structural learning with interventional data is the uncertainty related to the intervention targets. There are scenarios in which it is known that an intervention has been performed, but it is unclear which exact set of variables has been affected by such an intervention. In this case, an additional layer of complexity is added in order to properly handle the less informative setting of unknown intervention targets." }, { "figure_ref": [], "heading": "Interventional Algorithms", "publication_ref": [ "b30", "b114", "b98", "b104", "b62" ], "table_ref": [], "text": "Interventional GES (GIES) By leveraging the similarity between observational causal graphs and their interventional counterparts, the authors in [31] proposed a generalization of the GES algorithm to the interventional setting. This new score-based variant, called Greedy Interventional Equivalence Search (GIES), follows the same two-step approach of the original procedure, traversing the search space with forward and backward phases until a (local) maximum of the score is reached.
A major contribution of this work is the formalization of the interventional setting. Indeed, while the algorithm itself does not differ significantly from the observational one in terms of overall design, the performance improvements are substantial, as expected when transitioning from the first to the second layer of the causal hierarchy. This is an interesting example of how observational techniques can be adapted to the interventional setting with ease, once the theoretical aspects of both the interventional distribution and the intervention targets are addressed properly.
Interventional Greedy Permutation (IGSP) While GIES focuses its attention on perfect interventions, a first extension to general interventions is presented in [112] with the Interventional Greedy Sparsest Permutation (IGSP), an interventional variant of GSP [97]. In this case, the greedy approach consists in the optimization of a score function, coupled with a permutation-based strategy that guides the traversal of the space of I-MECs.
Formally, let ρ be a permutation of the vertices of a causal graph G. The space on which such a permutation lies is a polytope called the permutahedron. A possible representation of this mathematical object is indeed another graph, where each vertex corresponds to a permutation ρ and each edge between two permutations encodes a transposition of the vertices. The goal of a permutation-based causal discovery algorithm is to find a permutation ρ*, consistent with the topological order of the true causal graph G*, that optimizes a given score function.
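To make the "permutation → graph → score" step concrete before turning to the traversal itself, the sketch below scores a candidate ordering with a best-subset linear-Gaussian BIC. This is not the CI-test-based criterion actually used by GSP/IGSP, only an illustration of why orderings compatible with the true topological order tend to win: they reach the same fit with fewer edges. The data-generating model and the score are assumptions made for this example only.

```python
import numpy as np
from itertools import combinations

def local_bic(data, v, parents):
    """BIC of the linear-Gaussian factor P(X_v | X_parents)."""
    n = data.shape[0]
    y = data[:, v]
    if parents:
        X = np.column_stack([data[:, list(parents)], np.ones(n)])
    else:
        X = np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * np.log(n) * (X.shape[1] + 1)  # +1 for the noise variance

def score_permutation(data, order):
    """Best-subset BIC of the sparsest DAG consistent with a vertex ordering."""
    total = 0.0
    for i, v in enumerate(order):
        preds = order[:i]
        # Exhaustive parent-subset search: fine for a 3-variable toy example.
        subsets = [s for k in range(len(preds) + 1) for s in combinations(preds, k)]
        total += max(local_bic(data, v, s) for s in subsets)
    return total

# Toy v-structure X0 -> X2 <- X1: orderings placing X2 last can realize the
# same fit with fewer edges, so they should obtain the (slightly) higher score.
rng = np.random.default_rng(2)
x0, x1 = rng.normal(size=(2, 3000))
x2 = x0 + x1 + rng.normal(size=3000)
data = np.column_stack([x0, x1, x2])

for order in [(0, 1, 2), (2, 1, 0)]:
    print(order, round(score_permutation(data, order), 1))
```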
The search procedure traverses the permutahedron using a depth-first approach starting from an initial permutation ρ. For each permutation τ visited, if G τ yields a better score than G ρ then ρ is set to τ . The traversal is restarted from the updated ρ, until no such τ is found.\nIn order to leverage the advantages of the interventional data, IGSP limits the vertices transposition to the neighbors that are connected by I-covered edges, restricting the search space to permutations that are coherent with the intervention targets. An other characteristic of this search strategy is given by the prioritization of I-covered edges that are also I-contradictory, given that they represent a transition of I-MEC, which could lead to an improvement of the total score.\nAn extended version of this algorithm, named UT-IGSP, has been presented in [103] in order to tackle the unknown target scenario. The main contribution of this work is linked to the new definition of I-covered edges in light of partially unknown intervention targets. IGSP (and later UT-IGSP) has been compared to GIES under different conditions, showing that the former achieves better performances than the latter when the dimensionality of the problem is limited (i.e. lower than 10 vertices). This limit is coherent with others traversal-based approaches: although GIES is not consistent in general, its score function is more efficient in pooling together the various interventional datasets when it comes to high-dimensionality spaces.\nJoint Causal Inference with FCI (FCI-JCI) Another formal approach, similar to the one introduced in the previous subsection, is presented under the name of Joint Causal Inference (JCI) [61]. This method aims to pool together multiple observations collected during different experiments (i.e. contexts), hence, the name joint causal inference.\nIn this framework, the set of observed variables is split into two disjoint sets: system variables X and context variables C. While the former set contains the variables that have been observed during an experiment, the latter set describes under which conditions such system has been observed, following the classical distinction between endogenous and exogenous variables respectively.\nContext variables can be used as intervention variables, even if this might not always be the case: here the term is related to the notion of change of context, which is a broader scope than simply intervene on the system. Doing so, it is possible to obtain a more flexible representation of the system of interest, where external forces are represented as internal characteristics of a meta-system. This approach relaxes the boundary between experiments performed under different conditions, allowing researchers to join data with a coherent causal description.\nBefore diving into JCI itself, there are a couple of assumptions that can be (optionally) taken into consideration to understand the purpose of the entire context framework: 0. The underlying mechanism that generates the data is represented by a SCM M , where the observed variables are split in system variables and context variables.\n1. No system variable is cause of any context variable, i.e. exogeneity assumption." }, { "figure_ref": [], "heading": "2.", "publication_ref": [ "b39" ], "table_ref": [], "text": "No system variable is confounded with any context variable, i.e. randomized context.\n3. Let G C be the context graph induced by the context variables C over the causal graph G associated with the SCM M . 
For each pair of context variables (C_i, C_j) in the context graph the following holds true:
(C_i ↔ C_j) ∈ G_C ∧ (C_i → C_j) ∉ G_C (5.9)
that is, no context variable is a direct cause of another context variable, but there is a hidden confounder between each pair of context variables, i.e. generic context.
While assumptions (0), (1) and (2) are usually considered mild in the interventional setting, assumption (3) might need to be clarified further: if the goal of the causal discovery is to disentangle the causal relationships using the context variables as guidance, rather than focusing on the connections of the context graph, then assumption (3) can be enforced if (1) and (2) were also assumed. This approach allows the algorithm to restrict the search space to the graphs that satisfy this last assumption, speeding up the learning process.
The generic JCI procedure can be adapted to any observational causal discovery algorithm by following four steps: (i) add the context variables, (ii) pool the data together by setting the values of the context variables, (iii) address faithfulness violations between contexts, if any, (iv) execute the selected observational learning algorithm. The authors provide reference adaptations for multiple algorithms, such as FCI.
The FCI-JCI variant is particularly of interest, given that it inherits the strength points of FCI in the causally insufficient setting. Various combinations of the three assumptions were tested, showing that FCI123 (i.e. all three assumptions made) is less accurate in general, but significantly faster than the other solutions, allowing its application in more complex scenarios with a considerable number of variables.
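The pooling step of the JCI recipe (steps i–ii above) is easy to visualise. The sketch below uses a hypothetical two-variable system observed under two regimes: each regime receives a context indicator C before the datasets are concatenated, and any observational discovery routine could then be run on the augmented data (the final call is left abstract and is not a real API).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

def run_experiment(soft_shift=0.0, n=1000):
    """Hypothetical system X -> Y observed under a given experimental context."""
    x = rng.normal(size=n)
    y = 2.0 * x + soft_shift + rng.normal(size=n)
    return pd.DataFrame({"X": x, "Y": y})

# One observational regime and one regime with a soft intervention on Y.
contexts = {0: run_experiment(soft_shift=0.0), 1: run_experiment(soft_shift=3.0)}

# JCI pooling: add the context variable C and concatenate the datasets,
# so that a purely observational algorithm can be run on the joint data.
pooled = pd.concat(
    [df.assign(C=c) for c, df in contexts.items()], ignore_index=True
)
print(pooled.groupby("C").mean().round(2))

# e.g. graph = some_observational_discovery_algorithm(pooled)  # hypothetical call
```

The group means show that C behaves like an exogenous intervention variable: it is associated with Y but not with X, which is exactly the kind of asymmetry a discovery algorithm can exploit once the data are pooled.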
Unknown Intervention Targets using Ψ-FCI Authors in [40] adapted both the PC and FCI algorithms to the causal discovery setting under imperfect interventions with unknown intervention targets. The fundamental contribution of this work is the extension of the I-MEC to a more general Ψ-MEC that is capable of representing intervention graphs with unknown intervention targets.
The key idea is that a pair of intervention targets I, J ∈ I can be used to identify a unique interventional mechanism that encompasses both targets. Let G be a causal graph and I an intervention family. The induced set of interventional probability distributions P^(I) = {P^(I_0), P^(I_1), . . . , P^(I_n)} satisfies the Ψ-Markov property if the following holds true for any disjoint subsets of variables Y, Z and W:
1. Insertion and deletion of observations:
P^(I)(Y | Z, W) = P^(I)(Y | W) (5.10)
if (Y ⊥⊥ Z | W) holds true in G for all I ∈ I," }, { "figure_ref": [], "heading": "Invariance of interventions:", "publication_ref": [ "b10", "b24", "b62", "b83", "b83", "b79", "b12", "b116" ], "table_ref": [], "text": "P^(I)(Y | W) = P^(J)(Y | W) (5.11)
if Y ⊥⊥ K | (W \ W_K) holds true in G_{W_K R(W)} for all I, J ∈ I, where K is the symmetric difference of I and J, W_K = W ∩ K, R = K \ W_K and finally R(W) = R \ An(W) w.r.t. G.
While Equation 5.10 is essentially derived from observational Markov equivalence, Equation 5.11 is related to the distributional invariances across pairs of intervention targets w.r.t. the associated intervention graph. Indeed, if I and J are the true intervention targets for P^(I) and P^(J), they must satisfy the invariance of the interventional distributions when separation holds in the corresponding intervention graph. Moreover, the Ψ-Markov property does not require any assumption about the experimental setting in which such interventions are performed. Specifically, it could happen that a subset of experiments were not carried out in exactly the same way, e.g. not in a controlled environment. Therefore, even if intervention targets were known a priori, Ψ-Markov would still be more general than the I-Markov property.
The authors then recast the augmented graph proposed by [11,25], adding a set of utility vertices that are analogous to the context vertices proposed by [61]. Therefore, the output of the former can be compared to the latter using the related augmented graph, showing that the accuracy of the edge orientations recovered by their Ψ-FCI variant is superior to the one recovered by FCI-JCI.
backShift Continuing in the unknown targets setting, the backShift [82] algorithm is a causal discovery approach that recovers linear (possibly cyclic) models under causal insufficiency. It focuses on shift interventions with unknown targets, a subset of imperfect interventions where the effect of the perturbation yields a fixed shift of the intervened variable. Both the targets and the shift values can be estimated from the data.
The key idea of this technique is to represent the target SCM M as:
(I - B)x = c + e (5.12)
where x is a random vector, B is the adjacency matrix of the causal graph associated with M, e is the noise vector and c is the random shift vector that models the shift intervention on the system. Then, a joint matrix diagonalization is applied to the differences between the covariance matrices ΔΣ of each experiment I ∈ I:
D̂ = argmin_{D' ∈ D} ∑_{I ∈ I} L(D' ΔΣ^(I) D'^T) (5.13)
where D = I - B, L is the sum-of-squares loss function and I the family of targets. This approach assumes that the data represent observations at equilibrium, that the matrix D is invertible and that the cycle product [82] is strictly smaller than one. Moreover, noises and interventions are assumed to be uncorrelated across variables and across experiments.
The authors compare their solution to the observational LiNG alternative, taking advantage of the interventional asymmetries arising from the additional information contained in the data. The results show that backShift is capable of dealing with both interventions and latent variables under mild assumptions, outperforming LiNG in both the observational and the interventional setting. Moreover, the computational complexity is O(|I| · n² · m), with n representing the number of variables and m the sample size, which allows its application in high-dimensional settings.
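The model in Eq. 5.12 can be simulated directly, which helps to see where the interventional signal exploited by backShift comes from. The sketch below uses a hypothetical two-variable acyclic system (so that I − B is trivially invertible) and only shows that the difference of covariance matrices between a shifted environment and the observational one is diagonalized by the true D = I − B; the joint diagonalization of Eq. 5.13 itself is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(4)
B = np.array([[0.0, 0.0],
              [0.8, 0.0]])            # B[i, j] = effect of X_j on X_i: one edge X0 -> X1
A = np.linalg.inv(np.eye(2) - B)      # x = (I - B)^{-1} (c + e), as in Eq. 5.12

def sample_environment(shift_scale, n=50_000):
    e = rng.normal(size=(n, 2))                 # independent noise
    c = shift_scale * rng.normal(size=(n, 2))   # random shift intervention
    return (c + e) @ A.T

obs = sample_environment(shift_scale=0.0)       # observational regime
shifted = sample_environment(shift_scale=1.0)   # shift regime with unknown target

# The difference of covariance matrices carries the interventional signal
# that backShift exploits through joint diagonalization (Eq. 5.13).
delta = np.cov(shifted, rowvar=False) - np.cov(obs, rowvar=False)
D = np.eye(2) - B
print(np.round(delta, 2))
print(np.round(D @ delta @ D.T, 2))   # approximately diagonal for the true B
```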
bcause+ An extension of the bcause algorithm to interventional data, called bcause+, is proposed in [78]. When multiple experimental datasets are available, the core-based estimation of the lower bound of each branch of the exact search can be improved by taking into account the variables affected by the intervention.
In particular, the graphical separation checks performed by the observational variant (using either d-separation or σ-separation) are extended to consider the constraints induced by an intervention target. By assuming the absence of edges oriented into vertices that are part of an intervention target, the search procedure can avoid checking for separation, e.g. in case of perfect interventions. In this sense, intervention targets can be used to derive linear programming constraints by considering the subsets of intervened variables that affect the separation statements.
The improved version of the previous algorithm is also evaluated on nonlinear cyclic causal models, showing its capability to deal with non-linear relationships. However, even with the added constraints, the exponentially increasing execution time prohibits its application in high-dimensional contexts, which is a well-known limitation for exact search methods.
Differentiable Causal Discovery with Interventions (DCDI) Under regularity assumptions, the authors in [13] propose a general differentiable causal discovery algorithm that is capable of learning causal graphs from interventional data with both perfect and imperfect intervention targets, even in the case of unknown interventions.
The key idea of this algorithm is to maximize a score function defined as follows:
S_I(G) = sup_φ ∑_{I ∈ I} E_X [log f^(I)(X; B, R, φ)] − λ|G| (5.14)
where φ are the weights of the estimator used to maximize the score function (i.e. neural networks in this case), X follows the interventional distribution P^(I), f^(I) is the interventional density function, B is the binary adjacency matrix of G, and R is the binary interventional matrix (i.e. R_ij = 1 if X_i ∈ I_j). Essentially, the score function is built upon the conditional interventional distribution to recover the invariant edges across interventions. In fact, vertices that are not in any intervention target are characterized by a conditional probability distribution which is invariant across interventions, as for conservative families of interventions. Relying on conditional invariance, the causal graph Ĝ = argmax_{G ∈ G} S_I(G) is I-equivalent (Subsection 5.8) to the true graph G*, for λ > 0 small enough. In case of unknown intervention targets, an additional −λ_R |I| regularization term is added to the score function.
The DCDI algorithm has been tested against IGSP and GIES with known interventions and against JCI-PC and UT-IGSP for unknown interventions, showing marginal advantages in terms of structural recovery. As for other continuous optimization methods [114], its major strength is scalability: it takes O(n^3) to compute the matrix exponential during each training step, making it the only causal discovery algorithm that supports non-linear relationships in the interventional setting in high-dimensional settings."
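DCDI, like other continuous-optimization approaches [114], enforces acyclicity through a differentiable constraint whose evaluation requires a matrix exponential. The snippet below sketches one common formulation of such a constraint (the NOTEARS-style trace penalty); it is an illustration of the kind of term meant here, not DCDI's exact implementation, and the example matrices are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity_penalty(W):
    """h(W) = tr(exp(W ∘ W)) - d: zero exactly when the weighted graph W is a DAG."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

dag = np.array([[0.0, 1.2, 0.0],
                [0.0, 0.0, -0.7],
                [0.0, 0.0, 0.0]])      # X0 -> X1 -> X2, acyclic
cyclic = dag.copy()
cyclic[2, 0] = 0.5                     # add X2 -> X0, closing a cycle

print(round(acyclicity_penalty(dag), 6))     # ~0.0
print(round(acyclicity_penalty(cyclic), 6))  # > 0, the penalty activates
```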
}, { "figure_ref": [], "heading": "Evaluation and Tuning", "publication_ref": [], "table_ref": [], "text": "This section tackles the evaluation and tuning steps typical of any practical application. A collection of reference data sets is listed, both real-world and synthetically generated ones, serving as benchmarking resources for discovery methods. In order to evaluate the different solutions resulting from a set of configurations (i.e. hyperparameters), we report comparison metrics found in the specialized literature, both in terms of structure and of entailed causal statements. Finally, tuning strategies and software packages are explored as support for newly developed techniques." }, { "figure_ref": [], "heading": "Evaluation Datasets", "publication_ref": [ "b63", "b106", "b1", "b85", "b44", "b29", "b20" ], "table_ref": [], "text": "Cause-Effect Pairs (Tuebingen) Ever-growing data set [62,105] designed to benchmark discovery algorithms against bi-variate settings with known ground truth. The latest version reported by the change-log (December 20, 2017) includes 108 pairs. Each pair is composed of a data file with two columns (cause and effect respectively), a short description of the data sampling procedure, and a 2D scatter plot.
Robotic Manipulation (CausalWorld) Simulator [2] for causal structure and transfer learning in a robotic manipulation environment. The environment is a simulation of an open-source robotic platform capable of constructing 3D shapes from a given set of blocks.
Single-Cell Flow Cytometry (Sachs) Flow cytometry measurements [84] of 11 proteins and phospholipids. The data set is split into different experiments, with nine stimulatory or inhibitory interventions. The study compares the newly learned model against a ground truth obtained from the reference literature on signaling networks with intervention points.
Single-Cell RNA-Sequencing (Klein) Single-cell RNA-sequencing (scRNAseq) data set [44] of ∼3000 mouse embryonic stem cells after leukemia inhibitory factor (LIF) withdrawal. The ground truth model is obtained by querying the TRRUST database [30] for the related causal relationships.
Single-Cell Gene Expression (Perturb-Seq) Measurements of gene expression [21] composed of 992 observational and 13,435 interventional observations from eight close-to-perfect interventions, each corresponding to a gene deletion using the CRISPR/Cas9 technique applied to bone marrow-derived dendritic cells." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b63", "b1", "b85", "b44", "b20", "b112", "b93", "b93", "b55" ], "table_ref": [], "text": "Type URL
Tuebingen [62] Real-world Here
CausalWorld [2] Synthetic Here
Sachs [84] Real-world Here
Klein [44] Real-world Here
Perturb-Seq [21] Real-world Here
SynTReN [110] Synthetic Here
DREAM4 [92] Synthetic Here
Synthetic mRNA Expression (DREAM4) The DREAM4 challenge [92] provides five datasets simulated from five biologically plausible gene regulatory networks with 10 genes [54]. Each dataset is composed of both observational and interventional data sampled by applying small random noise perturbations, single-gene knockdowns and single-gene knockouts, resulting in time series with unknown interventions." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "In the context of causal discovery, the definitions of true positive (TP), true negative (TN), false positive (FP) and false negative (FN) have the same interpretation as for a binary classifier that tries to predict the presence and orientation of edges." }, { "figure_ref": [], "heading": "Adjacency Precision (AP) & Recall (AR)", "publication_ref": [ "b86", "b5", "b86", "b111", "b74", "b97" ], "table_ref": [], "text": "A first set of evaluation metrics for graphical models is made of the adjacency precision (AP) and adjacency recall (AR) [85]. These metrics are computed as the ratio between the number of correctly predicted adjacencies and the total number of predicted adjacencies for AP, or the total number of true adjacencies for AR. Formally, once the confusion matrix associated with the presence of edges is computed, the two metrics are defined as follows:
AP = TP / (TP + FP) (6.1)
AR = TP / (TP + FN) (6.2)
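Equations 6.1–6.2 translate directly into a few lines of code. The helper below compares the undirected skeletons of a predicted and a true graph given as hypothetical edge lists; it is only meant to make the counting of TP, FP and FN explicit.

```python
def skeleton(edges):
    """Unordered vertex pairs that are adjacent in a graph."""
    return {frozenset(e) for e in edges}

def adjacency_precision_recall(pred_edges, true_edges):
    pred, true = skeleton(pred_edges), skeleton(true_edges)
    tp = len(pred & true)                  # correctly predicted adjacencies
    fp = len(pred - true)                  # predicted but not in the true graph
    fn = len(true - pred)                  # missed adjacencies
    ap = tp / (tp + fp) if pred else 0.0   # Eq. 6.1
    ar = tp / (tp + fn) if true else 0.0   # Eq. 6.2
    return ap, ar

true_g = [("X", "Y"), ("Y", "Z"), ("X", "W")]
pred_g = [("Y", "X"), ("Z", "Y"), ("Z", "W")]   # one wrong adjacency, one missed
print(adjacency_precision_recall(pred_g, true_g))  # (0.666..., 0.666...)
```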
Arrowheads Precision (AHP) & Recall (AHR) While metrics related to adjacency can deliver insights on the quality of the general structure (i.e. the skeleton), arrowhead metrics [6,85] focus on the performance of the inferred orientations. This class of metrics is particularly useful when there are multiple arrowhead marks that encode different causal statements, such as in PAGs.
Here, classical adjacency metrics fail to account for invariant marks that might be interpreted as a head or a tail, overestimating the algorithm performance.
As for the adjacency metrics, arrowhead precision (AHP) and recall (AHR) are defined as the ratio between correctly predicted arrowheads and the total number of predicted arrowheads, and between correctly predicted arrowheads and the total number of true arrowheads:
AHP = TP / (TP + FP) (6.3)
AHR = TP / (TP + FN) (6.4)
where TP, FP and FN refer to the confusion matrix entries computed over the predicted arrowheads, not only over the presence/absence of an edge.
Structural Hamming Distance (SHD) It measures the differences between two graphical models in terms of their edges. Formally, let G and H be two graphs and E(G, H) the symmetric difference between the edge sets E(G) and E(H); the SHD [109] counts the number of operations necessary to transform G into H:
SHD(G, H) = ∑_{(X,Y) ∈ V², X<Y} [ 1 if (X, Y) ∈ E(G, H); 1 if (Y, X) ∈ E(H, G); 0 otherwise ] (6.5)
where the allowed operations consist in the addition, deletion and reversal of an edge.
Structural Intervention Distance (SID) It is a pre-metric defined over the interventional distributions. Formally, the SID [73] counts the number of wrongly inferred interventional distributions. This measure relies on the notion of adjustment set [96] and it is strongly related to the SHD." }, { "figure_ref": [], "heading": "Parameter Tuning", "publication_ref": [], "table_ref": [], "text": "Strategies to perform parameter tuning are rarely found in surveys, even though causal discovery algorithms may have multiple parameters that regulate the search procedure. Here, we report three general and flexible practices described in the specialized literature that can be applied to any technique described so far." }, { "figure_ref": [], "heading": "Minimizing Model Complexity (BIC & AIC)", "publication_ref": [ "b88", "b2" ], "table_ref": [], "text": "A first approach to parameter tuning is related to model complexity. The goal is to find the parameter configuration that minimizes the complexity of the associated causal graph. As a measure of complexity one can rely on the Bayesian Information Criterion (BIC) [87] or the Akaike Information Criterion (AIC) [3]. This tuning strategy is particularly effective when coupled with score-based approaches that are able to exploit the same function, allowing the intermediate scores to be reused for a faster evaluation. The most general form of model complexity minimization is implemented as a grid search over all parameter configurations within the given ranges." }, { "figure_ref": [], "heading": "Stability Approach to Regularization Selection (StARS)", "publication_ref": [ "b53", "b9" ], "table_ref": [], "text": "The StARS [52] approach is based on selecting the parameter configuration that minimizes the graph instability when small perturbations are applied to the data. The instability of an edge is the probability of presence of said edge when the causal graph is learned from a subsample of the data (without replacement). Hence, the graph instability of a given parameter configuration h is the average of the edge instabilities computed w.r.t. h. In order to avoid configurations that lead to trivial graphs, e.g. the empty graph or the complete graph, the authors introduce a β parameter that acts as a threshold for the acceptable level of instability. In the end, this method measures the sensitivity of a specific parameter configuration h as a function of the underlying data distribution.
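The subsampling loop behind StARS can be sketched as follows. Here `estimate_graph` is a placeholder for any discovery routine that returns a binary adjacency matrix for a given configuration `h` (a hypothetical callback, not a specific library function), and the per-edge instability is computed as 2ξ(1 − ξ), with ξ the empirical frequency of the edge across subsamples, which is one common formalization of the idea described above.

```python
import numpy as np

def graph_instability(data, estimate_graph, h, n_subsamples=20, frac=0.8, seed=0):
    """Average edge instability of configuration `h` under subsampling (StARS-style).

    `estimate_graph(data, h)` stands for any causal discovery routine
    returning a binary adjacency matrix.
    """
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    m = int(frac * n)
    freqs = None
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=m, replace=False)   # subsample without replacement
        adj = np.asarray(estimate_graph(data[idx], h), dtype=float)
        freqs = adj if freqs is None else freqs + adj
    xi = freqs / n_subsamples                        # empirical edge frequencies
    instab = 2 * xi * (1 - xi)                       # per-edge instability
    return instab.mean()

# Selection rule: among the candidate configurations, keep the least regularized
# one whose instability stays below the chosen threshold beta (e.g. beta = 0.05).
```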
Out-of-sample Causal Tuning (OCT & OCTs) While the previous approaches focused on metrics related to the causal structure alone, the authors in [10] propose to employ the resulting model for its prediction capabilities, reducing the problem to the evaluation of a predictor. This approach works in an out-of-sample fashion, hence the name Out-of-sample Causal Tuning (OCT). The main advantages of this method are (i) the lack of parametric assumptions about the distribution of the data and (ii) the generalization to cases where the BIC and AIC scores are not defined, i.e. discrete models with hidden variables." }, { "figure_ref": [], "heading": "Software Packages", "publication_ref": [ "b42", "b111", "b13", "b89", "b109", "b91", "b43", "b18", "b99", "b18", "b99", "b66", "b77", "b17", "b38" ], "table_ref": [], "text": "Stable and reliable implementations of discovery methods are fundamental to achieve reproducibility of the experimental results. In the following paragraphs, a list of notable tools is presented.
Causal Discovery Toolbox (CDT) The Causal Discovery Toolbox [42] is a Python front-end that acts as a bridge between different subpackages, pooling together multiple discovery algorithms. For example, one may find constraint-based algorithms such as PC and Max-Min Parents & Children (MMPC) [109], score-based algorithms such as GES and variants (GIES), and non-linear approaches such as LiNGAM, Causal Additive Models (CAM) [14], and others.
bnlearn The bnlearn [88] package is an R package developed for Bayesian inference and structural learning. While both PC and MMPC are implemented, algorithms such as Incremental Association Markov Blanket (IAMB) [108] and its variants are present too. Moreover, the underlying implementation is well suited for large-scale applications due to the optimized support for parallel computing [90].
pcalg The pcalg [43] package is an R utility for causal discovery and causal inference using graphical models. The algorithms provided here are PC and variants (CPC, PC Select), FCI and variants (RFCI [19], Anytime FCI [98], Adaptive Anytime FCI [19,98], FCI-JCI), GES and variants (AGES [65], ARGES, GIES) and LiNGAM. Given the wide variety of FCI-based algorithms and the integrated tools for causal inference, this package is particularly well suited for causally insufficient settings.
TETRAD While the previous packages were intended for command line usage, TETRAD [76] is a causal discovery package developed in Java with a graphical user interface. It follows a pipeline paradigm, where the user can drag & drop boxes from the side bar and connect them together to form data pipelines. The user can choose from a wide range of options, such as PC (PCStable, CPC & CPCStable [18], PCMax), FCI (RFCI, RFCI-BSC [39], GFCI), GES (FGES, FGES-MB, IMaGES), LiNGAM, and others. Given the simplicity of its interface, it is well suited for researchers with limited programming experience.
7 Practical Applications" }, { "figure_ref": [], "heading": "Causal Discovery in Economics", "publication_ref": [ "b0" ], "table_ref": [], "text": "Emissions, Production and Energy Use The authors in [1] explore the interactions between growth in CO2 emissions, economic production and energy use, both at the global and multi-regional levels over the period 1990-2014. In order to recover the causal relationships between variables, a modified version of the PC algorithm for time series is used.
The output of the discovery step showed that CO 2 emissions, energy and economic activity are linked by a set of non-linear dependencies. At the global level, this graph suggests that a too rapid transition to net-zero emissions in the energy sector may hinder the global economic growth. When the regional level is taken into account, it is shown that regions are fully integrated into the system, which argues for coordinated policies across regions." }, { "figure_ref": [], "heading": "Causal Discovery in Medicine", "publication_ref": [ "b94", "b60" ], "table_ref": [], "text": "Alzheimer's Pathophysiology Researchers in [93] employed data made available by the Alzheimer's Disease Neuroimaging Initiative (ADNI) coupled with biological markers and clinical assessment to study the biological mechanism behind the Alzheimer's Disease. Two causal discovery algorithms (FCI and FGES) were compared against the gold standard graph retrieved from literature.\nThe methods were executed both with and without trivial background knowledge, e.g. patient's age is not affected by any biomarker. A significant improvement was observed with the addition of the knowledge base. Finally, longitudinal data were included, discovering more edges and removing the incorrect ones. The performance of the constraint-based was lower and less stable across the bootstrap samples than the score-based one.\nUnmet Treatments in Schizophrenia Authors in [59] selected the GFCI algorithm to identify the causes of functional outcomes of patients affected by schizophrenia during the critical window for early intervention. The algorithm was applied to the Recovery After an Initial Schizophrenia Episode Early Treatment Program (RAISE-ETP) trial at two time-points, i.e. baseline and after 6-months. Social and occupational functioning metrics were derived from the Quality of Life Scale (QLS). The retrieved causal graph was used to build a SCM in order to quantify the magnitude of the effects.\nThe estimated effects shed light over the interaction between both social and occupational functioning with the socio-affective capacity, which in turn affects the motivation of the subject. Moreover, an extended analysis of time dependencies revealed several causal cycles over the 6-months time-frame." }, { "figure_ref": [], "heading": "Causal Discovery in Psychology", "publication_ref": [ "b6" ], "table_ref": [], "text": "Alcohol Use and Anxiety Disorder Psychopathology researchers in [7] used graphical modeling algorithms to identify causal relationships within and between manifestations of psychiatric disorders. In this context, such methods are employed to identify symptoms that are part of a causal chain of \"mediators\". The main target of the study was to test whether drinking motivated by the goal of reducing negative affect (i.e. drinking to cope, DTC) served as a mediator in comorbid alcohol use and anxiety disorder in a causally insufficient setting.\nThe resulting graph showed that the most important causal influence of drinking was drinking craving, which was in turn influenced by DTC. However, there was still a degree of ambiguity in the direction of depression's associations with social anxiety and stress, suggesting the possible presence of latent variables." 
}, { "figure_ref": [], "heading": "Conclusions and Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A Brief Summary", "publication_ref": [], "table_ref": [], "text": "Causal inference depends heavily on the construction of a reference model that crystallizes the acquired knowledge. To meet such requirement, causal discovery provides a set of methods that are able to recover a graphical description of the underlying mechanism, exploiting both collected data and prior knowledge. In this work, we presented a list of algorithms, evaluation criteria and software tools, trying to cover a wide range of theoretical and practical scenarios in a coherent and unified manner. Moreover, we compared these resources against challenging problems, such as the presence of unobserved variables, cyclical dependencies, non-linear relationships and unknown interventions, highlighting the strengths and weaknesses of each solution. Finally, we reported a set of parameters tuning strategies and publicly available data sets to explore properly the described techniques and to test new ones." }, { "figure_ref": [], "heading": "Future Directions", "publication_ref": [], "table_ref": [], "text": "In terms of opportunities for future extensions, in this contribution we did not explore the implications of applying such method to time series, which would add an additional layer of complexity. Indeed, the representation of the causal dependencies in time are different from the one expressed in a static scenario and deserves a separated discussion on its own, especially when combined with the other topics introduced during the discussion." }, { "figure_ref": [], "heading": "Funding", "publication_ref": [], "table_ref": [], "text": "Alessio Zanga was granted a Ph.D. scholarship by F. Hoffmann-La Roche Ltd." } ]
Understanding the laws that govern a phenomenon is the core of scientific progress. This is especially true when the goal is to model the interplay between different aspects in a causal fashion. Indeed, causal inference itself is specifically designed to quantify the underlying relationships that connect a cause to its effect. Causal discovery is a branch of the broader field of causality in which causal graphs are recovered from data (whenever possible), enabling the identification and estimation of causal effects. In this paper, we explore recent advancements in a unified manner, provide a consistent overview of existing algorithms developed under different settings, report useful tools and data, and present real-world applications to understand why and how these methods can be fruitfully exploited.
A Survey on Causal Discovery: Theory and Practice
[ { "figure_caption": "Figure 2 . 1 :21Figure 2.1: The causal graph G (a) of the related SCM M (b). In (a) X is a direct cause of Y and an indirect cause of Z, while Y is an effect, a direct effect, of X. An example of associated SCM is reported in (b), where the functional set F follows the causal edge assumption.", "figure_data": "", "figure_id": "fig_0", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "Figure 2 . 2 :22Figure 2.2: In this figure, A and B are d-separated even without conditioning on C, since they form a collider. The same does not hold for A and D, given that they form a chain by means of C, and therefore conditioning (i.e. setting its value) on the middle vertex C d-separates them.", "figure_data": "", "figure_id": "fig_1", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "Figure 2 . 3 :23Figure 2.3:A DAG on the left and its CPDAG on the right. As we can see, both graphs have the same underlying structure (i.e. skeleton), but differ from the orientation of some of the edges. Specifically, the edges connecting A to B and C can be rearranged to form different chains or a fork. This is not true for the others edges in the CPDAG, since they are compelled. In fact, modifying the orientation of one of them would either remove the v-structure formed by B → D ← C or introduce a new one.", "figure_data": "", "figure_id": "fig_2", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 2 . 4 :24Figure 2.4: A mixed graph on the left and one of its' possible PAGs on the right.", "figure_data": "", "figure_id": "fig_3", "figure_label": "24", "figure_type": "figure" }, { "figure_caption": "Definition 3 . 5 (35Equivalent Score). A scoring criterion S(G, D) is score equivalent if S(G, D) = S(H, D), for each pair of graphs G and H in the same equivalence class.", "figure_data": "", "figure_id": "fig_4", "figure_label": "35", "figure_type": "figure" }, { "figure_caption": "Definition 4 . 242(σ-separation). Let G be a DMG, π be a path on G and Z a subset of V. The path π is blocked[24,25] by Z if and only if π contains: • a collider A * → B ← * C where B ∈ An(Z), or • a non-collider A ← B * - * C (or A * - * B → C) where B ∈ An(Z) and A (respectively C) is part of SCC(B) (Equation 4.1).", "figure_data": "", "figure_id": "fig_5", "figure_label": "42", "figure_type": "figure" }, { "figure_caption": "7 ) 5 . 8 (758Definition Interventional Equivalence). Let G and H be two causal graphs and I be an intervention family. G and H are interventionally Markov equivalent w.r.t. the family I (i.e. I-equivalent) if the associated intervention graphs G (I) and H (I) have the same skeleton and the same v-structures for each intervention target of the family:", "figure_data": "", "figure_id": "fig_6", "figure_label": "758", "figure_type": "figure" }, { "figure_caption": "Layers of causation with associated questions, practical examples and methods.", "figure_data": "LayerQuestionMethodObservational How would seeing X change myUn/Supervisedbelief in Y ?LearningInterventional What happens to Y if I do X?ReinforcementLearningCounterfactual What would have happened to YStructuralif I had done X instead of X?Causal Model", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Static datasets by type and availability.", "figure_data": "Synthetic Gene Expression (SynTReN) Network generator [110] thatcreates synthetic transcriptional regulatory networks. 
The models are pairedwith kinetics simulations in order to sample gene expression data that approxi-mate observed experimental data.", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" } ]
Alessio Zanga; Fabio Stella
[ { "authors": "Peter Martey Addo; Christelle Manibialoa; Florent Mcisaac", "journal": "Energy Reports", "ref_id": "b0", "title": "Exploring nonlinearity on the co2 emissions, economic production and energy use nexus: A causal discovery approach", "year": "2021" }, { "authors": "Ossama Ahmed; Frederik Träuble; Anirudh Goyal; Alexander Neitz; Yoshua Bengio; Bernhard Schölkopf; Manuel Wüthrich; Stefan Bauer", "journal": "", "ref_id": "b1", "title": "Causalworld: A robotic manipulation benchmark for causal structure and transfer learning", "year": "2020" }, { "authors": "H Akaike", "journal": "IEEE Transactions on Automatic Control", "ref_id": "b2", "title": "A new look at the statistical model identification", "year": "1974" }, { "authors": "Jose A Juan I Alonso-Barba; Jose M Gámez; Puerta", "journal": "International journal of approximate reasoning", "ref_id": "b3", "title": "Scaling up the greedy equivalence search algorithm by constraining the search space of equivalence classes", "year": "2013" }, { "authors": "David Steen A Andersson; Madigan; Michael D Perlman", "journal": "The Annals of Statistics", "ref_id": "b4", "title": "A characterization of markov equivalence classes for acyclic digraphs", "year": "1997" }, { "authors": "Bryan Andrews; Joseph Ramsey; Gregory F Cooper", "journal": "PMLR", "ref_id": "b5", "title": "Learning highdimensional directed acyclic graphs with mixed data-types", "year": "2019" }, { "authors": "Justin J Anker; Erich Kummerfeld; Alexander Rix; Scott J Burwell; Matt G Kushner", "journal": "Alcoholism: Clinical and Experimental Research", "ref_id": "b6", "title": "Causal network modeling of the determinants of drinking behavior in comorbid alcohol use and anxiety disorder", "year": "2019" }, { "authors": "Elias Bareinboim; Juan David Correa; Duligur Ibeling; Thomas F Icard", "journal": "", "ref_id": "b7", "title": "On pearl's hierarchy and the foundations of causal inference", "year": "2021" }, { "authors": "D William; Berry", "journal": "Sage", "ref_id": "b8", "title": "Nonrecursive causal models", "year": "1984" }, { "authors": "I Konstantina V Biza; Sofia Tsamardinos; Triantafillou", "journal": "", "ref_id": "b9", "title": "Tuning causal discovery algorithms", "year": "2020" }, { "authors": "Stephan Bongers; Patrick Forré; Jonas Peters; Joris M Mooij", "journal": "", "ref_id": "b10", "title": "Foundations of structural causal models with cycles and latent variables", "year": "2021" }, { "authors": "Stephan Bongers; Joris M Mooij", "journal": "", "ref_id": "b11", "title": "From random differential equations to structural causal models: the stochastic case", "year": "2018" }, { "authors": "Philippe Brouillard; Sébastien Lachapelle; Alexandre Lacoste; Simon Lacoste-Julien; Alexandre Drouin", "journal": "", "ref_id": "b12", "title": "Differentiable causal discovery from interventional data", "year": "2020" }, { "authors": "Peter Bühlmann; Jonas Peters; Jan Ernest", "journal": "The Annals of Statistics", "ref_id": "b13", "title": "Cam: Causal additive models, high-dimensional order search and penalized regression", "year": "2014" }, { "authors": "Ruichu Cai; Jie Qiao; Kun Zhang; Zhenjie Zhang; Zhifeng Hao", "journal": "Advances in neural information processing systems", "ref_id": "b14", "title": "Causal discovery from discrete data using hidden compact representation", "year": "2018" }, { "authors": "Enrique Castillo; Jose M Gutierrez; Ali S Hadi", "journal": "Springer Science & Business Media", "ref_id": "b15", "title": "Expert systems and probabilistic network 
models", "year": "2012" }, { "authors": "David Maxwell; Chickering ", "journal": "Journal of machine learning research", "ref_id": "b16", "title": "Optimal structure identification with greedy search", "year": "2002-11" }, { "authors": "Diego Colombo; Marloes H Maathuis", "journal": "", "ref_id": "b17", "title": "Order-independent constraint-based causal structure learning", "year": "2013" }, { "authors": "Diego Colombo; H Marloes; Markus Maathuis; Thomas S Kalisch; Richardson", "journal": "The Annals of Statistics", "ref_id": "b18", "title": "Learning high-dimensional directed acyclic graphs with latent and selection variables", "year": "2012-02" }, { "authors": "Pierre Comon", "journal": "Signal processing", "ref_id": "b19", "title": "Independent component analysis, a new concept?", "year": "1994" }, { "authors": "Atray Dixit; Oren Parnas; Biyu Li; Jenny Chen; Charles P Fulco; Livnat Jerby-Arnon; Nemanja D Marjanovic; Danielle Dionne; Tyler Burks; Raktima Raychowdhury", "journal": "cell", "ref_id": "b20", "title": "Perturb-seq: dissecting molecular circuits with scalable singlecell rna profiling of pooled genetic screens", "year": "2016" }, { "authors": "Mathias Drton; Thomas S Richardson", "journal": "", "ref_id": "b21", "title": "Iterative conditional fitting for gaussian ancestral graph models", "year": "2012" }, { "authors": "Frederick Eberhardt; Clark Glymour; Richard Scheines", "journal": "", "ref_id": "b22", "title": "On the number of experiments sufficient and in the worst case necessary to identify all causal relations among n variables", "year": "2012" }, { "authors": "Patrick Forré; M Joris; Mooij", "journal": "", "ref_id": "b23", "title": "Constraint-based causal discovery for nonlinear structural causal models with cycles and latent confounders", "year": "2018" }, { "authors": "Patrick Forré; Joris M Mooij", "journal": "", "ref_id": "b24", "title": "Markov properties for graphical models with cycles and latent variables", "year": "2017" }, { "authors": "Dan Geiger; David Heckerman", "journal": "Elsevier", "ref_id": "b25", "title": "Learning gaussian networks", "year": "1994" }, { "authors": "Clark Glymour; Kun Zhang; Peter Spirtes", "journal": "Frontiers in genetics", "ref_id": "b26", "title": "Review of causal discovery methods based on graphical models", "year": "2019" }, { "authors": "Madelyn Glymour; Judea Pearl; Nicholas P Jewell", "journal": "John Wiley & Sons", "ref_id": "b27", "title": "Causal inference in statistics: A primer", "year": "2016" }, { "authors": "Ruocheng Guo; Lu Cheng; Jundong Li; P Richard Hahn; Huan Liu", "journal": "ACM Computing Surveys", "ref_id": "b28", "title": "A survey of learning causality with data: Problems and methods", "year": "2021-07" }, { "authors": "Heonjong Han; Jae-Won Cho; Sangyoung Lee; Ayoung Yun; Hyojin Kim; Dasom Bae; Sunmo Yang; Chan Yeong Kim; Muyoung Lee; Eunbeen Kim", "journal": "Nucleic acids research", "ref_id": "b29", "title": "Trrust v2: an expanded reference database of human and mouse transcriptional regulatory interactions", "year": "2018" }, { "authors": "Alain Hauser; Peter Bühlmann", "journal": "The Journal of Machine Learning Research", "ref_id": "b30", "title": "Characterization and greedy learning of interventional markov equivalence classes of directed acyclic graphs", "year": "2012" }, { "authors": " Ma Hernán; Robins", "journal": "Chapman & Hall/CRC", "ref_id": "b31", "title": "Causal Inference: What If", "year": "2020" }, { "authors": "L Jennifer; Hill", "journal": "Journal of Computational and Graphical 
Statistics", "ref_id": "b32", "title": "Bayesian nonparametric modeling for causal inference", "year": "2011" }, { "authors": "Patrik O Hoyer; Shohei Shimizu; Antti J Kerminen", "journal": "", "ref_id": "b33", "title": "Estimation of linear, non-gaussian causal models in the presence of confounding latent variables", "year": "2006" }, { "authors": "Huining Hu; Zhentao Li; Adrian R Vetta", "journal": "Advances in neural information processing systems", "ref_id": "b34", "title": "Randomized experimental design for causal graph discovery", "year": "2014" }, { "authors": "Antti Hyttinen; Frederick Eberhardt; Matti Järvisalo", "journal": "", "ref_id": "b35", "title": "Constraint-based causal discovery: Conflict resolution with answer set programming", "year": "2014" }, { "authors": "Antti Hyttinen; Paul Saikko; Matti Järvisalo", "journal": "", "ref_id": "b36", "title": "A core-guided approach to learning optimal causal graphs", "year": "2017" }, { "authors": "Guido W Imbens", "journal": "Review of Economics and statistics", "ref_id": "b37", "title": "Nonparametric estimation of average treatment effects under exogeneity: A review", "year": "2004" }, { "authors": "Fattaneh Jabbari; Joseph Ramsey; Peter Spirtes; Gregory Cooper", "journal": "Springer", "ref_id": "b38", "title": "Discovery of causal models that contain latent variables through bayesian scoring of independence constraints", "year": "2017" }, { "authors": "Amin Jaber; Murat Kocaoglu; Karthikeyan Shanmugam; Elias Bareinboim", "journal": "", "ref_id": "b39", "title": "Causal discovery from soft interventions with unknown targets: Characterization and learning", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b40", "title": "", "year": "2020" }, { "authors": "Dominik Janzing; Bernhard Schölkopf", "journal": "IEEE Transactions on Information Theory", "ref_id": "b41", "title": "Causal inference using the algorithmic markov condition", "year": "2010" }, { "authors": "Diviyan Kalainathan; Olivier Goudet", "journal": "", "ref_id": "b42", "title": "Causal discovery toolbox: Uncover causal relationships in python", "year": "2019" }, { "authors": "Markus Kalisch; Martin Mächler; Diego Colombo; H Marloes; Peter Maathuis; Bühlmann", "journal": "Journal of Statistical Software", "ref_id": "b43", "title": "Causal inference using graphical models with the R package pcalg", "year": "2012" }, { "authors": "Linas Allon M Klein; Ilke Mazutis; Naren Akartuna; Adrian Tallapragada; Victor Veres; Leonid Li; David A Peshkin; Marc W Weitz; Kirschner", "journal": "Cell", "ref_id": "b44", "title": "Droplet barcoding for single-cell transcriptomics applied to embryonic stem cells", "year": "2015" }, { "authors": "Murat Kocaoglu; Amin Jaber; Karthikeyan Shanmugam; Elias Bareinboim", "journal": "", "ref_id": "b45", "title": "Characterization and learning of causal graphs with latent variables from soft interventions", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b46", "title": "", "year": "2019" }, { "authors": "Murat Kocaoglu; Karthikeyan Shanmugam; Elias Bareinboim", "journal": "", "ref_id": "b47", "title": "Experimental design for learning causal graphs with latent variables", "year": "2017" }, { "authors": "Daphne Koller; Nir Friedman", "journal": "MIT press", "ref_id": "b48", "title": "Probabilistic graphical models: principles and techniques", "year": "2009" }, { "authors": "Gustavo Lacerda; L Peter; Joseph Spirtes; Patrik O Ramsey; Hoyer", "journal": "", "ref_id": "b49", "title": 
"Discovering cyclic causal models by independent components analysis", "year": "2012" }, { "authors": "Thuc Duy; Le ; Tao Hoang; Jiuyong Li; Lin Liu; Huawen Liu; Shu Hu", "journal": "IEEE/ACM Transactions on Computational Biology and Bioinformatics", "ref_id": "b50", "title": "A fast pc algorithm for high dimensional causal discovery with multi-core pcs", "year": "2019-09" }, { "authors": "Sanghack Lee; Juan D Correa; Elias Bareinboim", "journal": "", "ref_id": "b51", "title": "Generalized transportability: Synthesis of experiments from heterogeneous domains", "year": "2020" }, { "authors": "Chun Li; Xiaodan Fan", "journal": "Wiley Interdisciplinary Reviews: Computational Statistics", "ref_id": "b52", "title": "On nonparametric conditional independence tests for continuous variables", "year": "2020" }, { "authors": "Han Liu; Kathryn Roeder; Larry Wasserman", "journal": "", "ref_id": "b53", "title": "Stability approach to regularization selection (stars) for high dimensional graphical models", "year": "2010" }, { "authors": "Sara Magliacane; Tom Claassen; Joris M Mooij", "journal": "", "ref_id": "b54", "title": "Ancestral causal inference", "year": "2017" }, { "authors": "Daniel Marbach; Thomas Schaffter; Claudio Mattiussi; Dario Floreano", "journal": "Journal of computational biology", "ref_id": "b55", "title": "Generating realistic in silico gene networks for performance assessment of reverse engineering methods", "year": "2009" }, { "authors": "Florian Markowetz; Steffen Grossmann; Rainer Spang", "journal": "PMLR", "ref_id": "b56", "title": "Probabilistic soft interventions in conditional gaussian networks", "year": "2005-01-08" }, { "authors": "Adam Massmann; Pierre Gentine; Jakob Runge", "journal": "", "ref_id": "b57", "title": "Causal inference for process understanding in earth sciences", "year": "2021" }, { "authors": "Christopher Meek", "journal": "", "ref_id": "b58", "title": "Graphical Models: Selecting causal and statistical models", "year": "1997" }, { "authors": "Christopher Meek", "journal": "", "ref_id": "b59", "title": "Causal inference and causal explanation with background knowledge", "year": "2013" }, { "authors": "Kathleen Miley; Piper Meyer-Kalos; Sisi Ma; David J Bond; Erich Kummerfeld; Sophia Vinogradov", "journal": "Psychological Medicine", "ref_id": "b60", "title": "Causal pathways to social and occupational functioning in the first episode of schizophrenia: uncovering unmet treatment needs", "year": "2021" }, { "authors": "M Joris; Tom Mooij; Claassen", "journal": "PMLR", "ref_id": "b61", "title": "Constraint-based causal discovery using partial ancestral graphs in the presence of cycles", "year": "2020" }, { "authors": "M Joris; Sara Mooij; Tom Magliacane; Claassen", "journal": "", "ref_id": "b62", "title": "Joint causal inference from multiple contexts", "year": "2020" }, { "authors": "Jonas Joris M Mooij; Dominik Peters; Jakob Janzing; Bernhard Zscheischler; Schölkopf", "journal": "The Journal of Machine Learning Research", "ref_id": "b63", "title": "Distinguishing cause from effect using observational data: methods and benchmarks", "year": "2016" }, { "authors": "Raha Moraffah; Paras Sheth; Mansooreh Karami; Anchit Bhattacharya; Qianru Wang; Anique Tahir; Adrienne Raglin; Huan Liu", "journal": "Knowledge and Information Systems", "ref_id": "b64", "title": "Causal inference for time series analysis: Problems, methods and evaluation", "year": "2021" }, { "authors": "Mario Nagase; Yutaka Kano", "journal": "Statistics & Probability Letters", "ref_id": "b65", 
"title": "Identifiability of nonrecursive structural equation models", "year": "2017" }, { "authors": "Preetam Nandy; Alain Hauser; Marloes H Maathuis", "journal": "", "ref_id": "b66", "title": "High-dimensional consistency in score-based and hybrid structure learning", "year": "2018" }, { "authors": "Ana Rita Nogueira; João Gama; Carlos Abreu Ferreira", "journal": "Journal of Dynamics & Games", "ref_id": "b67", "title": "Causal discovery in machine learning: Theories and applications", "year": "2021" }, { "authors": "Ana Rita Nogueira; Andrea Pugnana; Salvatore Ruggieri; Dino Pedreschi; João Gama", "journal": "Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery", "ref_id": "b68", "title": "Methods and tools for causal discovery and causal inference", "year": "2022" }, { "authors": " ", "journal": "The Lancet", "ref_id": "b69", "title": "Identification of risk loci with shared effects on five major psychiatric disorders: a genome-wide analysis", "year": "2013" }, { "authors": "Juan Miguel; Ogarrio ; Peter Spirtes; Joe Ramsey", "journal": "PMLR", "ref_id": "b70", "title": "A hybrid causal search algorithm for latent variable models", "year": "2016" }, { "authors": "Judea Pearl", "journal": "Biometrika", "ref_id": "b71", "title": "Causal diagrams for empirical research", "year": "1995" }, { "authors": "Judea Pearl", "journal": "", "ref_id": "b72", "title": "Theoretical impediments to machine learning with seven sparks from the causal revolution", "year": "2018" }, { "authors": "Judea Pearl; Dana Mackenzie", "journal": "Basic Books, Inc", "ref_id": "b73", "title": "The Book of Why: The New Science of Cause and Effect", "year": "2018" }, { "authors": "Jonas Peters; Peter Bühlmann", "journal": "Neural computation", "ref_id": "b74", "title": "Structural intervention distance for evaluating causal graphs", "year": "2015" }, { "authors": "Jonas Peters; Dominik Janzing; Bernhard Schölkopf", "journal": "The MIT Press", "ref_id": "b75", "title": "Elements of causal inference: foundations and learning algorithms", "year": "2017" }, { "authors": "Joseph Ramsey; Madelyn Glymour; Ruben Sanchez-Romero; Clark Glymour", "journal": "International journal of data science and analytics", "ref_id": "b76", "title": "A million variables and more: the fast greedy equivalence search algorithm for learning high-dimensional graphical causal models, with an application to functional magnetic resonance images", "year": "2017" }, { "authors": "Kun Joseph D Ramsey; Madelyn Zhang; Ruben Glymour; Biwei Sanchez Romero; Imme Huang; Savini Ebert-Uphoff; Elizabeth A Samarasinghe; Clark Barnes; Glymour", "journal": "", "ref_id": "b77", "title": "Tetrad-a toolbox for causal discovery", "year": "2018" }, { "authors": "Kari Rantanen; Antti Hyttinen; Matti Järvisalo", "journal": "International Journal of Approximate Reasoning", "ref_id": "b78", "title": "Discovering causal graphs with cycles and latent confounders: An exact branch-and-bound approach", "year": "2020" }, { "authors": "Kari Rantanen; Antti Hyttinen; Matti Järvisalo", "journal": "PMLR", "ref_id": "b79", "title": "Learning optimal cyclic causal graphs from interventional data", "year": "2020" }, { "authors": "Thomas Richardson; Peter Spirtes", "journal": "The Annals of Statistics", "ref_id": "b80", "title": "Ancestral graph markov models", "year": "2002" }, { "authors": " Thomas S Richardson", "journal": "", "ref_id": "b81", "title": "A discovery algorithm for directed cyclic graphs", "year": "2013" }, { "authors": "Jorma Rissanen", "journal": 
"Automatica", "ref_id": "b82", "title": "Modeling by shortest data description", "year": "1978" }, { "authors": "Dominik Rothenhäusler; Christina Heinze; Jonas Peters; Nicolai Meinshausen", "journal": "", "ref_id": "b83", "title": "Backshift: Learning causal cyclic graphs from unknown shift interventions", "year": "2015" }, { "authors": "Paul K Rubenstein; Stephan Bongers; Bernhard Schoelkopf; Joris M Mooij", "journal": "", "ref_id": "b84", "title": "From deterministic odes to dynamic structural causal models", "year": "2018" }, { "authors": "Karen Sachs; Omar Perez; Dana Pe'er; Douglas A Lauffenburger; Garry P Nolan", "journal": "Science", "ref_id": "b85", "title": "Causal protein-signaling networks derived from multiparameter singlecell data", "year": "2005" }, { "authors": "Richard Scheines; Joseph Ramsey", "journal": "NIH Public Access", "ref_id": "b86", "title": "Measurement error and causal discovery", "year": "2016" }, { "authors": "Bernhard Schölkopf; Francesco Locatello; Stefan Bauer; Nan Rosemary Ke; Nal Kalchbrenner; Anirudh Goyal; Yoshua Bengio", "journal": "", "ref_id": "b87", "title": "Toward causal representation learning", "year": "2021" }, { "authors": "Gideon Schwarz", "journal": "The annals of statistics", "ref_id": "b88", "title": "Estimating the dimension of a model", "year": "1978" }, { "authors": "Marco Scutari", "journal": "Journal of Statistical Software", "ref_id": "b89", "title": "Learning bayesian networks with the bnlearn R package", "year": "2010" }, { "authors": "Marco Scutari", "journal": "PMLR", "ref_id": "b90", "title": "An empirical-bayes score for discrete bayesian networks", "year": "2016" }, { "authors": "Marco Scutari", "journal": "Journal of Statistical Software", "ref_id": "b91", "title": "Bayesian network constraint-based structure learning algorithms: Parallel and optimized implementations in the bnlearn R package", "year": "2017" }, { "authors": "Amirhossein Shahbazinia; Saber Salehkaleybar; Matin Hashemi", "journal": "", "ref_id": "b92", "title": "Paralingam: Parallel causal structure learning for linear non-gaussian acyclic models", "year": "2021" }, { "authors": "Paul Shannon", "journal": "R package version", "ref_id": "b93", "title": "Dream4: Synthetic expression data for gene regulatory network inference from the 2009 dream4 challenge", "year": "2021" }, { "authors": "Sisi Xinpeng ; Shen; Prashanthi Ma; Gyorgy Vemuri; Simon", "journal": "Scientific reports", "ref_id": "b94", "title": "Challenges and opportunities with causal discovery algorithms: application to alzheimer's pathophysiology", "year": "2020" }, { "authors": "Shohei Shimizu", "journal": "Behaviormetrika", "ref_id": "b95", "title": "Lingam: Non-gaussian methods for estimating causal structures", "year": "2014" }, { "authors": "Shohei Shimizu; Patrick Blöbaum", "journal": "", "ref_id": "b96", "title": "Recent Advances in Semi-Parametric Methods for Causal Discovery", "year": "2020" }, { "authors": "Ilya Shpitser; Judea Pearl", "journal": "Journal of Machine Learning Research", "ref_id": "b97", "title": "Complete identification methods for the causal hierarchy", "year": "2008" }, { "authors": "Liam Solus; Yuhao Wang; Caroline Uhler", "journal": "", "ref_id": "b98", "title": "Consistency guarantees for greedy permutation-based causal inference algorithms", "year": "2021" }, { "authors": "Peter Spirtes", "journal": "PMLR", "ref_id": "b99", "title": "An anytime algorithm for causal inference", "year": "2001" }, { "authors": "Peter Spirtes; Richard Clark N Glymour; David Scheines; 
Heckerman", "journal": "MIT press", "ref_id": "b100", "title": "Causation, prediction, and search", "year": "2000" }, { "authors": "Peter Spirtes; Kun Zhang", "journal": "Applied informatics", "ref_id": "b101", "title": "Causal discovery and inference: concepts and recent methodological advances", "year": "2016" }, { "authors": "L Peter; Spirtes", "journal": "", "ref_id": "b102", "title": "Directed cyclic graphical representations of feedback models", "year": "2013" }, { "authors": "L Peter; Christopher Spirtes; Thomas S Meek; Richardson", "journal": "", "ref_id": "b103", "title": "Causal inference in the presence of latent variables and selection bias", "year": "2013" }, { "authors": "Chandler Squires; Yuhao Wang; Caroline Uhler", "journal": "", "ref_id": "b104", "title": "Permutation-based causal structure learning with unknown intervention targets", "year": "2020" }, { "authors": "Oliver Stegle; Dominik Janzing; Kun Zhang; Joris M Mooij; Bernhard Schölkopf", "journal": "Advances in neural information processing systems", "ref_id": "b105", "title": "Probabilistic latent variable models for distinguishing between cause and effect", "year": "2010" }, { "authors": "Natasa Tagasovska; Valérie Chavez-Demoulin; Thibault Vatter", "journal": "PMLR", "ref_id": "b106", "title": "Distinguishing cause from effect using quantiles: Bivariate quantile causal discovery", "year": "2020" }, { "authors": "Jin Tian; Judea Pearl", "journal": "", "ref_id": "b107", "title": "Causal discovery from changes", "year": "2013" }, { "authors": "Michail Tsagris; Giorgos Borboudakis; Vincenzo Lagani; Ioannis Tsamardinos", "journal": "International journal of data science and analytics", "ref_id": "b108", "title": "Constraint-based causal discovery with mixed data", "year": "2018" }, { "authors": "Ioannis Tsamardinos; Constantin F Aliferis; Alexander R Statnikov; Er Statnikov", "journal": "", "ref_id": "b109", "title": "Algorithms for large scale markov blanket discovery", "year": "" }, { "authors": " St; Augustine", "journal": "", "ref_id": "b110", "title": "", "year": "2003" }, { "authors": "Ioannis Tsamardinos; Laura E Brown; Constantin F Aliferis", "journal": "Machine learning", "ref_id": "b111", "title": "The max-min hill-climbing bayesian network structure learning algorithm", "year": "2006" }, { "authors": "Tim Van Den Bulcke; Koenraad Van Leemput; Bart Naudts; Piet Van Remortel; Hongwu Ma; Alain Verschoren; Bart De Moor; Kathleen Marchal", "journal": "BMC bioinformatics", "ref_id": "b112", "title": "Syntren: a generator of synthetic gene expression data for design and analysis of structure learning algorithms", "year": "2006" }, { "authors": "Thomas Verma; Judea Pearl", "journal": "", "ref_id": "b113", "title": "Equivalence and synthesis of causal models", "year": "1991" }, { "authors": "Karren Yang; Abigail Katcoff; Caroline Uhler", "journal": "PMLR", "ref_id": "b114", "title": "Characterizing and learning equivalence classes of causal dags under interventions", "year": "2018" }, { "authors": "Jiji Zhang", "journal": "Artificial Intelligence", "ref_id": "b115", "title": "On the completeness of orientation rules for causal discovery in the presence of latent confounders and selection bias", "year": "2008" }, { "authors": "Xun Zheng; Bryon Aragam; Pradeep Ravikumar; Eric P Xing", "journal": "", "ref_id": "b116", "title": "Dags with no tears: Continuous optimization for structure learning", "year": "2018" } ]
[ { "formula_coordinates": [ 5, 133.77, 127.93, 343.71, 32.68 ], "formula_id": "formula_0", "formula_text": "). For each directed edge (X, Y ) ∈ E, X is a direct cause of Y and Y is a direct effect of X. Recursively, every cause of X that is not a direct cause of Y , is an indirect cause of Y ." }, { "formula_coordinates": [ 5, 238.46, 237.52, 239.02, 9.68 ], "formula_id": "formula_1", "formula_text": "X i := f (P a(X i )) ∀X i ∈ V. (2.1)" }, { "formula_coordinates": [ 5, 158.68, 355.08, 318.81, 21.28 ], "formula_id": "formula_2", "formula_text": "V ∩ U = ∅," }, { "formula_coordinates": [ 6, 187.5, 139.89, 233.89, 146.56 ], "formula_id": "formula_3", "formula_text": "X Y Z U XY U Z (a) M = (V, U, F, P ) V = {X, Y, Z} U = {U XY , U Z } F =      f X : X := 2U XY , f Y : Y := X + U XY , f Z : Z := 3Y + U Z P = U XY ∼ N (0, 1), U Z ∼ N (0, 1)(b)" }, { "formula_coordinates": [ 7, 244.04, 339.77, 233.45, 20.08 ], "formula_id": "formula_4", "formula_text": "P (V) = Xi∈V P (X i |P a(X i )) (2.2)" }, { "formula_coordinates": [ 7, 239.22, 414.22, 238.26, 9.68 ], "formula_id": "formula_5", "formula_text": "X ⊥ ⊥ P Y | Z =⇒ X ⊥ ⊥ G Y | Z (2.3)" }, { "formula_coordinates": [ 7, 148.71, 626.68, 156.66, 48.56 ], "formula_id": "formula_6", "formula_text": "• X ← Y → Z is a fork on π, • X → Y → Z is a chain on π, and • X → Y ← Z is a collider on π." }, { "formula_coordinates": [ 8, 148.71, 159.17, 328.77, 53.24 ], "formula_id": "formula_7", "formula_text": "• a fork A ← B → C or a chain A → B → C such that the middle vertex B is in Z, or • a collider A → B ← C such that middle vertex B, or any descendant of it, is not in Z." }, { "formula_coordinates": [ 11, 148.71, 284.08, 159.94, 9.96 ], "formula_id": "formula_8", "formula_text": "• X ∈ Sp(Y ), then X ∈ An(Y ), and" }, { "formula_coordinates": [ 11, 148.71, 303.46, 203.64, 9.96 ], "formula_id": "formula_9", "formula_text": "• X ∈ N e(Y ), then P a(X) = ∅ ∧ Sp(X) = ∅." }, { "formula_coordinates": [ 11, 148.71, 548.47, 205.4, 9.96 ], "formula_id": "formula_10", "formula_text": "• any arrowhead mark in G is invariant in [G]," }, { "formula_coordinates": [ 11, 148.71, 567.85, 174.62, 9.96 ], "formula_id": "formula_11", "formula_text": "• any tail mark in G is invariant in [G]." }, { "formula_coordinates": [ 14, 238.11, 345.47, 239.37, 9.68 ], "formula_id": "formula_12", "formula_text": "X ⊥ ⊥ P Y | Z ⇐⇒ X ⊥ ⊥ G Y | Z (3.1)" }, { "formula_coordinates": [ 14, 245.01, 379.34, 232.47, 9.68 ], "formula_id": "formula_13", "formula_text": "H 0 : X ⊥ ⊥ P Y | Z and H 1 : X ⊥ ⊥ P Y | Z, let I(X, Y |Z)" }, { "formula_coordinates": [ 14, 235.21, 424.67, 242.27, 12.17 ], "formula_id": "formula_14", "formula_text": "Î(X, Y |Z) > α =⇒ X ⊥ ⊥ P Y | Z (3.2)" }, { "formula_coordinates": [ 15, 257.89, 613.76, 219.6, 18.63 ], "formula_id": "formula_15", "formula_text": "G * = argmax G∈G S(G, D) (3.3)" }, { "formula_coordinates": [ 16, 231.41, 159.74, 246.07, 20.11 ], "formula_id": "formula_16", "formula_text": "S(G, D) = Xi∈V S(X i , P a(X i ), D)(3.4)" }, { "formula_coordinates": [ 16, 148.71, 375.45, 216.49, 9.96 ], "formula_id": "formula_17", "formula_text": "• If only G contains P , then S(G, D) > S(H, D)," }, { "formula_coordinates": [ 16, 148.71, 493.17, 200.64, 28.84 ], "formula_id": "formula_18", "formula_text": "• X ⊥ ⊥ P Y | P a(X) =⇒ S(G, D) < S(H, D), • X ⊥ ⊥ P Y | P a(X) =⇒ S(G, D) > S(H, D)." 
}, { "formula_coordinates": [ 17, 220.55, 170.28, 256.93, 10.81 ], "formula_id": "formula_19", "formula_text": "S([G] * , D) > S([G], D) ∀[G] = [G] * (3.5)" }, { "formula_coordinates": [ 18, 133.77, 175.78, 205.61, 8.74 ], "formula_id": "formula_20", "formula_text": "Z such that X → Z ← Y is a v-structure in G." }, { "formula_coordinates": [ 19, 133.77, 356.78, 343.71, 21.91 ], "formula_id": "formula_21", "formula_text": "state that if X causes Y , then K(F X )+K(F Y |X ) ≤ K(F Y ) + K(F X|Y )." }, { "formula_coordinates": [ 19, 280.15, 510.01, 197.33, 8.77 ], "formula_id": "formula_22", "formula_text": "x = Bx + e (3.6)" }, { "formula_coordinates": [ 19, 258.18, 572.05, 219.3, 10.81 ], "formula_id": "formula_23", "formula_text": "x = (I -B) -1 e = Ae (3.7)" }, { "formula_coordinates": [ 20, 268.3, 173.07, 209.19, 8.77 ], "formula_id": "formula_24", "formula_text": "x = Bx + Λf + e (3.8)" }, { "formula_coordinates": [ 20, 246.21, 370.75, 231.27, 10.83 ], "formula_id": "formula_25", "formula_text": "h(W) = tr(e W•W ) -n = 0 (3.9)" }, { "formula_coordinates": [ 20, 248.2, 427.84, 229.28, 10.83 ], "formula_id": "formula_26", "formula_text": "∇h(W) = (e W•W ) T • 2W (3.10)" }, { "formula_coordinates": [ 21, 243.79, 355.74, 233.69, 8.74 ], "formula_id": "formula_27", "formula_text": "SCC(X) = An(X) ∩ De(X) (4.1)" }, { "formula_coordinates": [ 22, 222.9, 557.92, 254.58, 12.77 ], "formula_id": "formula_28", "formula_text": "L(G, S) = λ i (1 λi>0 -1 Xi⊥ ⊥ G Yi|Zi ) (4.2)" }, { "formula_coordinates": [ 22, 133.77, 618.12, 126.61, 9.65 ], "formula_id": "formula_29", "formula_text": "X i ⊥ ⊥ P Y i |Z i as a constraint." }, { "formula_coordinates": [ 23, 266.43, 139.92, 211.05, 9.65 ], "formula_id": "formula_30", "formula_text": "λ i = log p i -log α (4.3)" }, { "formula_coordinates": [ 25, 158.68, 263.42, 318.8, 33.15 ], "formula_id": "formula_31", "formula_text": "P (Y | do(X), Z, W) = P (Y | do(X), W) (5.1) if (Y ⊥ ⊥ Z | X, W) holds true in G X ," }, { "formula_coordinates": [ 25, 158.68, 327.18, 318.8, 33.15 ], "formula_id": "formula_32", "formula_text": "P (Y | do(X), do(Z), W) = P (Y | do(X), Z, W) (5.2) if (Y ⊥ ⊥ Z | X, W) holds true in G X,Z ," }, { "formula_coordinates": [ 25, 158.68, 390.94, 318.81, 33.6 ], "formula_id": "formula_33", "formula_text": "P (Y | do(X), do(Z), W) = P (Y | do(X), W) (5.3) if (Y ⊥ ⊥ Z | X, W) holds true in G X,Z(W) ," }, { "formula_coordinates": [ 26, 249.69, 529.78, 227.79, 8.77 ], "formula_id": "formula_34", "formula_text": "∃I : X ∈ I ∈ I, ∀X ∈ V (5.4)" }, { "formula_coordinates": [ 27, 221.98, 149.54, 255.5, 11.07 ], "formula_id": "formula_35", "formula_text": "E (I) = {(X, Y ) | (X, Y ) ∈ E ∧ Y ∈ I} (5.5)" }, { "formula_coordinates": [ 27, 194.04, 284.74, 283.44, 22.21 ], "formula_id": "formula_36", "formula_text": "P (I) = Xi∈I P (I) X i |P a(X i ) Xi ∈I P (∅) X i |P a(X i ) (5.6)" }, { "formula_coordinates": [ 27, 197.51, 386.07, 271.78, 22.21 ], "formula_id": "formula_37", "formula_text": "P (I) = Xi∈I P (I) X i | do(I) Xi ∈I P (∅) X i |P a(X i ) (5." 
}, { "formula_coordinates": [ 27, 227.76, 488.57, 249.72, 11.72 ], "formula_id": "formula_38", "formula_text": "G ≡ I H =⇒ G (I) ≡ H (I) , ∀I ∈ I (5.8)" }, { "formula_coordinates": [ 28, 201.2, 127.96, 260.01, 30.65 ], "formula_id": "formula_39", "formula_text": "→ Y ) in G is I-covered if: P a(X) = P a(Y ) \\ {X} ∧ P ({X}) (Y ) = P (∅) (Y )" }, { "formula_coordinates": [ 28, 148.71, 222.03, 344.79, 43.41 ], "formula_id": "formula_40", "formula_text": "• ∃S ⊂ N e(Y ) \\ {X} such that ∀I ∈ I X\\Y we observe P (I) (Y | S) = P (∅) (Y | S), or • ∀S ⊂ N e(X) \\ {Y } such that ∃I ∈ I Y \\X we observe P (I) (X | S) = P (∅) (X | S)." }, { "formula_coordinates": [ 30, 240.54, 614.13, 236.94, 9.65 ], "formula_id": "formula_41", "formula_text": "(C i ↔ C j ) ∈ G C ∧ (C i → C j ) ∈ G C (5.9)" }, { "formula_coordinates": [ 31, 158.68, 525.6, 318.81, 31.32 ], "formula_id": "formula_42", "formula_text": "P (I) (Y | Z, W) = P (I) (Y | W) (5.10) if (Y ⊥ ⊥ Z | W) holds true in G for all I ∈ I," }, { "formula_coordinates": [ 31, 158.68, 585.98, 318.8, 34.86 ], "formula_id": "formula_43", "formula_text": "P (I) (Y | W) = P (J) (Y | W) (5.11) if Y ⊥ ⊥ K | (W \\ W K ) holds true in G W K R(W)" }, { "formula_coordinates": [ 31, 158.68, 623.2, 318.31, 20.72 ], "formula_id": "formula_44", "formula_text": "J, W K = W ∩ K, R = K \\ W K and finally R(W) = R \\ An(W) w.r.t. G." }, { "formula_coordinates": [ 32, 235.87, 483.75, 241.62, 22.41 ], "formula_id": "formula_45", "formula_text": "D = argmin D ∈D I∈I L(D ∆Σ Σ Σ (I) D T ) (5.13)" }, { "formula_coordinates": [ 35, 182.87, 569.12, 294.61, 22.31 ], "formula_id": "formula_46", "formula_text": "AP = T P T P + F P (6.1) AR = T P T P + F N (6.2)" }, { "formula_coordinates": [ 36, 196.46, 315.87, 281.03, 40.47 ], "formula_id": "formula_47", "formula_text": "SHD(G, H) = V 2 (X,Y ), X<Y      1 (X, Y ) ∈ E(G, U ), 1 (Y, X) ∈ E(U, G),0 Otherwise. (6.5)" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Artificial Intelligence in Medicine", "publication_ref": [ "b14", "b31", "b13", "b5", "b11", "b2", "b12", "b23", "b10", "b22", "b0", "b18" ], "table_ref": [], "text": "State of the Art. Artificial Intelligence (AI) has found many applications in medicine [15] and, more specifically, in cancer research [32] in the form of predictive models for diagnosis [14], prognosis [6] and therapy planning [12]. As a subfield of AI, Machine Learning (ML) and in particular Deep Learning (DL) has achieved significant results, especially in image processing [3]. Nonetheless, ML and DL models have limited explainability [13] because of their black-box design, which limits their adoption in the clinical field: clinicians and physicians are reluctant to include models that are not transparent in their decision process [24]. While recent research on Explainable AI (XAI) [11] has attacked this problem, DL models are still opaque and difficult to interpret. In contrast, in Probabilistic Graphical Models (PGMs) the interactions between different variables are encoded explicitly: the joint probability distribution P of the variables of interest factorizes according to a graph G, hence the \"graphical\" connotation. Bayesian Networks (BNs) [23], which we will describe in Section 3.1, are an instance of PGMs that can be used as causal models. In turn, this makes them ideal to use as decision support systems and overcome the limitations of the predictions based on probabilistic associations produced by other ML models [1,19]." }, { "figure_ref": [], "heading": "Lymph Node Metastases in Endometrial Cancer Patients", "publication_ref": [ "b3", "b4", "b19", "b1", "b30", "b25" ], "table_ref": [], "text": "Background. The present paper focuses on the development of a BN predictive model for endometrial cancer (EC). Endometrial cancer is cancer of the mucous lining, or endometrium, of the uterus. It is a common gynecological disease affecting hundreds of thousands of women worldwide. Although most patients with EC are diagnosed at an early stage of the disease and have a favorable prognosis, approximately 90,000 patients around the world die every year because of EC [4]. Surgery to remove the uterus (hysterectomy), possibly together with the ovaries (ovariectomy), is the typical initial treatment for EC; the choice of neo-adjuvant (pre-surgery) or adjuvant (post-surgery) treatments depends on patient outcome prognosis. The presence of pelvic and/or para-aortic lymph node metastases (LNM) is one of the most important prognostic factors for poor outcome. The identification of LNM during the primary treatment makes it possible to choose a suitable adjuvant treatment and improve survival in node-positive EC [5,20]. However, no consensus exists on how to determine which patients will benefit from lymphadenectomy (or lymph node dissection): this procedure is usually performed after or concomitant with surgery to evaluate evidence for the spread of cancer, which helps the medical team determine the progress of and treatment options for a patient's malignancy). In clinical early-stage EC, lymphadenectomy has been observed to have a marginal impact on EC outcomes and to be associated with substantial long-term comorbidities.\nThe diagnostic accuracy for LNM is limited: approximately 50% of LNM is found in low-or intermediate-risk patients [2,31].\nObjectives. 
This work uses the BN model from Reijnen et al. [26] as a starting point to improve the state of the art in two ways:\n• Extending the BN model to include the hospital of treatment as an additional variable to detect, estimate and control for potential selection bias.\n• Addressing the bias introduced by the missing imputation step, which could induce spurious correlations, hindering the interpretability of the discovered relationships.\n• Developing a causal model that integrates domain expert knowledge with observational data to better identify patients with EC designated as low or intermediate risk to develop LNM, in order to support stakeholders for decision-making." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b16", "b9", "b15", "b24", "b25", "b32" ], "table_ref": [], "text": "Individualized treatment aims to minimize unnecessary exposure to therapyrelated morbidity and at the same time offers proper management according to patients' risk-stratification. In the context of EC, predicting the risk of LNM before surgical treatment has received limited attention in the literature. Koskas et al. [17] evaluated the performance of BNs models within their cohort of 519 patients. Only one model achieved an AUC greater than 0.75,1 highlighting the need for improved pre-operative risk stratification. Subsequent works [10,16,25] identified biomarkers such as p53 and L1CAM as potential prognostic predictors, together with patients baseline comorbidities and tumors characteristics such as histology, grading and staging. More recently, Reijnen et al. [26] developed a model for the prediction of LNM and of disease-specific survival (DSS) in EC patients. This model, called ENDORISK, is a BN built on clinical, histopathological and molecular biomarkers that can be assessed preoperatively, allowing for patient counseling and shared decision-making before surgery. ENDORISK was shown to be competitive in both goodness of fit and predictive accuracy, achieving AUC values between 0.82 and 0.85 [33].\n3 Methods" }, { "figure_ref": [], "heading": "Causal Bayesian Networks", "publication_ref": [ "b0", "b21", "b33", "b22" ], "table_ref": [], "text": "Firstly, we will summarize those key definitions for BNs and causal models that we will need to describe our contributions in Section 3.\nDefinition 1 (Graph) A graph G =(V, E) is a mathematical object represented by a tuple of two sets: a finite set of nodes V and a finite set of edges E ⊆ V × V. In the following pages (V, E) will be omitted if not specified otherwise.\nWe will focus on directed graphs where (X, Y ) = (Y, X), which is graphically represented as X → Y . A directed graph encodes a set of ordinal relationships, i.e. in X → Y the node X is called parent of Y and Y is said to be the child of X. Therefore, the set of parents of X is Pa(X), while the set of children of X is Ch(X).\nA directed path π is a finite ordered set of nodes π = (V 0 → • • • → V n ) such that each adjacent pair of nodes (V i , V i+1 ) in π is a directed edge in E.\nA cycle is a path where the first and the last node are the same node. A graph is acyclic if it contains no cycle, also called a Directed Acyclic Graph (DAG).\nDefinition 2 (Causal Graph) A causal graph G = (V, E) [1] is a graph that encodes the cause-effect relationships of a system.\nCauses & Effects. The set V contains the variables that describe the behavior of the system under study, whereas the set E contains the edges that make explicit the interplay of the variables. 
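To make Definitions 1 and 2 concrete, the following minimal Python sketch builds a small directed graph with networkx and checks the properties used throughout this section (parents, children, acyclicity and directed paths). The edge set is purely illustrative and is not the graph learned in this work.

```python
import networkx as nx

# A toy causal graph over a few of the clinical variables named later in the
# paper; the edges are illustrative placeholders, not the learned structure.
G = nx.DiGraph()
G.add_edges_from([
    ("PreoperativeGrade", "p53"),
    ("PreoperativeGrade", "L1CAM"),
    ("Chemotherapy", "LNM"),
    ("Histology", "LNM"),
    ("LNM", "Recurrence"),
])

# Pa(X) and Ch(X): the parents and children of a node.
parents_of_lnm = set(G.predecessors("LNM"))   # {"Chemotherapy", "Histology"}
children_of_lnm = set(G.successors("LNM"))    # {"Recurrence"}

# A graph used as the structure of a (causal) BN must be a DAG.
assert nx.is_directed_acyclic_graph(G)

# A directed path V0 -> ... -> Vn exists from Chemotherapy to Recurrence.
print(nx.has_path(G, "Chemotherapy", "Recurrence"))   # True
```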
In particular, for each directed edge (X, Y ) ∈ E, X is said to be a direct cause of Y , whereas Y is called a direct effect of X. This definition is recursive: a variable Z that is the direct cause of X, but not of Y , is said to be an indirect cause of Y . This mapping between a causal graph G and the cause-effect relationships is formalized by the causal edge assumption [22].
Definition 3 (Causal Edge Assumption) Let G = (V, E) be a causal graph.
The value assigned to each variable X ∈ V is completely determined by the function f given its parents:
X := f (Pa(X)) ∀X ∈ V (1)
The causal edge assumption allows us to interpret the edges of a causal graph in a non-ambiguous way: it enforces a recursive relationship over the structure of the graph, establishing a chain of functional dependencies. Hence, this class of graphical models is inherently explainable, even for researchers approaching them for the first time.
When the causal graph is not known a priori, it is possible to recover it from a combination of prior knowledge and data-driven approaches. Such a problem is called Causal Discovery [34].
Definition 4 (Causal Discovery) Let G * be the true but unknown graph in the space of possible graphs G from which the data set D has been generated.
The Causal Discovery problem consists in recovering G * given the data set D and the prior knowledge K.
Once the causal graph G * is recovered, it is possible to build a PGM with the given structure. For example, BNs [23] are a widely known type of PGM.
Definition 5 (Bayesian Network) Let G be a DAG and let P (X) be a global probability distribution with parameters Θ. A BN B = (G, Θ) is a model in which each variable of X is a vertex of G and P (X) factorizes into local probability distributions according to G:
P (X) = ∏_{X∈X} P (X | Pa(X)) (2)
The key difference between a BN and a Causal BN (CBN) is the semantic interpretation of its edges. Indeed, in a CBN an edge represents a cause-effect relationship between two variables, whereas the same edge in a BN entails only a probabilistic dependence." }, { "figure_ref": [], "heading": "Definition 6 (Causal Bayesian Network", "publication_ref": [], "table_ref": [], "text": ") A Causal BN B = (G, Θ) is a BN where the associated DAG G is a causal graph." }, { "figure_ref": [], "heading": "Causal Discovery with Observational and Missing Data", "publication_ref": [ "b29", "b7", "b17", "b7", "b27", "b20" ], "table_ref": [], "text": "Causal discovery algorithms are usually divided into two classes: constraint-based and score-based. The two classes have been extended to handle missing data in different ways: constraint-based algorithms rely on test-wise deletion [30] to perform conditional independence tests efficiently in order to mitigate the impact of missing observations, while score-based approaches alternate data imputation and causal discovery [8].
Causal Discovery with Missing Data. By default, causal discovery algorithms are not designed to handle incomplete data. However, we can combine them with missing value imputation approaches to complete the data and reduce the problem to a standard causal discovery. A widely-used application of this idea is the Expectation Maximization (EM) [18] algorithm. In particular, the Structural EM [8] algorithm is specifically designed to iteratively run the imputation step performed by EM and a causal discovery step performed by a score-based algorithm, alternating them until convergence.
Greedy Search: The Hill-Climbing Approach. 
A widely applied scorebased algorithm for causal discovery is Greedy Search (GS) [28]. GS traverses the space G of the possible DAGs over the set of variables V, selecting the optimal graph G * by a greedy evaluation of a function S, known as the scoring criterion. There are multiple strategies to implement GS, one of which is called Hill-Climbing (HC). At its core, HC repeatedly applies three fundamental operations to change the current recovered structure, moving from a graph to another, across the graphs space G. These \"moves\" are the addition, deletion or reversal of an edge. If a move improves the score S, then the graph is updated accordingly. The procedure halts when no moves improve the score and returns a DAG.\nWhile the graphs space G contains every graph that could be generated given the vertices V, only a subset of them are compatible with the probability distribution induced by the observed data. Moreover, not every graph compatible with said distribution is necessarily causal. Therefore, it is possible to shrink the search space by adding constraints in terms of structural properties, that is, by requiring or forbidding the existence of an edge in the optimal graph G * .\nEncoding Prior Knowledge. One could restrict the set of admissible graphs by encoding prior knowledge through required or forbidden edge lists [21]. For instance, it is possible to leverage expert knowledge to identify known relationships and encode them as required edges. These lists can also encode a partial ordering when potential causes of other variables are known.\nFor example, suppose that clinicians want to include their prior knowledge on the interaction between biomarkers and LNM into the CBN. This inclusion would happen during the execution of the causal discovery algorithm and, therefore, requires that the experts' knowledge is encoded programmatically. Causal discovery algorithms essentially learn a set of ordinal, parent-child relationships: it is natural to encode prior knowledge in the same form. For instance, if we know that p53 is not a direct cause of LNM, then the translation of such a concept would be p53 ∈ Pa(LNM). If, on the other hand, we know that LNM is a direct cause of L1CAM then we would have L1CAM ∈ Pa(LNM) . This is a direct consequence of the Causal Edge Assumption (Definition 3). Even this simple example shows the flexibility of this approach, allowing to encode different sources of prior knowledge without any restrictions." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [ "b28", "b8", "b17" ], "table_ref": [], "text": "Causal discovery algorithms provide a correct solution to the causal discovery problem in the limit of the number of samples [29]. However, in real-world applications the available data are finite, especially in medicine, where data samples are usually small. As a result, even small amounts of noise in the data may result in a different structure. Therefore, it is important to quantify our confidence in the presence of each edge in the causal BN, also called the \"strength\" of an edge.\nEstimating Edge Strength: A Bootstrap Approach. The estimation of the strength of an edge was performed through a bootstrap approach [9]. Here, a custom version with Structural EM is reported in Algorithm 1, described as follows. Line 1, the procedure takes as input a data set D, prior knowledge K, hyperparameters α α α for Structural EM, number of bootstraps n and number of samples to draw m. Line 2, the confidence matrix C is initialized. 
Lines 3-6, the data set D is re-sampled n times with replacement, drawing m observations for each bootstrap following a uniform distribution. For each sampled data set D i ⊆ D, the causal discovery algorithm is applied to induce a corresponding graph G i . Finally, line 7, is responsible to compute the strength of each edge as the relative frequency of inclusion across the n bootstraps.\nThe causal discovery algorithm developed is described in Algorithm 2. Line 1 is based on the confidence matrix estimation computed by Algorithm 1. Line 2, the causal graph G is initialized to the empty graph and, line 3, the associated confidence matrix C, i.e. the matrix containing the edges strength, is computed. Line 4 describes a generic strategy to select the edges to insert into G given C. Here, we relied on a threshold λ to filter irrelevant edges to build the \"average graph\". Lines 5-6, the CBN parameters Θ are fitted given G by applying EM [18] to the data set D with missing data." }, { "figure_ref": [], "heading": "Definition and Selection of Variables.", "publication_ref": [ "b25", "b28" ], "table_ref": [], "text": "To conduct this analysis we used the cohort presented by Reijnen et al. An overview of the cohort and the procedures done for data collection can be found in [26]. Briefly, the retrospective multicenter cohort study included 763 patients, with a median age 65 years, surgically treated for endometrial cancer between 1995 and 2013 at one of the 10 participating European hospitals. Clinical and histopathological variables with prognostic value for the prediction of LNM were identified by a systematic review of the literature. The used variables could be divided into three major temporal tiers:\n• Pre-operative clinical, histopathological variables and biomarkers: Estrogen Receptor (ER) expression, Progesteron Recepter (PR) expression, L1CAM (cell migration) expression, p53 (tumour suppressor gene) expression, cervical cytology, platelets counts (thrombocytosis), lymphadenopathy on MRI or CT, lymphovascular space invasion (LVSI), Ca-125 serum levels and pre-operative tumor grade,\n• Post-operative/treatment variables: adjuvant therapy (Chemotherapy and/or Radiotherapy), post-operative tumour grade,\n• Late post-operative outcomes: 1-,3-,5-year disease-specific survival (DSS), Lymph Nodes Metastases (LNM), Myometrial Invasion.\nAll the described variables are discrete variables, with cardinality ranging from 2 to 3. Two main changes were done in comparison to published works: addition of hospital of treatment (10 levels) in the model and separation of adjuvant therapy into two different dichotomous variables (chemotherapy and radiotherapy).\nTraining and Testing. The data set D was split in a train set and a test set following a 70/30 ratio. For each configuration of hyperparameters (α α α, n, m, λ), we applied Algorithm 2 to the train set, with the same prior knowledge K.\nThe resulting BNs were evaluated on the test set by estimating the probability of LNM. The hyperparameter tuning was performed following a grid search, as suggested in [29]. While cross validation (CV) is generally preferred over a naïve train-test splitting, hyperparameter tuning over a learning procedure based on Structural EM is computationally expensive and, therefore, it would require a nonignorable amount of time when coupled with CV. Moreover, we considered the possibility to further split the train set to obtain a validation set, but the reduced sample size hindered the feasibility of this additional step. 
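As a compact illustration of the bootstrap procedure walked through above (Algorithm 1) and of the edge-selection step of Algorithm 2, the sketch below assumes a `structural_em` routine is available that returns the directed edges learned from a re-sampled data set and the prior knowledge; everything else is plain pandas.

```python
import pandas as pd

def confidence_matrix(D: pd.DataFrame, K, structural_em, n: int = 100, m: int = None):
    """Bootstrap edge-confidence estimation (Algorithm 1): re-sample D with
    replacement n times, learn a graph on each replicate with Structural EM,
    and count how often each directed edge is included."""
    variables = list(D.columns)
    m = m or len(D)
    C = pd.DataFrame(0.0, index=variables, columns=variables)
    for _ in range(n):
        D_i = D.sample(n=m, replace=True)              # draw m rows with replacement
        for (x, y) in structural_em(D_i, K):           # edges of the learned graph G_i
            C.loc[x, y] += 1
    return C / n                                       # relative inclusion frequency

def average_graph(C: pd.DataFrame, lam: float = 0.5):
    """Edge-selection strategy of Algorithm 2: keep the edges whose bootstrap
    strength exceeds the threshold lambda; fitting the CBN parameters with EM
    on the resulting structure is a separate step, omitted here."""
    return [(x, y) for x in C.index for y in C.columns if C.loc[x, y] > lam]
```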
Finally, we computed the sensitivity, specificity, ROC and AUC2 for each CBN model.\nDefinition 7 (Sensitivity & Specificity) Given a binary classification problem, the confusion matrix is a 2 × 2 squared integer matrix resulting from the application of a classification algorithm. The values on the main diagonal are called true positives (T P ) and true negatives (T N ), while the values on the off diagonal are false positives (F P ) and false negatives (F N ). Then, the true positive ratio (T P R) and the true negative ratio (T N R) are defined as follows:\nT P R = T P T P + F N (3) T P R = T N T N + F P(4)\nThe T P R and T N R are also called sensitivity and specificity, respectively." }, { "figure_ref": [], "heading": "Definition 8 (ROC & AUC)", "publication_ref": [], "table_ref": [], "text": "The Receiving Operating Characteristic (ROC) curve is a plot of sensitivity and (1 -specificity) measures at different thresholds.\nThe Area Under the Curve (AUC) is the area under the ROC curve.\nAlgorithm 1 Confidence matrix from missing data and prior knowledge.\n1: procedure ConfidenceMatrix(D, K, α α α, n, m) 2:\nC ← 0 Initialize a |V| × |V| matrix, with V the variables in D.\n3:\nfor i ∈ [1, n] do 4: D i ← Sample(D, m)\nSample from D with replacement.\n5:\nG i ← StructuralEM(D i , K, α α α) Learn G i from D i and K. 6: C[X, Y ] ← C[X, Y ] + 1, ∀ (X, Y ) ∈ E i\nIncrement the edge count." }, { "figure_ref": [], "heading": "7:", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C ← C/n", "publication_ref": [], "table_ref": [], "text": "Normalize the confidence matrix." }, { "figure_ref": [], "heading": "8:", "publication_ref": [], "table_ref": [], "text": "return C\nFigure 1 is a scatter plot of the results of the execution of Algorithm 2. The color mapping allows to clearly distinguish three well-separated clusters, grouped by the parents space cardinality. Specifically, the red cluster represents the models where LNM has no parents, the light-red cluster contains models where LNM has only Chemotherapy as parent, and finally the blue cluster where both Chemotherapy and Histology are parents of LNM.\nAlgorithm 2 Learn CBN from missing data and prior knowledge.\n1: procedure CBN(D, K, α α α, n, m, λ) 2:\nG ← (V, ∅) Initialize an empty graph over the variables V in D." }, { "figure_ref": [], "heading": "3:", "publication_ref": [], "table_ref": [], "text": "C ← ConfidenceMatrix(D, K, α α α, n, m) Compute the confidence matrix." }, { "figure_ref": [], "heading": "4:", "publication_ref": [], "table_ref": [], "text": "Insert edges into G following a strategy w.r.t. C and λ.\n5: Θ ← EM(G, D)\nEstimate the parameters using EM." }, { "figure_ref": [], "heading": "6:", "publication_ref": [], "table_ref": [], "text": "B ← (G, Θ) Build the CBN given G and Θ. " }, { "figure_ref": [], "heading": "Parents space cardinality", "publication_ref": [ "b26", "b6", "b25" ], "table_ref": [], "text": "In-sample vs. Out-of-sample AUC Scatterplot The structure presented in Figure 2 is built by encoding prior causal knowledge elicited by clinicians and randomized controlled trials (RCTs). The encoding process is performed by adding a directed edge from the expected cause to its effect. Each edge addition is supported by biological and physiological knowledge, either obtained by querying experts or from reviewed literature, without observational data. 
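Continuing the encoding of expert knowledge discussed above (and in the "Encoding Prior Knowledge" paragraph), the partial temporal ordering of the variables can be turned into forbidden-edge constraints for the score-based search. The tier assignment below is a shortened, illustrative subset of the variables listed earlier, and the helper is a generic filter rather than the interface of any specific library.

```python
# Partial temporal ordering: pre-operative -> post-operative/treatment -> late outcomes.
TIERS = {
    "p53": 0, "L1CAM": 0, "PreoperativeGrade": 0,               # pre-operative tier
    "Chemotherapy": 1, "Radiotherapy": 1, "PostoperativeGrade": 1,
    "LNM": 2, "DSS": 2,                                         # late post-operative outcomes
}

# Forbidden edges: nothing may point backwards in time.
forbidden_edges = {
    (x, y) for x in TIERS for y in TIERS
    if x != y and TIERS[x] > TIERS[y]
}
required_edges = set()   # edges fixed a priori by the clinicians, if any

def allowed_additions(candidate_edges):
    """Admissible edge additions for a hill-climbing move: forbidden edges are
    never added, while required edges would never be deleted or reversed."""
    return [e for e in candidate_edges if e not in forbidden_edges]

print(("LNM", "p53") in forbidden_edges)   # True: an outcome cannot cause a biomarker
```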
Note, for example, that the Therapy node does not have incoming edges, since therapy is always assigned at random in an RCT (and only the outcome matters).\nThe graph presented in Figure 3 is the result of the application of Algorithm 2 on the collected data set and encoded prior knowledge based on partial temporal ordering of variables. The Therapy node is split into Radiotherapy and Chemotherapy to highlight the different impact of adjuvant treatments.\nThe two graphs share a common subset of edges, e.g. the ones related to Recurrence and Survivals. A major difference stands in the edges related to the biomarkers cluster. Indeed, while in Figure 2 biomarkers, such as p53, CA125 and L1CAM, are assumed to be strongly related to LNM, in the recovered graph the PreoperativeGrade is observed as common parent of the variables contained in such cluster. Moreover, no biomarker is directly connected to LNM, not as a parent nor as a child, calling for further analyses of the collected data.\nSuch similarities and differences also appear in Figure 4, where Hospital is introduced to explore the potential presence of latent effects and selection bias. While the graph in Figure 4 is not completely different from the one in Figure 3 in terms of observed substructures, the latter encodes different independence statements due to the presence of the newly introduced Hospital.\nThe crucial difference stands in the semantic interpretation of Hospital, which in this case is not to be intended as a direct cause of its children, but rather as a proxy for others unobserved variables or biases, i.e. a context variable. Indeed, while it could be that population heterogeneity across hospitals affects the choice of adjuvant treatments, it would be nonsensical to conclude that Hospital is a cause of Ca-125. Nonetheless, the causal discovery procedure includes a set of edges that are related to spurious associations present in the data set. For example, the directed edge that connects Hospital to p53 is an instance of such pattern, which could be caused by a missing-not-at-random (MNAR) mechanism [27]. Another example of the impact of biases is represented by the directed edge from Hospital to PostoperativeGrade. In this case, an unbalanced distribution of patients' grading across geographical regions, which Hospital is a proxy of, could act as a potential source of selection bias [7].\nThe ROC curve depicted in Figure 5 is obtained by predicting the probability of the LNM class on the test set, given the CBN fitted on the structure in Figure 4 and the train set. It achieves an AUC of 0.883, with associated 95% CI 0.775-0.991, which is higher than the one obtained in [26], although it was not possible to compare the metrics using a significance test due to the different test sets. " }, { "figure_ref": [], "heading": "Conclusions and Future Works", "publication_ref": [ "b21" ], "table_ref": [], "text": "Given the known limitations of data-driven approaches when applied to observational data, causal discovery techniques are used to explore and mitigate the impact of spurious associations during the learning process. In this work we explored the task of learning a causal representation to assess the pre-operative risk of developing LNM in endometrial cancer patients. 
Furthermore, the recovered models were extended to include information from context variables, aiming to uncover previously unobserved effects.
The resulting procedure takes advantage of pre-existing techniques to reduce the bias introduced during the imputation step in a bootstrap approach. This enabled us to compute the strength of the observed associations in the obtained models across multiple re-sampled instances, allowing a step of model averaging to recover less frequent substructures. The risk assessment is performed by predicting the probability of developing LNM using a CBN fitted on the recovered structure and the given train set, showing an increased AUC over previous works.
Still, we highlighted a set of potential issues that need to be addressed in future works.
Missingness Mechanism. With the introduction of the Hospital variable we observed a set of edges that hint at the presence of a potential missing-not-at-random pattern. If this is the case, then it would require careful consideration in order to reduce the bias introduced during the missing imputation step.
Effect of Adjuvant Therapy. Once a causal graph is obtained, it is theoretically possible to estimate the causal effect of each adjuvant therapy, either single or combined, on the development of LNM. Before directly computing the effect, there are assumptions that need to be carefully verified, e.g. positivity, consistency, unconfoundedness and non-interference [22].
Impact of Selection Bias. While it is clear that observing an association between Hospital and other variables is not sufficient to conclude that there is indeed a selection bias, it is a strong hint that there are other unobserved variables that influence the causal mechanism. It would be interesting to assess the impact of the selection bias mediated by the Hospital variable alone." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "Alessio Zanga was granted a Ph.D. scholarship by F. Hoffmann-La Roche Ltd." } ]
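As a sketch of how the "Effect of Adjuvant Therapy" point above could be operationalized once the assumptions are verified, the function below computes a plain back-door adjustment estimate from empirical frequencies. The treatment, outcome and adjustment-set names in the commented usage are hypothetical placeholders, and the choice of adjustment set must be justified by the recovered graph.

```python
import pandas as pd

def adjusted_risk(df: pd.DataFrame, treatment: str, outcome: str, adjustment: list, t):
    """Back-door adjustment: P(outcome = 1 | do(treatment = t))
    = sum_z P(outcome = 1 | treatment = t, Z = z) * P(Z = z)."""
    p_do = 0.0
    for _z, stratum in df.groupby(adjustment):
        treated = stratum[stratum[treatment] == t]
        if len(treated) == 0:
            continue                       # positivity violation in this stratum
        p_do += (treated[outcome] == 1).mean() * len(stratum) / len(df)
    return p_do

# Hypothetical usage (placeholder column names and codings):
# ate = adjusted_risk(df, "Chemotherapy", "LNM", ["PreoperativeGrade", "Histology"], 1) \
#     - adjusted_risk(df, "Chemotherapy", "LNM", ["PreoperativeGrade", "Histology"], 0)
```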
Assessing the pre-operative risk of lymph node metastases in endometrial cancer patients is a complex and challenging task. In principle, machine learning and deep learning models are flexible and expressive enough to capture the dynamics of clinical risk assessment. However, in this setting we are limited to observational data with quality issues, missing values, small sample size and high dimensionality: we cannot reliably learn such models from limited observational data with these sources of bias. Instead, we choose to learn a causal Bayesian network to mitigate the issues above and to leverage the prior knowledge on endometrial cancer available from clinicians and physicians. We introduce a causal discovery algorithm for causal Bayesian networks based on bootstrap resampling, as opposed to the single imputation used in related works. Moreover, we include a context variable to evaluate whether selection bias results in learning spurious associations. Finally, we discuss the strengths and limitations of our findings in light of the presence of missing data that may be missing-not-at-random, which is common in real-world clinical settings.
Risk Assessment of Lymph Node Metastases in Endometrial Cancer Patients: A Causal Approach
[ { "figure_caption": "Figure 1 :Figure 2 :12Figure1: Scatter plot of the results of Algorithm 2. Each dot is a CBN with achieved in-sample and out-of-sample AUC on the horizontal and vertical axes, respectively. Dots color depend on the cardinality of the space of the parents of the target node LNM. To the right, a zoom of the cluster of those CBN that achieved higher values of AUC.", "figure_data": "", "figure_id": "fig_1", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 5 :35Figure3: Strength plot of recovered CBN. The edges thickness depends on the strength of the edge itself, which is estimated by the confidence matrix C. The nodes and edges are colored to ease the comparison with the reference graph Figure2. In particular, green edges are present both in reference and recovered graph with the same orientation, orange edges are present both in reference and recovered graph with reversed orientation, gray nodes and edges cannot be directly compared due to different node sets, and, finally, black edges are present only in the recovered graph.", "figure_data": "", "figure_id": "fig_2", "figure_label": "35", "figure_type": "figure" } ]
Alessio Zanga; Alice Bernasconi; Peter J F Lucas; Hanny Pijnenborg; Casper Reijnen; Marco Scutari; Fabio Stella
[ { "authors": "Elias Bareinboim; Juan D Correa; Duligur Ibelind; Thomas Icard", "journal": "", "ref_id": "b0", "title": "On Pearl's Hierarchy and the Foundations of Causal Inference", "year": "2020" }, { "authors": " Bendifallah; P Canlorbe; E Collinet; F Arsène; C Huguet; D Coutant; O Hudry; E Graesslin; C Raimond; E Touboul; M Daraï; Ballester", "journal": "British Journal of Cancer", "ref_id": "b1", "title": "Just how accurate are the major risk stratification systems for early-stage endometrial cancer", "year": "2015" }, { "authors": "Linda Wenya; Ahmed Bi; Matthew B Hosny; Maryellen L Schabath; Giger; J Nicolai; Alireza Birkbak; Tavis Mehrtash; Omar Allison; Christopher Arnaout; Ian F Abbosh; Raymond H Dunn; Mak; M Rulla; Clare M Tamimi; Charles Tempany; Udo Swanton; Lawrence H Hoffmann; Robert J Schwartz; Raymond Y Gillies; Hugo J W L Huang; Aerts", "journal": "CA: A Cancer Journal for Clinicians", "ref_id": "b2", "title": "Artificial intelligence in cancer imaging: Clinical challenges and applications", "year": "2019" }, { "authors": "Freddie Bray; Jacques Ferlay; Isabelle Soerjomataram; Rebecca L Siegel; Lindsey A Torre; Ahmedin Jemal", "journal": "CA: A Cancer Journal for Clinicians", "ref_id": "b3", "title": "Global cancer statistics 2018: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries", "year": "2018" }, { "authors": "Stephanie M De Boer; Melanie E Powell; Linda Mileshkin", "journal": "The Lancet Oncology", "ref_id": "b4", "title": "Adjuvant chemoradiotherapy versus radiotherapy alone in women with high-risk endometrial cancer (portec-3): patterns of recurrence and post-hoc survival analysis of a randomised phase 3 trial", "year": "2019-09" }, { "authors": "Olivier Elemento; Christina Leslie; Johan Lundin; Georgia Tourassi", "journal": "Nature Reviews Cancer", "ref_id": "b5", "title": "Artificial intelligence in cancer research, diagnosis and therapy", "year": "2021" }, { "authors": "Kevin M Esterling; David Brady; Eric Schwitzgebel", "journal": "OSF Preprints", "ref_id": "b6", "title": "The Necessity of Construct and External Validity for Generalized Causal Claims", "year": "2021" }, { "authors": "Nir Friedman", "journal": "", "ref_id": "b7", "title": "The Bayesian Structural EM", "year": "1998" }, { "authors": "Nir Friedman; Moises Goldszmidt; Abraham Wyner", "journal": "", "ref_id": "b8", "title": "Data Analysis with Bayesian Networks: A Bootstrap Approach", "year": "2013" }, { "authors": "Gad Getz; Stacey B Gabriel; Kristian Cibulskis", "journal": "Nature", "ref_id": "b9", "title": "Integrated genomic characterization of endometrial carcinoma", "year": "2013" }, { "authors": "David Gunning; Mark Stefik; Jaesik Choi; Timothy Miller; Simone Stumpf; Guang-Zhong Yang", "journal": "Science Robotics", "ref_id": "b10", "title": "XAI-Explainable artificial intelligence", "year": "2019" }, { "authors": "Dean Ho", "journal": "Science", "ref_id": "b11", "title": "Artificial intelligence in cancer therapy", "year": "2020-02" }, { "authors": "Andreas Holzinger; Georg Langs; Helmut Denk; Kurt Zatloukal; Heimo Müller", "journal": "WIREs Data Mining and Knowledge Discovery", "ref_id": "b12", "title": "Causability and explainability of artificial intelligence in medicine", "year": "2019" }, { "authors": "Shigao Huang; Jie Yang; Simon Fong; Qi Zhao", "journal": "Cancer Letters", "ref_id": "b13", "title": "Artificial intelligence in cancer diagnosis and prognosis: Opportunities and challenges", "year": "2020-02" }, { "authors": "Vivek Kaul; Sarah 
Enslin; Seth A Gross", "journal": "Gastrointestinal Endoscopy", "ref_id": "b14", "title": "History of artificial intelligence in medicine", "year": "2020" }, { "authors": "K F Felix; Anthony N Kommoss; Friedrich Karnezis; Aline Kommoss; Talhouk; Andrei Florin; Annette Taran; C Blake Staebler; David G Gilks; Bernhard Huntsman; Sara Y Krämer; Jessica N Brucker; Stefan Mcalpine; Kommoss", "journal": "British Journal of Cancer", "ref_id": "b15", "title": "L1cam further stratifies endometrial carcinoma patients with no specific molecular risk profile", "year": "2018" }, { "authors": "Martin Koskas; Marie Fournier; Anke Vanderstraeten; Francine Walker; Dirk Timmerman; Ignace Vergote; Frédéric Amant", "journal": "European Journal of Cancer", "ref_id": "b16", "title": "Evaluation of models to predict lymph node metastasis in endometrial cancer: A multicentre study", "year": "2016" }, { "authors": "L Steffen; Lauritzen", "journal": "Computational Statistics and Data Analysis", "ref_id": "b17", "title": "The EM algorithm for graphical association models with missing data", "year": "1995" }, { "authors": "Sanghack Lee; Elias Bareinboim", "journal": "", "ref_id": "b18", "title": "Structural causal bandits: Where to intervene?", "year": "2018" }, { "authors": "Daniela Matei; Virginia Filiaci; Marcus E Randall", "journal": "New England Journal of Medicine", "ref_id": "b19", "title": "Adjuvant chemotherapy plus radiation for locally advanced endometrial cancer", "year": "2019" }, { "authors": "Christopher Meek", "journal": "", "ref_id": "b20", "title": "Strong Completeness and Faithfulness in Bayesian Networks", "year": "2013" }, { "authors": "Judea Pearl; Madelyn Glymour; Nicholas P Jewell", "journal": "John Wiley \\& Sons", "ref_id": "b21", "title": "Causal inference in statistics: A primer", "year": "2016" }, { "authors": "Judea Pearl; Stuart Russell", "journal": "", "ref_id": "b22", "title": "BAYESIAN NETWORKS", "year": "2003" }, { "authors": "Luisa Pumplun; Mariska Fecho; Nihal Wahl; Felix Peters; Peter Buxmann", "journal": "Journal of Medical Internet Research", "ref_id": "b23", "title": "Adoption of Machine Learning Systems for Medical Diagnostics in Clinics: Qualitative Interview Study", "year": "2021" }, { "authors": "J M Louis; Nicole C M Van Der Putten; Koen Visser; Van De; Maria Vijver; Peter Santacana; Johan Bronsert; Marc Bulten; Eva Hirschfeld; Antonio Colas; Angel Gil-Moreno; Gemma Garcia; Fransesc Mancebo; Jone Alameda; Reidun K Trovik; Jutta Kopperud; Stefanie Huvila; Martin Schrauwen; Francine Koskas; Vit Walker; Lubos Weinberger; Eva Minar; Jandakova; P L M Marc; Saskia Snijders; Van Den; Xavier Berg-Van Erp; Helga B Matias-Guiu; Frederic Salvesen; Amant; F A G Leon; Johanna M A Massuger; Pijnenborg", "journal": "British Journal of Cancer", "ref_id": "b24", "title": "L1cam expression in endometrial carcinomas: An enitec collaboration study", "year": "2016" }, { "authors": "Evangelia Casper Reijnen; Nicole C M Gogou; Hilde Visser; Jordache Engerud; Ramjith; J M Louis; Koen Van Der Putten; Van De; Maria Vijver; Peter Santacana; Johan Bronsert; Marc Bulten; Eva Hirschfeld; Antonio Colas; Armando Gil-Moreno; Gemma Reques; Camilla Mancebo; Jone Krakstad; Ingfrid S Trovik; Jutta Haldorsen; Martin Huvila; Vit Koskas; Marketa Weinberger; Jitka Bednarikova; Hausnerova; A M Anneke; Xavier Van Der Wurff; Frederic Matias-Guiu; Amant; F A G Leon; Massuger; P L M Marc; Snijders; V N Heidi; Kusters-Vandevelde; J F Peter; Johanna M A Lucas; Pijnenborg", "journal": "PLoS Medicine", "ref_id": "b25", 
"title": "Preoperative risk stratification in endometrial cancer (ENDORISK) by a Bayesian network model: A development and validation study", "year": "2020" }, { "authors": "Marco Scutari", "journal": "", "ref_id": "b26", "title": "Bayesian network models for incomplete and dynamic data", "year": "2020" }, { "authors": "Marco Scutari; Claudia Vitolo; Allan Tucker", "journal": "Statistics and Computing", "ref_id": "b27", "title": "Learning Bayesian networks from big data with greedy search: computational complexity and efficient implementation", "year": "2019" }, { "authors": "Peter Spirtes; Richard Clark N Glymour; David Scheines; Heckerman", "journal": "MIT press", "ref_id": "b28", "title": "Causation, prediction, and search", "year": "2000" }, { "authors": "Eric V Strobl; Shyam Visweswaran; Peter L Spirtes", "journal": "International Journal of Data Science and Analytics", "ref_id": "b29", "title": "Fast causal inference with non-random missingness by test-wise deletion", "year": "2018" }, { "authors": "Jone Trovik; Elisabeth Wik; M J Henrica; Camilla Werner; Harald Krakstad; Ingrid Helland; Tormund S Vandenput; Ingunn M Njolstad; Janusz Stefansson; Solveig Marcickiewicz; Anne C Tingulstad; Frederic Staff; Lars A Amant; Helga B Akslen; Salvesen", "journal": "European Journal of Cancer", "ref_id": "b30", "title": "Hormone receptor loss in endometrial carcinoma curettage predicts lymph node metastasis and poor outcome in prospective multicentre trial", "year": "2013" }, { "authors": "Olga Troyanskaya; Zlatko Trajanoski; Anne Carpenter; Sebastian Thrun; Narges Razavian; Nuria Oliver", "journal": "Nature cancer", "ref_id": "b31", "title": "Artificial intelligence and cancer", "year": "2020-02" }, { "authors": "Petra Vinklerová; Petra Ovesná; Jitka Hausnerová; Johanna M A Pijnenborg; J F Peter; Casper Lucas; Stephanie Reijnen; Vít Vrede; Weinberger", "journal": "Frontiers in Oncology", "ref_id": "b32", "title": "External validation study of endometrial cancer preoperative risk stratification model (endorisk)", "year": "2022" }, { "authors": "Alessio Zanga; Elif Ozkirimli; Fabio Stella", "journal": "International Journal of Approximate Reasoning", "ref_id": "b33", "title": "A Survey on Causal Discovery: Theory and Practice", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 133.77, 256.49, 343.71, 22.27 ], "formula_id": "formula_0", "formula_text": "A directed path π is a finite ordered set of nodes π = (V 0 → • • • → V n ) such that each adjacent pair of nodes (V i , V i+1 ) in π is a directed edge in E." }, { "formula_coordinates": [ 4, 133.77, 469.58, 343.83, 8.81 ], "formula_id": "formula_1", "formula_text": "Definition 3 (Causal Edge Assumption) Let G = (V, E) be a causal graph." }, { "formula_coordinates": [ 4, 243.9, 514.49, 233.58, 9.96 ], "formula_id": "formula_2", "formula_text": "X := f (Pa(X)) ∀X ∈ V(1)" }, { "formula_coordinates": [ 5, 246.43, 252.17, 231.05, 20.75 ], "formula_id": "formula_3", "formula_text": "P (X) = X∈X P (X | Pa(X))(2)" }, { "formula_coordinates": [ 5, 133.77, 341.03, 343.71, 20.76 ], "formula_id": "formula_4", "formula_text": ") A Causal BN B = (G, Θ) is a BN where the associated DAG G is a causal graph." }, { "formula_coordinates": [ 8, 176.28, 333.61, 247.4, 22.31 ], "formula_id": "formula_5", "formula_text": "T P R = T P T P + F N (3) T P R = T N T N + F P(4)" }, { "formula_coordinates": [ 8, 139.14, 460.95, 223.44, 20.58 ], "formula_id": "formula_6", "formula_text": "1: procedure ConfidenceMatrix(D, K, α α α, n, m) 2:" }, { "formula_coordinates": [ 8, 139.14, 485.07, 134.61, 21.68 ], "formula_id": "formula_7", "formula_text": "for i ∈ [1, n] do 4: D i ← Sample(D, m)" }, { "formula_coordinates": [ 8, 139.14, 508.38, 338.34, 22.27 ], "formula_id": "formula_8", "formula_text": "G i ← StructuralEM(D i , K, α α α) Learn G i from D i and K. 6: C[X, Y ] ← C[X, Y ] + 1, ∀ (X, Y ) ∈ E i" }, { "formula_coordinates": [ 9, 139.14, 179.97, 163.26, 20.58 ], "formula_id": "formula_9", "formula_text": "1: procedure CBN(D, K, α α α, n, m, λ) 2:" }, { "formula_coordinates": [ 9, 139.14, 237.5, 94.47, 11.81 ], "formula_id": "formula_10", "formula_text": "5: Θ ← EM(G, D)" } ]
10.1371/journal.pcbi.1006613
2023-08-08
[ { "figure_ref": [ "fig_1", "fig_1", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b33", "b6", "b34", "b18", "b36", "b39" ], "table_ref": [], "text": "Dental implant is a common surgical procedure in oral and maxillofacial surgery (Varga Jr et al., 2020), in which the surgical guide plays an important role in precise bone drilling and implant placement (Gargallo-Albiol et al., 2021;Vinci et al., 2020). However, the design of the surgical guide heavily relies on the manual location of the implant position using the patient's panoramic radiographic image, or cone beam computed tomography (CBCT) data, which is subjective and prone to doctor's experiences (Liu et al., 2021). In contrast, artificial intelligence (AI) methods can quickly locate the implant position, which is trained using a large number of successful implant cases designed by dentists with rich related clinical experiences. As the AI methods always give the same prediction for the same data, it is thus more objective when predicting the implant position and have less varitions. Therefore, it inspire us to improve the efficiency of surgical guide design using deep learning-based methods.\nGenerally, the prediction of implant location in CBCT data can be considered as a three-dimensional (3D) regression task. However, the training of a 3D neural network requires a lot of training data, which leads to higher collection and labeling costs. The common solution is to convert the 3D CBCT data into a series of 2D slices. Dental-YOLO (Widiasri et al., 2022) utilized the 2D sagittal view of CBCT to measure the oral bone, e.g., the alveolar bone, and determine the implant position indirectly. ImplantFormer (Yang et al., 2022) predicts the implant position using the 2D axial view of tooth crown images and projects the prediction results back to the tooth root by the space transform algorithm.\nEven though current methods can achieve great performance on the implant position prediction, these methods do not consider the influence of the variation of the tooth cross-sectional area, which may degrade the performance of the prediction network.\nFirst of all, physically, the irregular structure of the tooth leads to the decrease of cross-sectional area from the tooth crown to the tooth root. As a result, the gap between neighboring teeth increase as the number of CT layers grows. When the gap between neighboring teeth is big enough, the regions between sparse teeth may have a similar characteristic with the actual implant region (see Fig. 1(a)), which will misguide the prediction network to generate false positive detection. Secondly, as shown in Fig. 1(b), the tooth spacing has a big variation (from 9.71 to 14.72 mm) across different patients, where the fixed kernel size of convolution or patch embedding can not extract robust features.\nBoth problems make a big challenge for implant position regression.\nTo tackle these challenges, we develop a two-stream implant position regression framework (TSIPR), which consists of an implant region detector (IRD) and a multi-scale patch embedding regression network (MSPENet). IRD is an object detector designed to locate the implant region and filter out the region of sparse teeth. The training of IRD uses the extended bounding box of implant position annotation. Compared to the ground-truth position (the red point in Fig. 
2) that has little useful texture, the extended box (the dashed blue box in • Extensive experiments on a dental implant dataset demonstrates that the proposed TSIPR achieves superior performance than the existing methods, especially for patients with sparse teeth.\nThe rest of the paper is organized as follows. Section 2 briefly reviews the related works. Section 3 gives the details of the proposed method. Section 4 presents experiments on a dental implant dataset and the experimental results are compared with that of mainstream detectors and the state-of-the-art methods. Section 5 provides the conclusions. " }, { "figure_ref": [], "heading": "Related work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Deep learning in dentistry", "publication_ref": [ "b8", "b42", "b20", "b22", "b4", "b27", "b11" ], "table_ref": [], "text": "Deep learning technology has been applied in many tasks of dentistry, such as tooth segmentation, orthodontic treatment, and dental implant classification. For tooth segmentation, the studies mainly focus on two kinds of data, i.e. CBCT data and 3D dental point cloud data. Jang et al. (Jang et al., 2021) proposed a fully automated method of identifying and segmenting 3D individual teeth from dental CBCT images, which addressed the difficulty of separating the individual tooth from adjacent teeth and its surrounding alveolar bone. Zhang et al. (Zhang et al., 2018) proposed a label tree-based method to assign each tooth several labels and decompose the segmentation task into several sub-tasks for resolving the problem of limited training data. Mahdi et al. (Mahdi et al., 2020) proposed a residual network-based faster R-CNN model for automatic teeth recognition, which further refined the candidates using a candidate optimization technique that evaluates both positional relationship and confidence score. For orthodontic treatment, Qian et al. (Qian et al., 2020) proposed a multi-head attention neural network for detecting cephalometric landmarks, which consists of a multi-head and an attention. The multi-head component adopts multi-head subnets to learn different features from various aspects. The attention uses a multi-attention mechanism to refine the detection based on the features extracted by the multi-head. Dai et al. (Dai et al., 2019) proposed a new automated cephalometric landmark localization method based on GAN, which trains an adversarial network to learn the mapping from features to the distance map of a specific target landmark. For the task of dental implants classification, Sukegawa et al. (Sukegawa et al., 2020) evaluated a series of CNN models, i.e. a basic CNN with three convolutional layers, VGG16 and VGG19 transfer-learning models, and fine-tuned VGG16 and VGG19 for implant classification. Kim et al. (Kim et al., 2020) developed an optimal pre-trained network architecture for identifying four different types of implants, i.e. Brånemark Mk TiUnite Implant, Dentium Implantium Implant, Straumann Bone Level Implant and Straumann Tissue Level Implant on intraoral radiographs." }, { "figure_ref": [], "heading": "Deep learning in object detection", "publication_ref": [ "b17", "b23", "b24", "b7", "b1", "b28", "b30", "b35", "b13", "b5", "b2", "b45" ], "table_ref": [], "text": "Current object detectors can be divided into two categories, i.e. anchorbased and anchor-free. The anchor-based detector sets the pre-defined anchor box before training and the anchor-free detector directly regresses the bounding box of the object. 
The anchor-based detector can be further grouped into onestage and two-stage methods. SSD (Liu et al., 2016) and YOLO (Redmon et al., 2016) are classic one-stage detectors, which directly predict the bounding box and category of objects based on the feature maps. Faster R-CNN (Ren et al., 2015) is a classical two-stage detector that consists of a region proposal network (RPN) and a prediction network (R-CNN (Girshick et al., 2014)). A series of detection algorithms (Cai & Vasconcelos, 2018;Sun et al., 2021;Tan et al., 2020;Wang et al., 2022) have been proposed to improve the performance of these anchor-based detectors. Compared to the anchor-based detector that heavily relies on the predefined anchor box, the anchor-free detector breaks such limitation. CornerNet (Law & Deng, 2018) simplified the prediction of the object bounding box as the regression of the top-left corner and the bottomright corner. CenterNet (Duan et al., 2019) further simplified CornerNet by regressing the center of object. With the development of the vision transformer, transformer-based anchor-free detector achieves great success in object detection. DETR (Carion et al., 2020) employs ResNet as the backbone and introduces a transformer-based encoder-decoder architecture for the object detection task. Deformable DETR (Zhu et al., 2020) extends DETR with sparse deformable attention that reduces the training time significantly." }, { "figure_ref": [], "heading": "Deep learning in implant position estimation", "publication_ref": [ "b26", "b14", "b29", "b36" ], "table_ref": [], "text": "The computer-aided diagnosis (CAD) systems has been applied to dental implant planning. Sadighpour et al. (Sadighpour et al., 2014) developed an ANN model which utilized a number of input factors to formulate a decision regarding the type of prosthesis (fixed or removable) and the specific design of the prosthesis for rehabilitation of the edentulous maxilla. Lee et al. (Lee et al., 2012) applied fuzzy recognition map for implant abutment selection. Szejka et al. (Szejka et al., 2011) developed an interactive reasoning system which requires the dentist to select the region of interest within a 3D bone model based on computed tomography (CT) images, to help the selection of the optimum implant length and design. However, these CAD systems need manual hyperparameter adjustment.\nRecently, researchers proposed different approaches to determine the implant position using the panoramic radiographic images and 2D slices of CBCT. Kurt et al. (Kurt Bayrakdar et al., 2021) utilised multiple pre-trained convolutional networks to segment the teeth and jaws to locate the missing tooth and generate a virtual tooth mask according to the neighbouring teeth' location and tilt. Widiasri et al. introduced Dental-YOLO (Widiasri et al., 2022) " }, { "figure_ref": [ "fig_5" ], "heading": "Method", "publication_ref": [ "b39", "b39" ], "table_ref": [], "text": "Using tooth crown image to regress the implant position has been shown to be effective in (Yang et al., 2022). Therefore, in this work, we follow this paradigm to train TSIPR. An overview of TSIPR is presented in Fig. 3 (Yang et al., 2022) to obtain the implant position at tooth rootP os p r (j) = (x p j , ŷ p j , ẑ p j )." }, { "figure_ref": [ "fig_1", "fig_0", "fig_0" ], "heading": "Implant Region Detector", "publication_ref": [ "b35", "b25" ], "table_ref": [], "text": "As shown in Fig. 
1(a), the gap between neighboring teeth at the region of sparse teeth has a similar characteristic with the actual implant region, which will misguide the prediction network to generate false detection. To tackle this problem, we propose to train an implant region detector (IRD) to filter out the false detection.\nThe IRD is trained using the extended bounding box of implant position annotation (shown as the dashed blue box in Fig. 2). Different from the original implant position annotation (the red point in Fig. 2) that has little useful texture at the implant region, the extended box includes the neighboring teeth that enable the implant region to contain rich characteristics. Moreover, at the scale of the extended region, the real implant region has a larger interval between the neighboring teeth than that between sparse teeth. Both characteristics can be easily captured by the IRD. As the output of IRD is the bounding box with the highest confidence, which represents the most probable implant region, the false detection generated at the region of sparse teeth will be removed. Specifically, we set the size of the extended box as 128 × 128 to ensure that the texture of neighboring teeth is included. The extended bounding box will not introduce additional labeling costs, as the coordinate of the extended box is determined according to the original annotation.\nConsidering that the output of IRD is used to refine the detection results, a trade-off between location performance and inference speed is required. Therefore, we introduce a strong detector, i.e. YOLOv7-X (Wang et al., 2022) as is the GIoU loss (Rezatofighi et al., 2019). The overall training loss of IRD is:\nIRD\nL I = L cls + L loc + L cof (1)" }, { "figure_ref": [ "fig_1", "fig_5" ], "heading": "Multi-scale Patch Embedding Regression Network", "publication_ref": [ "b39", "b31", "b19", "b0" ], "table_ref": [], "text": "In implant position regression, ViT relies on the patch embedding operation and multi-head self-attention (MHSA) to build the relationship between the implant position and the texture of neighboring teeth (Yang et al., 2022). How-ever, due to the structural difference in the patient's mouth, the teeth spacing of tooth crown image has a big variation in different patients (see Fig. 1(a)).\nThe single kernel size of patch embedding in ViT can not perform well in this situation. Additionally, although ViT shows great performance in capturing global context (Tuli et al., 2021), it may ignore the local texture within each patch (Lowe, 1999), which can not extract enriched feature representation. In contrast, convolutional neural network (CNN) benefits from the inductive bias to capture local texture (Baker et al., 2018). To tackle the above issues, we design a multi-scale patch embedding regression network (MSPENet) to predict the implant position, which mainly consists of three parts: i) Multi-scale Patch Embedding, ii) Encoder and Decoder, iii) Regression Head. An overview of the proposed network is presented in Fig. 3.\nGiven a tooth crown image I p , the multi-scale patch embedding module firstly extracts robust features by three different sizes of patch embedding. Then, the output features are integrated together by concatenation and input into the encoder for further feature extraction. The decoder is used to recover the highresolution representation from the output of encoder. In the end, the regression head aims to output a Gaussian heatmap that highlights a precise implant position. 
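To make the extended-box supervision for the IRD described above concrete, the box can be derived mechanically from the original point annotation, so no extra labeling effort is needed. The following is a minimal sketch, not the authors' code; the 128×128 size follows the text, while the clamping behaviour at image borders and the function name are our own assumptions.

def extended_implant_box(cx, cy, img_w, img_h, box_size=128):
    """Derive the extended implant-region box used to train the IRD from the
    original implant point annotation (cx, cy), clipped to the image."""
    half = box_size // 2
    x1, y1 = max(0, cx - half), max(0, cy - half)
    x2, y2 = min(img_w, cx + half), min(img_h, cy + half)
    return x1, y1, x2, y2

# Example: the 1-pixel ground-truth point becomes a 128x128 training box
# that also covers the neighbouring teeth.
print(extended_implant_box(cx=300, cy=220, img_w=640, img_h=640))  # (236, 156, 364, 284)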
Next, we will introduce these modules in detail." }, { "figure_ref": [], "heading": "Multi-scale Patch Embedding Module", "publication_ref": [ "b37" ], "table_ref": [], "text": "The multi-scale patch embedding module is devised to extract robust features from the input image. Similar to the CvT (Wu et al., 2021), we use convolution with overlapping patches to implement the patch embedding layer.\nThe convolutional patch embedding layer enables us to output features of the same resolution with different patch sizes. Specifically, we select three patch embedding layers with size of 5×5, 8×8, and 10×10, respectively. The patch sizes are determined by the experimental results. The multi-scale patch embedding module takes I r p as input and separately extracts image features in parallel. Then, the extracted multi-scale features are aggregated by concatenation. To fuse the features of different patch embedding layers, we use 1×1 convolution to smooth the aggregated feature. The output feature is fed into the encoder for further feature learning." }, { "figure_ref": [ "fig_5" ], "heading": "Encoder and Decoder", "publication_ref": [ "b3", "b21", "b15", "b38" ], "table_ref": [], "text": "Recent works show the benefit of combining convolution and transformer in network design (Chen et al., 2022;Mehta & Rastegari, 2021), in which the convolution captures the local texture and the transformer extracts global context.\nConsidering that the local texture within the patch is also important for the prediction of implant position, we devise a global-local feature interaction block (GLFIB) for Encoder to integrate both local texture and global context. The architecture of GLFIB is given in Fig. 3. GLFIB consists of three branches of transformer and one branch of convolution in parallel. We use multiple transformer modules to enrich the channel. This design aims to enable the network to focus more on capturing the relationship between different patches. To alleviate the computational burden, we follow (Lee et al., 2022) to adopt depth-wise convolutions and the efficient factorized self-attention (Xu et al., 2021) to construct GLFIB. Specifically, the local feature of network l ∈ R h× w×c is separately fed into each branch for feature extraction, and then the output features of branches are aggregated together by concatenation:\nA = concat[C 1 (l), T 1 (l), T 2 (l), T 3 (l)],(2)\nwhere A ∈ R h 2 × w 2 ×4c is the aggregated feature. C(•) and T (•) is the convolution and transformer module, respectively. The kernel size of both modules are 3 ×3. After obtaining the aggregated features, we use f (•) to interact features between local texture and global context:\nO = f (A),(3)\nwhere O ∈ R h 2 × w 2 ×2c is the final output feature. We use 1×1 convolution with channel of 2c for f (•). The encoder of MSPENet consists of four cascaded GLFIB, and the output of the last GLFIB is used as input for the decoder.\nThe output of the Encoder is a high-level feature. To ensure fine-grained heatmap regression, three deconvolution layers are adopted as the Decoder to recover high-resolution features from the output of encoder. The Decoder consecutively upsamples feature map as high-resolution feature representations, in which the output resolution the same as the first GLFIB. In the end, the upsampled feature map is input into the regression network to locate the implant position." 
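The following PyTorch sketch illustrates the branch layout of GLFIB as described by Eqs. (2)-(3). It is an illustration under stated assumptions rather than the authors' implementation: a standard nn.MultiheadAttention stands in for the efficient factorized self-attention of Xu et al. (2021), the stride-2 depth-wise 3×3 convolutions are assumed in order to reproduce the stated h/2 × w/2 output resolution, and the channel width is a placeholder.

import torch
import torch.nn as nn

class TransformerBranch(nn.Module):
    """One 'T' branch of Eq. (2): 3x3 depth-wise tokenisation (stride 2)
    followed by self-attention over the flattened patches."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.tokenize = nn.Conv2d(channels, channels, 3, stride=2, padding=1,
                                  groups=channels)           # depth-wise, halves h and w
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):
        x = self.tokenize(x)                                  # (B, C, h/2, w/2)
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)                      # (B, h*w/4, C)
        t = t + self.attn(self.norm(t), self.norm(t), self.norm(t))[0]
        return t.transpose(1, 2).reshape(b, c, h, w)

class GLFIB(nn.Module):
    """Global-local feature interaction block: one convolution branch (local
    texture) plus three transformer branches (global context), aggregated by
    concatenation and fused by a 1x1 convolution, cf. Eqs. (2)-(3)."""
    def __init__(self, channels):
        super().__init__()
        self.conv_branch = nn.Sequential(                     # the 'C' branch
            nn.Conv2d(channels, channels, 3, stride=2, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.GELU())
        self.trans_branches = nn.ModuleList(
            [TransformerBranch(channels) for _ in range(3)])  # the 'T' branches
        self.fuse = nn.Conv2d(4 * channels, 2 * channels, 1)  # f(.) in Eq. (3)

    def forward(self, x):
        feats = [self.conv_branch(x)] + [t(x) for t in self.trans_branches]
        return self.fuse(torch.cat(feats, dim=1))             # (B, 2C, h/2, w/2)

# out = GLFIB(64)(torch.randn(1, 64, 56, 56))  # -> torch.Size([1, 128, 28, 28])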
}, { "figure_ref": [], "heading": "Regression Head", "publication_ref": [ "b44", "b16", "b7" ], "table_ref": [], "text": "The regression network consists of a heatmap head and a local offset head, which are used for predicting the implant position. The heatmap head generates a Gaussian heatmap $F \in [0,1]^{\frac{W}{g} \times \frac{H}{g}}$, where g is the down-sampling factor of the prediction and is set to 4. Following the standard practice of CenterNet (Zhou et al., 2019), the ground-truth position is transformed into the heatmap F using a 2D Gaussian kernel:\n$$F_{xy} = \exp\left(-\frac{(x-\tilde{t}_x)^2 + (y-\tilde{t}_y)^2}{2\sigma^2}\right) \quad (4)$$\nwhere $(\tilde{t}_x, \tilde{t}_y)$ is the ground-truth annotation in F and $\sigma$ is an object size-adaptive standard deviation. The heatmap head is optimized by the focal loss (Lin et al., 2017):\n$$L_h = -\frac{1}{N}\sum_{xy} \begin{cases} (1-\hat{F}_{xy})^{\alpha}\log(\hat{F}_{xy}) & \text{if } F_{xy}=1 \\ (1-F_{xy})^{\beta}(\hat{F}_{xy})^{\alpha}\log(1-\hat{F}_{xy}) & \text{otherwise} \end{cases} \quad (5)$$\nwhere $\alpha$ and $\beta$ are the hyper-parameters of the focal loss and $\hat{F}$ is the predicted heatmap. The heatmap F is expected to be equal to 1 at the ground-truth position and equal to 0 otherwise.\nThe local offset head computes the discretization error caused by g, which is used to further refine the predicted location. The local offset loss $L_o$ is optimized by the L1 loss (Girshick et al., 2014). The overall training loss of MSPENet is:\n$$L_M = L_h + L_o \quad (6)$$" }, { "figure_ref": [], "heading": "Two-Stream Implant Position Regression Framework", "publication_ref": [ "b39" ], "table_ref": [], "text": "TSIPR is proposed to better predict the implant position; it parallelizes a coarse implant region detector (IRD) and an implant position regression network (MSPENet):\n$$M = \mathrm{IRD}(I_p^l, W_l), \quad (7)$$\n$$H = \mathrm{MSPENet}(I_p^r, W_r), \quad (8)$$\nwhere $M \in \mathbb{R}^{\frac{\hat{h}}{4} \times \frac{\hat{w}}{4}}$ is the RoI mask of the implant region, and $W_l$ and $W_r$ represent the learning parameters of IRD and MSPENet, respectively. After obtaining the regression heatmap $H \in \mathbb{R}^{\frac{\hat{h}}{4} \times \frac{\hat{w}}{4}}$ from the MSPENet, the RoI mask is applied for refinement to filter out false positive predictions:\n$$y = E(M \otimes H), \quad (9)$$\nwhere y is the extracted implant position at the tooth crown, E represents the coordinate extraction operation, and $\otimes$ is the matrix multiplication operation.\nHowever, the predicted implant positions at the tooth crown area are not the real location of the implant. To obtain the implant position at the tooth root, we introduce a space transformation algorithm (Yang et al., 2022), which fits the center line of the implant using the predicted implant positions at the tooth crown and then extends the center line to the root area. By this means, the intersections of the implant center line with the 2D slices of the tooth root image, i.e. the implant positions at the tooth root area, can be obtained. " }, { "figure_ref": [], "heading": "Evaluation Criteria", "publication_ref": [ "b10" ], "table_ref": [], "text": "In clinical practice, the diameter of the implant is 3.5∼5 mm, and the mean error between the predicted and ideal implant position is required to be less than about 5 pixels (1 mm at the CBCT imaging resolution used in this paper), i.e., around 25% of the implant size. Therefore, instead of the general AP 50 , AP 75 (Kaur et al., 2022) is used as the evaluation criterion in this work. The calculation of AP is defined as follows:\n$$Precision = \frac{TP}{TP + FP} \quad (10)$$\n$$Recall = \frac{TP}{TP + FN} \quad (11)$$\n$$AP = \int_0^1 P(r)\,dr \quad (12)$$\nHere TP, FP and FN are the numbers of correct, false and missed predictions, respectively. P(r) is the PR curve, where recall and precision act as abscissa and ordinate, respectively."
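To make the training objective of the regression head concrete, the Gaussian target of Eq. (4) and the penalty-reduced focal loss of Eq. (5) can be written in a few lines of PyTorch. This is a generic CenterNet-style sketch, not the authors' code; the hyper-parameter values (e.g. alpha=2, beta=4) and the clamping constant are common defaults, not values taken from the paper.

import torch

def gaussian_target(height, width, tx, ty, sigma):
    """Eq. (4): 2D Gaussian centred on the (down-sampled) ground-truth position."""
    ys = torch.arange(height, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, -1)
    return torch.exp(-((xs - tx) ** 2 + (ys - ty) ** 2) / (2 * sigma ** 2))

def heatmap_focal_loss(pred, target, alpha=2, beta=4, eps=1e-6):
    """Eq. (5): pixel-wise focal loss with the penalty-reduction term (1-F)^beta
    around the ground-truth peaks; `pred` is the predicted heatmap in (0, 1)."""
    pos = target.eq(1).float()                       # peak pixels (F_xy = 1)
    neg = 1.0 - pos
    pred = pred.clamp(eps, 1 - eps)
    pos_loss = ((1 - pred) ** alpha) * torch.log(pred) * pos
    neg_loss = ((1 - target) ** beta) * (pred ** alpha) * torch.log(1 - pred) * neg
    num_pos = pos.sum().clamp(min=1)                 # N = number of annotated implants
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos

# target = gaussian_target(128, 128, tx=40, ty=70, sigma=3)
# loss = heatmap_focal_loss(torch.sigmoid(torch.randn(128, 128)), target)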
}, { "figure_ref": [], "heading": "Performance Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Comparison of different IRD", "publication_ref": [], "table_ref": [], "text": "IRD is designed to locate the implant region and refine the predicted implant position. Therefore, a highly accurate detector with fast inference speed is required. To achieve a good trade-off between accuracy and speed for the IRD, we compare three versions of the Yolov7 detector; the results are listed in Table 1. Since the implant region located by the IRD is a coarse area, AP 50 is used as the evaluation criterion to assess the location performance. From the table we can observe that Yolov7-X achieved the highest precision of 86.9% and a medium AP value of 87.8%. Although Yolov7-W6 achieved the best overall performance, i.e. 89.3% AP, its precision is close to that of Yolov7-X.\nCompared to the recall rate, the precision of the bounding box is a more important index for the IRD, since only the highest-confidence box is selected for each image. In terms of inference time, Yolov7-W6 is slower than Yolov7-X by nearly 10 fps. Yolov7 has the highest inference speed, but its locating performance is poor. Consequently, we chose Yolov7-X as the IRD for the implant region location task. " }, { "figure_ref": [ "fig_8" ], "heading": "Component Ablation", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "To validate the effectiveness of the proposed multi-scale patch embedding module, we compare the network performance of single-scale and multi-scale patch embedding. Specifically, we test patch sizes from 5 to 10 in the experiment and the results are listed in Table 2. From the table we can observe that the performance of the single-scale network with different patch sizes is similar. We choose the three patch sizes (5, 8, 10) with the highest performance for our multi-scale patch embedding module. As the combination of multi-scale patch embeddings can extract robust features from images with different tooth spacing, the multi-scale patch embedding improves the location performance by nearly 0.7 ∼ 1.5% compared to the single-scale one. These experimental results are consistent with our assumption and demonstrate the effectiveness of the proposed multi-scale patch embedding method.\nIn Fig. 5, we visualize the detection results of the single-scale and multi-scale patch embedding networks. The visualization indicates that the multi-scale patch embedding network generates more accurate detection results than the single-scale one. " }, { "figure_ref": [ "fig_9", "fig_10" ], "heading": "Branch Ablation", "publication_ref": [], "table_ref": [], "text": "As previously discussed, MSPENet might easily generate false detections at the sparse teeth region. We conducted an ablation experiment to validate whether the proposed IRD can reduce the false detection rate. Fig. 6 shows the PR curve and the F1 score of the TSIPR for different IoU thresholds; the abscissa and ordinate of the PR curve are recall and precision, respectively, and on the dashed line recall equals precision. From the curves we can observe that for both IoU of 0.5 and 0.75, the PR curve of the network with the IRD branch is above the baseline PR curve, which indicates that the IRD branch improves the detection performance in both recall and precision. 
The F1 score of the network with IRD are also 2.23% and 0.56% higher when IoU equals to 0.5 and 0.75, respectively.\nWe also visualize the detection results of the MSPENet and TSIPR in Fig. 7.\nWe can observe from the figure that the detection results predicted by the MSPENet have false detections in the teeth area with large space. When the IRD is introduced, the false detection results are filtered greatly, which is consistent with the experimental results." }, { "figure_ref": [], "heading": "Comparison to the mainstream Detectors", "publication_ref": [ "b41", "b43", "b40", "b44", "b45", "b39" ], "table_ref": [ "tab_4" ], "text": "To demonstrate the superiority of our method, we compare the location performance of the proposed TSIPR with other mainstream detectors. As little useful texture is available around the center of implant, the anchor-based detectors cannot regress implant position successfully. Only the CNN-based anchorfree detectors (VFNet (Zhang et al., 2021), ATSS (Zhang et al., 2020), Rep-Points (Yang et al., 2019), CenterNet (Zhou et al., 2019)), transformer-based detectors (Deformable DETR (Zhu et al., 2020) and ImplantFormer (Yang et al., 2022)) are employed for comparison. Results are listed in Table 4.\nFrom the table we can observe that the transformer-based methods perform better than the CNN-based networks (e.g., ImplantFormer achieved 13.7% AP, which is 1.6% higher than the best performed CNN-based network -ATSS). This experimental result demonstrates that the capacity of building the long-ranged dependency is important for the implant position regression. Our method, MSPENet, achieves the highest AP -15.4% among the transformer-based methods, which outperforms the ImplantFormer by 1.7% AP. When applying the IRD to filter out the false detection, the AP value reaches 15.7%.\nTo further validate the effectiveness of TSIPR, we introduce more metrics for comparison, i.e., F1 score, parameter, and FPS. From the table we can observe that the proposed TSIPR also performs the best, in terms of F1 score, with reasonable efficiency. These experimental results prove the effectiveness of our method, which achieves the best performance among all benchmarks." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "In this study, we develop a two-stream implant position regression frame- Although the proposed TSIPR achieves a promising performance than the previous methods, it has some limitations. Firstly, the annotation of the implant position is difficult, which requires a pair of CBCT data captured pre-and post-implantation, for each patient. Secondly, TSIPR does not fully explore 3D context. As the network input is a single 2D slice of tooth crown, the texture variation between the neighbored slices is not used. Therefore, in the future work, we will explore semi-supervision approaches and expand the TSIPR to take multiple slices as input to fully explore 3D context. In real clinics, as IRD only output the most probable implant region, TSIPR can not perform well for patients with multiple missing teeth." 
}, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Natural Science Foundation of China under Grant 82261138629; Guangdong Basic and Applied Basic Research Foundation under Grant 2023A1515010688 and 2021A1515220072; Shenzhen Municipal Science and Technology Innovation Council under Grant JCYJ20220531101412030 and JCYJ20220530155811025." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "The dental implant dataset used for evaluation was collected from the Shenzhen University General Hospital (SZUH), and all the implant positions were annotated by three experienced dentists. Specifically, the dataset contains 154 patients, from which 3045 2D slices of tooth crown are selected. Some sample " }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b9", "b32" ], "table_ref": [], "text": "PyTorch is used for model training and testing. For the training of IRD, we use a batch size of 16, an SGD optimizer, and a learning rate of 0.01. Four data augmentation methods, i.e. mosaic, mixup, photometric distortion and geometric distortion, are employed. The network is trained for 80 epochs. For the training of MSPENet, we use a batch size of 8, an Adam optimizer, and a learning rate of 0.0005. A series of augmentation methods, i.e. adding random noise (Kaur et al., 2021), enhancing contrast (Ubhi et al., 2022), random crop, random scale and random flip, are employed. The network is trained for 140 epochs and the learning rate is divided by 10 at the 40th and 60th epochs, respectively. All the models are trained and tested on the platform of TESLA" } ]
In implant prosthesis treatment, the design of the surgical guide heavily relies on the manual location of the implant position, which is subjective and depends on the doctor's experience. Although deep learning based methods have started to be applied to this problem, the spacing between teeth varies considerably and some gaps may present texture characteristics similar to those of the actual implant region. Both problems pose a significant challenge for implant position prediction. In this paper, we develop a two-stream implant position regression framework (TSIPR), which consists of an implant region detector (IRD) and a multi-scale patch embedding regression network (MSPENet), to address this issue. For the training of IRD, we extend the original annotation to provide additional supervisory information, which contains much richer characteristics and does not introduce extra labeling costs. A multi-scale patch embedding module is designed for the MSPENet to adaptively extract features from the images with various tooth spacing. The global-local feature interaction block is designed to
Two-Stream Regression Network for Dental Implant Position Prediction
[ { "figure_caption": "Fig. 2 )2Fig. 2) contains much more rich characteristics, i.e., the neighboring teeth. More importantly, the acquisition of the extended box do not introduce extra labeling costs. MSPENet is devised to regress the precise implant position. To adaptively extract features from the images with various tooth spacing, we design a multi-scale patch embedding module, to aggregate the features extracted from different sizes of patch embedding for more robust features. A global-local feature interaction block (GLFIB) is designed as the encoder of MSPENet, which integrates the global context of the transformer and the local texture extracted", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: (a) Example images of the sparse teeth in the tooth crown image. The red and yellow circles denote the actual regression region and the area prone to generate false alarms, respectively. (b) Comparison of the tooth crown images with different tooth spacing.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Comparison of implant position annotation and the extended implant region.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "to detect the alveolar bone and mandibular canal based on the sagittal view of CBCT to determine the height and width of the alveolar bone. Yang et al. (Yang et al., 2022) developed a transformer-based implant position regression network (Im-plantFormer), which directly predicts the implant position on the 2D axial view of tooth crown images and projects the prediction results back to the tooth root by the space transform algorithm. However, these methods do not consider the irregular structure of tooth, which is a big challenge to produce false alarms. Algorithm 1 Pseudocode of the workflow of TSIPR. Input: The tooth crown image of the patient p -I p. Output: The implant position at tooth root y r . 1: M = IRD(I p) 2: H = M SP EN et(I p) 3: Ĥ = M ⊗ H 4: P os p c (i) = Extract( Ĥ) 5: P os p r (j) = T P os p c →P os p r (P os p c (i))", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ". It mainly consists of an implant region detector (IRD) and a multi-scale patch embedding regression network (MSPENet). We provide a pseudocode as Algorithm 1 to explain the workflow of TSIPR. During training, IRD and MSPENet are trained separately. In inference, IRD and MSPENet share the same tooth crown image I p ∈ R H×W ×C of patient p as input, and the outputs of IRD and MSPENet are the most probable implant region M ∈ R H 4 × W 4 and the heatmap with the implant position H ∈ R H 4 × W 4 , respectively. Then, we multiply M with H to filter out the error detection generated by the MSPENet and obtain", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The overview of the proposed TSIPR.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": ". The network architecture is shown in Fig. 3, which consists of a backbone, a neck and three prediction heads. IRD takes I p as input. Feature maps of three different resolutions are extracted by the backbone and then input into the neck for feature fusion. 
Finally, the location prediction head generates the bounding box of the probable implant region. The IRD network is optimized by three loss functions, i.e. classification loss L cls , localization loss L loc and confidence loss L cof . L cls and L cof are the cross-entropy loss function and L loc", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Some sample images in the dental implant dataset. The red points denote the implant position annotation.", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visual comparison of detection results between the single-scale and multi-scale patch embedding network. The red and green circles denote the ground-truth position and the predicted implant position, respectively.", "figure_data": "", "figure_id": "fig_8", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: PR curve of the TSIPR in different IoU.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Comparison of the detection results of the MSPENet and TSIPR. The red and green circles denote the ground-truth position and the predicted implant position, respectively. The dashed red circle indicates the region of false detection.", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "work (TSIPR) for CBCT data based implant position prediction, which consists of an implant region detector (IRD) and a multi-scale patch embedding regression network (MSPENet). We extend the original annotation to provide additional supervisory information for the training of IRD, which locates the most probable bounding box of implant region to filter out the false regressions generated by the MSPENet. For the MSPENet, a multi-scale patch embedding module is designed to adaptively extract features from the images with various tooth spacing. The global-local feature interaction block is designed to build the encoder of MSPENet, which combines the transformer and convolution for enriched feature representation. Extensive experiments on a dental implant dataset demonstrated that the proposed TSIPR achieves superior performance than the existing methods, especially for patients with sparse teeth.", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Performance Comparison of Different IRD.", "figure_data": "NetworkPrecision(%) Recall(%) AP 75 % FPSYolov778.184.584.072Yolov7-X86.981.787.865Yolov7-W686.884.289.357V100 GPU. For the training of other baseline detectors, MMDetection libraryand ImageNet pre-training models are used.", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation study of different patch size in the multi-scale patch embedding module.", "figure_data": "NetworkPatch Size 5 6 78910AP 75 %14.7±0.351114.2±0.341713.9±0.2283MSPENet14.5±0.431414.3±0.631514.6±0.135415.4±0.3215", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study of the proposed GLFIB. T and C denote the transformer and convolution, respectively.To demonstrate the effectiveness of the proposed GLFIB, we conduct an ablation of convolution and transformer in the GLFIB. 
Results are listed in", "figure_data": "NetworkGLFIBAP 75 %[T,T,T,T] 14.0±0.4713[C,T,T,T] 15.4±0.3215MSPENet[C,C,T,T] 14.6±0.6135[C,C,C,T] 13.3±0.3354[C,C,C,C] 12.2±0.52414.4.3. Ablation of the GLFIB", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "From the table we can observe that the pure transformer architecture achieves an AP of 14.0%, which outperforms the pure convolutional architecture by 1.8% AP. When introducing a convolution into the GLFIB, the AP value", "figure_data": "improves by 1.4%. This experimental result demonstrates that the extractedlocal texture from the CNN can provide fine-grained feature for the predictionnetwork. With the increment of the number of convolutions, the AP valuedecreases. This phenomenon illustrates that the prediction of implant positionrelies on the global context, which is consistent with our design intention. Theexperimental results validate the effectiveness of the GLFIB, which combinesthe convolution and transformer for better feature extraction.", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison of the proposed method with other mainstream detectors.", "figure_data": "MethodsNetworkBackboneAP 75 %F1 ScoreParam(M) FPSCenterNet10.9±0.210.8±0.132.2969ATSS12.1±0.211.9±0.232.3035CNN-basedVFNet RepPointsR5011.8±0.8 11.2±0.111.8±0.1 11.1±0.332.78 36.8428 16ImplantFormer11.5±0.311.3±0.324.7365Deformable DETR12.8±0.112.5±0.141.0722Transformer-basedImplantFormerViT-B-R50 13.7±0.213.6±0.2100.5214-MSPENet(ours) TSIPR(ours)-15.4±0.3 15.7±0.4 15.6±0.3 85.51 15.2±0.2 14.2158 46", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Xinquan Yang; Xuguang Li; Xuechen Li; Wenting Chen; Linlin Shen; Xin Li; Yongqiang Deng
[ { "authors": "N Baker; H Lu; G Erlikhman; P J Kellman", "journal": "PLoS computational biology", "ref_id": "b0", "title": "Deep convolutional networks do not classify based on global object shape", "year": "2018" }, { "authors": "Z Cai; N Vasconcelos", "journal": "", "ref_id": "b1", "title": "Cascade r-cnn: Delving into high quality object detection", "year": "2018" }, { "authors": "N Carion; F Massa; G Synnaeve; N Usunier; A Kirillov; S Zagoruyko", "journal": "Springer", "ref_id": "b2", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "Y Chen; X Dai; D Chen; M Liu; X Dong; L Yuan; Z Liu", "journal": "", "ref_id": "b3", "title": "Mobile-former: Bridging mobilenet and transformer", "year": "2022" }, { "authors": "X Dai; H Zhao; T Liu; D Cao; L Xie", "journal": "IEEE Access", "ref_id": "b4", "title": "Locating anatomical landmarks on 2d lateral cephalograms through adversarial encoder-decoder networks", "year": "2019" }, { "authors": "K Duan; S Bai; L Xie; H Qi; Q Huang; Q Tian", "journal": "", "ref_id": "b5", "title": "Centernet: Keypoint triplets for object detection", "year": "2019" }, { "authors": "J Gargallo-Albiol; O Salomó-Coll; N Lozano-Carrascal; H.-L Wang; F Hernández-Alfaro", "journal": "Clinical Oral Implants Research", "ref_id": "b6", "title": "Intra-osseous heat generation during implant bed preparation with static navigation: Multifactor in vitro study", "year": "2021" }, { "authors": "R Girshick; J Donahue; T Darrell; J Malik", "journal": "", "ref_id": "b7", "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "year": "2014" }, { "authors": "T J Jang; K C Kim; H C Cho; J K Seo", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b8", "title": "A fully automated method for 3d individual tooth identification and segmentation in dental cbct", "year": "2021" }, { "authors": "A Kaur; A P S Chauhan; A K Aggarwal", "journal": "Expert Systems with Applications", "ref_id": "b9", "title": "An automated slice sorting technique for multi-slice computed tomography liver cancer images using convolutional network", "year": "2021" }, { "authors": "A Kaur; A P S Chauhan; A K Aggarwal", "journal": "IEEE/ACM Transactions on Computational Biology and Bioinformatics", "ref_id": "b10", "title": "Prediction of enhancers in dna sequence data using a hybrid cnn-dlstm model", "year": "2022" }, { "authors": "J.-E Kim; N.-E Nam; J.-S Shim; Y.-H Jung; B.-H Cho; J J Hwang", "journal": "Journal of clinical medicine", "ref_id": "b11", "title": "Transfer learning via deep neural networks for implant fixture system classification using periapical radiographs", "year": "2020" }, { "authors": "Kurt Bayrakdar; S Orhan; K Bayrakdar; I S Bilgir; E Ezhov; M Gusarev; M Shumilov; E ", "journal": "BMC Medical Imaging", "ref_id": "b12", "title": "A deep learning approach for dental implant planning in cone-beam computed tomography images", "year": "2021" }, { "authors": "H Law; J Deng", "journal": "", "ref_id": "b13", "title": "Cornernet: Detecting objects as paired keypoints", "year": "2018" }, { "authors": "S Lee; J Yang; J Han", "journal": "Expert Systems with Applications", "ref_id": "b14", "title": "Development of a decision making system for selection of dental implant abutments based on the fuzzy cognitive map", "year": "2012" }, { "authors": "Y Lee; J Kim; J Willette; S J Hwang", "journal": "", "ref_id": "b15", "title": "Mpvit: Multi-path vision transformer for dense prediction", "year": "2022" 
}, { "authors": "T.-Y Lin; P Goyal; R Girshick; K He; P Dollár", "journal": "", "ref_id": "b16", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "W Liu; D Anguelov; D Erhan; C Szegedy; S Reed; C.-Y Fu; A C Berg", "journal": "Springer", "ref_id": "b17", "title": "Ssd: Single shot multibox detector", "year": "2016-10-11" }, { "authors": "Y Liu; Z.-C Chen; C.-H Chu; F.-L Deng", "journal": "", "ref_id": "b18", "title": "Transfer learning via artificial intelligence for guiding implant placement in the posterior mandible: An in vitro study", "year": "2021" }, { "authors": "D G Lowe", "journal": "", "ref_id": "b19", "title": "Object recognition from local scale-invariant features", "year": "1999" }, { "authors": "F P Mahdi; K Motoki; S Kobashi", "journal": "Scientific reports", "ref_id": "b20", "title": "Optimization technique combined with deep learning method for teeth recognition in dental panoramic radiographs", "year": "2020" }, { "authors": "S Mehta; M Rastegari", "journal": "", "ref_id": "b21", "title": "Mobilevit: light-weight, general-purpose, and mobile-friendly vision transformer", "year": "2021" }, { "authors": "J Qian; W Luo; M Cheng; Y Tao; J Lin; H Lin", "journal": "IEEE Access", "ref_id": "b22", "title": "Cephann: a multi-head attention network for cephalometric landmark detection", "year": "2020" }, { "authors": "J Redmon; S Divvala; R Girshick; A Farhadi", "journal": "", "ref_id": "b23", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "S Ren; K He; R Girshick; J Sun", "journal": "Advances in neural information processing systems", "ref_id": "b24", "title": "Faster rcnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "H Rezatofighi; N Tsoi; J Gwak; A Sadeghian; I Reid; S Savarese", "journal": "", "ref_id": "b25", "title": "Generalized intersection over union: A metric and a loss for bounding box regression", "year": "2019" }, { "authors": "L Sadighpour; S M M Rezaei; M Paknejad; F Jafary; P Aslani", "journal": "Journal of Research and Practice in Dentistry", "ref_id": "b26", "title": "The application of an artificial neural network to support decision making in edentulous maxillary implant prostheses", "year": "2014" }, { "authors": "S Sukegawa; K Yoshii; T Hara; K Yamashita; K Nakano; N Yamamoto; H Nagatsuka; Y Furuki", "journal": "Biomolecules", "ref_id": "b27", "title": "Deep neural networks for dental implant system classification", "year": "2020" }, { "authors": "P Sun; R Zhang; Y Jiang; T Kong; C Xu; W Zhan; M Tomizuka; L Li; Z Yuan; C Wang", "journal": "", "ref_id": "b28", "title": "Sparse r-cnn: End-to-end object detection with learnable proposals", "year": "2021" }, { "authors": "A L Szejka; M Rudek; O C Jnr", "journal": "", "ref_id": "b29", "title": "A reasoning method for determining the suitable dental implant", "year": "2011" }, { "authors": "M Tan; R Pang; Q V Le", "journal": "", "ref_id": "b30", "title": "Efficientdet: Scalable and efficient object detection", "year": "2020" }, { "authors": "S Tuli; I Dasgupta; E Grant; T L Griffiths", "journal": "", "ref_id": "b31", "title": "Are convolutional neural networks or transformers more like human vision?", "year": "2021" }, { "authors": "J S Ubhi; A K Aggarwal", "journal": "Journal of Visual Communication and Image Representation", "ref_id": "b32", "title": "Neural style transfer for image within images and conditional gans for destylization", "year": "2022" }, { "authors": "E Varga; M 
Antal; L Major; R Kiscsatári; G Braunitzer; J Piffkó", "journal": "Clinical oral implants research", "ref_id": "b33", "title": "Guidance means accuracy: A randomized clinical trial on freehand versus guided dental implantation", "year": "2020" }, { "authors": "R Vinci; M Manacorda; R Abundo; A Lucchina; A Scarano; C Crocetta; L Lo Muzio; E Gherlone; F Mastrangelo", "journal": "Journal of Clinical Medicine", "ref_id": "b34", "title": "Accuracy of edentulous computer-aided implant surgery as compared to virtual planning: a retrospective multicenter study", "year": "2020" }, { "authors": "C.-Y Wang; A Bochkovskiy; H.-Y M Liao", "journal": "", "ref_id": "b35", "title": "Yolov7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors", "year": "2022" }, { "authors": "M Widiasri; A Z Arifin; N Suciati; C Fatichah; E R Astuti; R Indraswari; R H Putra; C Za'in", "journal": "IEEE Access", "ref_id": "b36", "title": "Dental-yolo: Alveolar bone and mandibular canal detection on cone beam computed tomography images for dental implant planning", "year": "2022" }, { "authors": "H Wu; B Xiao; N Codella; M Liu; X Dai; L Yuan; L Zhang", "journal": "", "ref_id": "b37", "title": "Cvt: Introducing convolutions to vision transformers", "year": "2021" }, { "authors": "W Xu; Y Xu; T Chang; Z Tu", "journal": "", "ref_id": "b38", "title": "Co-scale convattentional image transformers", "year": "2021" }, { "authors": "X Yang; X Li; X Li; P Wu; L Shen; X Li; Y Deng", "journal": "", "ref_id": "b39", "title": "Implantformer: Vision transformer based implant position regression using dental cbct data", "year": "2022" }, { "authors": "Z Yang; S Liu; H Hu; L Wang; S Lin", "journal": "", "ref_id": "b40", "title": "Reppoints: Point set representation for object detection", "year": "2019" }, { "authors": "H Zhang; Y Wang; F Dayoub; N Sunderhauf", "journal": "", "ref_id": "b41", "title": "Varifocalnet: An iou-aware dense object detector", "year": "2021" }, { "authors": "K Zhang; J Wu; H Chen; P Lyu", "journal": "Computerized Medical Imaging and Graphics", "ref_id": "b42", "title": "An effective teeth recognition method using label tree with cascade network structure", "year": "2018" }, { "authors": "S Zhang; C Chi; Y Yao; Z Lei; S Z Li", "journal": "", "ref_id": "b43", "title": "Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection", "year": "2020" }, { "authors": "X Zhou; D Wang; P Krähenbühl", "journal": "", "ref_id": "b44", "title": "Objects as points", "year": "2019" }, { "authors": "X Zhu; W Su; L Lu; B Li; X Wang; J Dai", "journal": "", "ref_id": "b45", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2020" } ]
[ { "formula_coordinates": [ 10, 133.8, 396.32, 15.94, 9.96 ], "formula_id": "formula_0", "formula_text": "IRD" }, { "formula_coordinates": [ 10, 254.04, 549.68, 223.48, 10.35 ], "formula_id": "formula_1", "formula_text": "L I = L cls + L loc + L cof (1)" }, { "formula_coordinates": [ 12, 225.24, 467.12, 252.28, 9.97 ], "formula_id": "formula_2", "formula_text": "A = concat[C 1 (l), T 1 (l), T 2 (l), T 3 (l)],(2)" }, { "formula_coordinates": [ 12, 283.08, 569.36, 194.44, 9.97 ], "formula_id": "formula_3", "formula_text": "O = f (A),(3)" }, { "formula_coordinates": [ 13, 133.8, 277.76, 198.13, 12.97 ], "formula_id": "formula_4", "formula_text": "erates an Gaussian heatmap F ∈ [0, 1] W g × H g" }, { "formula_coordinates": [ 13, 230.16, 349.17, 247.36, 25.32 ], "formula_id": "formula_5", "formula_text": "F xy = exp(- (x -tx ) 2 + (y -ty ) 2 2σ 2 ) (4)" }, { "formula_coordinates": [ 13, 167.88, 432.74, 269.34, 33.32 ], "formula_id": "formula_6", "formula_text": "L h = -1 N xy    (1 -Fxy ) α log( Fxy ) if F xy = 1" }, { "formula_coordinates": [ 13, 273.36, 602.24, 204.16, 10.34 ], "formula_id": "formula_7", "formula_text": "L M = L h + L o (6)" }, { "formula_coordinates": [ 14, 264.6, 360.6, 212.92, 11.98 ], "formula_id": "formula_8", "formula_text": "M = IRD(I l p, W l ),(7)" }, { "formula_coordinates": [ 14, 249.6, 384.6, 227.92, 11.86 ], "formula_id": "formula_9", "formula_text": "H = MSPENet(I r p , W r ),(8)" }, { "formula_coordinates": [ 14, 271.08, 491.72, 206.44, 9.97 ], "formula_id": "formula_10", "formula_text": "y = E(M ⊗ H),(9)" }, { "formula_coordinates": [ 16, 261.72, 451.03, 215.8, 53.35 ], "formula_id": "formula_11", "formula_text": "Recall = T P T P + F N (11) AP = 1 0 P (r)dr(12)" } ]
2023-05-17
[ { "figure_ref": [], "heading": "", "publication_ref": [ "b35", "b7", "b22", "b14", "b16", "b38", "b30", "b1", "b33", "b20", "b10" ], "table_ref": [], "text": "1 Introduction Bayesian networks. Bayesian networks (BNs) [Pearl, 1988] are probabilistic graphical models that enable succinct knowledge representation and facilitate probabilistic reasoning [Darwiche, 2009]. Parametric Bayesian networks (pBNs) [Castillo et al., 1997] extend BNs by allowing polynomials in conditional probability tables (CPTs) rather than constants. Parameter synthesis on Markov models. Parameter synthesis is to find the right values for the unknown parameters with respect to a given constraint. Various synthesis techniques have been developed for parametric Markov chains (pMCs) ranging over e.g., the gradient-based methods [Heck et al., 2022], convex optimization [Cubuktepe et al., 2018;Cubuktepe et al., 2022], and region verification [Quatmann et al., 2016]. Recently, Salmani and Katoen [2021a] have proposed a translation from pBNs to pMCs that facilitates using pMC algorithms to analyze pBNs. Proceeding from this study, we tackle a different problem [Kwisthout and van der Gaag, 2008] for Bayesian networks. Minimal-change parameter tuning. Given a Bayesian network B, a hypothesis H, evidence E, λ ∈ [0, 1], and a constraint of the form Pr(H|E) ≤ λ (or ≥ λ), what is a minimal change-with respect to a given measure of distance-in the probability values of (a subset of) CPT rows, such that the constraint holds?\nWe illustrate the problem with an example of testing COVID-19 that is adopted from several medical studies [Barreiro et al., 2021;Nishiura et al., 2020;Dinnes et al., 2022]. The outcome of PCR tests only depends on whether the person is infected, while the antigen tests are less likely to correctly identify COVID-19 if the infection is asymptomatic (or presymptomatic). Figure 1a depicts a Bayesian network that models such probabilistic dependencies. In the original network, the probability of no COVID-19 given that both tests are positive, is 0.011089. Assume that in an application domain, the result of such a query is now required not to exceed 0.009: the aim is to make this constraint hold while imposing the least change in the original network with respect to a distance measure. This is an instance of the minimal-change parameter tuning problem. We consider the -bounded variant of the problem: for a subset of modifiable CPT rows, are there new values within the distance of from the original probability values that make the constraint hold?\nMain contributions. Based on existing region verification and region partitioning techniques for pMCs,\n• we propose a practical algorithm for -bounded tuning.\nMore precisely, we find instantiations that (i) satisfy the constraint (if the constraint is satisfiable) and (ii) areclose and (iii) lean towards the minimum distance instantiation depending on a coverage factor 0 ≤ η ≤ 1.\n• We propose two region expansion schemes to realizecloseness of the results both for Euclidean distance and for CD distance [Chan and Darwiche, 2005].\n• Contrary to the existing techniques that restrict to hyperplane solution spaces, we handle pBNs with multiple parameters in multiple distributions and multiple CPTs. Our experiments on our prototypical implementation 1 indicate that -bounded tuning of up to 8 parameters for large networks with 100 variables is feasible. Paper organization. Section 2 includes the basic notations and Sec. 3 the preliminaries on parametric Bayesian networks. 
Section 4 introduces parametric Markov chains and the region verification techniques thereof. Section 5 details our main contributions and Sec. 6 our experimental results. Section 7 concludes the paper with an overview of the related studies that were not mentioned before. " }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "Variables. Let V be a set of m random variables v_1, . . . , v_m and D_{v_i} the domain of variable v_i. For A ⊆ V, Eval(A) denotes the set of joint configurations for the variables in A. Parameters. Let X = {x_1, . . . , x_n} be a set of n real-valued parameters. A parameter instantiation is a function u : X → R that maps each parameter to a value. All parameters are bounded; i.e., lb_{x_i} ≤ u(x_i) ≤ ub_{x_i} for each x_i. Let I_i = [lb_{x_i}, ub_{x_i}]. The parameter space U ⊆ R^n of X is the set of all possible values of X, i.e., the hyper-rectangle spanned by the intervals I_i for all i. Substitution. Polynomials f over X are functions f : R^n → R where f(u) is obtained by replacing each occurrence of x_i in f by u(x_i); e.g., for f = 2x_1^2 + x_2, u(x_1) = 3 and u(x_2) = 2, f(u) = 20. Let f[u] denote" }, { "figure_ref": [], "heading": "Parametric Bayesian Networks", "publication_ref": [], "table_ref": [], "text": "A parametric Bayesian network (pBN) is a BN in which a subset of entries in the conditional probability tables (CPTs) are polynomials over the parameters in X. Let par_{v_i} denote the set of parents for the node v_i in the graph G. Definition 1. The tuple B=(G, X, Θ) is a parametric Bayesian network (pBN) with directed acyclic graph G=(V, W) over random variables V={v_1, . . . , v_m}, set of parameters X={x_1, . . . , x_n}, and parametric CPTs Θ={ Θ_{v_i} | v_i ∈ V } where Θ_{v_i} : Eval(par_{v_i}) → pDistr(D_{v_i}).\nThe CPT row Θ_v(par) is a parametric distribution over D_v given the parent evaluation par. The CPT entry Θ_v(par)(d), short θ_(v,d,par), is the probability that v=d given par. A pBN without parameters, i.e., X = ∅, is an ordinary BN. A pBN B defines the parametric distribution function Pr_B over Eval(V). For a well-formed instantiation u, the BN B[u] = (G, Θ[u]) is obtained by replacing the parameter x_i in the parametric functions in the CPTs of B by u(x_i). Example 1. Fig. 1a shows a pBN over variables V = {C, S, A, P} (initial letters of node names) and parameters X = {p, q}. Instantiating with u_0(p) = 0.72 and u_0(q) = 0.95 yields the BN B[u_0] as indicated using dashed boxes. pBN constraints. Constraints Pr_B(H|E) ∼ λ involve a hypothesis H, an evidence E, ∼ ∈ {≤, ≥} and threshold 0 ≤ λ ≤ 1. For the instantiation u : X → R, B[u] |= φ if and only if Pr_{B[u]}(H | E) ∼ λ." }, { "figure_ref": [], "heading": "Sensitivity functions.", "publication_ref": [ "b7", "b12" ], "table_ref": [], "text": "Sensitivity functions are rational functions that relate the result of a pBN query to the parameters [Castillo et al., 1997;Coupé and van der Gaag, 2002]. Example 2. For the pBN B_1 in Fig. 1a, let φ_1 be the constraint:\nPr_{B_1}(C = no | A = pos ∧ P = pos) ≤ 0.009.\nThis reduces to the sensitivity function f_{B_1,φ_1} = 361 / (34900·p·q + 8758·q + 361). Using the instantiation u with u(p) = 0.92075 and u(q) = 0.97475, f_{B_1,φ_1}[u] = 0.008798 ≤ 0.009, i.e., B_1[u] |= φ_1.\nHigher degree sensitivity functions. 
Contrary to the existing literature that is often restricted to multi-linear sensitivity functions, we are interested in analyzing pBNs with sensitivity functions of higher degrees. Example 3. Consider a variant of the COVID-19 example, where the person only takes the antigen test twice rather than taking both the antigen and PCR tests to diagnose COVID-19; see Fig. 2 for the parametric CPTs. Then, \nf B2,φ2 = Pr B2 (C = no | A1 = pos ∧ A2 = pos) = 950 • 9 • t 2 • s 2 8850 • t 2 + 349 • p 2 + 950 • s 2 + 151 • r 2 .\n. pBN B = (G, X, Θ ) is a parametrization of BN B = (G, Θ) over X w.r.t. Θ modif if θ (v,d,par) = f for some f ∈ Q[X] if θ (v,d,par) ∈ Θ modif .\nThe parametrization B is monotone iff for each parametric entry θ (v,d,par) = f , f is a monotonic polynomial.\nThe parametrization B is valid iff B is a pBN, i.e., iff Θ v (par) ∈ pDistr(D v ) for each random variable v and its parent evaluation par. To ensure validity, upon making a CPT entry parametric, the complementary entries in the row should be parametrized, too. This is ensured by covariation schemes. The most established co-variation scheme in the literature is the linear proportional [Coupé and van der Gaag, 2002] scheme that has several beneficial characteristics [Renooij, 2014], e.g., it preserves the ratio of CPT entries. Definition 3 (Linear proportional co-variation). A linear proportional co-variation over X maps the CPT Θ v onto the parametric CPT Θ v based on d k ∈ D v , where\n   θ (v,dk,par) = x for some x ∈ X θ (v,dj ,par) = 1-x 1-θ (v,dk,par) • θ (v,dj ,par) for d j = d k ∈ D v .\nNote that we allow the repetition of parameters in multiple distributions and multiple CPTs, see e.g., Fig. 2. 3.2. Formal Problem Statement. Consider BN B = (G, Θ), its valid parametrization B = (G, X, Θ ) over X with constraint φ. Let u 0 be the original value of the parameters X in B and d : U × U → R ≥0 a distance measure. Let d0 denote an upperbound for d {u∈U } (u 0 , u).\nThe minimum-distance parameter tuning problem is to find\nu min = argmin {u∈U | B[u]|=φ} d(u, u 0 ). Its generalized vari- ant for 0 ≤ ≤ d0 and 0 ≤ η ≤ 1 is: The ( , η)-parameter tuning problem is to find u ∈ U s.t. B[u] |= φ, d(u, u 0 ) ≤ , and d(u, u min ) ≤ (1-η) • d0 .\n( d0 , 1)-tuning gives the minimum-distance tuning. We call η the coverage factor that determines, intuitively speaking, the minimality of the result; we discuss this in Sec. 5. We consider two distance measures. Euclidean distance (EC-distance). The ECdistance between u and u (both in U) is defined as:\nEC(u, u ) = xi∈X u(x i ) -u (x i ) 2 .\nCorollary 1. For n = |X|, d0 = √ n is an upperbound for the EC distance of any u 0 from any instantiation u ∈ U.\nCorollary 2. Let 0 ≤ ≤ √ n and u 0 (x i ) -n ≤ u(x i ) ≤ u 0 (x i ) + n . Then, EC(u, u 0 ) ≤ .\nChan-Darwiche distance (CD distance) [Chan and Darwiche, 2005] is a distance measure to quantify the distance between the probability distributions of two BNs, defined as:\nCD(u, u ) = ln max w∈ Eval(V ) Pr B[u ] (w) Pr B[u] (w) -ln min w∈ Eval(V ) Pr B[u ] (w) Pr B[u] (w) ,\nwhere both 0 /0 = 1 and ∞ /∞ = 1 by definition. Whereas the EC-distance can be computed in O(n), this is for CDdistances NP-complete [Kwisthout and van der Gaag, 2008].\nIt has known closed-forms only for limited cases, e.g., single parameter and single-CPT pBNs [Chan and Darwiche, 2004]. Let Θ v be the CPT that is parametrized to Θ v by a monotone parametrization. 
Let the minimum (maximum) probability entry of Θ v be θ min (θ max ) parametrized to θ min with x (θ max with y). Let lb x , ub x , lb y , ub y = 0, 1 be the upperbounds and the lowerbounds of x and y in U. Corollary 3.\nAn upperbound for the CD distance of u 0 from any instantiation u ∈ U is:\nd0 = ln max {θ min [lb x ], θ min [ub x ]} θ min -ln min {θ max [lb y ], θ max [ub y ]} θ max\nas derived from the single CPT closed-form [Chan, 2005].\nCorollary 4. Let 0 ≤ ≤ d0 and α=e /2 . Let for each θ ∈ Θ v , α -1 • θ [u 0 ] ≤ θ [u] ≤ α • θ [u 0 ] . Then, CD(u, u 0 ) ≤ .\nNote that zero probabilities are often considered to be logically impossible cases in the BN, see e.g., [Kisa et al., 2014].\nChanging the parameter values from (and to) 0 yields CD distance ∞. We consider -bounded parameter tuning: we (i) forbid zero probabilities in Θ modif (the CPT entries that are explicitly modified), (ii) use the linear proportional co-variation scheme (Def. 3) that is zero-preserving [Renooij, 2014], i.e., it forbids changing the co-varied parameters from non-zero to zero. This co-variation scheme, in general, optimizes the CD-distance for single CPT pBNs, see [Renooij, 2014]." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3", "fig_3", "fig_3" ], "heading": "Parametric Markov Chains", "publication_ref": [ "b38" ], "table_ref": [], "text": "Parametric Markov chains are an extension of Markov chains (MCs) that allow parametric polynomials as transition labels: \nDefinition 4. A parametric Markov chain (pMC) M is a tuple (S,\np • q + 8758 • q + 361 . Let R ⊆ R n with n = |X| and let M, R |= ϕ if and only if ∀u ∈ R. M[u] |= ϕ.\nWe now define the parameter synthesis problems for pMC M and reachability constraint ϕ that are relevant to our setting. Definition 5 (Region partitioning). Partition region R into R + , R -, and R ? such that:\nR + ⊆ {u ∈ R | M[u] |= ϕ}, satisfying instantiations R -⊆ {u ∈ R | M[u] |= ¬ϕ}, refuting instantiations and R ? = R \\ (R + ∪ R -) with ||R ? || ≤ (1-η)•||R|| for some given coverage factor 0 ≤ η ≤ 1.\nThe sub-region R ? denotes the fragment of R that is inconclusive for ϕ. This fragment should cover at most fraction 1-η of R's volume. Region partitioning is exact if R ? = ∅. Definition 6 (Region verification). For the region R and the specification φ, the problem is to check whether:\nM, R |= ϕ R is accepting or M, R |= ¬ϕ R is rejecting or M, R |= ϕ ∧ M, R |= ¬ϕ R is inconclusive .\n4.1. Parameter Lifting. The parameter lifting algorithm (PLA) [Quatmann et al., 2016] is an abstraction technique that reduces region verification to a probabilistic model checking problem for which efficient algorithms exist [Katoen, 2016]. It first removes parameter dependencies by making all parameters unique. Then it considers for each parameter x i only its bounds lb xi and ub xi within the region R. This yields a non-parametric Markov Decision Process (MDP). Definition 7. A Markov decision process (MDP) is a tuple M = (S, s I , Act, P) with a finite set S of states, an initial state s I ∈ S, a finite set Act of actions, and a (partial) transition probability function P : S × Act → Distr(S). While resolving the non-determinism in the obtained MDP, a trade-off between maximizing or minimizing x i may occur. This occurs in particular when parameter x i repeats in the outgoing transitions of multiple states, see e.g., parameter t in Fig. 3a. PLA handles the issue by a relaxation step that introduces intermediate parameters; see e.g., Fig. 
3b, where parameter t in the outgoing edges of state C=no, A=pos is replaced by t . The relaxation step yields an over-approximation of the region verification problem.\nExample 6. Consider the pMC in Fig. 3(a) and region t ∈ [0.0075, 0.0125]. Fig. 3b shows the pMC after relaxation, e.g., parameter t in the outgoing transitions of state C=no, A=pos is replaced by t . Fig. 3c shows the MDP obtained by substituting the parameters with their extremal values. Consider e.g., state C = no, S = no. Its outgoing dotted (blue) edges have probability 0.0075 and 1-0.0075 obtained from lb(t). The probabilities 0.0125 and 1-0.0125 of the dashed (purple) edges stem from ub(t).\nAfter substitution, region verification reduces to a simple model-checking query that over-approximates the original higher-order polynomial, e.g., Example 3. The region accepts ϕ for the pMC if the resulting MDP satisfies ϕ for all schedulers. If all the schedulers satisfy ¬ϕ, the region is rejecting. If the region is neither accepting nor rejecting (i.e., inconclusive), it is partitioned into smaller subregions that are more likely to be either accepting or rejecting. Partitioning ends when at least η% of the region is conclusive." }, { "figure_ref": [], "heading": "Parametric Markov Chain for pBNs.", "publication_ref": [], "table_ref": [], "text": "To enable the use of PLA to parameter tuning of pBNs, we map pBNs onto pMCs as proposed in [Salmani and Katoen, 2021a;Salmani and Katoen, 2021b]; see the next example. Example 7. Consider the acyclic pMC in Fig. 1b for the pBN in Fig. 1a for the topological ordering : C < S < A < P . Initially, all variables are don't care. The initial state can evolve into C=yes and C=no. Its transition probabilities 0.05 and 0.95 come from the CPT entries of C. Generally, at \"level\" i of the pMC, the outgoing transitions are determined by the CPT of the i+1-st variable in .\npBN constraints relate to reachability constraints in its pMC. Example 8. Consider the pBN B 1 from Fig. 1a and the query Pr B (C = no | A = pos ∧ P = pos) from Example 2. Let M B be the pMC in Fig. 1b. The query coincides with\n1 -Pr M B (♦ C = yes ∨ A = neg ∨ P = neg) 1 -Pr M B (♦ A = neg ∨ P = neg) = 361 34900 • p • q + 8758 • q + 361 = f B,φ ; see Ex. 2." }, { "figure_ref": [ "fig_4" ], "heading": "Region-based Minimal Change Tuning", "publication_ref": [], "table_ref": [], "text": "Our approach to the parameter tuning problem for a pBN and constraint φ consists of two phases. The -close partitioning exploits parameter lifting on the pMC for reachability constraint ϕ of φ. We start with a rectangular region enclosing the original instantiation u 0 . This region is iteratively expanded if it rejects ϕ. Inconclusive regions are partitioned. Any iteration that finds some accepting subregion ends the first phase.\nThe second phase (Alg. 2) extracts a minimal-change instantiation from the set of accepting sub-regions.\nAlgorithm 1: Minimal change tuning \n1 M B ← computeMC(B) 2 ← reachSpec(φ) 3 0 ← d0 • γ K-1 4 ← 0 5 function minChgTuning(M B , u0, ϕ, η): 6 while ≤ d0 do 7 R+ ← -partitioning(M B , ϕ, , η) 8 if R+ = ∅ then 9 ← γ -1 •\n23 α ← e /2 24 R ← x j ∈X [u0(xj ) • α -1 , u0(xj ) • α] 25 return R 5.1. Obtaining Epsilon-Close Subregions. Alg.\n1 is the main thread of our approach. Its inputs are the pBN B = (V, W, X, Θ), the initial instantiation u 0 : X → R, and the constraint φ. It starts with obtaining the pMC M B of the pBN and reachability constraint ϕ of φ (lines 1-2). 
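To make this first step concrete, the unrolling of a pBN into a tree-shaped pMC along a topological order (cf. Example 7) can be sketched as follows. The sketch is only illustrative: the data layout (plain dictionaries for CPTs and parent sets) and the function name are our own assumptions rather than the interface of the Storm-based prototype, and parametric CPT entries may be arbitrary polynomial objects, e.g. sympy expressions.

```python
def build_pmc(order, parents, cpts):
    """Unroll a (parametric) BN into a tree-shaped pMC along a topological order.

    order   : list of variable names, topologically sorted
    parents : parents[v] -> tuple of parent variable names (all earlier in `order`)
    cpts    : cpts[v][(parent_assignment, value)] -> probability; entries may be
              numbers or polynomial objects (e.g. sympy expressions) for a pBN
    Returns (states, trans): states are partial assignments over a prefix of
    `order`; trans[s] lists the (successor, probability) pairs of state s.
    """
    init = ()                               # all variables are still "don't care"
    states, trans, frontier = [init], {}, [init]
    for v in order:                         # level i branches on the (i+1)-st variable
        nxt = []
        for s in frontier:
            val_of = dict(s)
            par = tuple(val_of[p] for p in parents[v])
            succs = []
            for (p_key, d), prob in cpts[v].items():
                if p_key == par:            # only the CPT rows matching the parent values
                    t = s + ((v, d),)
                    succs.append((t, prob))
                    nxt.append(t)
            trans[s] = succs
        states.extend(nxt)
        frontier = nxt
    for s in frontier:                      # full assignments become absorbing
        trans[s] = [(s, 1)]
    return states, trans

# For the COVID-19 example: order = ["C", "S", "A", "P"], parents["C"] = (), and
# cpts["C"] = {((), "yes"): 0.05, ((), "no"): 0.95}; the CPTs of the test variables
# carry the parametric entries. A pBN constraint Pr(H | E) ~ lambda then becomes a
# ratio of reachability probabilities in this pMC, as in Example 8.
```

With M B and the reachability constraint ϕ in place, Alg. 1 enters its iterative loop.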
The output is either (i) u 0 if M[u 0 ] |= ϕ, or (ii) the instantiation u with a minimal distance from u 0 or (iii) no instantiation if ϕ is infeasible2 . The hyper-parameters of the algorithm are the coverage factor 0 < η < 1, the region expansion factor 0 < γ < 1, and the maximum number of iterations K ∈ N. The hyper-parameters γ and K steer the iterative procedure. They initially determine the distance bound , see line 3. The bound determines region R , i.e., how far the region bounds deviate from u 0 . PLA then verifies R . If R is rejecting, is extended by the factor γ -1 (l. 9). Figure 4 visualizes the procedure for γ= 1 /2 and n=2. At iteration i, (a) PLA is invoked on R and either (b) R + = ∅ or (c) R + = ∅. For case (b), the iterative procedure ends and Alg. 2 is called to obtain an instantiation in R + that is closest to u 0 . For case (c), the region is expanded by factor γ -1 and passed to PLA, see (d). Note that the loop terminates when the distance bound reaches its upper bound d0 (l. 2). We refer to Sec. 3 Corollary 1 and 3 for computing d0 .\nRegion expansion schemes. Region R is determined by factor , see line 16. The methods makeRegion-EC and makeRegion-CD detail this for each of our distance measures. For the EC distance (line 20), the parameters have an absolute deviation from u 0 . We refer to Corollary 2, Sec. 3. For CD distance (lines 23 and 24), the deviation is relative to the initial value of the parameter. We refer to Corollary 4, Sec. 3. Such deviation schemes ensure -closeness of R both for EC and CD distance, that is, all the instantiations in R have at most distance from u 0 .\nRemark. For pBNs with a single parameter, already checked regions can be excluded in the next iterations. Applying such a scheme to multiple parameters yet may not yield minimal-change instantiation. To see this, let u 0 (x) = 0.15 and u 0 (y) = 0.15 and take 0 = 0.05. Assume R 0 = [0.1, 0.2] × [0.1, 0.2] is rejecting. Limiting the search in the next iteration to R 1 = [0.2, 0.4] × [0.2, 0.4] may omit the accepting sub-regions that are at a closer distance from u 0 , e.g. if the region [0.1, 0.2] × [0.2, 0.4] includes some satisfying sub-regions. This is why we include the already-analyzed intervals starting from u 0 in the next iterations. However, as will be noted in our experimental results, this does not have a deteriorating effect on the performance: the unsuccessful intermediate iterations are normally very fast as they are often analyzed by a single model checking: the model checker in a single step determines that the entire region is rejecting and no partitioning is needed." }, { "figure_ref": [ "fig_7" ], "heading": "Obtaining a Minimal-Distance", "publication_ref": [], "table_ref": [], "text": "Instantiation. Once R + = ∅ ⊆ R is found, it remains to find the instantia- tion u + in R + that is closest to u 0 . Region R + includes infinitely many instantiations, yet minimal-distance instan- tiation is computable in O(k•n) for n = |X| and the re- gions R + = {R +,1 , • • • , R +,k }: (i)\nPLA ensures that the subregions R +,1 to R +,k are rectangular and mutually disjoint. (ii) Due to the triangle equality of the distance measure, a minimum-distance instantiation can be picked from the bounded region. Alg. 2 simplifies finding u + . In the first step, we pick u +,i from R +,i that has minimal distance from u 0 (l. 3-18). The idea is simple: for every x j ∈ X, three cases are possible, see Fig. 5. 
Let lb xj be the lower bound for Algorithm 2: R + -minimal distance instantiation 1 function getMinDistInst(R+, u0): \n2 // R+ = {R+,1, • • • , R +,k } 3 U+ ← ∅ 4 for i ← 1 to k do 5 // R+,i = x j ∈X [lbx j , ubx j ] 6 for xj ∈ X do 7 if lbx j < u0(xj ) < ubx j then 8 u+,i(xj ) = u0(xj )" }, { "figure_ref": [], "heading": "Example.", "publication_ref": [], "table_ref": [], "text": "We apply the entire algorithm on the pBN in Fig. 1a, constraint φ = Pr(C=no | A=pos ∧ P=pos) ≤ 0.009, u 0 : p → 0.72, q → 0.95, and η=0.99. The pMC M B,E is given in Fig. 1c and the reachability constraint is ϕ : Pr(♦s 10 ) ≤ 0.009. We take d0 = √ n, γ= 1 /2, and 0 = 1 /16. 1. The initial region R is obtained by = 1 /8:\n[0.72-1 /8 √ 2, 0.72+ 1 /8 √ 2] × [0.95-1 /8 √ 2, 0.95+ 1 /8 √ 2].\n2. Running PLA on R gives no sub-region accepting ϕ.\n3. Expanding the region by the factor γ -1 = 2 yields: \n[0.72-1 /4 √ 2, 0.72+ 1 /4 √ 2] × [0.95-1 /4 √ 2, 0.95+ 1 /4 √ 2]." }, { "figure_ref": [ "fig_7" ], "heading": "Running", "publication_ref": [], "table_ref": [], "text": "q)=ub q =0.9495 as ub q < u 0 (q); see Fig. 5. 6. Running Alg. 2 using the distance measure of choicehere: EC-distance-for the candidates U + returns u + closest to u 0 : u + (p) = 0.92075 and u + (q) = 0.97475; distance 0.040913125." }, { "figure_ref": [], "heading": "Discussion.", "publication_ref": [], "table_ref": [], "text": "Let us shortly discuss the hyper-parameters: η, γ, and K. The hyper-parameter η is the coverage factor for the region partitioning. Our region partitioning algorithm, i.e., parameter lifting (line 17 Alg. 1) works based on this factor: the procedure stops when R ? is at most (1 -η) • ||R|| of the entire region R, e.g., for η = 0.99, it continues until at least 99% of R is either rejecting or accepting and at most 1% is unknown. This relates η to the approximation bounds of our algorithm, see Problem statement 3. Intuitively speaking, the coverage factor η means that if the algorithm provides a minimal-distance value D(u, u 0 ) = d, then with the probability 1-η there may exist a smaller value distance than d that works too but has not been found. One can thus consider η as the confidence factor for the minimality of the results. The hyper-parameter γ specifies the factor by which the region is expanded at each iteration, see Line 9, Alg. 1. The hyperparameters γ and K specify the size of the initial region, see Lines 3 and 16 of Alg. 1. Our experiments (not detailed in the paper) with γ = {0.2, 0.5, 0.8} and K = {2, • • • , 12} reveal that γ = 1 /2 and K = 6 gave the best balance. Large values for γ (0.8) and small K lead to unnecessary computations in the initial iteration for the simple cases i.e., when small perturbations of the parameters make the constraint satisfied. Small values for γ (0.2) lead to large regions in the next iterations due to the expansion by γ -1 ." }, { "figure_ref": [], "heading": "Experimental Evaluation", "publication_ref": [ "b24", "b42", "b30", "b3", "b36", "b45", "b49", "b0" ], "table_ref": [ "tab_3" ], "text": "We empirically evaluated our approach using a prototypical realization on top of the probabilistic model checker Storm [Hensel et al., 2022] (version 1.7.0). As baseline, we used the latest version (10.4) of Bayesserver3 , a commercial BN analysis tool that features sensitivity analysis and parameter tuning for pBNs. It supports pBNs with a single parameter only. We parametrized benchmarks from bnlearn repository and defined different constraints. 
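Making a single CPT entry parametric, and co-varying the remaining entries of its row according to the linear proportional scheme of Def. 3, can be sketched as follows. The helper below is purely illustrative; its name and the use of sympy are our own choices and not part of the prototype.

```python
import sympy as sp

def parametrize_row(row, d_k, name):
    """Linear proportional co-variation (Def. 3) for a single CPT row.

    row  : dict mapping each value d in D_v to its original probability
    d_k  : the value whose entry is replaced by a fresh parameter
    name : name of the fresh parameter, e.g. "p"
    The chosen entry becomes the parameter x; every other entry is scaled by
    (1 - x) / (1 - theta_k), which preserves the mutual ratios of the co-varied
    entries and keeps the row summing to 1 for every instantiation of x.
    """
    x = sp.Symbol(name, positive=True)
    theta_k = row[d_k]
    return {d: (x if d == d_k else (1 - x) / (1 - theta_k) * theta)
            for d, theta in row.items()}

# Example: parametrize_row({"low": 0.2, "mid": 0.3, "high": 0.5}, "low", "p")
# maps low to p, mid to 0.375*(1 - p) and high to 0.625*(1 - p).
```

Applied to every entry selected for Θ modif , this keeps each affected row a probability distribution for any admissible value of the fresh parameters, so the resulting parametrization is well-formed.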
We (i) parametrized the CPTs of the parents (and grandparents) of the evidence nodes, and (ii) used the SamIam tool to pick the CPT entries Θ modif most relevant to the constraint. To get well-formed BNs, we used the linear proportional co-variation, see Def. 3. In all experiments, we picked the evidence from the last nodes in the topological order; these have a long dependency path to the BN roots. This selection reflects the worst case in [Salmani and Katoen, 2020]. We took γ = 1/2 and K = 6 for our experiments, see Alg. 1. We conducted all our experiments on a 2.3 GHz Intel Core i5 processor with 16 GB RAM.

RQ1: Comparison to Bayesserver. We used small, medium, and large BNs from the bnlearn repository. For each BN, we parametrized one distribution in a single CPT to align with the restrictions from Bayesserver. Figure 6a shows the results, comparing the tuning times (in sec) of Bayesserver (x-axis) to those of our implementation (y-axis). The latter includes the time for region refinements by PLA. For each pair (pBN, constraint), we did experiments for coverage factors η = 1-10^{-i} for i ∈ {1, …, 6}. The influence of η on the exactness of our results is addressed under RQ2.

Figure 6: The plots (a), (b): Storm vs. Bayesserver, the tuning time and the instantiation closeness; see RQ1. The plots (b), (c), (d): the effect of the coverage factor (η) on the tightness of the distance, the number of iterations, and the tuning time; see RQ2. Corner case: the number of iterations only changes when a satisfying instantiation was not found (NF) with a certain coverage; see sachs at η=0.9 and η=0.99.

Findings: Storm outperforms Bayesserver for several benchmarks (cancer, sachs, win95pts, and multiple instances of hailfinder), whereas Bayesserver is faster by about an order of magnitude for the other pBNs, such as hepar2. Explanation: Bayesserver exploits specific methods for one-way sensitivity analysis and relies on the linear form of the sensitivity function. These techniques are very efficient, yet not applicable to pBNs with multiple parameters in multiple CPTs. For our experiments on those subclasses, see RQ4. Such subclasses are not supported by Bayesserver and, to the best of our knowledge, by any other existing BN tool. This applies, e.g., also to Bayesfusion, which considers only the change of a single parameter at a time, and SamIam, which is limited to the single-parameter and single-CPT case.

RQ2: Sensitivity to the coverage factor η. For each pBN and constraint, we decreased the refinement factor 1-η in a step-wise manner by a factor of 10^{-1}. To quantify the tightness of our results, we measured how our approximately-close instantiation, denoted u Storm , differs from the absolute minimum-distance instantiation from Bayesserver, denoted u Bayes . Figure 6b (log-log scale) plots the tightness of the results |u Storm -u Bayes | (y-axis) against the refinement factor (x-axis).
Figures 6c and6d indicate the number of iterations and the tuning time (log scale, seconds) for each refinement factor. Findings: (I) Mostly, 1-η bounds the difference between our approximately-close solution and Bayesserver's solution. For e.g., η = 1-10 -4 , the difference is at most 10 -4 . (II) On increasing the coverage, the difference to the true minimal distance rapidly decreases. ((III) The computation time mod-erately increases on increasing the coverage, but the number of iterations was mostly unaffected. Explanation: (I, II) Recall that the value of 1-η bounds the size of the unknown regions; see Def. 5. This indicates why |u Storm -u Bayes | relates to 1-η. (III) At a higher coverage factor, the region partitioning is more fine-granular possibly yielding more accepting regions to analyze. Therefore the computation becomes more expensive. The timing is, however, not correlated to the number of iterations. This is because the -close iterations before the last iteration are often completed by a single region verification and are very fast. Similar observations have been made for n>1 parameters; see RQ4.\nRQ3: Sensitivity to the threshold λ. We varied the constraint's threshold (λ) by steps of 1 /20 for the benchmarks alarm hepar2, and hailfinder with n=1 and η=99.99999%. Figures 7a,7b, and 7c display the outcomes with the x-axis indicating the threshold and the y-axis indicating the tuning time (in seconds), the distance (log-scale), and the number of iterations. Findings: By strengthening the threshold, the possibly satisfying regions get further away from u 0 . Thus the distance, the number of iterations, and sometimes the tuning time grow. Similar findings are valid for n > 1 parameters; see RQ4, Table 1. Explanation: Region refinement starts with small regions in the close vicinity of the original values of the parameters. Therefore, for the constraints close to the original probability Pr B[u0],φ , the number of iterations is low, the distance is naturally small, and the minimal-change tuning is completed faster without the need to analyze larger regions. (c) Figure 7: The effect of the constraint restrictiveness (the value of the threshold) on -bounded tuning: (a) on the tuning time, (b) on the distance, (c) and on the number of iterations; see RQ3. Corner cases: for the constraints with thresholds before 0.4, the original BN satisfies the constraints. For hailfinder with the thresholds λ≥0.55 and alarm with the thresholds λ≥0.7, the constraints become infeasible. Findings: (I) Approximately-close parameter tuning is feasible for pBNs with up to 8 parameters. This is significantly higher than the state of the art-one parameter. As the number of sub-regions by PLA grows exponentially, treating more parameters is practically infeasible. (II) More parameters often ease finding satisfying instantiations. E.g., the threshold ≥ 0.25 is unsatisfiable for win95pts with n=2, but is satisfied with n=4. (III) The results for multiple parameter pBNs confirm the findings for RQ2 and RQ3; see the rows for each pBN instance with (i) varied coverage and (ii) varied threshold. (IV) The unsatisfiability of a constraint can be computed fast (with 100% confidence), regardless of the number of parameters, see e.g., alarm with 8 parameters for the constraint ≤ 0.001. For infeasibility, a single verification suffices; no partitioning is needed.\n≥ 0 . 4 ≥ 0 . 5 ≥ 0 . 6 ≥ 0 . 7 ≥ 0 . 8 ≥ 0 . 9 1 10 -2 10 -1 10 0 constraint's threshold (λ) Storm tuning time [s] alarm hepar2 hailf. (a) ≥ 0 . 3 ≥ 0 . 
4 ≥ 0 . 5 ≥ 0 . 6 ≥ 0 . 7 ≥ 0 . 8 ≥ 0 . 9 1 0-Sat 10 -2 10 -1 10 0 Infeas constraint's threshold (λ) EC(u,u0) alarm hepar2 hailf. (b) ≥ 0 . 3 ≥ 0 . 4 ≥ 0 . 5 ≥ 0 . 6 ≥ 0 . 7 ≥ 0 . 8 ≥ 0 .\nRQ5: Handling pBNs with parameter dependencies. Parameter lifting algorithm (Section 4.1) enables handling models with parameter dependencies, see e.g., Example 3: the parameters are allowed in multiple local distributions and the pBN sensitivity function is of higher degree. We extended our experiments to such cases for win95pts and alarm: we parameterized the entries θ 1 Findings: (I) Our method is applicable to pBNs with parameter dependencies where the sensitivity function is of a higher degree. (II) For the same coverage factor, the pBNs with parameter dependency are more expensive to analyze. See e.g., the two rows for win95pts with 8 parameters, threshold ≥ 0.115, and the coverage 80%. This is due to more complex sensitivity functions that give a higher number of sub-regions to verify. (III) The pBNs with parameter dependency yielded notably smaller distances.\n7 Epilogue Related work. Kwisthout and van der Gaag [2008] studied the theoretical complexity of tuning problems. Renooij [2014] studied the properties of the co-variation schemes for BN tuning. She shows that the linear proportional scheme optimizes the CD distance for single-CPT pBNs. Similar are the studies by Bolt and van der Gaag [2015;2017] that consider tuning heuristics for distance optimizations. Peng and Ding [2005] propose an iterative proportional fitting procedure (IPFP) to minimize the KL-divergence distance for a set of constraints. The method does not scale to large networks. Santos et al. [2013] exploits linear programming for BN parameter tuning. Yak-aboski and Santos [2018] consider a new distance measure for parameter learning and tuning of BNs. Leonelli [2019] considers nonlinear sensitivity functions, yet only for single parameter pBNs and Ballester-Ripoll and Leonelli [2022] efficiently compute the derivatives of sensitivity functions to select the most relevant parameters to a query, yet they limit to single parameter variation." }, { "figure_ref": [], "heading": "Conclusion.", "publication_ref": [ "b5", "b47" ], "table_ref": [], "text": "A novel algorithm for parameter tuning in Bayesian networks is presented and experimentally evaluated. Whereas existing algorithms come with severe restrictions-single parameters and/or linear functions-our approach is applicable to multiple (in practice about 8) parameters, large BNs (up to 100 variables), and polynomial functions. Future work includes considering balanced tuning heuristic [Bolt and van der Gaag, 2017] and using monotonicity of parameters [Spel et al., 2019]." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This research was funded by the ERC AdG Projekt FRAP-PANT (Grant Nr. 787914). We kindly thank Alexandra Ivanova for her implementation efforts and Tim Quatmann for the fruitful discussions." } ]
This paper addresses the ε-close parameter tuning problem for Bayesian networks (BNs): find a minimal ε-close amendment of probability entries in a given set of (rows in) conditional probability tables that makes a given quantitative constraint on the BN valid. Based on the state-of-the-art "region verification" techniques for parametric Markov chains, we propose an algorithm whose capabilities go beyond any existing techniques. Our experiments show that ε-close tuning of large BN benchmarks with up to eight parameters is feasible. In particular, by allowing (i) varied parameters in multiple CPTs and (ii) inter-CPT parameter dependencies, we treat subclasses of parametric BNs that have received scant attention so far.
Finding an ε-Close Minimal Variation of Parameters in Bayesian Networks
[ { "figure_caption": "Our running example: (a) the parametric BN COVID-19, (b) the parametric MC of COVID-19, for the topological order C;S;A;P, (c) the parametric MC of COVID-19 tailored to the evidence Antigen Test = pos and PCR Test = pos, abstracted from variable valuations.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "such substitution. Parametric distributions. Let Distr(D) denote the set of probability distributions over D. Let Q[X] be the set of multivariate polynomials with rational coefficients over X. A parametric probability distribution is the function µ : D → Q[X] with d∈D µ(d) = 1. Let pDistr(D) denote the set of parametric probability distributions over D with parameters in X. Instantiation u is well-formed for µ ∈ pDistr(D) iff 0 ≤ µ(d)[u] ≤ 1 and Σ d∈D µ(d)[u] = 1. Co-variation scheme. A co-variation scheme cov : Distr(D) × D → pDistr(D) maps a probability distribution µ to a parametric distribution µ based on a given d ∈ D. Distance measure. The function d : U ×U → R ≥0 is a distance measure if for all u, u , u ∈ U, it satisfies: (I) positiveness: d(u, u ) ≥ 0, (II) symmetry: d(u, u ) = d(u , u), and (III) triangle inequality: d(u, u ) ≤ d(u, u ) + d(u , u ).", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The parameter lifting steps for the COVID-19 example with parameter dependencies (Fig. 2): (a) the original (sub-)pMC Relaxation -----→ (b) pMC without parameter dependencies Substitution ------→ (c) non-parametric MDP. Note that 0.9875 ≤ 1-t, 1-t , 1-t ≤ 0.9925.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The iterative procedure of region-based parameter tuning for n = 2 and γ = 1 /2.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "partitioning(M B , ϕ, , η): 16 R ← makeRegion-d(u0, ) 17 R+, R-, R ? ← PLA(M B , R , ϕ, η)18 return R+ 19 function makeRegion-EC(u0, ): 20 R ← x j ∈X [u0(xj )n , u0(xj )+ n ] 21 return R 22 function makeRegion-CD(u0, ):", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "parameter x j in the region R +,i and ub xj be its upper bound.(1) If u 0 (x j ) ∈ [lb xj , ub xj ], we set u +,i (x j ) = u 0 (x j ). This yields the least distance, i.e., 0 in the dimension of x j , see Fig. 5 (left). (2) u 0 (x j ) < lb xj , see Fig. 5 (middle). Then lb xj has the least distance from u 0 (x j ) in the dimension x j , i.e., |lb xj -u 0 (x j )| < |a-u 0 (x j )| for every value a ∈ [lb x , ub x ] and similarly for CD-distance, ln lbx j u0(xj ) < ln a u0(xj ) . In this case, we set u +,i (x j ) = lb xj to minimize the distance in dimension x. (3) By symmetry, for u 0 (x j ) > ub xj , we set u +,i (x j ) = ub xj ; see Fig. 5 (right). It remains to compute the distance for each candidate in U + = {u +,1 , • • • , u +,k } and pick u + closest to u 0 (l. 
19).", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Obtaining the minimal-change value for parameter x ∈ X.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "PLA gives 12 regions accepting ϕ, e.g.,", "figure_data": "R +,1 : [0.92075, 0.960375] × [0.97475, 1] and R +,2 :[0.960375, 0.97028175] × [0.9431875, 0.9495].5. Running Alg. 2 on R +,1 through R +,12 givesminimal distance candidates u +,1 through u +,12 ;e.g., u +,2 (p)=lb p =0.960375 as u 0 (p) < lb p andu +,2", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The scalability of our approach to pBNs with multiple parameters, detailed results for (left) win95pts and (right) alarm.RQ4: Scaling the number of parameters. We took the win95pts and alarm benchmarks and parameterized them in multiple ways. Their pMCs have 9, 947 and 1, 118 states and 17, 948 and 3, 147 transitions respectively. The set of parameters for each pBN is including (and doubles the number of) parameters in the previous pBN. Table1(left) and (right) list the results for win95pts and alarm. We list for each pBN, the number of affected CPTs, the number of parameters, the threshold λ, and the coverage η. E.g., win95pts with par=8 has 8 parameters occurring in 4 CPTs. The columns EC, iter, and t(s) report the EC-distance, the number of iterations, and the total time in seconds (incl. model building time, time for region refinement, and tuning time) respectively. TO and MO indicate time-out (30 minutes) and memory-out (>16 GB).", "figure_data": "pBN infoconstraintsettingresultspBN infoconstraintsettingresultspCPTparthresh.cover.EC itert(s)pCPTparthresh.cover.EC itert(s)12≥ 0.11599% 0.836528587854.20012≤ 0.3085%1.1383430945 2.09012≥ 0.25100%Infeasible63.75012≤ 0.3099%1.0913712085 2.13434≥ 0.11590% 0.135047734625.21012≤ 0.30 99.99%1.0902020675 302.134≥ 0.2580% 0.447911821745.21014≤ 0.3070%0.9094916165 5.70134≥ 0.2590% 0.4238956239430.8214≤ 0.3085%0.8902176705 142.134≥ 0.2595% 0.40983990784 1128.514≤ 0.3090%--TO34≥ 0.2599%--TO14≤ 0.20100%Infeasible6 1.86948≥ 0.11580% 0.880567998524.49614 * (3)≤ 0.3099%0.4724062505 2.73148≥ 0.11590% 0.1350477346215.9118≤ 0.3020% 0.56281591135 534.648≥ 0.3060% 0.88056799856809.718≤ 0.2520% 0.94054851025 216.648≥ 0.35 99.99%Infeasible63.75718≤ 0.2020%1.0103959626 305.948 * (2)(2)≥ 0.11580%0.0131535401180.618 * (3)≤ 0.7020%0.1186710265 608.2416≥ 0.3510%-1MO18≤ 0.001100%Infeasible6 1.886", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "• • • θ k ∈ Θ modif over the same parameter x ∈ X when the original values of θ 1 • • • θ k were the same in the original BN. The eleventh row in Table 1 (left) and the eighth and eleventh rows in Table 1 (right) correspond to such cases. The term 8 * (2)(2) e.g., denotes that out of the 8 parameters, 2 parameters repeatedly occurred in two distinct distributions. The term 8 * (3) denotes that out of the 8 parameters, one was occurring in 3 distinct distributions.", "figure_data": "", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" } ]
Bahare Salmani; Joost-Pieter Katoen
[ { "authors": "Leonelli Ballester-Ripoll; Rafael Ballester-Ripoll; Manuele Leonelli", "journal": "PMLR", "ref_id": "b0", "title": "You only derive once (YODO): automatic differentiation for efficient sensitivity analysis in Bayesian networks", "year": "2022" }, { "authors": " Barreiro", "journal": "", "ref_id": "b1", "title": "", "year": "2021" }, { "authors": "Pablo Barreiro; Jesús San-Román; Maria Del Mar; Francisco Javier Carretero; Candel", "journal": "Revista Española de Quimioterapia", "ref_id": "b2", "title": "Infection and infectivity: Utility of rapid antigen tests for the diagnosis of covid-19", "year": "2021" }, { "authors": "Van Bolt; Gaag Der", "journal": "", "ref_id": "b3", "title": "", "year": "2015" }, { "authors": "H Janneke; Linda C Bolt; Van Der Gaag", "journal": "Springer", "ref_id": "b4", "title": "Balanced tuning of multi-dimensional Bayesian network classifiers", "year": "2015" }, { "authors": "Van Bolt; Gaag Der", "journal": "", "ref_id": "b5", "title": "", "year": "2017" }, { "authors": "H Janneke; Linda C Bolt; Van Der Gaag", "journal": "Int. J. Approx. Reason", "ref_id": "b6", "title": "Balanced sensitivity functions for tuning multi-dimensional Bayesian network classifiers", "year": "2017" }, { "authors": " Castillo", "journal": "", "ref_id": "b7", "title": "", "year": "1997" }, { "authors": "Enrique F Castillo; José Manuel Gutiérrez; Ali S Hadi", "journal": "IEEE Trans. Syst. Man Cybern. Part A", "ref_id": "b8", "title": "Sensitivity analysis in discrete Bayesian networks", "year": "1997" }, { "authors": "Chan ; Darwiche ; ; Hei Chan; Adnan Darwiche", "journal": "AUAI Press", "ref_id": "b9", "title": "Sensitivity analysis in Bayesian networks: From single to multiple parameters", "year": "2004" }, { "authors": "Chan ; Darwiche ; ; Hei Chan; Adnan Darwiche", "journal": "Int. J. Approx. Reason", "ref_id": "b10", "title": "A distance measure for bounding probabilistic belief change", "year": "2005" }, { "authors": "Chan ; ; Hei Chan", "journal": "", "ref_id": "b11", "title": "Sensitivity analysis of probabilistic graphical models", "year": "2005" }, { "authors": "Van Coupé; Gaag Der", "journal": "", "ref_id": "b12", "title": "", "year": "2002" }, { "authors": "M H Veerle; Linda C Coupé; Van Der Gaag", "journal": "Ann. Math. Artif. Intell", "ref_id": "b13", "title": "Properties of sensitivity analysis of Bayesian belief networks", "year": "2002" }, { "authors": " Cubuktepe", "journal": "", "ref_id": "b14", "title": "", "year": "2018" }, { "authors": "Nils Murat Cubuktepe; Sebastian Jansen; Joost-Pieter Junges; Ufuk Katoen; Topcu", "journal": "Springer", "ref_id": "b15", "title": "Synthesis in pMDPs: A tale of 1001 parameters", "year": "2018" }, { "authors": " Cubuktepe", "journal": "", "ref_id": "b16", "title": "", "year": "2022" }, { "authors": "Nils Murat Cubuktepe; Sebastian Jansen; Joost-Pieter Junges; Ufuk Katoen; Topcu", "journal": "IEEE Trans. Autom. 
Control", "ref_id": "b17", "title": "Convex optimization for parameter synthesis in MDPs", "year": "2022" }, { "authors": " Darwiche", "journal": "", "ref_id": "b18", "title": "", "year": "2009" }, { "authors": "Adnan Darwiche", "journal": "Cambridge University Press", "ref_id": "b19", "title": "Modeling and Reasoning with Bayesian Networks", "year": "2009" }, { "authors": " Dinnes", "journal": "", "ref_id": "b20", "title": "", "year": "2022" }, { "authors": "Jacqueline Dinnes; Pawana Sharma; Sarah Berhane; Susanna S Van Wyk; Nicholas Nyaaba; Julie Domen; Melissa Taylor; Jane Cunningham; Clare Davenport; Sabine Dittrich", "journal": "Cochrane Database of Systematic Reviews", "ref_id": "b21", "title": "Rapid, point-ofcare antigen tests for diagnosis of sars-cov-2 infection", "year": "2022" }, { "authors": " Heck", "journal": "", "ref_id": "b22", "title": "", "year": "2022" }, { "authors": "Linus Heck; Jip Spel; Sebastian Junges; Joshua Moerman; Joost-Pieter Katoen", "journal": "Springer", "ref_id": "b23", "title": "Gradientdescent for randomized controllers under partial observability", "year": "2022" }, { "authors": " Hensel", "journal": "", "ref_id": "b24", "title": "", "year": "2022" }, { "authors": "Christian Hensel; Sebastian Junges; Joost-Pieter Katoen; Tim Quatmann; Matthias Volk", "journal": "Int. J. Softw. Tools Technol. Transf", "ref_id": "b25", "title": "The probabilistic model checker storm", "year": "2022" }, { "authors": " Katoen", "journal": "", "ref_id": "b26", "title": "", "year": "2016" }, { "authors": "Joost-Pieter Katoen", "journal": "ACM", "ref_id": "b27", "title": "The probabilistic model checking landscape", "year": "2016" }, { "authors": " Kisa", "journal": "", "ref_id": "b28", "title": "", "year": "2014" }, { "authors": "Doga Kisa; Guy Van Den; Arthur Broeck; Adnan Choi; Darwiche", "journal": "KR. AAAI Press", "ref_id": "b29", "title": "Probabilistic sentential decision diagrams", "year": "2014" }, { "authors": "Van Kwisthout; Gaag Der; Johan Kwisthout; Linda C Van Der Gaag", "journal": "AUAI Press", "ref_id": "b30", "title": "The computational complexity of sensitivity analysis and parameter tuning", "year": "2008" }, { "authors": " Leonelli", "journal": "", "ref_id": "b31", "title": "", "year": "2019" }, { "authors": "Manuele Leonelli", "journal": "Int. J. Approx. Reason", "ref_id": "b32", "title": "Sensitivity analysis beyond linearity", "year": "2019" }, { "authors": " Nishiura", "journal": "", "ref_id": "b33", "title": "", "year": "2020" }, { "authors": "Hiroshi Nishiura; Tetsuro Kobayashi; Takeshi Miyama; Ayako Suzuki; Sung-Mok Jung; Katsuma Hayashi; Ryo Kinoshita; Yichi Yang; Baoyin Yuan; Andrei R Akhmetzhanov", "journal": "Int. Journal of Infectious Diseases", "ref_id": "b34", "title": "Estimation of the asymptomatic ratio of novel coronavirus infections (covid-19)", "year": "2020" }, { "authors": " Pearl", "journal": "Morgan kaufmann", "ref_id": "b35", "title": "Judea Pearl. 
Probabilistic reasoning in intelligent systems: networks of plausible inference", "year": "1988" }, { "authors": "Ding Peng", "journal": "", "ref_id": "b36", "title": "", "year": "2005" }, { "authors": "Yun Peng; Zhongli Ding", "journal": "AUAI Press", "ref_id": "b37", "title": "Modifying Bayesian networks by probability constraints", "year": "2005" }, { "authors": " Quatmann", "journal": "", "ref_id": "b38", "title": "", "year": "2016" }, { "authors": "Tim Quatmann; Christian Dehnert; Nils Jansen; Sebastian Junges; Joost-Pieter Katoen", "journal": "", "ref_id": "b39", "title": "Parameter synthesis for Markov models: Faster than ever", "year": "2016" }, { "authors": " Renooij", "journal": "", "ref_id": "b40", "title": "", "year": "2014" }, { "authors": "Silja Renooij", "journal": "Int. J. Approx. Reason", "ref_id": "b41", "title": "Co-variation for sensitivity analysis in Bayesian networks: Properties, consequences and alternatives", "year": "2014" }, { "authors": "Katoen Salmani", "journal": "Springer", "ref_id": "b42", "title": "Bayesian inference by symbolic model checking", "year": "2020" }, { "authors": "Katoen Salmani; Bahare Salmani; Joost-Pieter Katoen", "journal": "Springer", "ref_id": "b43", "title": "Fine-tuning the odds in Bayesian networks", "year": "2021" }, { "authors": "Katoen Salmani; Bahare Salmani; Joost-Pieter Katoen", "journal": "", "ref_id": "b44", "title": "Fine-tuning the odds in Bayesian networks", "year": "2021" }, { "authors": " Santos", "journal": "", "ref_id": "b45", "title": "", "year": "2013" }, { "authors": "Eugene Santos; Qi Gu; Eunice E Santos", "journal": "Int. J. Approx. Reason", "ref_id": "b46", "title": "Bayesian knowledge base tuning", "year": "2013" }, { "authors": " Spel", "journal": "", "ref_id": "b47", "title": "", "year": "2019" }, { "authors": "Jip Spel; Sebastian Junges; Joost-Pieter Katoen", "journal": "ATVA", "ref_id": "b48", "title": "Are parametric Markov chains monotonic?", "year": "2019" }, { "authors": "Santos Yakaboski", "journal": "IEEE Computer Society", "ref_id": "b49", "title": "Chase Yakaboski and Eugene Santos. Bayesian knowledge base distance-based tuning", "year": "2018" } ]
[ { "formula_coordinates": [ 2, 54, 319.52, 64.99, 9.65 ], "formula_id": "formula_0", "formula_text": "I i = [lb xi , ub xi ]." }, { "formula_coordinates": [ 2, 54, 376.63, 243, 22.18 ], "formula_id": "formula_1", "formula_text": "f = 2x 2 1 +x 2 , u(x 1 ) = 3 and u(x 2 ) = 2, f (u) = 20. Let f [u] denote" }, { "formula_coordinates": [ 2, 315, 279.93, 243, 19.7 ], "formula_id": "formula_2", "formula_text": "[u] = (G, Θ[u]" }, { "formula_coordinates": [ 2, 344.46, 383.27, 184.09, 28.91 ], "formula_id": "formula_3", "formula_text": "X → R, B[u] |= φ if and only if Pr B[u] (H | E) ∼ λ." }, { "formula_coordinates": [ 2, 351.18, 473.83, 170.64, 14.58 ], "formula_id": "formula_4", "formula_text": "Pr B1 (C = no | A = pos ∧ P = pos) ≤ 0.009." }, { "formula_coordinates": [ 2, 315, 520.69, 243, 20.61 ], "formula_id": "formula_5", "formula_text": ") = 0.97475, f B1,φ1 [u] = 0.008798 ≤ 0.009, i.e., B 1 [u] |= φ 1 ." }, { "formula_coordinates": [ 2, 355.81, 655.52, 161.38, 37.47 ], "formula_id": "formula_6", "formula_text": "f B2,φ2 = Pr B2 (C = no | A1 = pos ∧ A2 = pos) = 950 • 9 • t 2 • s 2 8850 • t 2 + 349 • p 2 + 950 • s 2 + 151 • r 2 ." }, { "formula_coordinates": [ 3, 54, 195.39, 243, 36.43 ], "formula_id": "formula_7", "formula_text": ". pBN B = (G, X, Θ ) is a parametrization of BN B = (G, Θ) over X w.r.t. Θ modif if θ (v,d,par) = f for some f ∈ Q[X] if θ (v,d,par) ∈ Θ modif ." }, { "formula_coordinates": [ 3, 54, 401.12, 232.2, 32.6 ], "formula_id": "formula_8", "formula_text": "   θ (v,dk,par) = x for some x ∈ X θ (v,dj ,par) = 1-x 1-θ (v,dk,par) • θ (v,dj ,par) for d j = d k ∈ D v ." }, { "formula_coordinates": [ 3, 54, 535.01, 243, 55.98 ], "formula_id": "formula_9", "formula_text": "u min = argmin {u∈U | B[u]|=φ} d(u, u 0 ). Its generalized vari- ant for 0 ≤ ≤ d0 and 0 ≤ η ≤ 1 is: The ( , η)-parameter tuning problem is to find u ∈ U s.t. B[u] |= φ, d(u, u 0 ) ≤ , and d(u, u min ) ≤ (1-η) • d0 ." }, { "formula_coordinates": [ 3, 86.62, 679.55, 134.01, 19.7 ], "formula_id": "formula_10", "formula_text": "EC(u, u ) = xi∈X u(x i ) -u (x i ) 2 ." }, { "formula_coordinates": [ 3, 315, 74.73, 243, 29.73 ], "formula_id": "formula_11", "formula_text": "Corollary 2. Let 0 ≤ ≤ √ n and u 0 (x i ) -n ≤ u(x i ) ≤ u 0 (x i ) + n . Then, EC(u, u 0 ) ≤ ." }, { "formula_coordinates": [ 3, 315, 147.65, 226.6, 19.86 ], "formula_id": "formula_12", "formula_text": "CD(u, u ) = ln max w∈ Eval(V ) Pr B[u ] (w) Pr B[u] (w) -ln min w∈ Eval(V ) Pr B[u ] (w) Pr B[u] (w) ," }, { "formula_coordinates": [ 3, 317.04, 316.24, 242.06, 20.32 ], "formula_id": "formula_13", "formula_text": "d0 = ln max {θ min [lb x ], θ min [ub x ]} θ min -ln min {θ max [lb y ], θ max [ub y ]} θ max" }, { "formula_coordinates": [ 3, 315, 353.3, 243, 23.24 ], "formula_id": "formula_14", "formula_text": "Corollary 4. Let 0 ≤ ≤ d0 and α=e /2 . Let for each θ ∈ Θ v , α -1 • θ [u 0 ] ≤ θ [u] ≤ α • θ [u 0 ] . Then, CD(u, u 0 ) ≤ ." }, { "formula_coordinates": [ 3, 315, 547.53, 243, 19.77 ], "formula_id": "formula_15", "formula_text": "Definition 4. A parametric Markov chain (pMC) M is a tuple (S," }, { "formula_coordinates": [ 4, 54, 200, 243, 45.26 ], "formula_id": "formula_16", "formula_text": "p • q + 8758 • q + 361 . Let R ⊆ R n with n = |X| and let M, R |= ϕ if and only if ∀u ∈ R. M[u] |= ϕ." }, { "formula_coordinates": [ 4, 54, 309.78, 243, 52.43 ], "formula_id": "formula_17", "formula_text": "R + ⊆ {u ∈ R | M[u] |= ϕ}, satisfying instantiations R -⊆ {u ∈ R | M[u] |= ¬ϕ}, refuting instantiations and R ? = R \\ (R + ∪ R -) with ||R ? 
|| ≤ (1-η)•||R|| for some given coverage factor 0 ≤ η ≤ 1." }, { "formula_coordinates": [ 4, 55.5, 435.08, 231.81, 21.95 ], "formula_id": "formula_18", "formula_text": "M, R |= ϕ R is accepting or M, R |= ¬ϕ R is rejecting or M, R |= ϕ ∧ M, R |= ¬ϕ R is inconclusive ." }, { "formula_coordinates": [ 4, 329.68, 575.85, 191.63, 47.35 ], "formula_id": "formula_19", "formula_text": "1 -Pr M B (♦ C = yes ∨ A = neg ∨ P = neg) 1 -Pr M B (♦ A = neg ∨ P = neg) = 361 34900 • p • q + 8758 • q + 361 = f B,φ ; see Ex. 2." }, { "formula_coordinates": [ 5, 56.56, 253.62, 165.98, 73.83 ], "formula_id": "formula_20", "formula_text": "1 M B ← computeMC(B) 2 ← reachSpec(φ) 3 0 ← d0 • γ K-1 4 ← 0 5 function minChgTuning(M B , u0, ϕ, η): 6 while ≤ d0 do 7 R+ ← -partitioning(M B , ϕ, , η) 8 if R+ = ∅ then 9 ← γ -1 •" }, { "formula_coordinates": [ 5, 54, 451.36, 210.66, 58.84 ], "formula_id": "formula_21", "formula_text": "23 α ← e /2 24 R ← x j ∈X [u0(xj ) • α -1 , u0(xj ) • α] 25 return R 5.1. Obtaining Epsilon-Close Subregions. Alg." }, { "formula_coordinates": [ 5, 315, 563.62, 243, 64.52 ], "formula_id": "formula_22", "formula_text": "Instantiation. Once R + = ∅ ⊆ R is found, it remains to find the instantia- tion u + in R + that is closest to u 0 . Region R + includes infinitely many instantiations, yet minimal-distance instan- tiation is computable in O(k•n) for n = |X| and the re- gions R + = {R +,1 , • • • , R +,k }: (i)" }, { "formula_coordinates": [ 6, 56.96, 82.53, 149.15, 57.29 ], "formula_id": "formula_23", "formula_text": "2 // R+ = {R+,1, • • • , R +,k } 3 U+ ← ∅ 4 for i ← 1 to k do 5 // R+,i = x j ∈X [lbx j , ubx j ] 6 for xj ∈ X do 7 if lbx j < u0(xj ) < ubx j then 8 u+,i(xj ) = u0(xj )" }, { "formula_coordinates": [ 6, 73.93, 510.87, 223.83, 12.47 ], "formula_id": "formula_24", "formula_text": "[0.72-1 /8 √ 2, 0.72+ 1 /8 √ 2] × [0.95-1 /8 √ 2, 0.95+ 1 /8 √ 2]." }, { "formula_coordinates": [ 6, 73.93, 559.34, 223.83, 12.47 ], "formula_id": "formula_25", "formula_text": "[0.72-1 /4 √ 2, 0.72+ 1 /4 √ 2] × [0.95-1 /4 √ 2, 0.95+ 1 /4 √ 2]." }, { "formula_coordinates": [ 7, 67.56, 53.11, 244.99, 119.29 ], "formula_id": "formula_27", "formula_text": "1 0 -3 1 0 -2 1 0 - 1 1 0 0 1 0 1 10 -3 10 -2 10 -1 10 0 10 1 Bayesserver tuning time [s] Storm tuning time [s] cancer sachs alarm hailf. hepar2 win95. (a) 1 0 - 1 1 0 - 2 1 0 - 3 1 0 - 4 1 0 - 5 1 0 - 6 10 -7 10 -6 10 -5 10 -4 10 -3 10 -2 10 -1 NF refinement factor (1 -η) diff(u Storm , u Bayes ) sachs alarm hepar2 hailf." }, { "formula_coordinates": [ 7, 320.76, 53.89, 238.9, 116.32 ], "formula_id": "formula_28", "formula_text": "1 0 - 1 1 0 - 2 1 0 - 3 1 0 - 4 1 0 - 5 1 0 - 6 3 4 5 6 refinement factor (1 -η) number of iterations sachs alarm hepar2 hailf. win95. (c) 1 0 - 1 1 0 - 2 1 0 - 3 1 0 - 4 1 0 - 5 1 0 - 6 10 -2 10 -1 10 0 refinement factor (1 -η) Storm tuning time [s] alarm hepar2 hailf." }, { "formula_coordinates": [ 7, 79.39, 559.43, 456.69, 120.51 ], "formula_id": "formula_29", "formula_text": "≥ 0 . 4 ≥ 0 . 5 ≥ 0 . 6 ≥ 0 . 7 ≥ 0 . 8 ≥ 0 . 9 1 10 -2 10 -1 10 0 constraint's threshold (λ) Storm tuning time [s] alarm hepar2 hailf. (a) ≥ 0 . 3 ≥ 0 . 4 ≥ 0 . 5 ≥ 0 . 6 ≥ 0 . 7 ≥ 0 . 8 ≥ 0 . 9 1 0-Sat 10 -2 10 -1 10 0 Infeas constraint's threshold (λ) EC(u,u0) alarm hepar2 hailf. (b) ≥ 0 . 3 ≥ 0 . 4 ≥ 0 . 5 ≥ 0 . 6 ≥ 0 . 7 ≥ 0 . 8 ≥ 0 ." } ]
2024-03-22
[ { "figure_ref": [ "fig_0", "fig_1", "fig_3" ], "heading": "Introduction", "publication_ref": [ "b10", "b11", "b19", "b34", "b30", "b12", "b30", "b34", "b30", "b34", "b20", "b39", "b43", "b34", "b30", "b45" ], "table_ref": [], "text": "As an expansion of horizontal object detection [11,12,20], oriented object detection has a wider applications in many [35,36,38,45], smoothing function is explicitly applied for detector's output θp during loss calculation; (b) while in independent-optim methods [31,41], smoothing function is implicitly embedded in the model, and θp is decoded from detector's output fp. According to our analysis, only the latter can really solve boundary discontinuity problem.\nscenes, such as aerial images [3,30], panoramic images [13,28,29], scene text [9], 3D objects [43], etc, since it can achieve a good balance between fine localization and low labeling cost. In oriented object detection, a detector needs to predict the minimal rotated bounding boxes for objects, so it has a high requirement for rotation equivariance. However, researchers have observed mutation in angular prediction when objects rotate near the boundary angle, which is commonly known as boundary discontinuity problem [31,35].\nIn previous works, the boundary discontinuity problem has been long believed to be caused by the sharp loss increase at the angular boundary during training. To address this problem, researchers designed a series of smooth loss functions to prevent the sharp loss increase, and these methods can be divided into two categories, i.e., independent-optim loss [31,33,41] and joint-optim loss (dominated by IoU-like loss) [35,36,38,45]. Due to the negative impact of the low consistency between loss and IoU-metric, the detectors trained through the independent-optim loss are usually worse than IoU-like loss. It has long been a consensus in object detection [21,40,44], so increasing IoU-like loss methods become mainstream choices for oriented object detectors.\nHowever, we experimentally find that even state-of-theart IoU-like methods do not actually solve the boundary Figure 2. When objects rotate near the boundary angle, state-of-the-art IoU-like methods (e.g., KFIoU [38], KLD [36]) actually suffer from severe mutation in angular prediction. With the correction for angle by our ACM, the prediction achieves rotation equivariance. discontinuity problem. Specifically, we select an image containing only a single object, and rotate it 360 • at 1 • intervals to obtain a series of images. These images are sequentially fed into a well-trained detector(with state-of-theart IoU-like methods) for inference. As is shown in Fig. 2, visualized results show that the predicted boxes can tightly enclose object in most cases, but collapse with a seriously deviated angle in some cases near the angular boundary.\nThrough theoretical analysis, we find that the key to addressing the problem lies in the encoding mode of the smoothing function rather than in joint or independent optimization. Although both optimization paradigms insisit loss-smoothing, the joint-optim methods have a subtle technical detail differing with independent-optim methods. As is shown as Fig. 1, in joint-optim methods [35,36,38,45], smoothing function is explicitly applied for detector's output θ p during loss calculation; while in independent-optim method [31,41], smoothing function is implicitly embedded in the model, and θ p is decoded from detector's output f p . 
For example, in typical joint-optim method KLD [36], Gaussian distribution is transformed from predicted angle and other parameters, not directly output from the model. It is this detail that makes those IoU-like methods not really solve boundary discontinuity problem as they expect, even though they indeed improve the overall detection performance with the benefit of joint optimization. Specifically, the model still attempts to fit the angular relationship between box and object. The relationship is actually a piecewise function with a break point at the angular boundary as Fig. 3b, which is difficult to fit for intrinsically continuous neural networks [2,15,46]. It makes angles highly unstable near breakpoints, and results in the boundary discontinuity problem. Such being the case, an intuitive idea occurs that lets the model output a Gaussian distribution. However, it is challenging to recover the original rotation angles of bounding boxes from Gaussian distributions. If we want to have one's cake and eat it too, we must find a coding function that simultaneously satisfies the smooth, joint, and reversible characteristics.\nTo deal with this issue, we propose a dual-optimization paradigm for angles as Fig. 4. We decouple reversibility and joint-optim from single smoothing function into two distinct entities f and g. The former corrects angular boundary, while the latter blends angle with other parameters. In this paradigm, the model outputs angular encoding f p , subject to explicit supervision. On this basis, another joint-optim g is applied into decoded angle θ p = f -1 p . Obviously, the role of g can be played by existing joint-optim methods. However, given that f -1 is involved in loss calculation, it is necessary to ensure that f -1 is differentiable, which is not satisfied for lots of existing encoding. Inspired by the continuous encoding of PSC [41], we propose a coding function based on the complex-exponential function, achieving the goal of differentiability of the inverse function. Finally, boundary discontinuity problem is well-addressed as Fig. 2. Overall, our contribution can be summarized as following:\n• We extract and induce the optimization logic of existing methods from mathematical perspective, for the first time clarifying the long-standing misunderstanding that IoUlike methods can solve boundary problem. • We propose a novel dual-optimization paradigm for angles, which for the first time achieves the objectives of both correcting angular boundary and blending parame-ters, achieving rotational equivariance for detection. • Extensive experiments on multiple datasets show that boundary discontinuity problem is well-addressed. Moreover, typical IoU-like methods are improved to the same level without obvious performance gap." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Rotated Object Detection", "publication_ref": [ "b10", "b11", "b19", "b31", "b6" ], "table_ref": [], "text": "In oriented detection, the minimal enclosing rotated bounding box (x, y, w, h, θ) is adopted widely to represent an oriented object, where (x, y) is center position, (w, h) is scale (i.e., width & height) and θ is rotated angle of box. There are many algorithms inherited from classic horizontal detection [5,11,12,20] to predict the rotated boxes, where ROI-Transformer [3], SCRDet [32], ReDet [7] are two-stage mainstreamed methods, while DRN [19], R 3 Det [34], S 2 A-Net [6] are single-stage methods. 
However, these detectors suffer from boundary discontinuity problems in varying degrees, as the issue itself is unrelated to the detectors." }, { "figure_ref": [], "heading": "Boundary Discontinuity Problem", "publication_ref": [ "b30", "b34" ], "table_ref": [], "text": "The boundary discontinuity problem has been a persistent challenge, requiring a comprehensive understanding of the antecedents and consequences of each milestone to grasp the essence of this paper. In horizontal detection, bbox-regression loss typically employs joint-optim IoU-Loss, which has reached a consensus without controversy. Due to the complexity and non-differentiability of IoU calculation for rotated box, it was initially considered that IoU-Loss can not be available for oriented detection. Therefore, early methods in oriented detection usually used L1-Loss for each parameters (x, y, w, h, θ). CSL [31] pointed out that using L1-Loss would lead to sharp increases in angle-regression loss at angle boundaries, termed \"boundary discontinuity problem\". By using angle classification instead of angle regression, CSL avoids the intractable problem. Subsequently, a series of methods (e.g., DCL [33] / GF-CSL [25] / MGAR [22]) based on angle classification have sprung up.\nGWD [35] argued that while CSL solved the \"boundary discontinuity problem\" caused by sharp loss increases, independently optimizing parameters was unreasonable. This is because IoU-Loss was already established as the best choice in horizontal detection. However, since rotated IoU is nondifferentiable, GWD proposed a Gaussian-based joint-optim loss to approximately replace it. Hence, GWD claimed that it can address the \"boundary discontinuity problem\" and achieve joint optimization. KLD [36] and KFIoU [38] inherit the advantages of GWD's Gaussian encoding, and improve it from distribution measurement. Due to the remarkable effect of these methods, more and more Gaussian methods have emerged, which indicates that joint-optim methods have become mainstream. Notably, the perception of the \"boundary discontinuity problem\" remained limited to sharp loss increases up to this point. Recently, PSC [41] borrows phase-shift-coding from the field of communications to improve the performance of angle prediction. It uses continuous coding to avoid quantization errors in classification methods, but it still belongs to independent optimization. Notably, PSC focuses on coding design without new insight about boundary discontinuity problem (e.g., it explicitly mentioned that GWD/KLD solved the boundary problem)." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "Preliminary", "publication_ref": [], "table_ref": [], "text": "3.1. The Root of All Evil is \"Box ̸ = Object\"\nFor an oriented object detector, it accepts image of object as input, and outputs bounding box with position, scale and angle parameters. However, we reveal that box and object are essentially different concepts, which will produce breakpoints in the angular ground-truth. The discontinuous ground-truth cannot be fitted exactly by continuous output of the detector especially at the breakpoints, so angular prediction near the breakpoints becomes very unstable.\nIn the interest of brevity, we denote the object instance and bounding box as O(x obj , y obj , w obj , h obj , θ obj ) and B(x box , y box , w box , h box , θ box ), respectively. 
The difference between O and B lies in θ rather than (x, y) and (w, h), where the range of θ obj is [0, 2π) while the range of θ box is [0, π). This is because the object holds content which needs to rotate at least one full circle to be completely overlapped, while the box is a kind of geometry without any content which just needs to rotate half of circle to be completely overlapped. For example in Fig. 3a, objects rotated with 45 • and 225 • can be distinguished by content, while the corresponding bounding boxes cannot as well.\nIn this setting, the bounding box is a truly symmetric rectangle, whose rotations θ and θ ± π are indistinguishable. As a result, the relationship between θ box and θ obj exhibits a piecewise function with a break point, rather than a linear relationship between (x box , y box , w box , h box ) and (x obj , y obj , w obj , h obj ), as is shown in Fig. 3b and Eq. ( 1).\n     (x box , y box ) = (x obj , y obj ) (w box , h box ) = (w obj , h obj ) θ box = θ obj mod π(1)\nThe detector takes the object image as input and the box as supervision, which means that the detector is actually enforced to fit Eq. ( 1)(blue solid lines in Fig. 3b). Obviously, θ box has a step-point at θ obj = π, which makes it difficult for the detector F , a continuous function essentially, to fit it accurately. Irrespective of the quality of fit achieved by detector F , there always exists a small interval (π-ϵ 1 , π+ϵ 2 ) near the breakpoint (gray region in Fig. 3b), where predicted angle (red dash line) drops rapidly from π to 0, and angular prediction becomes highly unstable, resulting in a severe degradation of the AP/IoU of boxes. In addition, angular prediction tends to fluctuate even outside the interval." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "The Devil is in Encoding Mode", "publication_ref": [ "b30" ], "table_ref": [], "text": "For the problem of angle discontinuity at the boundary, the core of the mainstream solutions is to smooth loss value at the angular boundary, and these studies are usually categorized by independent or joint optimization. However, our experiments as Fig. 2 shows that even joint-optim methods do not actually solve the boundary discontinuity problem.\nTo understand the reason behind this finding, we make a reformulation of the existing works. For convenience, let the ground-truth and prediction of θ box be denoted as θ t and θ p , respectively. The way to optimize angle in joint-optim methods can be reformulated as follows (also as Fig. 1):\nθ = arg min θp ℓ f (θ p ); f (θ t )(2)\nwhere model fits discontinuous θ p , f and ℓ are the encoding function for angle and measuring function for encoded value, respectively. For example, 1) in the case of KLD [36], f = gaussian x,y,w,h (θ), ℓ = ℓ kld . f encodes the angle and other parameters as a smooth Gaussian distribution, and ℓ just measures the distance of Gaussian distribution between prediction and ground-truth; 2) in the case of SkewIoU ). Although we cannot get explicit expression of f and l, their role must be similar to gaussian x,y,w,h (θ) and ℓ kld .\nAs a contrast, the way to optimize angle in independentoptim methods can be formulated as follows (also as Fig. 1):\nθ = f -1 arg min fp ℓ f p ; f (θ t )(3)\nwhere model fits continous f p , f -1 is the inverse function of f , and we can get angle by θ p = f -1 p . For example, 1) in the case of CSL [31], f = onehot(θ), ℓ = ℓ f ocal . 
f encodes the angle into a discrete distribution, and ℓ measures quality of classification; 2) in the case of PSC [41], f = cos(θ + φ i ), i = 1...N, ℓ = ℓ l1 . f encodes the angle into a continuous vector, ℓ measures the encoded vector distance.\nCompared with the diverse optimization forms (independent or joint) for f , what is more noteworthy is encoding mode of f . Note that the model in Eq. (2) outputs θ, f is explicitly applied in loss calculation, while the model in Eq. ( 3) directly outputs the value encoded by f . For detector F , the former's fitting target is still θ box ∼ θ obj with a break point, while the latter's target becomes f box ∼ f obj . Thanks to the periodic aggregation properties of f , the differences between box and object are eliminated, which will no longer suffer from difficulty about fitting breakpoints.\nTo summary up, it is a better choice to make model directly fit the smooth value rather than utilize it just in loss calculation. As for the reason why joint-optim methods do not adopt such design, it is most likely because it is difficult to recover the angle from the joint-encoding of the model output. Dramatically, the advantages of joint optimization outweigh the disadvantages of loss-smoothing, which eventually misleads researchers to believe that the boundary problem can be solved by joint optimization." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3" ], "heading": "Dual-Optimization for Angle", "publication_ref": [], "table_ref": [], "text": "Considering that joint optimization has become the mainstream scheme at present, a convenient improvement strategy is to correct angle by angle-smoothing independent optimization with reversible coding, as well as to blend angle with other parameters by joint optimization based on corrected angle, which can be formulated as follows:\nθ = f -1 arg min z=fp [ℓ f + ℓ g ] s.t. ℓ f = ℓ z; f (θ t ) ℓ g = ℓ g(f -1 (z)); g(θ t ) (4)\nwhere model still fits continous f p , and f, g are encoding function in independent/joint optimization.\nSince f -1 participates in loss calculation in jointoptimization, f not only needs to be continuous, differentiable, and reversible (GWD/KLD/FKIoU/SkewIoU fail to satisfy), but its inverse function f -1 also needs to satisfy these properties. Discrete encodings like CSL rely on argmin to make f -1 nondifferentiable, so only continuous encodings like PSC remains a chance to be differentiable.\nTo this end, we propose a Angle Correct Module (ACM) based on complex-exponential function. The module implements the above f and f -1 , which can be easily plugged into the existing workflow of oriented object detectors to repair angular prediction. As is shown in Fig. 4, the detector needs to output angular encoding rather than angle itself when ACM works, since a consistent attribute (f , just similar to x, y, w, h) for both box and object can never cause boundary discontinity problem. This means f (θ box ) = f (θ obj ), which is equivalent to f (θ obj mod π) = f (θ obj ) due to Eq. ( 1), so f (0) = f (π). To recover the unique angular value for box, f needs to be reversible at least in [0, π). This also means that f is continuous over [0, π], and it will causes a many-to-one correspondences not only at the interval boundary but also at other points according to Rolle's theorem.\nThe contradiction implies it impossible to find a eligible f . 
However, the \"impossibility\" mentioned above applies only to the most common case, in which f belongs to the real number domain R. When we broaden our perspective to the complex number domain C, the miracle occurs even without any bells and whistles. We achieve the goal of a reversible transformation with the simplest yet most classic complex transformation, i.e., the complex-exponential transformation:

$$z = f(\theta) = e^{j\omega\theta} \tag{5}$$

$$\theta = f^{-1}(z) = -\frac{j}{\omega}\ln z \tag{6}$$

where z ∈ C is the encoded value, j is the imaginary unit, and ω ∈ R+ is the angular frequency. Since decoding with Eq. (6) can recover a unique angle only within a single cycle on the complex plane, the range of ωθ, namely [0, ωπ), must be contained in [0, 2π), so it is necessary that ω ≤ 2. To determine the appropriate ω, we examine the relationship f_box ∼ f_obj:

$$f_{box} = e^{j\omega\theta_{box}} = e^{j\omega(\theta_{obj} \bmod \pi)} = \begin{cases} e^{j\omega\theta_{obj}}, & \theta_{obj} \in [0, \pi) \\ e^{j\omega\theta_{obj}} \cdot e^{-j\omega\pi}, & \theta_{obj} \in [\pi, 2\pi) \end{cases} \tag{7}$$

From this formula we find that: 1) when ω = 2, Eq. (7) simplifies to the straightforward f_box = f_obj, so f becomes a consistent attribute for both box and object, which is perfectly in line with our design goal; 2) when ω = 1, Eq. (7) simplifies to f_obj · sign(π - θ_obj), so f_box and f_obj have a simple relationship but one that still contains break points; 3) when ω ≠ 2 and ω ≠ 1, e^{-jωπ} is no longer a real factor, which makes Eq. (7) difficult to simplify and f_box ∼ f_obj difficult to analyze. To sum up, we finally choose ω = 2 in ACM. More details are provided in the supplementary materials." }, { "figure_ref": [ "fig_3" ], "heading": "Loss Functions", "publication_ref": [], "table_ref": [], "text": "As shown in Fig. 4, given a batch of images, the detector outputs the classification c_p, position (x_p, y_p), scale (w_p, h_p), and angular encoding f_p, and the corresponding ground truth is c_t, (x_t, y_t), (w_t, h_t), and θ_t. First, we calculate the loss of the angular encoding in ACM:

$$L_{acm} = \ell_{smooth\_l1}\big(f_p,\, f_t\big) \tag{8}$$

Then, we jointly optimize the decoded angle θ_p = f^{-1}(f_p) together with the other parameters (abbreviated as xywh):

$$L_{box} = \ell\big(B(xywh_p, \theta_p),\, B(xywh_t, \theta_t)\big) \tag{9}$$

where ℓ ∈ {ℓ_riou, ℓ_kld, ℓ_gwd, ...}. In addition, we calculate the classification loss:

$$L_{cls} = \ell_{focal}\big(c_p,\, c_t\big) \tag{10}$$

Finally, the total loss is as follows, where λ_box and λ_acm are coefficients that balance the parts of the loss:

$$L = L_{cls} + \lambda_{box} L_{box} + \lambda_{acm} L_{acm} \tag{11}$$

By default, we set λ_box = 1 and λ_acm = 0.2 in the experiments." }, { "figure_ref": [], "heading": "Differences With Other Coding Methods", "publication_ref": [ "b13", "b48" ], "table_ref": [], "text": "The difference between ACM and vanilla joint-optim encoding methods is self-evident, hence the focus here is primarily on independent-optim encoding methods, especially PSC [41], which is a continuous encoding like ACM; nevertheless, ACM and PSC are distinct under any circumstances.
Datasets. HRSC2016 [14] contains images from two scenarios, with ships on the sea and close inshore. The training, validation and testing sets include 436, 181 and 444 images, with image sizes ranging from 300 × 300 to 1500 × 900. We adjust the long side of each image to a fixed size (640 pixels) and keep the original aspect ratio for training and testing. UCAS-AOD [49] contains two categories, Car and Plane, and includes 1,510 aerial images of about 659 × 1,280 pixels with 14,596 instances in total. We randomly select 1,110 images for training and 400 for testing. Finally, we adopt the same data processing strategy as for HRSC2016."
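To make the ACM encoding and the dual-optimization loss of Eqs. (8)-(11) concrete, the following is a minimal PyTorch-style sketch, not the authors' released implementation: the names (acm_encode, acm_decode, dual_optim_loss) are our own, the complex exponential e^{jωθ} is represented by its real pair (cos ωθ, sin ωθ) with ω = 2, and the joint term can be any differentiable IoU-like loss (KLD, GWD, SkewIoU) supplied by the caller.

```python
import torch
import torch.nn.functional as F

OMEGA = 2.0  # angular frequency chosen in the paper for rectangular boxes

def acm_encode(theta):
    """f(theta) = e^{j*omega*theta}, represented as a real 2-vector (cos, sin)."""
    return torch.stack((torch.cos(OMEGA * theta), torch.sin(OMEGA * theta)), dim=-1)

def acm_decode(z):
    """f^{-1}(z): recover theta in [0, pi) from the encoded 2-vector."""
    theta = torch.atan2(z[..., 1], z[..., 0]) / OMEGA   # in (-pi/omega, pi/omega]
    return torch.remainder(theta, torch.pi)             # wrap into [0, pi)

def dual_optim_loss(pred_xywh, pred_z, gt_xywh, gt_theta,
                    joint_loss_fn, lambda_box=1.0, lambda_acm=0.2):
    """Sketch of Eqs. (8)-(11): supervise the angular encoding directly, then
    blend the decoded angle with x, y, w, h through a caller-supplied joint loss."""
    # L_acm (Eq. 8): smooth L1 between predicted and ground-truth encodings
    loss_acm = F.smooth_l1_loss(pred_z, acm_encode(gt_theta))
    # L_box (Eq. 9): decode the angle and feed it to an IoU-like joint loss
    pred_theta = acm_decode(pred_z)
    loss_box = joint_loss_fn(pred_xywh, pred_theta, gt_xywh, gt_theta)
    return lambda_box * loss_box + lambda_acm * loss_acm
```

The classification branch is omitted from the sketch, since it is the standard focal loss of Eq. (10).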
}, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b9", "b41" ], "table_ref": [], "text": "Evaluation Metric. Methods are evaluated using the standard COCO style Average Precision (AP) [10], which is the convention throughout the field of object detection. It is worth noting that AP 75 is gradually replacing AP 50 as the most reliable metric for oriented object detection due to AP 75 's higher sensitivity to angle deviation than AP 50 . Following mainstream works [36,42] " }, { "figure_ref": [], "heading": "KFIoU", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "CM-KFIoU", "publication_ref": [], "table_ref": [], "text": "Figure 5. Visualized comparison of detection results between KFIoU [38] and enhanced ACM-KFIoU. The images are arranged from left to right in order of increasing aspect-ratio of objects, and the first-col and bottom-col are the results of KFIoU and ACM-KFIoU, respectively. Our ACM greatly eliminates the angular prediction errors in the original KFIoU." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "Different encoding length. To comprehensively compare the costs and performance of various encoding methods, we conducted experiments as Tab. 1. The results indicate that CSL relies on relatively long encoding, while the choice of coding length for PSC is challenging. Compared with these methods, ACM achieves the best performance without the need to choose the encoding length, which is fixed to 2. Different encoding mode. To validate our motivation, that is, implicit encoding is superior to explicit encoding, we conducted comparative experiments based on various encoding methods as Tab. 2. The results show that, even for the identical encoding methods, performance is notably weaker when predicting the angle itself (explicit) rather than angular encoding (implicit). This is an observation that has long been overlooked, yet it points towards a viable direction for addressing boundary discontinuity problem in the future. Different supervision. To validate the necessity of each supervision of proposed dual-optimization, we conducted experiments as Tab. 3. The results show that, compared with full dual-optimization, performance decreased (yet still surpass baseline) after the removal of refine supervision, but there was a severe decline in performance after the removal of correct supervision. We think it is difficult to ensure that predicted frequencies align with the preset decoded frequencies if without explicit supervision, and such mismatch will make decoded angles fall into a wrong range, so performance degenerates. By the way, since the size of DOTA dataset is much larger than HRSC2016 dataset, the mismatch can get some mitigation, but just like a drop in the ocean.\nDifferent joint-optim loss on various datasets. To eliminate the influence of the classification branch on the detection results, we conducted experiments on the datasets comprising simple scenes with only single-category objects per image. The HRSC2016 dataset contains large aspect ratio ships, and the UCAS-AOD dataset contains rectangle cars and square-like planes. As is shown in Tab. 4, both AP 50 and AP 75 get significant improvement from ACM on HRSC2016(Ship) and UCAS-AOD(Car).\nIt is worth noting that the improvement of AP 50 are negligible on the UCAS-AOD(Plane) dataset, while the improvement of AP 75 are tremendous. 
This is no accident, and the reasons include: 1) for square-like objects, the IoU is always over 0.5 regardless of the angle of the predicted box, making AP 50 insensitive to square-like objects; 2) when a square-like box is converted to a 2D Gaussian distribution, the distribution is completely symmetric, like a circle, which makes it impossible for the methods based on 2D Gaussian distributions (GWD, KLD, KFIoU) to accurately predict the angle of square-like objects. Since our ACM is friendly to square-like objects, it greatly improves these Gaussian-based baseline methods.
To explore performance in more general cases, we conducted experiments on a dataset comprising complex scenes with multi-category objects in each image: the DOTA dataset contains a considerable number of categories and diverse environments. The experimental results in Tab. 4 show that the AP 75 of all IoU-like methods is improved, by 6.99% (GWD), 7.72% (KLD), 14.38% (KFIoU) and 4.37% (SkewIoU), after the ACM module is used. We also find, somewhat unexpectedly, that after the ACM enhancement both the Gaussian-based losses and the SkewIoU loss become very close in terms of AP 50 (73.71%, 73.95%, 74.51%, 74.21%) and AP 75 (41.97%, 42.97%, 40.49%, 42.83%), indicating that the primary distinction between them lies in their optimization capability for the angle. Visualized results. We provide some visualization results on the DOTA-v1.0 dataset in Fig. 5. From the detection results of the KFIoU-based detector, we select some cases of poor angular prediction. The KFIoU results sometimes show slight angular deviations of the boxes and at other times significant angular errors. Fortunately, most angular errors are corrected in the results of ACM-KFIoU. It is also worth noting that ACM handles the square-like-object case, where KFIoU based on the 2D Gaussian distribution fails at angular prediction. This is consistent with the quantitative results in Tab. 4 and further verifies the effectiveness of our method." }, { "figure_ref": [], "heading": "Comparison with the State-of-the-Art", "publication_ref": [], "table_ref": [], "text": "Tab. 5 presents a comprehensive comparison of recent detectors on the DOTA-v1.0 dataset. It is important to note that the performance of different methods may vary due to several factors, including image resolution, network architecture, detection framework, training strategies, and the various optimization techniques employed. In light of these variations, it is challenging to establish completely fair comparisons among the different approaches. Despite these challenges, our method achieves competitive results, at around 78.53% / 79.45% on AP 50 ." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we experimentally find that widely used IoU-like methods do not actually solve the well-known boundary discontinuity problem. On further analysis, we find that the key to the solution lies in the encoding mode of the smoothing function rather than in joint or independent optimization. Moreover, we propose a dual-optimization paradigm integrated with complex-exponential angular coding, which achieves the objectives of both correcting the angular boundary and blending the angle with the other parameters. Finally, extensive experiments show that our method effectively eliminates the boundary problem and significantly improves the detection performance of the object detector."
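As a small numerical check of the claim in the ablation discussion that the IoU of square-like objects stays above 0.5 regardless of the predicted angle, the snippet below (an illustrative sketch using the shapely library, not part of the paper's code base) rotates a unit square against itself; the minimum IoU, about 0.707 = 1/√2, occurs at 45°.

```python
import math
from shapely.geometry import Polygon
from shapely import affinity

def square(cx=0.0, cy=0.0, s=1.0):
    """Axis-aligned square of side s centered at (cx, cy)."""
    h = s / 2.0
    return Polygon([(cx - h, cy - h), (cx + h, cy - h), (cx + h, cy + h), (cx - h, cy + h)])

def rotated_iou(poly_a, poly_b):
    """Exact IoU between two (possibly rotated) convex polygons."""
    return poly_a.intersection(poly_b).area / poly_a.union(poly_b).area

base = square()
for deg in (0, 15, 30, 45, 60, 90):
    rot = affinity.rotate(base, deg, origin="center")
    print(deg, round(rotated_iou(base, rot), 3))
# IoU stays >= 1/sqrt(2) ~ 0.707 for every rotation; the minimum occurs at 45 degrees,
# so an arbitrarily wrong predicted angle on a square still clears the 0.5 threshold of AP50.
```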
}, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Mathematical Meaning of Polar Mapping", "publication_ref": [], "table_ref": [], "text": "In this perspective, encoding corresponds to the Cartesian coordinates of a unit vector, while decoding corresponds to the polar-coordinate representation of the same unit vector. As shown in Fig. 6a, given a vector with polar angle ϕ and radius (length) ρ in 2-dimensional space, it can be decomposed as (ρ cos(ϕ), ρ sin(ϕ)) in Cartesian coordinates. When the radius ρ is fixed and ϕ is considered only within a single period, the polar angle and the Cartesian coordinates are in one-to-one correspondence. Therefore, even leaving aside the complex-number interpretation, we can utilize this relationship to design f and obtain the corresponding f^{-1}, as shown in Fig. 6b. In contrast, PSC coding [41] does not have such a clear mathematical meaning, so its encoding-length hyperparameter needs to be determined experimentally. To determine the appropriate ω, we discuss the relationship f_box ∼ f_obj as follows:

$$f_{box} = e^{i\omega\theta_{box}} = e^{i\omega(\theta_{obj} \bmod \pi)} = \begin{cases} e^{i\omega\theta_{obj}}, & \theta_{obj} \in [0, \pi) \\ e^{i\omega\theta_{obj}} \cdot e^{-i\omega\pi}, & \theta_{obj} \in [\pi, 2\pi) \end{cases} \tag{16}$$

1) When ω = 2, e^{-iωπ} = 1, then

$$f_{box} = e^{i\omega\theta_{box}} = \begin{cases} e^{i\omega\theta_{obj}}, & \theta_{obj} \in [0, \pi) \\ e^{i\omega\theta_{obj}}, & \theta_{obj} \in [\pi, 2\pi) \end{cases} = f_{obj} \tag{17}$$

2) When ω = 1, e^{-iωπ} = -1, then

$$f_{box} = e^{i\omega\theta_{box}} = \begin{cases} e^{i\omega\theta_{obj}}, & \theta_{obj} \in [0, \pi) \\ -e^{i\omega\theta_{obj}}, & \theta_{obj} \in [\pi, 2\pi) \end{cases} = \begin{cases} f_{obj}, & \theta_{obj} \in [0, \pi) \\ -f_{obj}, & \theta_{obj} \in [\pi, 2\pi) \end{cases} = f_{obj} \cdot \operatorname{sign}(\pi - \theta_{obj}) \tag{18}$$

Through further derivation of the formula, we find that: 1) when ω = 2, Eq. (16) simplifies to the straightforward f_box = f_obj, so f becomes a consistent attribute for both box and object, which is perfectly in line with our design goals; 2) when ω = 1, Eq. (16) simplifies to f_obj · sign(π - θ_obj), so f_box and f_obj have a simple relationship but one that still contains break points; 3) when ω ≠ 2 and ω ≠ 1, e^{-iωπ} is no longer a real factor, which makes Eq. (16) difficult to simplify and f_box ∼ f_obj difficult to analyze. To sum up, we finally choose ω = 2 in ACM." }, { "figure_ref": [ "fig_5", "fig_5", "fig_5", "fig_5" ], "heading": "Perspective 2: Polar Mapping", "publication_ref": [], "table_ref": [], "text": "From the perspective of polar-coordinate mapping, ACM has a clearer mathematical meaning and a simple real-number expression, so we can carry out a more direct analysis. Although the individual dimensions cos(ωθ) and sin(ωθ) are each many-to-one, combining them achieves a one-to-one effect in a higher dimension, making f a reversible transformation. Since polar-coordinate decoding can recover a unique angle only within a single cycle, the range of ωθ, namely [0, ωπ), must be contained in [0, 2π), so it is necessary that ω ≤ 2.
With the encoding operation, the original relationship θ_box ∼ θ_obj (Fig. 7a) becomes f_box ∼ θ_obj (Fig. 7b), where f_box is the result of the encoding function f(·) applied to θ_box. Thus, the waveform of f_box ∼ θ_obj over [0, 2π) is equivalent to repeating the encoded sin/cos waveform over [0, π) twice, due to the sawtooth wave of θ_box ∼ θ_obj.
Obviously, the main issue with ω > 2 (e.g. ω = 4) lies in the incomplete decoding range, which has a serious impact on angular prediction. Within the valid angular frequency range, only when ω = 2 are both encoding components smooth (continuous and with a continuous gradient) at θ_obj = π. This indeed completely eliminates the break points in the components and thus completely solves the boundary problem; otherwise (ω ≠ 2), there are always break points in the components.
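The statement above can also be checked numerically. The short NumPy sketch below (our own illustration, not the paper's code) encodes the box angle θ_box = θ_obj mod π just below and just above the θ_obj = π boundary and prints the jump of each encoding component for ω = 1, 2, 4.

```python
import numpy as np

def encode(theta_box, omega):
    # f(theta) = (cos(omega*theta), sin(omega*theta)) applied to the *box* angle
    return np.array([np.cos(omega * theta_box), np.sin(omega * theta_box)])

eps = 1e-6
for omega in (1, 2, 4):
    # object angles just below/above pi map to box angles via theta_box = theta_obj mod pi
    below = encode((np.pi - eps) % np.pi, omega)
    above = encode((np.pi + eps) % np.pi, omega)
    jump = np.abs(below - above)
    print(f"omega={omega}: |d_cos|={jump[0]:.3f}  |d_sin|={jump[1]:.3f}")

# omega=1: |d_cos| ~ 2 (break point), |d_sin| ~ 0 (continuous)
# omega=2: both components ~ 0, so no break point for rectangular boxes
# omega=4: both ~ 0 at pi, but decoding can no longer cover the full [0, pi) range
```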
In particular, when ω = 1, although the cosinoidal component still contains a break point, the sinusoidal component does not, which is equivalent to partially solving the problem. Therefore, the decoded angle waveform is significantly closer to the perfect sawtooth wave than the original prediction (Fig. 7a), but a gap remains compared with ω = 2.
By comparing the predictions (dashed lines) with the ground truth (solid lines) in Fig. 7b (top), we can see that once the ground truth of the wrapped value contains break points, its prediction becomes significantly worse, and so does the corresponding unwrapped angle. Finally, only ω = 2 makes f continuous, differentiable, and reversible for rectangular objects, and it is therefore the optimal choice." }, { "figure_ref": [], "heading": "Perspective 3: Experiments", "publication_ref": [], "table_ref": [], "text": "Although we have argued from two different perspectives that ω = 2 is the more appropriate angular frequency, experiments remain the most direct perspective. We conducted experiments with angular frequencies ω = 1, 2, 4, as shown in Tab. 6. Compared with the original KFIoU [38], the version enhanced with ACM (ω = 1) obtains a remarkable improvement, since the sinusoidal component of the angle decomposition has no break points for rectangles. This is consistent with the phenomenon (the smaller distortion area) observed in the en/decoding waveform diagrams. Moreover, ACM (ω = 2) eliminates all break points in both components of the angle decomposition for rectangles, so it achieves an even greater improvement. Due to its inability to unwrap the full angular range for rectangles, ACM (ω = 4) exhibits severe performance degradation, especially on the HRSC2016 dataset, which consists entirely of large-aspect-ratio ships. When the fusion of two angular frequencies (ω = 2, 4; details in Sec. 8.5 below) is adopted, the results for the large-aspect-ratio objects of HRSC2016 are hardly affected compared with ω = 2, while the results on the DOTA dataset improve slightly. This is because the DOTA dataset contains both large-aspect-ratio objects and square-like objects. Overall, AP (especially AP 75 ) benefits greatly from ACM, which verifies our analysis. In the following experiments, we adopt the mixed angular frequencies (ω = 2, 4)." }, { "figure_ref": [ "fig_5" ], "heading": "Extension to Square-like Object", "publication_ref": [], "table_ref": [], "text": "When the object's width and height are close to each other, the bounding box turns from a rectangle into a square-like shape, which possesses stronger symmetry and causes the period of θ_box to shrink from π to π/2. As a result, break points occur at more locations (i.e., π/2, π, and 3π/2). In this case, if we continue to use the Angle Correct Module proposed in the previous section, we should set ω to 4 accordingly, as shown in Figure 7b (bottom). It is worth noting that when ω = 2, break points still exist in f_x at θ_obj = π/2, π, 3π/2, while f_y, although continuous, suffers from gradient break points at these positions, which is similar to the case of ω = 1 for the rectangle." }, { "figure_ref": [], "heading": "Generalization for Varied Aspect Ratio", "publication_ref": [ "b49" ], "table_ref": [], "text": "Considering that an actual scene contains both square-like and rectangular objects, we attempt to use wrapped values with two frequencies (denoted as f^(ω), where ω = 2, 4) simultaneously and fuse the unwrapped results to obtain a more accurate angular prediction.
Similar strategies can also be found in previous work [41,50]. For boxes rotated within [0, π/2), both f^(2) and f^(4) unwrap to the correct angle. For boxes rotated within [π/2, π), f^(2) still unwraps correctly, while the angles unwrapped from f^(4) are offset by one decoding period π/2 so that they fall in [0, π/2). Therefore, ideally the difference between θ^(2) ∈ [0, π) and θ^(4) ∈ [0, π/2) can only be 0 or π/2, and this only affects rectangles (T = π), not square-like boxes (T = π/2), in both the training and inference phases. Note that f^(2) suffers from break points only for square-like boxes rather than rectangles, and f^(4) is immune to break points for both rectangles and square-like boxes but fails to determine the correct period range of the angle. Thus we can use the coarse θ^(2) to correct the period range of the fine θ^(4) as follows, where the relaxed conditions outside the parentheses are adopted in practice due to the independent errors of f^(2) and f^(4). Finally, we use this fusion strategy to adapt to objects with varied aspect ratios.

$$\theta = \begin{cases} \theta^{(4)} + \frac{\pi}{2}, & \text{if } \theta^{(2)} - \theta^{(4)} > \frac{\pi}{4} \;\big(\theta^{(2)} - \theta^{(4)} = \frac{\pi}{2}\big) \\ \theta^{(4)}, & \text{if } \theta^{(2)} - \theta^{(4)} \le \frac{\pi}{4} \;\big(\theta^{(2)} = \theta^{(4)}\big) \end{cases} \tag{19}$$

where the inequality condition (e.g. θ^(2) - θ^(4) > π/4) is simply a relaxed version of the equality condition (e.g. θ^(2) - θ^(4) = π/2). The latter is the judgment condition in the ideal case, while the former is adopted in practice because it brings better numerical stability." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work is supported by the National Key R&D Program of China (2022YFD2001601), the National Natural Science Foundation of China (62372433, 62072438, 61931008, U21B2024, 62071415), the Zhejiang Provincial Natural Science Foundation of China (LDT23F01011F01, LDT23F01015F01, LDT23F01014F01), and the \"Pioneer\" and \"Leading Goose\" R&D Program of Zhejiang Province (2022C01068)." }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "7. Another understanding of ACM
7.1. From Complex Function to Polar Mapping
For the complex-exponential-based encoding proposed in the main text, leveraging Euler's formula allows its transformation into a polar-coordinate mapping:

$$z = e^{j\omega\theta} = \cos(\omega\theta) + j\sin(\omega\theta), \qquad \theta = \frac{1}{\omega}\,\mathrm{arctan2}(z_{im},\, z_{re})$$

where ω ∈ R+ is the angular frequency, arctan2 is the version of arctan with quadrant assignment, and z_re, z_im are the real and imaginary parts of the complex coding z ∈ C, respectively. By hiding the complex mark of the encoding, we can regard it as a 2D polar-coordinate encoding:

$$z = (z_x,\, z_y) = \big(\cos(\omega\theta),\, \sin(\omega\theta)\big), \qquad \theta = \frac{1}{\omega}\,\mathrm{arctan2}(z_y,\, z_x)$$

where ω ∈ R+ is still the angular frequency, arctan2 is again the quadrant-aware arctan, and z_x, z_y are the x-axis and y-axis components of the 2D vector z ∈ R², respectively. This form is similar to the continuous PSC encoding [41], but note that PSC cannot perform the above conversion." } ]
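As a minimal sketch of the dual-frequency fusion of Eq. (19) (ours, with assumed variable names): each encoding is decoded within its own period, and the coarse ω = 2 angle decides whether the fine ω = 4 angle needs the additional π/2 offset, using the relaxed inequality from the text for numerical stability.

```python
import numpy as np

def decode(z, omega):
    """Recover the angle in [0, 2*pi/omega) from the 2-vector (cos, sin) encoding."""
    return np.arctan2(z[1], z[0]) % (2 * np.pi) / omega

def fuse(z2, z4):
    """Eq. (19): combine the coarse (omega=2) and fine (omega=4) decoded angles."""
    theta2 = decode(z2, 2)           # coarse angle in [0, pi)
    theta4 = decode(z4, 4)           # fine angle in [0, pi/2)
    if theta2 - theta4 > np.pi / 4:  # relaxed form of theta2 - theta4 == pi/2
        return theta4 + np.pi / 2
    return theta4

# example: a box rotated by 100 degrees is recovered from its two encodings
theta = np.deg2rad(100)
z2 = np.array([np.cos(2 * theta), np.sin(2 * theta)])
z4 = np.array([np.cos(4 * theta), np.sin(4 * theta)])
print(np.rad2deg(fuse(z2, z4)))      # ~100.0
```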
Oriented object detection has developed rapidly in the past few years, and rotation equivariance is crucial for detectors to predict rotated boxes. The prediction is expected to maintain the corresponding rotation when objects rotate, but a severe mutation in angular prediction is sometimes observed when objects rotate near the boundary angle, which is the well-known boundary discontinuity problem. The problem has long been believed to be caused by the sharp loss increase at the angular boundary, and the widely used joint-optim IoU-like methods deal with it by loss-smoothing. However, we experimentally find that even state-of-the-art IoU-like methods actually fail to solve the problem. On further analysis, we find that the key to the solution lies in the encoding mode of the smoothing function rather than in joint or independent optimization. In existing IoU-like methods, the model essentially attempts to fit the angular relationship between box and object, where the break point at the angular boundary makes the predictions highly unstable. To deal with this issue, we propose a dual-optimization paradigm for angles. We decouple reversibility and joint optimization from a single smoothing function into two distinct entities, which for the first time achieves the objectives of both correcting the angular boundary and blending the angle with other parameters. Extensive experiments on multiple datasets show that the boundary discontinuity problem is well addressed. Moreover, typical IoU-like methods are improved to the same level without an obvious performance gap.
Rethinking Boundary Discontinuity Problem for Oriented Object Detection
[ { "figure_caption": "Figure 1 .1Figure 1. Two optimization paradigms for angle in oriented object detection: (a) in joint-optim methods [35, 36, 38, 45], smoothing function is explicitly applied for detector's output θp during loss calculation; (b) while in independent-optim methods [31, 41], smoothing function is implicitly embedded in the model, and θp is decoded from detector's output fp. According to our analysis, only the latter can really solve boundary discontinuity problem.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Box ̸ = Object: (a) objects rotated with 45 • and 225 • [colorful mark] share the same box rotated with 45 • [black mark], which causes (b) the relationship [blue line] between angle of box and object to become a piecewise function with a breakpoint [gray region], differing from the (position, scale). Not only is the prediction [red line] of the breakpoint region mutational, but the prediction of other regions also becomes fluctuant.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "[45], f and ℓ are implicit functions derived from SkewIoU (θ xywh p , θ xywh t", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Overview of proposed Dual-Optimization paradigm and ACM-Coder. The detector outputs angular ACM-encoding fp, subject to explicit supervision. On this basis, another IoU-like loss based on joint-encoded g(•) is applied onto ACM-decoded angle f -1 (fp). The paradigm achieves the objectives of both correcting angular boundary and blending parameters.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Based on (a) polar coordinate decomposition, we define (b) a 2-dimensional wrapping function f (θ).", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Waveform analysis: (a) original angular relationship between box and object. (b) based on polar-coordinate mapping, we define a 2-dimensional en/decoding function f (•). It will compound onto the original sawtooth wave θ box = θ obj (mod π), and exhibits different effect for rectangular(top, T = π) & square-like(bottom, T = π2 ) objects when angular frequency ω = 1, 2, 4. The target and prediction are marked as solid line and dash line, respectively. Areas of inaccurate angular prediction are highlighted in gray. The optimal angular frequency for rectangular & square-like objects is 2 and 4, respectively.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Ablation study of different encoding length.", "figure_data": "MethodEncoding LengthHRSC2016 AP 50 AP 75Direct188.26 62.95319.50 2.56CSL60 9048.96 13.58 90.49 61.4318090.53 77.76389.91 79.20PSC20 6090.55 79.54 90.62 79.8618090.56 79.51ACM290.57 86.33on independently-optim encoding methods, especially PSC[41], which is a continuous encoding as ACM. ACM andPSC are distinct under any circumstances. 
Basically, the en-coding formula of PSC is {cos(ωθ+φ", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation study of different encoding mode.", "figure_data": "MethodEncoding ModeHRSC2016 AP 50 AP 75 AP 50 AP 75 DOTA-v1.0Directn/a88.26 62.95 71.97 26.11CSLimplicit explicit90.53 77.76 70.83 38.71 6.06 1.05 33.29 10.90PSCimplicit explicit89.91 79.20 71.41 39.35 53.43 33.65 50.02 23.08ACMimplicit explicit90.57 86.33 74.99 41.44 54.66 31.45 50.67 19.91", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study of different supervision.", "figure_data": "Model L acm L box OutputHRSC2016 AP 50 AP 75 AP 50 AP 75 DOTAθn/an/a88.26 62.95 71.97 26.11✓90.57 86.33 74.99 41.44f p✓37.37 13.98 54.67 19.67✓✓90.47 88.33 74.21 42.83", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": ", we adopt AP 75 as main metric, while AP 50 is auxiliary metric.", "figure_data": "Training Details. All approaches are implemented in Py-Torch, and training is done on NVIDIA RTX 3090 GPUs.We choose the anchor-free method CenterNet [48] to buildthe rotated detector and ImageNet pretrained ResNet-50 [8]as the backbone. The network is optimized by Adam for 140epochs with the learning rate dropped by 10× at 100 and 130epochs. As the DOTA dataset takes a large image resolution", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation study on various datasets with different joint-optim loss. Base detector is CenterNet. (+5.69) 86.71 (+24.84) 88.69 (+1.44) 29.15 (+0.69) 90.35 (+0.01) 76.00 (+37.78) 73.71 (+0.59) 41.97 (+6.99) (+0.54) 87.45 (+8.16) 88.76 (+1.22) 30.40 (+0.41) 90.39 (+0.06) 75.65 (+46.46) 73.95 (+0.54) 42.97 (+7.72) (+2.29) 87.77 (+24.82) 88.31 (+2.57) 34.81 (+10.37) 90.40 (+0.06) 74.48 (+57.67) 74.51 (+2.54) 40.49 (+14.38) SkewIoU 90.47 (+1.08) 88.33 (+11.09) 88.27 (+0.54) 29.13 (+1.74) 90.37 (+0.03) 75.13 (+11.49) 74.21 (+0.59) 42.83 (+4.37)", "figure_data": "MethodHRSC2016 (Ship) AP 50 AP 75UCAS-AOD (Car) AP 50 AP 75UCAS-AOD (Plane) AP 50 AP 75DOTA-v1.0 AP 50 AP 75GWD [35]84.9461.8787.2528.4690.3438.2273.1234.98ACM-GWD 90.63 KLD [36] 90.0179.2987.5429.9990.3329.1973.4135.25ACM-KLD 90.55 KFIoU [38] 88.2662.9585.7424.4490.3416.8171.9726.11ACM-KFIoU 90.55 SkewIoU [45] 89.3976.4387.7327.5990.3463.6473.6238.01ACM-", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance of different detectors on DOTA-v1.0 dataset. MS indicates that multi-scale training/testing is used. .70 24.10 60.20 38.30 64.40 64.80 90.90 77.20 70.40 46.50 37.10 57.10 61.90 64.00 60.50 RoI-Trans. 
[4] ✓ 88.64 78.52 43.44 75.92 68.81 73.68 83.59 90.74 77.27 81.46 58.39 53.54 62.83 58.93 47.67 69.56 O 2 -DNet [26] ✓ 89.31 82.14 47.33 61.21 71.32 74.03 78.62 90.76 82.23 81.36 60.93 60.17 58.21 66.98 61.03 71.04 .50 53.84 74.78 80.77 82.81 88.92 90.82 87.18 86.53 64.09 66.27 77.51 79.62 69.57 78.53 RoI-Trans.-ACM ✓ 85.55 80.53 61.21 75.40 80.35 85.60 88.32 89.88 87.13 87.10 68.15 67.94 78.75 79.82 75.96 79.45", "figure_data": "MethodMSPLBDBRGTFSVLVSHTCBCSTSBFRAHASPHCAP 50PIoU [1] 80.90 69DAL [18] ✓ 88.61 79.69 46.27 70.37 65.89 76.10 78.53 90.84 79.98 78.41 58.71 62.02 69.23 71.32 60.65 71.78P-RSDet [47]✓88.58 77.83 50.44 69.29 71.10 75.79 78.66 90.88 80.10 81.71 57.92 63.03 66.30 69.77 63.13 72.30BBAVectors [39]✓88.35 79.96 50.69 62.18 78.43 78.98 87.94 90.85 83.58 84.35 54.13 60.24 65.22 64.28 55.70 72.32DRN [19]✓89.71 82.34 47.22 64.10 76.22 74.43 85.84 90.57 86.18 84.89 57.65 61.93 69.30 69.63 58.48 73.23CFC-Net [16]✓89.08 80.41 52.41 70.02 76.28 78.11 87.21 90.89 84.47 85.64 60.51 61.52 67.82 68.02 50.09 73.50Gliding Vertex [30]89.64 85.00 52.26 77.34 73.01 73.14 86.82 90.74 79.02 86.81 59.55 70.91 72.94 70.86 57.32 75.02Mask OBB [23]✓89.56 85.95 54.21 72.90 76.52 74.16 85.63 89.85 83.81 86.48 54.89 69.64 73.94 69.06 63.32 75.33CenterMap [24]✓89.83 84.41 54.60 70.25 77.66 78.32 87.19 90.66 84.89 85.27 56.46 69.23 74.13 71.56 66.06 76.03CSL [31]✓90.25 85.53 54.64 75.31 70.44 73.51 77.62 90.84 86.15 86.69 69.60 68.04 73.83 71.10 68.93 76.17R 3 Det [34]✓89.80 83.77 48.11 66.77 78.76 83.27 87.84 90.82 85.38 85.51 65.67 62.68 67.53 78.56 72.62 76.47GWD [35]✓86.96 83.88 54.36 77.53 74.41 68.48 80.34 86.62 83.41 85.55 73.47 67.77 72.57 75.76 73.40 76.30SCRDet++ [37]✓90.05 84.39 55.44 73.99 77.54 71.11 86.05 90.67 87.32 87.08 69.62 68.90 73.74 71.29 65.08 76.81KFIoU [38]✓89.46 85.72 54.94 80.37 77.16 69.23 80.90 90.79 87.79 86.13 73.32 68.11 75.23 71.61 69.49 77.35DCL [33]✓89.26 83.60 53.54 72.76 79.04 82.56 87.31 90.67 86.59 86.98 67.49 66.88 73.29 70.56 69.99 77.37RIDet [17]✓89.31 80.77 54.07 76.38 79.81 81.99 89.13 90.72 83.58 87.22 64.42 67.56 78.08 79.17 62.07 77.62PSC [41]✓89.86 86.02 54.94 62.02 81.90 85.48 88.39 90.73 86.90 88.82 63.94 69.19 76.84 82.75 63.24 78.07KLD [36]✓88.91 85.23 53.64 81.23 78.20 76.99 84.58 89.50 86.84 86.38 71.69 68.06 75.95 72.23 75.42 78.32CenterNet-ACM✓89.84 85", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation study of different encoding mode.", "figure_data": "ω = 1 ω = 2 ω = 4HRSC2016 AP50 AP75AP50DOTAAP7588.2662.9571.9726.11✓90.44 (+2.18) 78.90 (+15.95) 73.51 (+1.54) 39.29 (+13.18)✓90.58 (+2.32) 86.12 (+23.17) 73.08 (+1.11) 39.62 (+13.51)✓24.90 (-63.36) 20.82 (-42.13) 35.50 (-36.47)17.29 (-8.82)✓✓90.55 (+2.29) 87.77 (+24.82) 74.51 (+2.54) 40.49 (+14.38)8. Determination of Angular Frequency8.1. Perspective 1: Complex Function", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Hang Xu; Xinyuan Liu; Haonan Xu; Yike Ma; Zunjie Zhu; Chenggang Yan; Feng Dai
[ { "authors": "Zhiming Chen; Kean Chen; Weiyao Lin; John See; Hui Yu; Yan Ke; Cong Yang", "journal": "Springer", "ref_id": "b0", "title": "Piou loss: Towards accurate oriented object detection in complex environments", "year": "2020" }, { "authors": "George Cybenko", "journal": "Mathematics of control, signals and systems", "ref_id": "b1", "title": "Approximation by superpositions of a sigmoidal function", "year": "1989" }, { "authors": "Jian Ding; Nan Xue; Yang Long; Gui-Song Xia; Qikai Lu", "journal": "", "ref_id": "b2", "title": "Learning roi transformer for oriented object detection in aerial images", "year": "2019" }, { "authors": "Jian Ding; Nan Xue; Yang Long; Gui-Song Xia; Qikai Lu", "journal": "", "ref_id": "b3", "title": "Learning roi transformer for oriented object detection in aerial images", "year": "2019" }, { "authors": "Ross Girshick", "journal": "", "ref_id": "b4", "title": "Fast r-cnn", "year": "2015" }, { "authors": "Jiaming Han; Jian Ding; Jie Li; Gui-Song Xia", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b5", "title": "Align deep features for oriented object detection", "year": "2021" }, { "authors": "Jiaming Han; Jian Ding; Nan Xue; Gui-Song Xia", "journal": "", "ref_id": "b6", "title": "Redet: A rotation-equivariant detector for aerial object detection", "year": "2021" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b7", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Yingying Jiang; Xiangyu Zhu; Xiaobing Wang; Shuli Yang; Wei Li; Hua Wang; Pei Fu; Zhenbo Luo", "journal": "", "ref_id": "b8", "title": "R2cnn: rotational region cnn for orientation robust scene text detection", "year": "2017" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b9", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b10", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Tsung-Yi Lin; Priya Goyal; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b11", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Xinyuan Liu; Hang Xu; Bin Chen; Qiang Zhao; Yike Ma; Chenggang Yan; Feng Dai", "journal": "", "ref_id": "b12", "title": "Sph2pob: Boosting object detection on spherical images with planar oriented boxes methods", "year": "2023" }, { "authors": "Zikun Liu; Liu Yuan; Lubin Weng; Yiping Yang", "journal": "", "ref_id": "b13", "title": "A high resolution optical satellite image dataset for ship recognition and some new baselines", "year": "2017" }, { "authors": "Zhou Lu; Hongming Pu; Feicheng Wang; Zhiqiang Hu; Liwei Wang", "journal": "Advances in neural information processing systems", "ref_id": "b14", "title": "The expressive power of neural networks: A view from the width", "year": "2017" }, { "authors": "Qi Ming; Lingjuan Miao; Zhiqiang Zhou; Yunpeng Dong", "journal": "", "ref_id": "b15", "title": "Cfc-net: A critical feature capturing network for arbitraryoriented object detection in remote sensing images", "year": "2021" }, { "authors": "Qi Ming; Lingjuan Miao; Zhiqiang Zhou; Xue Yang; Yunpeng Dong", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b16", "title": "Optimization for 
arbitrary-oriented object detection via representation invariance loss", "year": "2021" }, { "authors": "Qi Ming; Zhiqiang Zhou; Lingjuan Miao; Hongwei Zhang; Linhao Li", "journal": "", "ref_id": "b17", "title": "Dynamic anchor learning for arbitraryoriented object detection", "year": "2021" }, { "authors": "Xingjia Pan; Yuqiang Ren; Kekai Sheng; Weiming Dong; Haolei Yuan; Xiaowei Guo; Chongyang Ma; Changsheng Xu", "journal": "", "ref_id": "b18", "title": "Dynamic refinement network for oriented and densely packed object detection", "year": "2020" }, { "authors": "Kaiming Shaoqing Ren; Ross He; Jian Girshick; Sun", "journal": "", "ref_id": "b19", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "Hamid Rezatofighi; Nathan Tsoi; Junyoung Gwak; Amir Sadeghian; Ian Reid; Silvio Savarese", "journal": "", "ref_id": "b20", "title": "Generalized intersection over union: A metric and a loss for bounding box regression", "year": "2019" }, { "authors": "Hao Wang; Zhanchao Huang; Zhengchao Chen; Ying Song; Wei Li", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b21", "title": "Multigrained angle representation for remotesensing object detection", "year": "2022" }, { "authors": "Jinwang Wang; Jian Ding; Haowen Guo; Wensheng Cheng; Ting Pan; Wen Yang", "journal": "Remote Sensing", "ref_id": "b22", "title": "Mask obb: A semantic attentionbased mask oriented bounding box representation for multicategory object detection in aerial images", "year": "2019" }, { "authors": "Jinwang Wang; Wen Yang; Heng-Chao Li; Haijian Zhang; Gui-Song Xia", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b23", "title": "Learning center probability map for detecting objects in aerial images", "year": "2020" }, { "authors": "Jian Wang; Fan Li; Haixia Bi", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b24", "title": "Gaussian focal loss: Learning distribution polarized angle prediction for rotated object detection in aerial images", "year": "2022" }, { "authors": "Haoran Wei; Yue Zhang; Zhonghan Chang; Hao Li; Hongqi Wang; Xian Sun", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b25", "title": "Oriented objects as pairs of middle lines", "year": "2020" }, { "authors": "Gui-Song Xia; Xiang Bai; Jian Ding; Zhen Zhu; Serge Belongie; Jiebo Luo; Mihai Datcu; Marcello Pelillo; Liangpei Zhang", "journal": "", "ref_id": "b26", "title": "Dota: A large-scale dataset for object detection in aerial images", "year": "2018" }, { "authors": "Hang Xu; Qiang Zhao; Yike Ma; Xiaodong Li; Peng Yuan; Bailan Feng; Chenggang Yan; Feng Dai", "journal": "", "ref_id": "b27", "title": "Pandora: A panoramic detection dataset for object with orientation", "year": "2022" }, { "authors": "Hang Xu; Xinyuan Liu; Qiang Zhao; Yike Ma; Chenggang Yan; Feng Dai", "journal": "", "ref_id": "b28", "title": "Gaussian label distribution learning for spherical image object detection", "year": "2023" }, { "authors": "Yongchao Xu; Mingtao Fu; Qimeng Wang; Yukang Wang; Kai Chen; Gui-Song Xia; Xiang Bai", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b29", "title": "Gliding vertex on the horizontal bounding box for multi-oriented object detection", "year": "2020" }, { "authors": "Xue Yang; Junchi Yan", "journal": "Springer", "ref_id": "b30", "title": "Arbitrary-oriented object detection with circular smooth label", "year": "2020" }, { "authors": 
"Xue Yang; Jirui Yang; Junchi Yan; Yue Zhang; Tengfei Zhang; Zhi Guo; Xian Sun; Kun Fu", "journal": "", "ref_id": "b31", "title": "Scrdet: Towards more robust detection for small, cluttered and rotated objects", "year": "2019" }, { "authors": "Xue Yang; Liping Hou; Yue Zhou; Wentao Wang; Junchi Yan", "journal": "", "ref_id": "b32", "title": "Dense label encoding for boundary discontinuity free rotation detection", "year": "2021" }, { "authors": "Xue Yang; Junchi Yan; Ziming Feng; Tao He", "journal": "", "ref_id": "b33", "title": "R3det: Refined single-stage detector with feature refinement for rotating object", "year": "2021" }, { "authors": "Xue Yang; Junchi Yan; Qi Ming; Wentao Wang; Xiaopeng Zhang; Qi Tian", "journal": "PMLR", "ref_id": "b34", "title": "Rethinking rotated object detection with gaussian wasserstein distance loss", "year": "2021" }, { "authors": "Xue Yang; Xiaojiang Yang; Jirui Yang; Qi Ming; Wentao Wang; Qi Tian; Junchi Yan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b35", "title": "Learning high-precision bounding box for rotated object detection via kullback-leibler divergence", "year": "2021" }, { "authors": "Xue Yang; Junchi Yan; Wenlong Liao; Xiaokang Yang; Jin Tang; Tao He", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b36", "title": "Scrdet++: Detecting small, cluttered and rotated objects via instance-level feature denoising and rotation loss smoothing", "year": "2022" }, { "authors": "Xue Yang; Yue Zhou; Gefan Zhang; Jirui Yang; Wentao Wang; Junchi Yan; Xiaopeng Zhang; Qi Tian", "journal": "", "ref_id": "b37", "title": "The kfiou loss for rotated object detection", "year": "2023" }, { "authors": "Jingru Yi; Pengxiang Wu; Bo Liu; Qiaoying Huang; Hui Qu; Dimitris Metaxas", "journal": "", "ref_id": "b38", "title": "Oriented object detection in aerial images with box boundary-aware vectors", "year": "2021" }, { "authors": "Jiahui Yu; Yuning Jiang; Zhangyang Wang; Zhimin Cao; Thomas Huang", "journal": "", "ref_id": "b39", "title": "Unitbox: An advanced object detection network", "year": "2016" }, { "authors": "Yi Yu; Feipeng Da", "journal": "", "ref_id": "b40", "title": "Phase-shifting coder: Predicting accurate orientation in oriented object detection", "year": "2023" }, { "authors": "Ying Zeng; Xue Yang; Qingyun Li; Yushi Chen; Junchi Yan", "journal": "", "ref_id": "b41", "title": "Ars-detr: Aspect ratio sensitive oriented object detection with transformer", "year": "2023" }, { "authors": "Yu Zheng; Danyang Zhang; Sinan Xie; Jiwen Lu; Jie Zhou", "journal": "Springer", "ref_id": "b42", "title": "Rotation-robust intersection over union for 3d object detection", "year": "2020" }, { "authors": "Zhaohui Zheng; Ping Wang; Wei Liu; Jinze Li; Rongguang Ye; Dongwei Ren", "journal": "", "ref_id": "b43", "title": "Distance-iou loss: Faster and better learning for bounding box regression", "year": "2020" }, { "authors": "Dingfu Zhou; Jin Fang; Xibin Song; Chenye Guan; Junbo Yin; Yuchao Dai; Ruigang Yang", "journal": "IEEE", "ref_id": "b44", "title": "Iou loss for 2d/3d object detection", "year": "2019" }, { "authors": "Ding-Xuan Zhou", "journal": "Applied and computational harmonic analysis", "ref_id": "b45", "title": "Universality of deep convolutional neural networks", "year": "2020" }, { "authors": "Lin Zhou; Haoran Wei; Hao Li; Wenzhe Zhao; Yi Zhang; Yue Zhang", "journal": "IEEE Access", "ref_id": "b46", "title": "Arbitrary-oriented object detection in remote sensing images based on polar 
coordinates", "year": "2020" }, { "authors": "Xingyi Zhou; Dequan Wang; Philipp Krähenbühl", "journal": "", "ref_id": "b47", "title": "Objects as points", "year": "2019" }, { "authors": "Haigang Zhu; Xiaogang Chen; Weiqun Dai; Kun Fu; Qixiang Ye; Jianbin Jiao", "journal": "IEEE", "ref_id": "b48", "title": "Orientation robust object detection in aerial images using deep convolutional neural network", "year": "2015" }, { "authors": "Yixing Zhu; Jun Du; Xueqing Wu", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b49", "title": "Adaptive period embedding for representing oriented objects in aerial images", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 102.45, 139.31, 184.58, 41.5 ], "formula_id": "formula_0", "formula_text": "     (x box , y box ) = (x obj , y obj ) (w box , h box ) = (w obj , h obj ) θ box = θ obj mod π(1)" }, { "formula_coordinates": [ 4, 110.87, 493.96, 176.16, 16.6 ], "formula_id": "formula_1", "formula_text": "θ = arg min θp ℓ f (θ p ); f (θ t )(2)" }, { "formula_coordinates": [ 4, 103.34, 690.72, 183.69, 18.67 ], "formula_id": "formula_2", "formula_text": "θ = f -1 arg min fp ℓ f p ; f (θ t )(3)" }, { "formula_coordinates": [ 4, 357.97, 512.44, 187.81, 51.76 ], "formula_id": "formula_3", "formula_text": "θ = f -1 arg min z=fp [ℓ f + ℓ g ] s.t. ℓ f = ℓ z; f (θ t ) ℓ g = ℓ g(f -1 (z)); g(θ t ) (4)" }, { "formula_coordinates": [ 5, 120.99, 521.89, 67.7, 10.81 ], "formula_id": "formula_4", "formula_text": "z = f (θ) = e jωθ" }, { "formula_coordinates": [ 5, 121.11, 538.11, 165.92, 22.31 ], "formula_id": "formula_5", "formula_text": "θ = f -1 (z) = - j ω ln z (6)" }, { "formula_coordinates": [ 5, 73.61, 643.32, 213.42, 40.79 ], "formula_id": "formula_6", "formula_text": "f box = e jωθ box = e jω(θ obj mod π) = e jωθ obj , θ obj ∈ [0, π) e jωθ obj • e -jωπ , θ obj ∈ [π, 2π) (7)" }, { "formula_coordinates": [ 5, 372.95, 471.55, 172.83, 9.65 ], "formula_id": "formula_7", "formula_text": "L acm = ℓ smooth_l1 f p , f t(8)" }, { "formula_coordinates": [ 5, 344.73, 522.79, 201.05, 9.65 ], "formula_id": "formula_8", "formula_text": "L box = ℓ B(xywh p , θ p ), B(xywh t , θ t )(9)" }, { "formula_coordinates": [ 5, 384.73, 577.88, 161.05, 9.65 ], "formula_id": "formula_9", "formula_text": "L cls = ℓ f ocal c p , c t(10)" }, { "formula_coordinates": [ 5, 355.52, 632.96, 190.26, 9.65 ], "formula_id": "formula_10", "formula_text": "L = L cls + λ box L box + λ acm L acm(11)" }, { "formula_coordinates": [ 11, 320.82, 433.09, 224.96, 125.42 ], "formula_id": "formula_11", "formula_text": "f box = e iωθ box = e iω(θ obj mod π) = e iωθ obj , θ obj ∈ [0, π) e iωθ obj • e -iωπ , θ obj ∈ [π, 2π) (16) 1) When ω = 2, e -iωπ = 1, then f box = e iωθ box = e iωθ obj , θ obj ∈ [0, π) e iωθ obj , θ obj ∈ [π, 2π) = f obj(17)" }, { "formula_coordinates": [ 11, 345.14, 586.41, 200.64, 84.63 ], "formula_id": "formula_12", "formula_text": "f box = e iωθ box = e iωθ obj , θ obj ∈ [0, π) -e iωθ obj , θ obj ∈ [π, 2π) = f obj , θ obj ∈ [0, π) -f obj , θ obj ∈ [π, 2π) = f obj • sign(π -θ obj )(18)" } ]
10.1609/aimag.v38i3.2741
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b33", "b19", "b33", "b19", "b17", "b5", "b29", "b24", "b30", "b12", "b43", "b36", "b21", "b39", "b23", "b27", "b43", "b18", "b41" ], "table_ref": [], "text": "Explainable artificial intelligence has applications as long as machine learning methods are part of a problem. Therefore, given the widespread popularity of models like neural networks and random forests, it is not difficult to find use cases. In this article, we will talk about the employability field, which has been defined as \"an individual's ability to obtain and maintain initial employment, move between roles within the same organization, obtain new employment if necessary, and/or generally secure suitable and satisfying work\" [17].\nBoth job candidates and potential employers enter the job market with differing knowledge, competence, and abilities, collectively referred to as \"skills\" [34,20]. As a result of the imbalance between supply and demand in all economies [34,20], it is challenging to match job applicants with available positions. Current trends indicate an increasing shift towards the demand-driven provision of education, where employers also have a voice, resulting in the participation of multiple stakeholders (with different interests and objectives).\nDue to the complexity of these issues, Artificial Intelligence (AI) is regarded as a technology that can aid in the resolution of numerous employability-related obstacles. Reported solutions attempt to leverage AI to this end: predictive methods are used to estimate whether an individual with particular skills would meet market demands [18]; machine-learning-based classifiers are used to label job advertisements [6]; intelligent chatbot systems are developed for HR service delivery [30]; and AI-based recommender systems [25] work by pre-filtering a candidate pool [31]. These examples demonstrate AI's growing significance in research and industry. However, their \"black box\" nature makes it extremely difficult, if not impossible, to explain their predictions [13]. This opaqueness can be crucial for regulatory affairs [44] and has hindered the discovery of unfair algorithmic biases that can be present in models that are biased against certain groups, which could lead to widespread discrimination based on race or gender in hiring practices [37].\nExplainable Artificial Intelligence (XAI) has emerged to investigate potential solutions for these deficiencies, and multiple methodologies [22,40] have been developed in recent years. This paper focuses on a popular local, model-agnostic post-hoc method, namely counterfactual explanations. A counterfactual explanation can be defined as the minimal and irreducible set of input features that must be altered to change the predicted class of an observation [24]. For instance, a person denied a job could provide a counterfactual explanation: If your skills included Python and Cloud Computing, the prediction would change from unsuitable to suitable for this data science position.\nGiven its simplicity, this type of explanation is well-suited for how humans prefer to receive explanations [28] and satisfies the GDPR requirement for providing explanations for automated decision-making systems [44]. In addition, counterfactual explanations can serve purposes other than elucidating predictions. For example, we can use counterfactuals to guide a feasible path to alter output decisions [19] and as a bias detection tool [42]. 
Taking into account its simplicity and adaptability, we identify several applicable use cases for counterfactual explanations involving diverse requirements, methodological implications, and relevant stakeholders. To illustrate how they can be applied in real-world scenarios, we use a dataset containing over 12 million job listings from a Belgian employment agency to examine several use cases. We demonstrate that the resulting explanations are promptly provided, which is essential for real-world applications, and that they are sparse compared to other popular XAI methodologies (LIME and SHAP). This property is frequently claimed to be advantageous for producing understandable explanations. Therefore, we emphasize that this study exemplifies novel applications of counterfactual explanations in the field of employability that extend beyond their use as a decision explanation methodology. Moreover, for each use case, we further expand the discussion outside the employability field, generalizing the applicability of counterfactual explanations to any relevant problem." }, { "figure_ref": [], "heading": "Materials and Methods", "publication_ref": [ "b25", "b39", "b40", "b22", "b23", "b38", "b23", "b38", "b42", "b15" ], "table_ref": [], "text": "VDAB (Vlaamse Dienst voor Arbeidsbemiddeling en Beroepsopleiding), a public employment service in Flanders, Belgium, provided the raw datasets used to construct the model and generate counterfactual instances. This source is proprietary (secondary) and comprises pseudorandomized HR data that contain detailed information about the skills profiles of job seekers and vacancies [26]. As the dataset is proprietary and subject to terms and conditions, such as a Data Processing Agreement, its processing and publication have legal limitations to its usage and publication. The processed datasets include job requirements as binary column features, where 0 indicates that the position does not require the skill and 1 indicates that it does. Our dataset label is a number representing the reach of jobs that a particular set of skills has, i.e., the number of jobs partially or entirely fulfilled by the skills present in the row (equal to 1).\nThe dataset contains 1,396,179 rows, where each column represents a skill, with an average of 11.04 mentioned skills per position. The skills characteristics in the dataset are divided into four main categories: studies, study areas, competencies, and languages. The studies include all academic degrees recognized in Belgium, from high school diplomas to bachelor's, master's, and doctoral degrees. The study area is an aggregation level to multiple studies; for example, in the IT study area, bachelor's degrees in software engineering and master's degrees in computer science are grouped together. The competencies include more specific skills, such as mastery of specific software applications, and soft skills, such as team leadership. Finally, it includes information about known languages. The complete dataset (considering all study areas) has 5000 features after preprocessing, of which 4450 are competencies, 500 studies, and 50 languages.\nFor machine learning tasks, we employ CatBoost, a cutting-edge gradient boosting algorithm for machine learning that is effective for regression problems. The models are constructed using two sets of data: (1) all jobs and (2) only jobs requiring an information technology degree. 
For each dataset, a regression model was developed utilizing a train/test split of 3/1 and a 3-fold cross-validation grid search for parameter tuning. The test RMSE for the all jobs model was 1,899, while the test RMSE for the information technology jobs model was 249. Since we employ a regression model for a classification problem, we specify a threshold that determines whether or not the instance (set of skills) has a high job reach. In our experiments, high job reach is defined as a number greater than the 90th percentile for the group under consideration. This number is 29,559 for all jobs and 164,150 for IT-related positions. This paper uses three XAI methodologies: LIME, SHAP, and SEDC. LIME, a feature importance method, is implemented using its original Python package [40,41], where we describe unfavorable instances for the top 20 most important features while all other parameters are set to their default values. SHAP is another popular feature importance method that is also implemented using its original Python implementation [23]. We generate the same explanations for SHAP as for LIME, with all other parameters set to their default values. For the counterfactual explanation method, we choose to implement the logic of SEDC, a greedy best-first search algorithm [24,39]. We chose SEDC because it is ideally suited to manage a large number of binary features [24,39]. Since SEDC is designed for binary classification models, we wrap regression models in which a threshold is used to define a binary class. This approach, where we define a threshold or target in regression models, is frequently used for counterfactual explanations applied to regression models [43,16]." }, { "figure_ref": [], "heading": "Quantitative Results", "publication_ref": [ "b23", "b38", "b9" ], "table_ref": [ "tab_0" ], "text": "We conducted multiple experiments with distinct data subsets to demonstrate the potential for counterfactual explanations. All experiments are performed on a 2.60GHz Intel(R) Core(TM) i7-6700HQ processor with 16GB of memory and an NVIDIA GeForce GTX 1070 Mobile. In the first experiment, we examine the quantitative metrics of our counterfactual algorithm, namely the speed and sparsity of the resulting explanations, using a sample of 1,000 CVs randomly selected from the test dataset. Taking into account all counterfactuals generated, the average number of changes is 3.33, which is a significantly low value compared to the average number of skills (11.04) and standard deviation (6.7). In terms of time, the average generation time of a counterfactual is 56 seconds (with a standard deviation of 17.5 seconds). The maximum time required to generate a counterfactual is 89 seconds, which is still acceptable given that further scaling improvements can be easily achieved by deploying more high-performance computing equipment. In Table 1, we contrast the counterfactual explanations produced by SEDC with the Feature Importance ranking produced by LIME and SHAP. In this comparison, we calculate the proportion of instances in which a previously unfavorable decision was reversed following the modification of the five most relevant (or significant) suggested features. This result, unsurprising given our previous experiments [24,39,10], demonstrates the superior performance of counterfactual explanations in locating the smallest subset of relevant features for a class change. 
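To illustrate how such counterfactuals can be produced, the following is a simplified sketch of the SEDC-style greedy search described above, applied to a binary skill vector and a regression model wrapped with the 90th-percentile job-reach threshold. The function and variable names and the search budget are our own illustrative assumptions rather than the exact implementation used in the study.

```python
import numpy as np

THRESHOLD = 29_559  # 90th-percentile job reach used as the class boundary (all-jobs model)

def is_high_reach(model, x):
    """Wrap the regression model into a binary class: high job reach or not."""
    return model.predict(x.reshape(1, -1))[0] >= THRESHOLD

def sedc_counterfactual(model, x, max_changes=10):
    """Greedily remove present skills (1 -> 0) until the predicted class flips
    from high reach to low reach; returns the removed skill indices."""
    x_cf, removed = x.copy(), []
    for _ in range(max_changes):
        if not is_high_reach(model, x_cf):
            return removed                       # class flipped: these removals explain it
        candidates = np.flatnonzero(x_cf == 1)   # skills still present in the CV
        scores = []
        for j in candidates:
            trial = x_cf.copy()
            trial[j] = 0
            scores.append(model.predict(trial.reshape(1, -1))[0])
        best = candidates[int(np.argmin(scores))]  # removal that lowers reach the most
        x_cf[best] = 0
        removed.append(best)
    return None  # no counterfactual found within the search budget
```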
In the employment field, where acquiring new skills is generally viewed as costly and time-consuming, these findings provide further support for the superiority of counterfactual explanations over FI explanations." }, { "figure_ref": [], "heading": "Feature changes LIME", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Counterfactual Use Cases", "publication_ref": [], "table_ref": [], "text": "In the following sections, we will present multiple use cases where counterfactual explanations can represent a solution or tool for diverse problems in the field of employability. We use this field as a practical example for applications using the real-world data provided by VDAB. We then further expand the discussion for each use case, allowing usage in more general contexts." }, { "figure_ref": [ "fig_0" ], "heading": "Counterfactual as Explanations", "publication_ref": [ "b1", "b24" ], "table_ref": [], "text": "Automated suggestions in consumer services have increased with the development of big data and artificial intelligence. However, AI-based recommendations can also have unfavorable effects and compromise user autonomy [2]. Consequently, measures designed to increase the transparency of automated suggestion decisions are viewed as necessary enhancements. For the employment field, recommendations can be made for both sides of the recruitment process: candidates receive job application suggestions [25], and hiring teams receive talent suggestions.\nIn both instances, simply providing the machine learning results as a recommendation obscures the decision-making factors, thereby causing the previously mentioned issues. Counterfactual explanations can effectively address these problems in job recommendations due to their already-mentioned properties: they are easy to comprehend, operate at the instance level, can be applied to any prediction model, and permit customization by allowing the assignment of weights to features.\nTherefore, in the employability context, we can use counterfactual explanations to explain the suggestions derived from models that recommend jobs based on a candidate's characteristics, focusing on highlighting the skills that, if absent, would alter the recommendation. This use case is evaluated in our implementation by producing results similar to those depicted in Figure 1, where the positive classification result for a determined candidate's CV is explained by indicating which skills should be removed to no longer be a fit for the selected job area." }, { "figure_ref": [], "heading": "Counterfactual as Decision Support", "publication_ref": [ "b11", "b26", "b4", "b10" ], "table_ref": [], "text": "The use of artificial intelligence raises an important question: how can people, in general, have faith in complex models? This concern is pertinent and crucial to any application in the employability field, as it is a fundamental matter that affects most people's lives. Furthermore, trust is an essential factor in the effectiveness of AI systems in society [12]. Therefore, adopting XAI methodologies can have a positive effect, as explanations can aid in the acceptance of the model. Moreover, counterfactual explanations can be highlighted as an effective method of addressing this challenge because they are based on a thought process commonly used by humans to explain outcomes [27] and are therefore intuitively simple to comprehend. 
Overall, the interpretability benefits of counterfactual explanations may contribute not just to explaining the decision to users but also to making them more confident about the model. The literature shows that this improvement in the acceptability of machine learning algorithms has a decisive impact on their incorporation and can foster hypotheses about causality, thereby leading to a better comprehension of the problem [5]. However, increased trustworthiness can negatively affect decision interpretation. This drawback occurs because justifications can induce users to place too much trust in models that are not wholly correct [11]. Therefore, prudence and critical thinking remain essential when explanations are given, since explanations are not the root causes of the real-world outcome but rather mathematical inferences obtained from a model. Future research can better explore how to handle trust while maintaining an inquisitive posture towards decisions, where the human-computer interface plays a definitive role. Despite these challenges, trust in the model remains essential, since its absence can critically endanger its use, particularly in high-stakes decision fields like employability." }, { "figure_ref": [], "heading": "Counterfactual as Legal Compliance Method", "publication_ref": [ "b8", "b14", "b8", "b43", "b3" ], "table_ref": [], "text": "Given the significant impact that AI has on society, a number of global initiatives [9,15] aim to develop policies and regulations. The EU Commission even explicitly identifies AI systems used in the workplace as high-risk, which means that they will be subject to stringent regulations [9]. Particularly significant is the EU General Data Protection Regulation, which mandates that automated decision-making systems provide explanations for their decisions if they can directly affect individuals (which is definitely the case for employability-related applications). In this case, counterfactual explanations have the characteristics required to comply with such regulations [44], and they also have the advantage of not disclosing, if correctly treated, the inner workings of the model when explaining decisions [4]. This characteristic enhances the security of models that are considered trade secrets." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Counterfactual as Guidance", "publication_ref": [ "b7", "b18", "b3", "b31", "b34", "b18", "b35", "b6", "b37", "b34", "b20", "b28", "b35" ], "table_ref": [], "text": "As previously stated, insights on improving one's employment prospects have significant personal, social, and economic value. Since machine learning models are frequently used for employment prediction [8], XAI methodologies can be utilized (without modifying the original model) to better comprehend why a person has a low probability of entering the labor market.
Specifically, counterfactual explanations are well-suited, by definition, to provide guidance regarding what a person must change to achieve a different outcome [19,4]. As a result, they can be used to provide personalized career advice to job-seeking individuals, as the generated explanations highlight the skills that a candidate must acquire to improve their chances of employment. As an illustrative example, consider a simple predictive model that classifies a CV as recommended or not recommended for a particular job field, as demonstrated in Figure 2.
In this instance, a counterfactual algorithm will suggest modifying the model's input (the individual's resume skills) to change the predicted class to recommended. The main difference from the use case of explanations described in Section 4.1 is that here we focus on absent features that, if added, would alter the predicted class, while explanations highlight the present features that, if deleted, would change the predicted class. This case shows that counterfactuals can serve objectives beyond offering explanations and enhancing trust. Here, we describe their function as guidance for changing the classification output, which consequently introduces new constraints. These additional constraints arise because simple or realistic counterfactuals [32] are insufficient to give guidance. The former (simple) can suggest unrealistic states, like being married and single at the same time, while for the latter (realistic), although some states are possible, like being 20 years old, they are unachievable for somebody older than that age. Therefore, in this case, counterfactuals must be actionable [35], a characteristic that considers what feasible changes the factual point can make.
Actionable counterfactuals can potentially require drawing a causal relationship between features [19]. For example, a counterfactual suggesting a single modification, getting a degree, may assume that all other features remain the same, which may not be an actionable change since variables like age are directly related to acquiring new education. Given this causal relationship between features, counterfactuals can even be deceptive [36], since the collateral changes provoked by following the counterfactual suggestion may undermine the counterfactual's objective of changing the original class; we depict this scenario in Figure 3. Currently, some methods allow defining actionability constraints by manually assigning weights to changes [7], making it easier or more difficult to alter particular characteristics, or even by creating rules that drive feature modifications [38]. However, given the large number of features and interactions in some problems, the manual definition of changes can be exhausting or even impractical. Therefore, some methods try to emulate actionability by using statistical or machine learning approaches. For example, FACE [35] uses the dataset to calculate feasible paths by using density metrics, and ALIBI [21] uses encoders trained over the original dataset to create a loss term related to counterfactual feasibility. On top of that, we must not discard the role of personal preferences over counterfactual suggestions, where multiple diverse counterfactual explanations, as done in DiCE [29], or an interactive generation process, can give more autonomy to the user.
In conclusion, for our specific use case applied to employability, we see that counterfactuals can take advantage of complex predictive models to give valuable suggestions that consider both personal skills and market demands. We can also quickly expand this applicability to more general cases where guidance to change the machine learning prediction has some value. In the medical field, for example, this has enormous potential to improve people's lives, although the already cited causal relationship between features and confounding variables represents a fundamental challenge [36].
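Building on the actionability discussion above, the sketch below illustrates one simple way such constraints can be encoded in a greedy, SEDC-style search: immutable features are never touched, and every change pays a user-defined cost. It is a sketch under those assumptions, not the algorithm of any specific package; score_fn, the cost dictionary, and the binary feature encoding are all hypothetical.

```python
import numpy as np

def actionable_counterfactual(score_fn, x, costs, immutable,
                              threshold=0.5, max_changes=5):
    """Greedy search for an actionable counterfactual on a binary feature vector x.

    score_fn : callable returning the model's score for the desired (favorable) class.
    costs    : dict feature index -> cost of changing it (higher = harder to act on).
    immutable: set of feature indices that must never change (e.g. age, gender).
    Returns the list of toggled feature indices, or None if none is found."""
    current = np.asarray(x, dtype=float).copy()
    changed = []
    for _ in range(max_changes):
        if score_fn(current.reshape(1, -1))[0] > threshold:
            return changed                       # favorable class reached
        best_i, best_gain = None, 0.0
        base = score_fn(current.reshape(1, -1))[0]
        for i in range(len(current)):
            if i in immutable or i in changed:
                continue
            trial = current.copy()
            trial[i] = 1 - trial[i]              # toggle a binary skill feature
            gain = (score_fn(trial.reshape(1, -1))[0] - base) / costs.get(i, 1.0)
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:
            return None                          # no cost-effective change left
        current[best_i] = 1 - current[best_i]
        changed.append(best_i)
    return changed if score_fn(current.reshape(1, -1))[0] > threshold else None
```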
Therefore, future advances in counterfactual generators may expand the functionality of predictive models, promoting them as trustworthy sources of guidance to change undesirable decisions." }, { "figure_ref": [ "fig_3", "fig_5" ], "heading": "Counterfactual as Analytical Tool", "publication_ref": [ "b9", "b2" ], "table_ref": [ "tab_1" ], "text": "Even though counterfactual explanations provide results for a single CV instance, the aggregation of multiple counterfactual explanations can provide valuable insights into the modeled problem. Figure 4 depicts the most frequently cited competencies across numerous counterfactual explanations generated over factual CVs for the IT field; this chart therefore identifies which skills, if obtained, will contribute the most to changing a candidate's classification to high job reach. This information is valuable because it considers both sides of employment, the skills of candidates and the needs of employers, thereby optimizing the skill match. Aggregated counterfactuals thus provide a new view in which the values represent the modifications needed to alter the predictions of the analyzed instances. We can highlight the utility of this approach with the following theoretical case: a governmental institute wants to know which skills it should offer training for in order to improve employability levels. The most obvious approach is to find the skills most requested by employers; we did this with VDAB data in Figure 5. However, if we consider people learning the two most requested skills, the employability enhancement, measured by the number of people classified as having high job reach, is worse than with the top suggestions made by counterfactuals. We depict this result in Table 2, which shows the superiority of counterfactual explanations in finding the most effective set of features to change the prediction outcome. We can then expand this use case to any application where the model's decision is a relevant metric to be enhanced in a population. The characteristic of considering both individual and dataset aspects makes aggregated counterfactual explanations a potentially helpful analysis for finding which minimal set of changes can lead to the best impact on the objective score, especially in highly non-linear models in which individual feature contributions may not consistently affect decisions [10]. In addition, the aggregated review of the explanations can create new knowledge. For example, in the medical field, studies [3] detected relevant parameters that affected the decisions of complex machine learning models by evaluating the explanations of various instances." }, { "figure_ref": [], "heading": "Additional Use Cases", "publication_ref": [], "table_ref": [], "text": "The previous sections used the VDAB data to exemplify five different use cases in which counterfactuals can be used, including their classical explanation case and additional cases related to decision support, compliance, guidance, and analytics. However, although not tested with the data, we identified three additional use cases in which counterfactuals could assist in the employability field. The first one is related to bias detection. Although counterfactuals cannot solve the problem of biased models, they can be a helpful tool to evaluate whether protected variables change prediction decisions. This application can have remarkable repercussions on employment-related tasks, since the historical data on which machine learning algorithms are trained may include bias against minority groups, for example.
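Before moving to the next use case, a minimal sketch of the protected-attribute check just described: it simply measures how often toggling a single protected attribute flips the model's decision. The single binary protected column and the wrapped classifier are assumptions of this sketch.

```python
import numpy as np

def protected_attribute_flip_rate(predict_fn, X, protected_idx):
    """Share of instances whose predicted class changes when only the protected
    attribute is toggled. A high value signals that the model leans on it."""
    X = np.asarray(X)
    flipped = X.copy()
    flipped[:, protected_idx] = 1 - flipped[:, protected_idx]
    return float(np.mean(predict_fn(X) != predict_fn(flipped)))
```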
The second use case is related to the applicability of the counterfactual results to debug possible unexpected behaviors leading to misclassifications. Given the decision nature of counterfactual explanations, it can show the features that would revert such behavior and, consequently, lead to improvements or a better understanding of the underlying causes for the model's behavior. The third use case is also related to a technical aspect, where the possibility of automatically generating counterfactuals by using the multiple generative methods (present in literature) allows the adaptation of it as a pipe segment in an MLOps framework, which specific business rules could be checked (such as a bias for certain features) and controlled." }, { "figure_ref": [], "heading": "Counterfactual Explanation Limitations", "publication_ref": [ "b39", "b21", "b0", "b0", "b32", "b13" ], "table_ref": [], "text": "Despite the multiple application cases in which counterfactual explanations can be useful, there are also circumstances that they are not optimal or even advisable. The most significant inappropriate application is when prediction scoring, rather than prediction decision, is the subject of study. Therefore, counterfactual explanations do not give a clear picture of how the variation of features influences the prediction score. For this case, popular feature importance methods such as LIME [40] and SHAP [22] are much more appropriate since they evaluate how each feature influences the model's prediction scoring. Moreover, counterfactual explanations can represent a threat to intellectual property if not correctly used, as additional points of different classes can be used in model extraction attacks [1], which can create faithful copies of the original model even under low query budgets [1]. Multiple counterfactuals can also empower malicious users to play with the model, allowing them to exploit possible flaws [33], similar to what is done with adversarial attacks. If not properly treated, counterfactual explanations also create privacy concerns as the feature changes and background information can lead to the so-called explanation linkage attack [14], which may possibly identify anonymized individuals. Finally, counterfactual explanations can be misleading in several conditions. For example, as we previously discussed, if the model is not reliable enough, counterfactual changes may not reflect a change in the outcome in reality. Also, if the model changes over time (a typical occurrence in real-world production environments), a counterfactual may not be valid in different versions and then be a source of confusion or dispute. For the employment context, all these limitations are relevant since they can be applicable to any kind of model and problem. Therefore, practitioners must carefully evaluate those concerns and, if one or more are pertinent, find possible solutions or avoid using them." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b8", "b31" ], "table_ref": [], "text": "The advancements in artificial intelligence have led to the creation of powerful methodologies that have altered the manner in which we analyze complex problems, such as those pertaining to employability. These highly efficient methodologies are notable for their complexity, which contributes to their inherent inexplicability. As the EU Commission identifies the employment sector as a high-risk area for the use of AI [9], it is essential to provide sufficient information on these methodologies. 
The XAI field provides the means to address this issue: we demonstrated how a specific XAI methodology, counterfactual explanations, can be applied in the employability field by presenting five real use cases that result in more interpretable decisions and beyond. These use cases reach diverse objectives and stakeholders: career advice for potential hires, explanations of job recommendations to job posting systems applicants, gaining new insights from models and data for institutions and policymakers, gaining trust and social acceptance for the general public, and legal compliance for authorities. We anticipate that this novel perspective on the application of counterfactual explanations will provide the necessary insights to resolve current issues and inspire future research in the field. In addition, we anticipate that future work can include other novel objectives such as bias detection and model improvement. Finally, since there is a great degree of variation in how counterfactuals can be generated [32] (where multiple implementation strategies are described in the literature), prospective researchers can also examine how these differences affect results for particular use cases." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research received funding from the Flemish Government (AI Research Program) and used secondary, pseudonimized HR data provided by VDAB." } ]
In eXplainable Artificial Intelligence (XAI), counterfactual explanations are known to give simple, short, and comprehensible justifications for complex model decisions. However, we have yet to see applied studies in which they are used in real-world cases. To fill this gap, this study focuses on showing how counterfactuals are applied to employability-related problems that involve complex machine learning algorithms. For these use cases, we use real data obtained from a public Belgian employment institution (VDAB). The use cases presented go beyond the mere application of counterfactuals as explanations, showing how they can enhance decision support, comply with legal requirements, guide controlled changes, and provide novel analytical insights.
Unveiling the Potential of Counterfactual Explanations in Employability
[ { "figure_caption": "Fig. 1 .1Fig. 1. Illustrative example of how a job area suggestion could be explained to the user.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2.Example of how automated career advice could present to the user, for a specific profession, which skills she/he must acquire to be suitable or more competitive.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig.3. Example of how a counterfactual explanation, increasing the salary from 7,000 to 9,000, can lead to different outcomes depending on the action taken. If the person achieves a higher salary by getting a raise (point a), the model will change the classification from rejected to approved, while if the person increases the salary by getting a new job (point b), the change in the classification is not achieved. This happens because the counterfactual considers all other features are constant, therefore, possibly leading to ineffective advice if the person is not aware.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Most frequently occurring missing competences, retrieved from counterfactual explanations derived from decisions made by the predictive model based on people that have a degree related to IT.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "computer programs and applications Program in a specific language Follow-up technical and economic regulations Analyse functional problems Document support Analyse customer needs Develop technical specifications of IT applications Execute functional tests Develop application linked to a database Production of software solutions in an environment", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Most requested competencies asked in IT related positions.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Percentage", "figure_data": "SHAP SEDC13.7 % 6.9 % 9.1 %26.1 % 14.0 % 20.5 %36.3 % 20.4 % 40.2 %46.6 % 25.0 % 97.6 %56.7 % 27.2 % 100 %", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Table showing the percentage of instances with classification modified to a favorable outcome when changing 1 to 10 top features of most requested competencies asked by IT positions (AVG) and the top 10 most frequently counterfactual features.", "figure_data": "12345678910Top SelectedAVG (%)1.6 3.5 3.2 5.1 5.9 11.8 17.6 26.4 29.6 48.0SEDC (%)9.1 15.3 34.5 70.5 78.4 89.9 92.8 97.1 98.2 98.3", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Raphael Mazzine Barbosa De Oliveira; Sofie Goethals; Dieter Brughmans; David Martens
[ { "authors": "Ulrich Aivodji; Alexandre Bolot; Sébastien Gambs", "journal": "", "ref_id": "b0", "title": "Model extraction from counterfactual explanations", "year": "2020" }, { "authors": "Mark Alfano", "journal": "Synthese", "ref_id": "b1", "title": "Technologically scaffolded atypical cognition: the case of YouTube's recommender system", "year": "2020" }, { "authors": "Filippo Arcadu", "journal": "NPJ digital medicine", "ref_id": "b2", "title": "Deep learning algorithm predicts diabetic retinopathy progression in individual patients", "year": "2019" }, { "authors": "Solon Barocas; Andrew D Selbst; Manish Raghavan", "journal": "", "ref_id": "b3", "title": "The hidden assumptions behind counterfactual explanations and principal reasons", "year": "2020" }, { "authors": "Deepti Vinayak; Bidkar ", "journal": "", "ref_id": "b4", "title": "A literature review to underline necessity of explainability in AI and discuss existing explainable AI techniques", "year": "2021" }, { "authors": "Roberto Boselli", "journal": "Future Generation Computer Systems", "ref_id": "b5", "title": "Classifying online job advertisements through machine learning", "year": "2018" }, { "authors": "Dieter Brughmans; Pieter Leyman; David Martens", "journal": "", "ref_id": "b6", "title": "Nice: an algorithm for nearest instance counterfactual explanations", "year": "2023" }, { "authors": "D Cherry; Enrique D Casuat; Festijo", "journal": "IEEE", "ref_id": "b7", "title": "Predicting students' employability using machine learning approach", "year": "2019" }, { "authors": "", "journal": "Europan Commision", "ref_id": "b8", "title": "Europe Fit for the Digital Age: Commission Proposes New Rules and Actions for Excellence and Trust in Artificial Intelligence", "year": "2021" }, { "authors": "Carlos Fernandez-Loria; Foster Provost; Xintian Han", "journal": "", "ref_id": "b9", "title": "Explaining data-driven decisions made by AI systems: The counterfactual approach", "year": "2020" }, { "authors": "Marzyeh Ghassemi; Luke Oakden-Rayner; Andrew L Beam", "journal": "The Lancet Digital Health", "ref_id": "b10", "title": "The false hope of current approaches to explainable artificial intelligence in health care", "year": "2021" }, { "authors": "Ella Glikson; Anita Williams Woolley", "journal": "Academy of Management Annals", "ref_id": "b11", "title": "Human trust in artificial intelligence: Review of empirical research", "year": "2020" }, { "authors": "Sofie Goethals; David Martens; Theodoros Evgeniou", "journal": "Journal of Big Data", "ref_id": "b12", "title": "The non-linear nature of the cost of comprehensibility", "year": "2022" }, { "authors": "Sofie Goethals; Kenneth Sörensen; David Martens", "journal": "", "ref_id": "b13", "title": "The privacy issue of counterfactual explanations: explanation linkage attacks", "year": "2022" }, { "authors": "Bryce Goodman; Seth Flaxman", "journal": "AI Magazine", "ref_id": "b14", "title": "European Union Regulations on Algorithmic Decision-Making and a \"Right to Explanation", "year": "2017-10" }, { "authors": "Suryabhan Singh; Hada ; Miguel A Carreira-Perpinan", "journal": "Springer", "ref_id": "b15", "title": "Exploring counterfactual explanations for classification and regression trees", "year": "2021" }, { "authors": "Jim Hillage; Emma Pollard", "journal": "DfEE Publications", "ref_id": "b16", "title": "Employability: developing a framework for policy analysis", "year": "1998" }, { "authors": "Bangsuk Jantawan; Cheng-Fa Tsai", "journal": "", "ref_id": "b17", "title": "The application of 
data mining to build classification model for predicting graduate employment", "year": "2013" }, { "authors": " Amir-Hossein; Bernhard Karimi; Isabel Scholkopf; Valera", "journal": "", "ref_id": "b18", "title": "Algorithmic Recourse: from Counterfactual Explanations to Interventions", "year": "2020" }, { "authors": "Davos Klosters", "journal": "", "ref_id": "b19", "title": "Matching skills and labour market needs: Building social partnerships for better skills and better jobs", "year": "2014" }, { "authors": "Arnaud Van Looveren; Janis Klaise", "journal": "", "ref_id": "b20", "title": "Interpretable Counterfactual Explanations Guided by Prototypes", "year": "2019" }, { "authors": "M Scott; Su-In Lundberg; Lee", "journal": "Advances in neural information processing systems", "ref_id": "b21", "title": "A unified approach to interpreting model predictions", "year": "2017" }, { "authors": "M Scott; Lundberg", "journal": "Nature machine intelligence", "ref_id": "b22", "title": "From local explanations to global understanding with explainable AI for trees", "year": "2020" }, { "authors": "David Martens; Foster Provost", "journal": "MIS quarterly", "ref_id": "b23", "title": "Explaining data-driven document classifications", "year": "2014" }, { "authors": "Jorge Martinez-Gil; Bernhard Freudenthaler; Thomas Natschlager", "journal": "", "ref_id": "b24", "title": "Recommendation of job offers using random forests and support vector machines", "year": "2018" }, { "authors": "Yoosof Mashayekhi", "journal": "", "ref_id": "b25", "title": "Quantifying and reducing imbalance in networks", "year": "2021" }, { "authors": "Alice Mceleney; Ruth Mj Byrne", "journal": "Thinking & Reasoning", "ref_id": "b26", "title": "Spontaneous counterfactual thoughts and causal explanations", "year": "2006" }, { "authors": "Tim Miller", "journal": "Artificial Intelligence", "ref_id": "b27", "title": "Explanation in artificial intelligence: Insights from the social sciences", "year": "2019" }, { "authors": "Ramaravind Kommiya Mothilal; Amit Sharma; Chenhao Tan", "journal": "", "ref_id": "b28", "title": "Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations", "year": "2019" }, { "authors": "F Dena; Mujtaba; Nihar R Mahapatra", "journal": "IEEE", "ref_id": "b29", "title": "Ethical considerations in aibased recruitment", "year": "2019" }, { "authors": "Jessica Ochmann; Sandra Zilker; Sven Laumer", "journal": "", "ref_id": "b30", "title": "The evaluation of the black box problem for AI-based recommendations: An interview-based study", "year": "2021" }, { "authors": "Raphael Mazzine Barbosa De Oliveira; David Martens", "journal": "Applied Sciences", "ref_id": "b31", "title": "A framework and benchmarking study for counterfactual generating methods on tabular data", "year": "2021" }, { "authors": "Martin Pawelczyk", "journal": "PMLR", "ref_id": "b32", "title": "Exploring counterfactual explanations through the lens of adversarial examples: A theoretical and empirical analysis", "year": "2022" }, { "authors": "M Mpho; Kaelo Pheko; Molefhe", "journal": "International Journal of Adolescence and Youth", "ref_id": "b33", "title": "Addressing employability challenges: a framework for improving the employability of graduates in Botswana", "year": "2017" }, { "authors": "Rafael Poyiadzi", "journal": "Ethics, and Society", "ref_id": "b34", "title": "FACE: feasible and actionable counterfactual explanations", "year": "2020" }, { "authors": "Mattia Prosperi", "journal": "Nature Machine Intelligence", "ref_id": 
"b35", "title": "Causal inference and counterfactual prediction in machine learning for actionable healthcare", "year": "2020" }, { "authors": "Arun Rai", "journal": "Journal of the Academy of Marketing Science", "ref_id": "b36", "title": "Explainable AI: From black box to glass box", "year": "2020" }, { "authors": "Goutham Ramakrishnan; Yun Chan Lee; Aws Albarghouthi", "journal": "", "ref_id": "b37", "title": "Synthesizing Action Sequences for Modifying Model Decisions", "year": "2019" }, { "authors": "Yanou Ramon", "journal": "Advances in Data Analysis and Classification", "ref_id": "b38", "title": "A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C", "year": "2020" }, { "authors": "Marco Tulio Ribeiro; Sameer Singh; Carlos Guestrin", "journal": "", "ref_id": "b39", "title": "Why should i trust you?\" Explaining the predictions of any classifier", "year": "2016" }, { "authors": "Marco Tulio Ribeiro; Sameer Singh; Carlos Guestrin", "journal": "", "ref_id": "b40", "title": "Model-agnostic interpretability of machine learning", "year": "2016" }, { "authors": "Kacper Sokol; Peter A Flach", "journal": "", "ref_id": "b41", "title": "Counterfactual explanations of machine learning predictions: opportunities and challenges for AI safety", "year": "2019" }, { "authors": "Thomas Spooner", "journal": "", "ref_id": "b42", "title": "Counterfactual Explanations for Arbitrary Regression Models", "year": "2021" }, { "authors": "Sandra Wachter; Brent Mittelstadt; Chris Russell", "journal": "Harv. JL & Tech", "ref_id": "b43", "title": "Counterfactual explanations without opening the black box: Automated decisions and the GDPR", "year": "2017" } ]
[]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b8", "b15", "b18", "b5", "b15", "b20", "b14", "b1", "b13", "b0", "b24", "b7", "b10", "b21", "b22", "b3", "b7", "b11", "b12", "b17", "b29", "b12", "b26" ], "table_ref": [], "text": "T1-weighted magnetic resonance imaging (T1-MRI) is one of the indispensable medical imaging methods for noninvasively diagnosing neurological disorders [9]. Existing approaches [16,19] based on T1-MRI focus on extracting regions of interest (ROIs) to analyze structural atrophy information associated with disease progression. However, some works [6,16,21] heavily rely on manually defined and selected ROIs, which have limitations in explaining individual brain specificity. To address this issue, Lian et al. [15] localize discriminative regions by a pretrained module, where region localization and the subsequent feature learning cannot reinforce each other, resulting in a coarse feature representation. Additionally, as inter-regional correlations are not directly available in T1-MRI, most related works [2,14] ignore inter-regional correlations or replace them with generalized global information. These conventional modular approaches have limitations in explaining high-dimensional brain structure information [1,25].
The brain network is a vital method for analyzing brain disease and has been widely used with functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI). However, the structural brain network with T1-MRI is still underexplored due to the lack of direct regional connectivity. Recent advances [8,11,22,23] in graph convolutional neural networks (GCNs) have optimized brain network construction with fMRI and DTI. Given the successful application of GCNs in these modalities, we think they also have potential for the construction of structural brain networks using T1-MRI. Current approaches [4,8,12,13] to brain network construction involve the selection of ROIs and the modeling of inter-regional correlations, in which anatomical ROIs are employed as nodes and internode correlations are modeled as edges. Some studies [18,30] have demonstrated that brain connectivity displays a hierarchical structure distribution, yet most GCN-based methods [13,27] treat all nodes equally and ignore the hierarchical nature of brain connectivity. These structural brain networks are fixed and redundant, which may lead to coarse feature representations and suboptimal performance in downstream tasks.
To address these issues, we propose a novel dynamic structural brain network construction method, named hierarchical prototypes embedding GCN (DH-ProGCN), to dynamically construct disease-related structural brain networks based on T1-MRI. Firstly, a prototype learning method is used to cluster spatially-correlated channels and generate several critical brain regions as prototypes. Then, we introduce a contrastive loss function to constrain the hierarchical distribution among prototypes and obtain the hierarchical brain semantic structure embedding in the latent space. After that, DH-ProGCN utilizes a self-attention mechanism to dynamically construct hierarchical correlations of critical regions for constructing the structural brain network. GCN is applied to explore the correlations of the structural brain network for Mild Cognitive Impairment (MCI) conversion prediction. We verify the effectiveness of DH-ProGCN on the Alzheimer's Disease Neuroimaging Initiative-1 (ADNI-1) and ADNI-2 datasets.
DH-ProGCN achieves state-of-the-art (SOTA) performance for the classification of progressive mild cognitive impairment (pMCI) and stable mild cognitive impairment (sMCI) based on T1-MRI. " }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Backbone", "publication_ref": [ "b23" ], "table_ref": [], "text": "In this study, we utilize a ConvMixer-like [24] block as the backbone to achieve primary discriminative brain region localization, which provides a large enough channel dimension for subsequent channel clustering at relatively low complexity. Specifically, as depicted in Fig. 1(A), the backbone consists of a patch embedding layer followed by several full-convolution blocks. Patch embedding comprises a 5 × 5 × 5 convolution, and each full-convolution block comprises a 5 × 5 × 5 depthwise convolution (grouped convolution with groups equal to the number of channels) and a pointwise convolution (kernel size 1 × 1 × 1) with 2048 channels. From the backbone, features of discriminative regions are finally extracted as $F_b \in \mathbb{R}^{C \times D \times H \times W}$, where D, H, W and C indicate depth, height, width and the number of channels, respectively." }, { "figure_ref": [], "heading": "Dynamic Hierarchical Prototype Learning", "publication_ref": [ "b28", "b16", "b24" ], "table_ref": [], "text": "Prototypes Definition. In this study, we regard the feature map of each channel as the response of a distinct, task-relevant brain region. Following [29], we utilize the location of each peak response as the channel information. Intuitively, a position vector composed of the peak response coordinates of each channel is defined as the candidate prototype. Position vectors over the training images can be obtained as follows:

$[t_x^1, t_y^1, t_z^1, t_x^2, t_y^2, t_z^2, \ldots, t_x^{\Omega}, t_y^{\Omega}, t_z^{\Omega}] \quad (1)$

where $[t_x^i, t_y^i, t_z^i]$ represents the peak response coordinate for the i-th image and $\Omega$ represents the number of images in the training set. K-means [17] is used for prototype initialization. Specifically, the vectors of all channels are clustered to obtain N sets of clusters $K = \{k_n\}_{n=1}^{N}$, and prototypes are defined as the clustering centers $\Gamma = \{\gamma_n\}_{n=1}^{N}$, which are taken as the N critical regions for discriminative localization (i.e., ROIs). $F_h \in \mathbb{R}^{N \times D \times H \times W}$ represents the features of the clustering centers.
Dynamic Hierarchical Prototype Exploring. Inter-regional spatial connectivity is fixed, but the correlation between regions is dynamic with disease progression. We argue that there are structural correlations between different regions, analogous to the complex hierarchical functional connectome in rich-club [25] organization observed with fMRI. We therefore explore the hierarchical semantic structure of critical brain regions by a hierarchical prototype clustering method.
Specifically, we start by using the initial prototypes as the first-hierarchy clustering prototypes, denoted as $\Gamma^0 = \{\gamma_n^0\}_{n=1}^{N_0}$. Then, K-means is applied iteratively to obtain the parent prototypes of the lower-hierarchy prototypes $\Gamma^{i-1} = \{\gamma_n^{i-1}\}_{n=1}^{N_{i-1}}$, denoted as $\Gamma^{i} = \{\gamma_n^{i}\}_{n=1}^{N_i}$, where i represents the i-th hierarchy and $N_i$ represents the number of clusters at the i-th hierarchy, corresponding to the cluster sets $K^i = \{k_n^i\}_{n=1}^{N_i}$. In this paper, i is set as 2.
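A minimal sketch of this prototype construction is given below, assuming scikit-learn's KMeans, backbone features of shape (C, D, H, W) per image, and the hierarchy sizes stated next (16, 8 and 4). It is illustrative rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def peak_coordinates(feature_maps):
    """feature_maps: (C, D, H, W) backbone features of ONE image.
    Returns a (C, 3) array with the peak-response coordinate of every channel."""
    C = feature_maps.shape[0]
    flat_idx = feature_maps.reshape(C, -1).argmax(axis=1)
    return np.stack(np.unravel_index(flat_idx, feature_maps.shape[1:]), axis=1).astype(float)

def channel_position_vectors(per_image_features):
    """Concatenate peak coordinates over all training images -> (C, 3 * n_images),
    i.e. one position vector per channel as in Eq. (1)."""
    return np.hstack([peak_coordinates(f) for f in per_image_features])

def hierarchical_prototypes(position_vectors, sizes=(16, 8, 4), seed=0):
    """K-means the channel position vectors into first-level prototypes, then cluster
    the resulting centers again to build parent and grandparent levels."""
    levels, data = [], position_vectors
    for n in sizes:
        km = KMeans(n_clusters=n, n_init=10, random_state=seed).fit(data)
        levels.append((km.cluster_centers_, km.labels_))
        data = km.cluster_centers_       # parents are fit on the children's centers
    return levels
```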
The number of prototypes in the first, second and third hierarchies is set to 16, 8 and 4, respectively.
To facilitate optimal clustering during training, we use two fully convolutional layers with two contrastive learning loss functions, $L_{node}$ and $L_{edge}$, to approximate the clustering process. With $L_{node}$, each channel cluster is enforced to become more compact internally and to have significant inter-class differences with other clusters, enabling all prototypes to be well separated:

$L_{node} = -\frac{1}{L}\sum_{l=1}^{L}\sum_{n=1}^{N_l}\sum_{u \in K_n^l} \log \frac{\exp\left(u \cdot \gamma_n^l / \phi_n^l\right)}{\sum_{i \neq n}^{N_l} \exp\left(u \cdot \gamma_i^l / \phi_n^l\right)} \quad (2)$

$\phi_n^l = \frac{\sum_{u \in K_n^l} \left\| u - \gamma_n^l \right\|_2}{|K_n^l| \cdot \log(|K_n^l| + \alpha)} \quad (3)$

where L is the total number of layers and $N_l$ is the number of clusters in the l-th layer. $K_n^l$, $\gamma_n^l$, and $\phi_n^l$ denote the set of all elements, the cluster center (prototype), and the estimated concentration of the n-th cluster in the l-th layer, respectively. $\alpha$ is a smoothing parameter that prevents small clusters from having an overly large $\phi$.
The cluster concentration $\phi$ measures the closeness of elements in a cluster. A larger $\phi$ indicates more elements in the cluster or a smaller average distance between the elements and the cluster center. Ultimately, $L_{node}$ compels all elements u in $K_n^l$ to be close to their cluster center $\gamma_n^l$ and away from other cluster centers at the same level.
Similarly, $L_{edge}$ aims to embed the hierarchical correlation between clustering prototypes, which can be expressed as:

$L_{edge} = -\frac{1}{L}\sum_{l=1}^{L-1}\sum_{n=1}^{N_l} \log \frac{\exp\left(\gamma_n^l \cdot Parent(\gamma_n^l) / \tau\right)}{\sum_{i \neq n}^{N_l} \exp\left(\gamma_n^l \cdot \gamma_i^l / \tau\right)} \quad (4)$

$Parent(\gamma_n^l)$ represents the parent prototype of the prototype $\gamma_n^l$, and $\tau$ is a temperature hyper-parameter. $L_{edge}$ forces all prototypes $\gamma^l$ in the l-th layer to be close to their parent prototype and away from other prototypes within the same level." }, { "figure_ref": [ "fig_0" ], "heading": "Brain Network Graph Construction and Classification", "publication_ref": [ "b25" ], "table_ref": [], "text": "Through Section 2.2, critical brain regions are clustered in a hierarchical semantic latent space. We hereby employ the prototype regions as nodes and the correlations between them as edges to construct structural brain network graphs, as shown in Fig. 1(C).
We first apply a self-attention mechanism [26] to compute inter-region correlations and generate the edges of the brain network. The features $F_h$ are input to three separate fully connected layers to obtain three vectors, query, key, and value, which are used to compute attention scores $A \in \mathbb{R}^{N \times N}$ between each pair of prototypes; these scores are then used to weight the value vector and obtain the output of the self-attention layer as follows:

$A = Attention(Q, K, V) = softmax\left(\frac{QK^T}{\sqrt{d_k}}\right)V \quad (5)$

where $Q \in \mathbb{R}^{N \times d_k}$, $K \in \mathbb{R}^{N \times d_k}$, and $V \in \mathbb{R}^{N \times N}$ denote query, key, and value, respectively. $d_k$ represents the dimension of Q and K. N represents the number of critical regions, which is set as 16 in this paper.
We then employ a GCN to capture the topological interactions in the brain network graph and update the node features by performing the following operation:

$GCN(X) = ReLU\left(\hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} X \Theta\right) \quad (6)$

where $\hat{A} = A + I$ is the adjacency matrix with inserted self-loops and I denotes an identity matrix. $\hat{D}_{ii} = \sum_{j} \hat{A}_{ij}$ is the diagonal degree matrix, and $\Theta$ represents the learned weights.
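The sketch below wires Eqs. (5)-(6) together in PyTorch for illustration: attention scores between prototype features act as a dynamic adjacency, followed by two graph-convolution layers. The dimensions, the extra softmax that keeps the adjacency non-negative, and all names are assumptions of this sketch, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGCN(nn.Module):
    """Dynamic brain-network sketch: Eq. (5) builds an adjacency from the N prototype
    features, Eq. (6) propagates node features with two graph-convolution layers."""
    def __init__(self, in_dim, hid_dim, n_nodes=16, d_k=64):
        super().__init__()
        self.d_k = d_k
        self.q = nn.Linear(in_dim, d_k)
        self.k = nn.Linear(in_dim, d_k)
        self.v = nn.Linear(in_dim, n_nodes)
        self.theta1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.theta2 = nn.Linear(hid_dim, hid_dim, bias=False)

    def gcn_layer(self, A_hat, X, theta):
        # Eq. (6): ReLU(D^-1/2 A_hat D^-1/2 X Theta)
        d_inv_sqrt = A_hat.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        norm = torch.diag(d_inv_sqrt) @ A_hat @ torch.diag(d_inv_sqrt)
        return F.relu(norm @ theta(X))

    def forward(self, X):                                 # X: (N, in_dim) prototype features
        scores = self.q(X) @ self.k(X).T / self.d_k ** 0.5
        A = torch.softmax(scores, dim=-1) @ self.v(X)     # Eq. (5), assuming n_nodes == N
        A = torch.softmax(A, dim=-1)                      # sketch choice: non-negative edges
        A_hat = A + torch.eye(A.size(0), device=A.device) # add self-loops
        H = self.gcn_layer(A_hat, X, self.theta1)
        return self.gcn_layer(A_hat, H, self.theta2)
```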
To prevent overfitting, we use only two GCN layers as the encoder to obtain the final graph feature $F_g \in \mathbb{R}^{N \times D \times H \times W}$.
In this way, the information of the critical brain regions is fully learned. Notably, as the prototypes are dynamic, the constructed brain network graphs are also dynamic, rather than predefined and fixed. This allows DH-ProGCN to model and explore individual hierarchical information, providing a more personalised brain network representation for every subject.
To achieve the classification, we perform channel squeezing on the backbone feature $F_b$ to obtain global features $F_{se} \in \mathbb{R}^{1 \times D \times H \times W}$, concatenate them with $F_g$ and input them into the classification layer." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b19", "b14" ], "table_ref": [], "text": "The data we used are from two public databases: ADNI-1 (http://www.adniinfo.org) [20] and ADNI-2. The demographic information of the subjects and the preprocessing steps are given in the supplemental material. The preprocessed images are finally resized to 91 × 109 × 91 voxels. After quality checking, 305 images are left from ADNI-1 (197 for sMCI, 108 for pMCI), and 350 images are left from ADNI-2 (251 for sMCI, 99 for pMCI). Note that some subjects have two or more images acquired at different times, and we only keep the earliest one. Following [15], we train DH-ProGCN on ADNI-1 and perform independent testing on ADNI-2." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b2" ], "table_ref": [], "text": "We first train the backbone with 2048 channels in all layers to extract the output features $F_b$ with the cross-entropy loss $L_{cls1}$. The cross-entropy loss $L_{cls2}$ is used for the final classification. The overall loss function is defined as:

$L = L_{cls1} + L_{cls2} + L_{node} + L_{edge} \quad (7)$

where $L_{node}$ and $L_{edge}$ are explained in Section 2.2. The smoothing parameter is $\alpha = 10$ and the temperature parameter is $\tau = 0.2$, following [3]. All blocks are trained with an SGD optimizer with a momentum of 0.9 and a weight decay of 0.001. The model is trained for 300 epochs with an initial learning rate of 0.01 that is decreased by a factor of 10 every 100 epochs. Four metrics, namely accuracy (ACC), sensitivity (SEN), specificity (SPE), and area under the curve (AUC), are used to evaluate the performance of the proposed model. We use Python with the PyTorch package and run the network on a single NVIDIA GeForce 3090 24 GB GPU."
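For illustration, the overall objective of Eq. (7) and the optimizer settings quoted above could be wired as follows; the model interface returning two logits plus the two prototype losses is a hypothetical assumption of this sketch.

```python
import torch

def make_optimizer(model):
    # Settings quoted in the text: SGD, lr 0.01, momentum 0.9, weight decay 0.001.
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-3)
    # Step decay by a factor of 10 every 100 epochs, as described above.
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=100, gamma=0.1)
    return opt, sched

def train_step(model, optimizer, images, labels):
    ce = torch.nn.CrossEntropyLoss()
    # Hypothetical interface: backbone logits, final logits, and the two prototype losses.
    logits_backbone, logits_final, l_node, l_edge = model(images)
    loss = ce(logits_backbone, labels) + ce(logits_final, labels) + l_node + l_edge  # Eq. (7)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```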
}, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Comparing with SOTA Methods", "publication_ref": [ "b15", "b14", "b13", "b18", "b1" ], "table_ref": [ "tab_0" ], "text": "Six SOTA methods are used for comparison: 1) LDMIL [16] captured both local information conveyed by patches and global information; 2) H-FCN [15] implemented three levels of networks to obtain multi-scale feature representations which are fused for the construction of hierarchical classifiers; 3) HybNet [14] assigned the subject-level label to patches for local feature learning by iterative network pruning; 4) AD 2 A [10] located discriminative disease-related regions by an attention modules; 5) DSNet [19] provided disease-image specificity to an image synthesis network; 6) MSA3D [2] implemented a slice-level attention and a 3D CNN to capture subject-level structural changes.\nResults in Table 1 show the superiority of DH-ProGCN over SOTA approaches for MCI conversion prediction. Specifically, DH-ProGCN achieves ACC of 0.849 and AUC of 0.845 tested on ADNI-2 by models trained on ADNI-1. It is worth noting that our method: 1) needs no predefined manual landmarks, but achieves better diagnostic results than existing deep-learning-based MCI diagnosis methods; 2) needs no pretrain network parameters from other tasks like AD diagnosis; 3) introduces hierarchical distribution structure to connect regions and form region-based specificity brain structure networks, rather than generalizing the correlations between regions with global information. " }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_2" ], "heading": "Ablation Study", "publication_ref": [ "b27", "b4", "b6", "b8" ], "table_ref": [], "text": "Effect of dynamic prototype learning. To verify the effect of dynamic prototype clustering, we compare 1) ROI-based approach [28], 2) backbone without channel clustering (BL), 3) backbone with dynamic prototypes clustering (BL+L node ). As shown in Fig. 2, results indicate that dynamic prototype clus-tering outperforms the ROI-based and backbone on MCI conversion, and could generate better feature distributions for downstream brain images analysis tasks.\nEffect of Hierarchical prototype learning. To evaluate the impact of hierarchical prototype learning, we compare backbone with flattened prototypes clustering (BL+L node ), and hierarchical clustering (BL+L node +L edge ). The results are presented in Fig. 2. With the constraint strengthened on the distribution of regions, the results are progressively improved. This implies that it makes sense to introduce hierarchical semantics into the construction of structure brain networks. Effect of Dynamic Brain Network Construction. To verify whether our constructed dynamic brain network capability outperforms the fixed architecture, we obtained the fixed brain network graph by directly connecting all critical regions after obtaining hierarchical features and feeding them into the GCN for classification. The results are shown in Fig. 2, where the dynamic brain network structure performs better, suggesting that the correlation between regions needs to be measured dynamically to construct a better brain network.\nIn addition, we visualize the sagittal, coronal and axial views of hierarchical critical regions and their connectome in Fig. 3. The left and right sub-figures represent brain network visualization of two sMCI and two pMCI subjects, respectively. 
In general, critical regions and correlations are varied for different subjects, which means that our method is feasible for constructing individual brain networks according to the individuals specificity. Localized regions are roughly distributed in anatomically defined parahippocampal gyrus, superior frontal gyrus, and cingulate gyrus for different sMCI subjects, lingual gyrus right, and superior longitudinal fasciculus for different pMCI subjects, which agree with previous studies. [5,7,9]." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel dynamic structural brain network construction method named DH-ProGCN. DH-ProGCN could dynamically cluster critical brain regions by the prototype learning, implicitly encode the hierarchical semantic structure of the brain into the latent space by hierarchical prototypes embedding, dynamically construct brain networks by self-attention and extract topology features in the brain network by GCN. Experimental results show that DH-ProGCN outperforms SOTA methods on the MCI conversion task. Essentially, DH-ProGCN has the potential to model hierarchical topological structures in other kinds of medical images. In our future work, we will apply this framework to other kinds of modalities and neurological disorders." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements This work is partially supported by ********* and ********." } ]
Constructing structural brain networks using T1-weighted magnetic resonance imaging (T1-MRI) presents a significant challenge due to the lack of direct regional connectivity information. Current T1-MRI methods rely on predefined regions or isolated pretrained location modules to obtain atrophic regions, which neglects individual specificity. Besides, existing methods capture global structural context only at the whole-image level, which weakens inter-regional correlations and overlooks the hierarchical distribution of brain connectivity. We hereby propose a novel dynamic structural brain network construction method based on T1-MRI, which can dynamically localize critical regions and constrain the hierarchical distribution among them to construct a dynamic structural brain network. Specifically, we first cluster spatially-correlated channels and generate several critical brain regions as prototypes. Further, we introduce a contrastive loss function to constrain the prototype distribution, which embeds the hierarchical brain semantic structure into the latent space. Self-attention and GCN are then used to dynamically construct hierarchical correlations of critical regions for the brain network and to explore these correlations, respectively. Our method is evaluated on the ADNI-1 and ADNI-2 databases for mild cognitive impairment (MCI) conversion prediction, and achieves state-of-the-art (SOTA) performance. Our source code is available at http://github.com/
Dynamic Structural Brain Network Construction by Hierarchical Prototype Embedding GCN using T1-MRI
[ { "figure_caption": "Fig. 1 .1Fig. 1. The overall framework of the DH-ProGCN. (A) We first extract the feature F b via backbone, and assume that the featuremap of each channel represents different discriminative regions which are showed as images with different colors in F b . (B) The hierarchical feature F h are then obtained by hierarchical clustering on the channel dimension. (C) We utilize a self-attention mechanism to model feature correlations matrix A and learn the feature graph Fg by a GCN. (D) Fg and the global representation F b are concatenated for MCI conversion prediction.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Effects of each component of DH-ProGCN for MCI conversion prediction on ADNI-2 obtained by models trained on ADNI-1.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Sagittal, coronal and axial views of connectome in hierarchical critical regions. (A)(B) represent brain network visualization of sMCI and (C)(D) represent pMCI subjects. Nodes correspond to critical regions i.e. prototypes, and edges are form the connectivity weight between nodes.The size of node increases with its hierarchy, and nodes with same color are clustered into the same parent prototype. Lower-hierarchy prototypes within cluster are closer to its parent prototypes, and higher-hierarchy prototypes between different clusters are closer than lower-hierarchy prototypes.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Comparsion of our method with current SOTA methods for MCI conversion prediction on ADNI-2 obtained by the models trained on ADNI-1.", "figure_data": "MethodACC SEN SPE AUCLDMIL0.769 0.421 0.824 0.776H-FCN0.809 0.526 0.854 0.781HybNet0.827 0.579 0.866 0.793AD 2 A0.780 0.534 0.866 0.788DSNet0.762 0.770 0.742 0.818MSA3D0.801 0.520 0.856 0.789DH-ProGCN 0.849 0.647 0.928 0.845", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Yilin Leng; Wenju Cui; Chen Bai; Zheng Yanyan; Jian Zheng
[ { "authors": "Ed Bullmore; Olaf Sporns", "journal": "Nature reviews neuroscience", "ref_id": "b0", "title": "Complex brain networks: graph theoretical analysis of structural and functional systems", "year": "2009" }, { "authors": "Lin Chen; Hezhe Qiao; Fan Zhu", "journal": "Frontiers in Aging Neuroscience", "ref_id": "b1", "title": "Alzheimer's disease diagnosis with brain structural mri using multiview-slice attention and 3d convolution neural network", "year": "2022" }, { "authors": "Xinlei Chen; Haoqi Fan; Ross Girshick; Kaiming He", "journal": "", "ref_id": "b2", "title": "Improved baselines with momentum contrastive learning", "year": "2020" }, { "authors": "Yuzhong Chen; Jiadong Yan; Mingxin Jiang; Tuo Zhang; Zhongbo Zhao; Weihua Zhao; Jian Zheng; Dezhong Yao; Rong Zhang; Keith M Kendrick", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b3", "title": "Adversarial learning based node-edge graph attention networks for autism spectrum disorder identification", "year": "2022" }, { "authors": "Andrea Chincarini; Paolo Bosco; Piero Calvini; Gianluca Gemme; Mario Esposito; Chiara Olivieri; Luca Rei; Sandro Squarcia; Guido Rodriguez; Roberto Bellotti", "journal": "Neuroimage", "ref_id": "b4", "title": "Local mri analysis approach in the diagnosis of early and prodromal alzheimer's disease", "year": "2011" }, { "authors": "Wenju Cui; Caiying Yan; Zhuangzhi Yan; Yunsong Peng; Yilin Leng; Chenlu Liu; Shuangqing Chen; Xi Jiang; Jian Zheng; Xiaodong Yang", "journal": "Frontiers in Neuroscience", "ref_id": "b5", "title": "Bmnet: A new region-based metric learning method for early alzheimerâĂŹs disease identification with fdg-pet images", "year": "2022" }, { "authors": "I Bradford C Dickerson; Goncharova; C Sullivan; Forchetti; Wilson; Laurel A Bennett; Beckett; Morrell", "journal": "Neurobiology of aging", "ref_id": "b6", "title": "Mri-derived entorhinal and hippocampal atrophy in incipient and very mild alzheimerâĂŹs disease", "year": "2001" }, { "authors": "Fatih Said Duran; Abdurrahman Beyaz; Islem Rekik", "journal": "Springer", "ref_id": "b7", "title": "Dual-hinet: Dual hierarchical integration network of multigraphs for connectional brain template learning", "year": "2022" }, { "authors": "Giovanni B Frisoni; Nick C Fox; Clifford R Jack Jr; Philip Scheltens; Paul M Thompson", "journal": "Nature Reviews Neurology", "ref_id": "b8", "title": "The clinical use of structural mri in alzheimer disease", "year": "2010" }, { "authors": "Yunbi Hao Guan; Erkun Liu; Pew-Thian Yang; Dinggang Yap; Mingxia Shen; Liu", "journal": "Medical image analysis", "ref_id": "b9", "title": "Multi-site mri harmonization via attention-guided deep domain adaptation for brain disorder identification", "year": "2021" }, { "authors": "N Thomas; Max Kipf; Welling", "journal": "", "ref_id": "b10", "title": "Semi-supervised classification with graph convolutional networks", "year": "2016" }, { "authors": "Baiying Lei; Nina Cheng; Alejandro F Frangi; Ee-Leng Tan; Jiuwen Cao; Peng Yang; Ahmed Elazab; Jie Du; Yanwu Xu; Tianfu Wang", "journal": "Medical image analysis", "ref_id": "b11", "title": "Self-calibrated brain network estimation and joint non-convex multi-task learning for identification of early alzheimer's disease", "year": "2020" }, { "authors": "Yueting Li; Qingyue Wei; Ehsan Adeli; Kilian M Pohl; Qingyu Zhao", "journal": "Springer", "ref_id": "b12", "title": "Joint graph convolution for analyzing brain structural and functional connectome", "year": "2022" }, { "authors": "Chunfeng 
Lian; Mingxia Liu; Yongsheng Pan; Dinggang Shen", "journal": "IEEE transactions on cybernetics", "ref_id": "b13", "title": "Attentionguided hybrid network for dementia diagnosis with structural mr images", "year": "2020" }, { "authors": "Chunfeng Lian; Mingxia Liu; Jun Zhang; Dinggang Shen", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b14", "title": "Hierarchical fully convolutional network for joint atrophy localization and alzheimer's disease diagnosis using structural mri", "year": "2018" }, { "authors": "Mingxia Liu; Jun Zhang; Ehsan Adeli; Dinggang Shen", "journal": "Medical image analysis", "ref_id": "b15", "title": "Landmark-based deep multi-instance learning for brain disease diagnosis", "year": "2018" }, { "authors": "Stuart Lloyd", "journal": "IEEE transactions on information theory", "ref_id": "b16", "title": "Least squares quantization in pcm", "year": "1982" }, { "authors": "David Meunier; Renaud Lambiotte; Alex Fornito; Karen Ersche; Edward T Bullmore", "journal": "Frontiers in neuroinformatics", "ref_id": "b17", "title": "Hierarchical modularity in human brain functional networks", "year": "2009" }, { "authors": "Yongsheng Pan; Mingxia Liu; Yong Xia; Dinggang Shen", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b18", "title": "Disease-imagespecific learning for diagnosis-oriented neuroimage synthesis with incomplete multimodality data", "year": "2021" }, { "authors": "Ronald Carl Petersen; Laurel A Paul S Aisen; Michael C Beckett; Anthony Collins Donohue; Danielle J Gamst; Clifford R Harvey; William J Jack; Leslie M Jagust; Arthur W Shaw; Toga", "journal": "Neurology", "ref_id": "b19", "title": "Alzheimer's disease neuroimaging initiative (adni): clinical characterization", "year": "2010" }, { "authors": "Wei Shao; Yao Peng; Chen Zu; Mingliang Wang; Daoqiang Zhang; ' Alzheimer; Disease Neuroimaging Initiative", "journal": "Computerized Medical Imaging and Graphics", "ref_id": "b20", "title": "Hypergraph based multi-task feature selection for multimodal classification of alzheimer's disease", "year": "2020" }, { "authors": "Xuegang Song; Feng Zhou; Alejandro F Frangi; Jiuwen Cao; Xiaohua Xiao; Yi Lei; Tianfu Wang; Baiying Lei", "journal": "Medical Image Analysis", "ref_id": "b21", "title": "Graph convolution network with similarity awareness and adaptive calibration for disease-induced deterioration prediction", "year": "2021" }, { "authors": "Xuegang Song; Feng Zhou; Alejandro F Frangi; Jiuwen Cao; Xiaohua Xiao; Yi Lei; Tianfu Wang; Baiying Lei", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b22", "title": "Multi-center and multi-channel pooling gcn for early ad diagnosis based on dual-modality fused brain network", "year": "2022" }, { "authors": "Asher Trockman; J Zico; Kolter ", "journal": "", "ref_id": "b23", "title": "Patches are all you need?", "year": "2022" }, { "authors": "P Martijn; Den Van; Olaf Heuvel; Sporns", "journal": "Journal of Neuroscience", "ref_id": "b24", "title": "Rich-club organization of the human connectome", "year": "2011" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b25", "title": "Attention is all you need", "year": "2017" }, { "authors": "Jin Ye; Junjun He; Xiaojiang Peng; Wenhao Wu; Yu Qiao", "journal": "Springer", "ref_id": "b26", "title": "Attention-driven dynamic graph 
convolutional network for multi-label image recognition", "year": "2020" }, { "authors": "Daoqiang Zhang; Yaping Wang; Luping Zhou; Hong Yuan; Dinggang Shen; ; ", "journal": "Neuroimage", "ref_id": "b27", "title": "Multimodal classification of alzheimer's disease and mild cognitive impairment", "year": "2011" }, { "authors": "Heliang Zheng; Jianlong Fu; Tao Mei; Jiebo Luo", "journal": "", "ref_id": "b28", "title": "Learning multi-attention convolutional neural network for fine-grained image recognition", "year": "2017" }, { "authors": "Changsong Zhou; Lucia Zemanová; Gorka Zamora; Claus C Hilgetag; Jürgen Kurths", "journal": "Physical review letters", "ref_id": "b29", "title": "Hierarchical organization unveiled by functional connectivity in complex brain networks", "year": "2006" } ]
[ { "formula_coordinates": [ 4, 237.3, 148.92, 243.29, 12.69 ], "formula_id": "formula_0", "formula_text": "[t 1 x , t 1 y , t 1 z , t 2 x , t 2 y , t 2 z , . . . , t Ω x , t Ω y , t Ω z ](1)" }, { "formula_coordinates": [ 4, 134.77, 366.31, 345.83, 25.34 ], "formula_id": "formula_1", "formula_text": "Γ i-1 = {γ i-1 n } Ni-1 n=1 , denoted as Γ i = {γ i n } Ni n=1" }, { "formula_coordinates": [ 4, 167.03, 402.54, 64.91, 14.87 ], "formula_id": "formula_2", "formula_text": "K i = {k i n } Ni n=1 ." }, { "formula_coordinates": [ 4, 197.91, 496.82, 278.44, 32.94 ], "formula_id": "formula_3", "formula_text": "L node = - 1 L L l=1 N l n=1 u∈K l n log exp u • γ l n /φ l n N l i =n exp u • γ l i /φ l n (2" }, { "formula_coordinates": [ 4, 476.35, 507.34, 4.24, 8.8 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 251.54, 539.96, 229.05, 27.89 ], "formula_id": "formula_5", "formula_text": "φ l n = u∈K l n u -γ l n 2 |K l n | • log(|K l n | + α)(3)" }, { "formula_coordinates": [ 5, 202.37, 174.53, 278.22, 31.09 ], "formula_id": "formula_6", "formula_text": "L edge = - 1 L L-1 l=1 N l n=1 log exp γ l n • P arent(γ l n )/τ N l i =n exp γ l n • γ l i /τ(4)" }, { "formula_coordinates": [ 5, 207.82, 427.5, 272.78, 25.41 ], "formula_id": "formula_7", "formula_text": "A = Attention(Q, K, V ) = sof tmax( QK T √ d k )V(5)" }, { "formula_coordinates": [ 5, 165.05, 461.89, 177.36, 10.87 ], "formula_id": "formula_8", "formula_text": "Q ∈ R N ×d k , K ∈ R N ×d k , V ∈ R N ×N" }, { "formula_coordinates": [ 5, 220.81, 532.06, 259.78, 11.28 ], "formula_id": "formula_9", "formula_text": "GCN (X) = ReLU D-1/2 Â D-1/2 XΘ(6)" }, { "formula_coordinates": [ 6, 234.43, 405.77, 241.92, 9.71 ], "formula_id": "formula_10", "formula_text": "L = L cls1 + L cls2 + L node + L edge (7" }, { "formula_coordinates": [ 6, 476.35, 405.77, 4.24, 8.8 ], "formula_id": "formula_11", "formula_text": ")" } ]
10.1109/WACV.2016.7477558
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b2", "b3", "b4", "b0", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b0" ], "table_ref": [ "tab_0" ], "text": "Modern face recognition architectures [2,3,4,5] have demonstrated exceptional performance on benchmark face recognition test sets such as Labeled Faces in the Wild (LFW) [1] and Celebrities in Frontal-Profile in the Wild (CFP-FP) [6]. achieving accuracy as high as 99.85% and 99.5%, respectively. Despite these impressive results, the main challenge for developing state-of-the-art (SOTA) industrial-ready applications does not necessarily lie in refining the algorithms but rather in obtaining relevant and large-scale datasets.\nPublicly available datasets satisfy conditions such as pose variability, image quality conditions, lightning condi-tions, and accessories. However, many of these datasets have been retracted [7] (e.g., VGGFace [8], MS1M [9], MegaFace [10]) rendering the remaining datasets scarce. The datasets still available are limited by several factors. Privacy and Ethical Concerns: The collection and use of facial images raise numerous privacy and ethical issues, which must be carefully addressed to comply with data protection regulations and ensure the responsible use of face recognition technology.\nData bias: Real-world datasets often suffer from imbalanced distributions of different demographic attributes or environmental conditions (.e.g, camera orientations, light conditions). This can lead to biased models that perform poorly on underrepresented ethnic, age or gender groups or challenging scenarios [11].\nAnnotation Quality: The accuracy of face recognition systems relies heavily on the quality of the annotations in the training dataset. Essentially, each class must contain only additional images from the same identity. Manual annotation is a time-consuming and labor-intensive process that may introduce errors or biases that can adversely affect the performance of the resulting models.\nThese limitations call for an alternative method of procuring data. In this article, we show that the usage of 3D rendered synthetic faces via the Datagen face generation platform [12,13,14], can outperform recent GAN methods [15,16] and produce comparable results to those achieved via 3D synthetic data pipelines [17].\nThe structure of this paper is as follows: In Section 2, we discuss previous related work. In Section 3, we provide details about the dataset generation and training paradigms for all our experiments. Section 4 presents our experiments and the results associated with each experiment. In Section 5, we discuss the results and their broader implications. Finally, in Section 6, we outline potential future work that we believe is necessary in the domain of synthetic-based face recognition to further boost current performance.\nThe contributions of this paper are as follows:\n• Our model attains results that are on par with the current state-of-the-art, and by leveraging the granular control our platform offers, we demonstrate the significance of intra-class variance. This is achieved by incorporating 3D rendered assets such as hats, makeup, object occlusions, hand occlusions, haircuts, and hair color changes, which contribute to the overall accuracy of our model.\n• We illustrate that by using a limited number of real images and identities, our model can achieve results comparable to those obtained by models trained on hundreds of thousands of real images. 
Specifically, we obtain an accuracy of 98.7% on LFW [1], whereas the current real-data SOTA is 99.86% for LFW (see Table 2).\n• We highlight how controlled data generation can contribute to a better understanding of the essential features for effective face-recognition algorithms. Specifically, we provide evidence of the importance of varied eyebrows by subsampling a small number of eyebrows from our dataset and showing that models trained on real data are highly susceptible to eyebrow structure variations. " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b17", "b18", "b19", "b20", "b6", "b21", "b8", "b1", "b22", "b7", "b9", "b6", "b23", "b14", "b24", "b25", "b26", "b15", "b14", "b27", "b15", "b28", "b29", "b30", "b31", "b32", "b33", "b34", "b16", "b3", "b35", "b16", "b36" ], "table_ref": [], "text": "Publicly released real faces datasets. Publicly available datasets satisfy conditions such as pose variability, image quality conditions, lightning conditions, and accessories. Current available datasets include WebFace260M, which comprises 260 million images of 4 million identities [18], IMDbFace that contains 1.7 million images of 59,000 identities [19], MegaFace2 with 4.7 million images from 672,000 identities [20], the CASIA-Webface dataset, which comprised about 500,000 images spanning roughly 10,500 identities, [21] and the Glint360K dataset, containing a substantial volume of 17 million images across 360,000 identities [7,22]. MS1M, another dataset that originally held approximately 10 million images of 100,000 celebrity identities, was retracted due to a high percentage of noise [9]. MS1MV1 and MS1MV2, the cleansed versions of MS1M, included approximately 3.8 million and 5.8 million images of 85,000 celebrity identities, respectively [2,23]. Additional widely used datasets are no longer available such as VGGFace [8] and MegaFace [10]) [7] rendering the remaining datasets scarce.\nGenerative models based Face generation. A dominant member of the deep generative algorithms, GANs [24] are used also in the domain of data generation for face recognition training [15,25,26,27]. SynFace [16] reached an accuracy of 88.98% on LFW by employing the GAN based model DiscoFaceGAN [15] to generate a training dataset consisting of 10K identities with 50 images per identity. The results were further improved to 91.97% by applying Identity Mixup (IM) in the form of linear interpolation between two identities in the embedded space, indicating that the learning algorithm can be challenged to better perform with identities that are close in the embedding space. Mixing the dataset with additional 2K real identities further increased the results up to 95.78%. DiscoFaceGan results were limited by two main factors, the algorithm struggles maintaining 3D consistency [28] on variable poses, and there is a limitation to the model's intra-class variance, most severely in its ability to generate variable facial expressions [16]. SFace [29] on real data reached 98.5%, while combining both approaches (knowledge transfer and regular classification training) reached 99.13%. However, this technique requires a pretrained face recognition model (.e.g., FaceNet [30] and so it is not purely trained on synthetic data).\nDiffusion models (DM) [31,32,33] have gained increasing popularity with a fast growing community and visually striking results. As of writing the article, there is a single study comparing the ability of different DM models to create realistic and diverse faces. 
In the experiment, the author generates data from different models, transforms the images to an ImageNet [34] embedded space, and calculates the Fréchet Inception Distance (FID) between the embeddings of the generated and real face images. As a baseline, the author splits the 10k real images into two sets and calculates the FID between them. As expected, the comparison between real face images will receive the lowest FID score, as they are from the same distribution.\nResults show that at the time of writing the article, DM generated face images received much higher FID scores (approximately x5 higher then the baseline score for the best model) indicating that DM generated face images are still not comparable to real images. [35].\n3D Rendered Face face generation. Microsoft released a synthetic dataset [17] consisting of 1.22M images with 110K identities, reaching a final accuracy of 96.17% on LFW trained on AdaFace [4] with a backbone of Resnet 100 [36]. Further more, they showed that using aggressive augmentations can help reduce the gap between real and simulated data, showing an increase from 88.07% to 94.55% in accuracy. DigiFace [17] was generated using the pipeline introduced at Wood et al. [37], using a generative model learned from 3D 511 unique individuals to generate a total of 110K identities. Out of the pre-mentioned methods, our data generation platform most resembles that one used in order to create the DigiFace dataset.\n3 Methods" }, { "figure_ref": [ "fig_0" ], "heading": "Dataset Generation", "publication_ref": [ "b11", "b12", "b13" ], "table_ref": [], "text": "Our dataset was generated using the Datagen [12,13,14] face generation SDK. The platform uses a physically-based rendering engine that renders 2D images from 3D mesh and texture models. The SDK enables easy creation of any desired distribution. Each datapoint consists of the RGB visible spectrum image, with additional meta-data and labels (e.g., key-points, segmentation maps, depth maps, normal maps, and more).\nFor this article, we sampled a subset of 30,000 identities from the identity pool (see figure 1. Our demographics consisted of North European (68.82%), African (8.52%), Hispanic (7.94%), Mediterranean (6.38%), Southeast Asian (5.01%), South Asian (3.32%).\nFor each identity, we generated 20 samples of 256x256 or 512x512 resolutions. Both the camera and the human were rotated with yaw, pitch and roll according to approximately normal distributions (compounded of several normal distributions) of mean 0, and variance of of 25 Every identity in our platform was associated with a specific default eye color, iris shape (texture), and eyebrow style during generation. We retained these default values for each identity, as they ensured uniqueness among the different identities in our pool.\nEach male sample was generated with 15% probability to receive a beard. Glasses were samples with 15% chance of appearing, regardless of gender. Eye gaze direction was also adjusted, uniformly sampled with horizontal sides between [-0.5, 0.5] and vertical sides between [0.85, 1] meters. The gaze distance was also sampled, ranging between [0.3, 6] meters.\nHair color for each sample was also modified, relative to the identity's default values for melanin, whiteness, roughness, and redness, with uniform changes within a range of ±25%.\nAdditional variability was generated by randomly adding makeup, occlusions, hats and randomized expressions. (see Figure 2) These additions were used for creating two different batches of data. 
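To make the per-sample randomisation above concrete, the sketch below draws one sample's appearance attributes using the probabilities and ranges quoted in this subsection (15% beard/glasses chances, the gaze and hair-colour ranges, zero-mean rotations with variance 25). The field names and the plain use of Python's random module are illustrative assumptions only and do not reflect the actual Datagen SDK interface; the two data batches built from these additions are described next.

```python
import random

def sample_appearance(identity, rng):
    """Draw one sample's attributes with the probabilities/ranges quoted above.
    Field names are illustrative, not the actual Datagen SDK schema."""
    return {
        "beard":     identity["gender"] == "male" and rng.random() < 0.15,
        "glasses":   rng.random() < 0.15,
        "gaze_xy":   (rng.uniform(-0.5, 0.5), rng.uniform(0.85, 1.0)),  # metres
        "gaze_dist": rng.uniform(0.3, 6.0),                             # metres
        # hair colour parameters varied within +/-25% of the identity defaults
        "hair": {k: v * rng.uniform(0.75, 1.25)
                 for k, v in identity["hair_defaults"].items()},
        # head/camera yaw, pitch, roll drawn from zero-mean normals (variance 25)
        "rotation": [rng.gauss(0.0, 25.0 ** 0.5) for _ in range(3)],
    }

rng = random.Random(0)
samples = [sample_appearance({"gender": "male",
                              "hair_defaults": {"melanin": 0.6, "redness": 0.1}}, rng)
           for _ in range(20)]
```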
First batch contained a single addition from the above list with probabilities of 3%, 2.5%, and 3.5%, for makeup, occlusions and hats, respectively. In the second batch which constitutes 17% of the data we allowed simultaneous additions of makeup, occlusions, hats, and additional randomized expressions, each generated with a probability of 15%, 50%, 70%, and 50%, respectively. 1The randomized expressions were added in order to increase variance on-top of our platform presets and were defined by randomly sampling a single or two action units (one for the eyes, and one for the mouth), with identical probabilities." }, { "figure_ref": [], "heading": "Training", "publication_ref": [ "b1", "b35", "b37", "b38", "b2", "b0", "b5", "b40", "b16", "b41", "b16" ], "table_ref": [], "text": "All models in this study were trained using the ArcFace loss [2], incorporating a margin of 0.5 and a scale of 64 with an IResNet50 architecture [36] backbone. Models were trained on a single 16GB NVIDIA Tesla-4 GPU with batch size set to 256 for 24 epochs with a multi-step learning rate decay by a factor of 0.1 at milestones 10, 18, and 22. In order to be as similar to our validation and test data prepossessing, we utilized the RetinaFace detector [38] for facial bounding box extraction, as opposed to using the facial bounding box modality provided by the Datagen platform. The key-point modalities were then applied to perform face alignment using the similarity transform (scale, rotation and translation). Images were resized to 112x112 and normalized, with a mean of 0 and standard deviation of 0.5 for all channels. All the code was implemented using pytorch [39].\nEvaluation Protocol. Our study employs the open-set protocol for evaluating the model's performance [3]. This approach entails using disjoint identities for testing, ensuring that they are not present in the training set. Our primary aim is to address the problem of face verification, which involves comparing pairs of facial images to ascertain if they originate from the same individual. During the test phase, we apply 10-fold cross-validation on our test set, deriving the threshold from the 9 folds and applying it to the remaining fold. The face verification average accuracy is reported on LFW [1], CFP-FP [6] and AgeDB [40] benchmark datasets.\nData Augmentations. Augmentations were adapted from [17] and implemented via the albumentations python package [41]. More specifically, we used horizontal flip with a probability of 0.5 (p=0.5), conversion to gray scale (p=0. Fine-tuning. When finetuning our model, we used our pre-trained backbone and replaced the arcface head to consist of the fine-tune number of parameters. Learning rates were adjusted as in DigiFace [17] so that the backbone will train with lr/100 and the head with lr/10. The learning schedule and number of epochs remained the same as the regular training regime." }, { "figure_ref": [], "heading": "Experiments and Results", "publication_ref": [], "table_ref": [], "text": "In this section, we present our experimental design and results. First, we show our result compared to the current synthetic SOTA. Secondly we present our finetuning results, and debate how they compare to both synthetic and real SOTA. Following that, we show how different intraclass-variance factors are affecting our results, and lastly we show a use-case of how controlled data can be utilized in order to understand the the importance of different face parts in face-recognition systems." 
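The trainings reported below use the ArcFace objective described in the Training subsection of Section 3 (margin 0.5, scale 64). For reference, a minimal PyTorch sketch of that additive-angular-margin head is given here; it is the standard formulation rather than the authors' exact implementation, and the embedding size and class count are whatever the IResNet50 backbone and training set provide.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive angular margin head: cos(theta + m) on the target class, scaled by s."""
    def __init__(self, embedding_size, num_classes, margin=0.5, scale=64.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_size))
        nn.init.xavier_uniform_(self.weight)
        self.margin = margin
        self.scale = scale

    def forward(self, embeddings, labels):
        # cosine similarity between L2-normalised embeddings and class centres
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).bool()
        # add the margin only to the ground-truth logit, then rescale
        logits = torch.where(one_hot, torch.cos(theta + self.margin), cosine)
        return F.cross_entropy(self.scale * logits, labels)
```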
}, { "figure_ref": [], "heading": "Pure synthetic training results", "publication_ref": [ "b16", "b16" ], "table_ref": [], "text": "Our pure synthetic training results are summarized in Table 1. We compare our results to those reported by DigiFace [17] models used, and the different amounts of data. When using DigiFace dataset (DigiFace [17]) trained on our arcFace pipeline, (rows 1 and 2 in Table 1) we surpass DigiFace by 1.48% on LFW and 3% on AgeDB. Results on CFP-FP are lower both on our pipline and the DigiFace published results. This might be attributed to the chosen distribution of yaw in our dataset, that does not include many profile images. The results reported contain all the variance discussed in the method section 3." }, { "figure_ref": [ "fig_1" ], "heading": "Finetune", "publication_ref": [ "b11", "b16" ], "table_ref": [ "tab_0" ], "text": "As previously mentioned, obtaining a large quantity of real data can be challenging. However, there are situations where a limited number of samples are accessible. To examine the effects of merging real data with our synthetic dataset, we finetune a model that encompasses our full range of variability. To assess the influence of small quantities of real data, we randomly sampled varying number of identities, ranging from 10 to 2000, with 20 samples per identity. The results are summarized in Figure 3. We demonstrate that our model can achieve high accuracy comparable to those trained on hundreds of thousands of real images, even with an extremely small number of real samples. Furthermore, we show that a fine-tuned model's performance significantly exceeds that of a model trained solely on the same amount of real data. A question raised here is whether the increase in accuracy is attributable to the photo-realism gap or to the variance gap (consisting of intra-class and inter-class variability). Assuming that real world variance cannot be encompassed within a thousand images (50 identities with 20 sample per identity) we can observe that a photo-realism gap for face recognition exists, and attributes to a reduction in the error rate by 39.1% for LFW , 24.5% for CFP-FP and 30% for Age-DB as accuracy increases from 94.91% to 96.9%, 83.38% to 87.46% and 77.58% to 82.28% respectively 3. These results are slightly higher then those previously demonstrated on segmentation benchmarks [12], and might be attributed to the higher dependencies of face-recognition models on actual rgb pixel values as opposed to relationships between neighboring pixels. In Table 2 we compare the results of fine-tuning with 40K real samples to those reported by [17]. We achieve competitive results on LFW and CFP-FP falling behind on 0.68% and 0.79%, respectively." }, { "figure_ref": [], "heading": "Effects of generated variance", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In order to explore the effects of the additional generated variance on our model we conducted two experiments. The first experiment focused on the contribution of hats, occlusions, makeup and randomized additional expressions as intra-class-variance. The second experiment was focused specifically on hair-cut variability. We separated these experiments since we hypothesized that hair-cut variance may have specific contribution to age-DB as it allows simulating significant changes of hair-cut over time often occurring along lifetime. For each experiment, we have 4: Additional variance experiment. 
In order to understand the impact of additional variance either generated together (multiple per image) or separate (single per image) on our model, we generated hats, makeup and occlusions with the probabilities of 3%, 2.5% and 3.5% respectively. The results show that the additional variance, although amassing to only 9% of our data improved results. Combined generated data, increased results by a lower value although amounting to a total variance of 17% of the data. This results might indicate that the additional multiple-per-image variance photos provided harder samples for the model to train probably due to multiple occlusions, which the model was not able to generalize well.\na baseline and the modified version. In our changed version, we swap baseline images with the samples containing the additional variance (.e.g., an identity had 20 baseline samples, after the swap, it has 15 old samples, and 5 new samples containing a hat). As a result, all samples are the same, except for the swapped samples. In our general variability test (Table 4) we are examining to see the effects of hats, makeup, occlusions and expressions either combined together (with a high probability of appearing together) or separated (one per image). Our datset consists of 27K unique identities with 20 samples per identity. The results show an increase between the baseline and the combined variability, increasing the averaged accuracy from 83.04% to 83.58% (LFW 93.15% to 93.65%). Introducing the variability separately increased the results further from 83.04% to 84.31% (LFW 93.15% to 94.27%).\nFigure 4: Illustration of the different hair style clusters used in our hair variability experiment (each row represents a different cluster). Hair assets were clustered into groups where each group had the same hairline, hair type (e.g., curly, straight, wavy) , thickness (.e.g., fine, medium) and general appearance (e.g., oily, dry).\nIn the second experiment, we examine the effect of adding hair cut variability to our dataset. Our dataset consists of 29K identities with 20 samples per ID. All the hair assets existing in the platform were clustered into groups of different types. Each group maintained the same hairline, hair type (e.g., curly, straight, wavy), thickness (e.g., fine hair, medium hair, coarse hair), and general appearance (e.g., oily, dry, thin, thick), with the only varying aspect being the haircut itself (see Figure 4). As in the previous experiment, there were two datasets, a baseline and the altered dataset, where the altered dataset consisted of the same ids and the majority of the previous samples, with only the samples consisting of the varying hair styles swapped. A total of 32.8% of the samples where swapped, averaging at 6 samples per identity containing variations of hair-cut.\nThe results are summarized in Table 3 and show that the average accuracy has increased from 84.34% to 85.29% (LFW from 94.5% to 94.91%). Most notably, the Age-DB accuracy increased by 2.91%. Age-DB is a diverse test set, featuring images of people at different stages of their lives. This improved performance likely reflects the model's enhanced ability to recognize faces with different hairstyles across multiple life stages, ultimately contributing to the increased accuracy on the Age-DB test set." 
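A minimal sketch of the swap protocol shared by both variance experiments is given below: for each identity, a subset of its baseline renders is replaced by variant renders (hats, makeup, occlusions or alternative haircuts) while every other sample stays identical, so the baseline and altered datasets differ only in the swapped images. The function name and the fixed swap count are illustrative assumptions; in practice the number of swapped samples varies per identity as described above.

```python
import random

def swap_in_variants(baseline_paths, variant_paths, n_swap, rng):
    """Build the altered set for one identity: keep len(baseline) - n_swap original
    renders and replace the remainder with variant renders."""
    kept = rng.sample(baseline_paths, len(baseline_paths) - n_swap)
    return kept + rng.sample(variant_paths, n_swap)

rng = random.Random(0)
# e.g. 20 baseline renders -> 15 kept + 5 hat renders for this identity
altered = swap_in_variants([f"id7/base_{i}.png" for i in range(20)],
                           [f"id7/hat_{i}.png" for i in range(8)],
                           n_swap=5, rng=rng)
```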
}, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "Controlled data use-case", "publication_ref": [ "b42", "b15" ], "table_ref": [], "text": "To underscore the efficacy of using controlled synthetic data, we follow a case study in the context sensitivity to different face parts [42]. This area is important as it increases our understanding of what different facial parts (.e.g., eyes, eyebrows, mouth, etc.) are imperative for a model to accurately classify identities.\nFor that purpose, we alternated two factors: eyes (colors and iris texture) and eyebrows.\nFirst, to show how the model reacts to valid intra-class variations (Figure 5 (b) the grey line). We measure the l2 distance in the FaceNet[? ] embedding space between a reference image of frontal pose and neutral expression and a set of varying poses and expressions on the same identity where all other aspects of the image remained similar (background, light conditions etc.,)\nPre-trained models are expected to be agnostic to pose and expression variations and therefore, we refer to this By retaining the reference, and changing the eyebrow by a single random sampled eyebrow, we see a leap in the l2 distance (see Figure 5 b blue line). The average difference between the two conditions (the gray and blue lines in Figure 5 b) is 0.415 ±0.107. Additionally, in the already high intra-class variance cases (e.g., 45 • and 0.9 intensity), the l2 distance can be above 1. This indicates that that eyebrows appearance is a descriptive factor for face recognition models and alternating it may lead to false rejection if the change is natural as part of styling or true rejection in case of fraud.\nIn the last experiment, we compare the effect of alternating eyebrows to alternating eyes. We use only frontal facing and neutral expression images in-order to check for the effects of 25 eyebrows and 100 eyes sampled from the platform pool. Different eye samples have different color and iris textures. We observe that changing the eyebrows alone account for an average increase of 0.664 in l2 distance ±0. 16. In contrast, the model is not sensitive to the eyes, this is expected, as the image sizes in modern face recognition models are usually 112x112 (160x160 for facenet), where the eyes inhabit a small number pixels." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b30", "b32", "b43", "b44", "b45", "b46", "b47", "b48", "b49" ], "table_ref": [], "text": "In this work, we have demonstrated the potential of using synthetic data for face recognition, particularly by leveraging the controlled environment offered by our 3D rendering pipeline. Our results reveal that our model, trained on synthetic data, can achieve results competitive with the state-of-the-art on multiple benchmark datasets. Moreover, we have shown that incorporating various forms of intraclass-variance in the dataset, such as hairstyles, makeup, hats, and occlusions, can improve the model's performance. This emphasizes the importance of intra-class variance in developing more robust and accurate face recognition models.\nWe also highlighted the advantage of fine-tuning our model with a limited number of real images. Our experiments indicate that even a small amount of real data can consid-erably improve the model's performance, achieving results comparable to those obtained by models trained on largescale real datasets. 
This finding suggests that our approach can be beneficial in scenarios where obtaining substantial volumes of real data is challenging.\nLastly, we demonstrated the value of controlled data generation in better understanding the essential features of face recognition. By incorporating the separation of variables, we are able to understand our model's weaknesses as well as what is needed to improve recognition by employing synthetic data. Our experiments showed that models trained on real data are highly sensitive to variations in eyebrow structure while not sensitive to eyes color and textures, suggesting that eyebrows can be an important factor in determining identity. This insight can help researchers and practitioners develop more robust and accurate face recognition systems by focusing on such discriminative features.\n6 Future Work\nWhile our study has shown promising results, several avenues for future work can be explored to further enhance the efficacy of synthetic data in the face recognition domain. With the growing power of DM models [31] grows the power of reducing the domain gap, and adding additional variance to controlled 3D synthetic data. Emerging research venues such as image to image text guided translation and impainting [33,43,44] as well as controlled data generation [45] might be utilized to increase the effectiveness of 3D generated data. However, in order for these models to be effective for face recognition tasks, there must be a viable and fast way for unique identity generation and preservation. The area of personalized SD [46,47] is still in its initial stages and further research is needed for investigating the combination of rendered data and diffusion models. An additional significant gap in deep face recognition pertains to aging [48,49]. The challenge arises from the natural biological transformations that occur throughout our lifetimes. These alterations, which influence the overall facial structure, including changes in the jawline, ears, nose shape, addition of wrinkles and age spots and more, complicate the task of maintaining consistent and accurate recognition. There is a growing need in generating synthetic data with reliable aging simulation." } ]
research. Our pipeline enables granular control of almost everything in the scene (e.g., pose, accessories, background, light, hair, eyebrows, eyes).
FACE RECOGNITION USING SYNTHETIC FACE DATA
[ { "figure_caption": "Figure 2 :2Figure 2: Example of the variability in our dataset for six different identities (a row per identity). Intra-class variance is enhanced by different assets (occlusions, hats, makeup, glasses, facial hair, hair color, hair-cut) as well as the varied poses, background and lighting conditions.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Finetuned vs. real results on (a) LFW, (b) CFP-FP, and (c) AgeDB datasets.The synthetic model was trained on 29K identities, with 20 samples per ID (dashed line). Fine-tuned models (red dots) were with a varying number of identities (represented by the x-axis). For reference, we also trained a model with each of the real sample batches (gray dots). Pure synthetic model outperforms training on batches of real data within the examined range (up to 20K samples). In addition, fine-tuning on allows significant improvement in results relative to pure synthetic training even with a very small amount of real data.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Alternating face parts. (a) The leftmost image is a simple front-facing enrollment image. The first upper row shows gradual changes in head rotation from 0°to 45°, while the second row illustrates gradual changes in facial expression intensity. (b) L2 distance sensitivity. Distances between a front facing neutral expression reference and our alternating conditions of yaw and expression intensity. The value of the gray curve are the baseline values considered valid within the intra-class variations while the blue values, well above this baseline, are prone to change the networks prediction. (c) Averaged l2 distances between a frontal facing, neutral expression with changing eyebrows (left) and eyes (right).A change to the eyebrows results in an average difference of 0.664, while a change to the eyes results in an average difference of 0.027. Overall, we observe that eyebrows are important in the context of facial recognition and alternating their appearance and shape may lead to predicting the photo as another identity, whereas the change in eyes-color and iris textures have much lower influence on the l2 distances probably due to the small face crop sizes used for most face verification models.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Comparison to DigiFace synthetic SOTA after finetuning. Results show that following finetuning with the same amount of data, we achieve close to SOTA results on LFW and CFP-FP datasets, falling behind on 0.68% and 0.79% respectively. These margins might be attributed to the different models (Arcface in ours vs Adaface) used for the training, as adaface showed superior published results on CFP-FP[4].", "figure_data": "reached", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "light and shadows that are reflected on the actor. For each sample, the expression was randomly sampled from our available presets (i.e., neutral, happiness, sadness, surprise, anger, fear, contempt, disgust and mouth open). 
All expressions with equal probability to appear.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Hair Variability Baseline94.583.8774.6784.34Hair-Cut Variability 94.9183.3877.5885.29", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" } ]
Omer Granoviter; Alexey Gruzdev; Vladimir Loginov; Max Kogan
[ { "authors": "B Gary; Manu Huang; Tamara Ramesh; Erik Berg; Learned-Miller", "journal": "", "ref_id": "b0", "title": "Labeled faces in the wild: A database for studying face recognition in unconstrained environments", "year": "2007-10" }, { "authors": "Jiankang Deng; Jia Guo; Niannan Xue; Stefanos Zafeiriou", "journal": "", "ref_id": "b1", "title": "Arcface: Additive angular margin loss for deep face recognition", "year": "2019" }, { "authors": "Weiyang Liu; Yandong Wen; Zhiding Yu; Ming Li; Bhiksha Raj; Le Song", "journal": "", "ref_id": "b2", "title": "Sphereface: Deep hypersphere embedding for face recognition", "year": "" }, { "authors": "Minchul Kim; Anil K Jain; Xiaoming Liu", "journal": "", "ref_id": "b3", "title": "Adaface: Quality adaptive margin for face recognition", "year": "" }, { "authors": "Xiang An; Jiankang Deng; Jia Guo; Ziyong Feng; Xuhan Zhu; Jing Yang; Tongliang Liu", "journal": "", "ref_id": "b4", "title": "Killing two birds with one stone:efficient and robust training of face recognition cnns by partial fc", "year": "2022" }, { "authors": "Soumyadip Sengupta; Jun-Cheng Chen; Carlos Castillo; M Vishal; Rama Patel; David W Chellappa; Jacobs", "journal": "", "ref_id": "b5", "title": "Frontal to profile face verification in the wild", "year": "2016" }, { "authors": "Fadi Boutros; Vitomir Struc; Julian Fierrez; Naser Damer", "journal": "", "ref_id": "b6", "title": "Synthetic data for face recognition: Current state and future prospects", "year": "2023" }, { "authors": "M Omkar; Andrea Parkhi; Andrew Vedaldi; Zisserman", "journal": "", "ref_id": "b7", "title": "Deep face recognition", "year": "2015" }, { "authors": "Yandong Guo; Lei Zhang; Yuxiao Hu; Xiaodong He; Jianfeng Gao", "journal": "Springer", "ref_id": "b8", "title": "Ms-celeb-1m: A dataset and benchmark for large-scale face recognition", "year": "2016" }, { "authors": "Ira Kemelmacher-Shlizerman; Steven M Seitz; Daniel Miller; Evan Brossard", "journal": "", "ref_id": "b9", "title": "The megaface benchmark: 1 million faces for recognition at scale", "year": "2016" }, { "authors": "Mei Wang; Weihong Deng; Jiani Hu; Xunqiang Tao; Yaohai Huang", "journal": "", "ref_id": "b10", "title": "Racial faces in the wild: Reducing racial bias by information maximization adaptation network", "year": "2019" }, { "authors": "Eli Friedman; Assaf Lehr; Alexey Gruzdev; Vladimir Loginov; Max Kogan; Moran Rubin; Orly Zvitia", "journal": "", "ref_id": "b11", "title": "Knowing the distance: Understanding the gap between synthetic and real data for face parsing", "year": "2023" }, { "authors": "Paul Yudkin; Eli Friedman; Orly Zvitia; Gil Elbaz", "journal": "", "ref_id": "b12", "title": "Hands-up: Leveraging synthetic data for handson-wheel detection", "year": "2022" }, { "authors": "Ran Shadmi; Jonathan Laserson; Gil Elbaz", "journal": "", "ref_id": "b13", "title": "Using synthetic images to uncover population biases in facial landmarks detection", "year": "2021" }, { "authors": "Yu Deng; Jiaolong Yang; Dong Chen; Fang Wen; Xin Tong", "journal": "", "ref_id": "b14", "title": "Disentangled and controllable face image generation via 3d imitative-contrastive learning. 
4", "year": "2020" }, { "authors": "Haibo Qiu; Baosheng Yu; Dihong Gong; Zhifeng Li; Wei Liu; Dacheng Tao", "journal": "", "ref_id": "b15", "title": "Synface: Face recognition with synthetic data", "year": "2021" }, { "authors": "Gwangbin Bae; Martin De; La Gorce; Tadas Baltrusaitis; Charlie Hewitt; Dong Chen; Julien Valentin; Roberto Cipolla; Jingjing Shen", "journal": "", "ref_id": "b16", "title": "Digiface-1m: 1 million digital face images for face recognition", "year": "2022" }, { "authors": "Zheng Zhu; Guan Huang; Jiankang Deng; Yun Ye; Junjie Huang; Xinze Chen; Jiagang Zhu; Tian Yang; Jiwen Lu; Dalong Du", "journal": "", "ref_id": "b17", "title": "Webface260m: A benchmark unveiling the power of million-scale deep face recognition", "year": "2021" }, { "authors": "Fei Wang; Liren Chen; Cheng Li; Shiyao Huang; Yanjie Chen; Chen Qian; Chen Change Loy", "journal": "", "ref_id": "b18", "title": "The devil of face recognition is in the noise", "year": "2018" }, { "authors": "Aaron Nech; Kemelmacher-Shlizerman", "journal": "", "ref_id": "b19", "title": "Level playing field for million scale face recognition", "year": "2017" }, { "authors": "Dong Yi; Zhen Lei; Shengcai Liao; Stan Z Li", "journal": "", "ref_id": "b20", "title": "Learning face representation from scratch", "year": "2014" }, { "authors": "Xinyi Wang; Jianteng Peng; Sufang Zhang; Bihui Chen; Yi Wang; Yandong Guo", "journal": "", "ref_id": "b21", "title": "A survey of face recognition", "year": "2022" }, { "authors": "Jiankang Deng; Yuxiang Zhou; Stefanos Zafeiriou", "journal": "", "ref_id": "b22", "title": "Marginal loss for deep face recognition", "year": "2017" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "", "ref_id": "b23", "title": "Generative adversarial nets", "year": "" }, { "authors": "Jianmin Bao; Dong Chen; Fang Wen; Houqiang Li; Gang Hua", "journal": "", "ref_id": "b24", "title": "Towards open-set identity preserving face synthesis", "year": "2018" }, { "authors": "Yujun Shen; Ping Luo; Junjie Yan; Xiaogang Wang; Xiaoou Tang", "journal": "", "ref_id": "b25", "title": "Faceid-gan: Learning a symmetry three-player gan for identity-preserving face synthesis", "year": "2018" }, { "authors": "Xianxu Hou; Linlin Shen; Zhong Ming; Guoping Qiu", "journal": "Pattern Recognition", "ref_id": "b26", "title": "Deep generative image priors for semantic face manipulation", "year": "2023" }, { "authors": "Yu Deng; Jiaolong Yang; Jianfeng Xiang; Xin Tong", "journal": "", "ref_id": "b27", "title": "Gram: Generative radiance manifolds for 3daware image generation", "year": "2022" }, { "authors": "Fadi Boutros; Marco Huber; Patrick Siebke; Tim Rieber; Naser Damer", "journal": "IEEE", "ref_id": "b28", "title": "Sface: Privacy-friendly and accurate face recognition using synthetic data", "year": "2022" }, { "authors": "Florian Schroff; Dmitry Kalenichenko; James Philbin", "journal": "", "ref_id": "b29", "title": "Facenet: A unified embedding for face recognition and clustering", "year": "2015" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "", "ref_id": "b30", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015-07-09" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b31", "title": "Hierarchical textconditional image generation with clip latents", "year": "2022" }, { "authors": 
"Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b32", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022-06" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Ieee", "ref_id": "b33", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Ali Borji", "journal": "", "ref_id": "b34", "title": "Generated faces in the wild: Quantitative comparison of stable diffusion, midjourney and dall-e 2", "year": "2022" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b35", "title": "Deep residual learning for image recognition", "year": "2015" }, { "authors": "Erroll Wood; Tadas Baltrušaitis; Charlie Hewitt; Sebastian Dziadzio; Matthew Johnson; Virginia Estellers; Thomas J Cashman; Jamie Shotton", "journal": "", "ref_id": "b36", "title": "Fake it till you make it: Face analysis in the wild using synthetic data alone", "year": "2021" }, { "authors": "Jiankang Deng; Jia Guo; Evangelos Ververas; Irene Kotsia; Stefanos Zafeiriou", "journal": "", "ref_id": "b37", "title": "Retinaface: Singleshot multi-level localisation in the wild", "year": "2020" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Kopf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b38", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b39", "title": "", "year": "2019" }, { "authors": "Stylianos Moschoglou; Athanasios Papaioannou; Christos Sagonas; Jiankang Deng; Irene Kotsia; Stefanos Zafeiriou", "journal": "", "ref_id": "b40", "title": "Agedb: the first manually collected, in-the-wild age database", "year": "2017" }, { "authors": "Alexander Buslaev; Vladimir I Iglovikov; Eugene Khvedchenya; Alex Parinov; Mikhail Druzhinin; Alexandr A Kalinin", "journal": "Information", "ref_id": "b41", "title": "Albumentations: Fast and flexible image augmentations", "year": "2020" }, { "authors": "Nova Hadi Lestriandoko; Raymond Veldhuis; Luuk Spreeuwers", "journal": "Frontiers in Computer Science", "ref_id": "b42", "title": "The contribution of different face parts to deep face recognition", "year": "2022" }, { "authors": "Tim Brooks; Aleksander Holynski; Alexei A Efros", "journal": "", "ref_id": "b43", "title": "Instructpix2pix: Learning to follow image editing instructions", "year": "2023" }, { "authors": "Hadas Orgad; Bahjat Kawar; Yonatan Belinkov", "journal": "", "ref_id": "b44", "title": "Editing implicit assumptions in text-to-image diffusion models", "year": "2023" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b45", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Inhwa Han; Serin Yang; Taesung Kwon; Jong Chul; Ye ", "journal": "", "ref_id": "b46", "title": "Highly personalized text embedding for image manipulation by stable diffusion", "year": "2023" }, { "authors": "Rinon Gal; Moab Arar; Yuval Atzmon; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "", "ref_id": "b47", "title": "Designing an encoder for fast personalization of textto-image models", "year": "2023" }, { 
"authors": "M Manisha; Sawant; M Kishor; Bhurchandi", "journal": "Artificial Intelligence Review", "ref_id": "b48", "title": "Age invariant face recognition: a survey on facial aging databases, techniques and effect of aging", "year": "2019" }, { "authors": "Leila Boussaad; Aldjia Boucetta", "journal": "Journal of King Saud University -Computer and Information Sciences", "ref_id": "b49", "title": "Deep-learning based descriptors in application to aging problem in face recognition", "year": "2022" } ]
[]
2023-05-17
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b2", "b3", "b4", "b5", "b6", "b8", "b9", "b13", "b14", "b15", "b17", "b18", "b19", "b20", "b20", "b19", "b19" ], "table_ref": [], "text": "A VIATION safety is an important issue in air trans- portation, receiving much attention from airlines and researches [1]- [3]. The flight test of an aircraft before delivery is an important means of ensuring aviation safety. Effective flight tests ensure the aircraft safety and reliability. However, when the flight test results are unreliable, the tests can not detect potential problems in aircraft. Thus, the sensors in flight tests need periodic calibration [4] to avoid unreliable results in tests. It is difficult to get the accurate sensor calibration period directly, and a short calibration period causes huge cost [5]. Therefore, it makes sense to analyze existing data to judge whether the sensor is abnormal.\nA sensor anomaly is a degradation of its performance to a certain threshold, usually caused by catastrophic failures and more subtle failures, for instance, outdated calibration, sensor deformation, and low-frequency oscillations [6]. The performance cannot be directly characterized and is implicit in the flight test data, which is characterized by long time series, imbalance and small differences between classes. Hence, this Hao Yang is with the School of Computer Science, and also with the School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, P. R. China. (E-mail: [email protected]).\nJunyu Gao, Yuan Yuan and Xuelong Li are with the School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Xi'an 710072, P. R. China. They are also with the Key Laboratory of Intelligent Interaction and Applications, Ministry of Industry and Information Technology, Xi'an 710072, P. R. China. (E-mail: [email protected]; [email protected]; [email protected]).\nCorresponding author: Xuelong Li task is temporal data anomaly detection, suffering from the problems of difficult extraction of global information and scarcity of abnormal data, which makes sensor anomaly detection a challenging task.\nFor detecting anomaly data accurately, some work attempts to use Statistical-based detection methods [7]- [9], which need to obtain prior empirical information previously. Nevertheless, in complex aviation scenarios, the data distribution often can not be represented using conventional distributions. Similarly, traditional machine learning-based methods [10]- [14], extract manual features from data, but sensor anomaly information is implied in the fight test data, making it difficult to determine and extract effective features. At the same time, the above two methods are also insensitive to time series. Natural Language Processing-based methods [15] have difficulty in covering global contextual information when encountering ultra-long time-series data, which also leads to these methods not working in this task. Computer Vision (CV)-based anomaly detection methods [16]- [18] are well developed for anomaly detection of images. However, due to the difference in data formats, CV-based methods are challenging to operate directly in temporal data anomaly detection.\nInspired by the speech recognition field, the sequence data are convert to images. For example, speech voice data are often pre-processed by MFCCs or PLPs [19] before being fed into the network. 
Some works [20], [21] attempt to convert temporal data into images and use CV-based methods to solve the temporal data anomaly detection problem. Examples include Recurrence Plot (RP) [21], Markov Transition Fields (MTFs) [20], etc. However, the images generated by RP suffer from the problem of ambiguity with the original time series. The size of an image generated by Gramian Angular Fields (GAFs) [20] is positively correlated with the length squared term of the temporal data. For this particular task, i.e. temporal data of length approximately 250, 000, the image generated by GAFs is extremely memory intensive and difficult to train for the network, as general convolutional kernels do not cover such large images.\nIn addition to the problems raised above, the imbalanced data distribution is also particularly important in anomaly detection, which leads to significant performance degradation of classical classification network architectures in imbalanced datasets. Specifically, the model is more inclined to learn majority class features and ignore minority class features, which results in the network predicting all samples as majority class during training. It is also easy to rely on existing data samples and over-fitting problems for trained models.\nTo remedy the abovementioned problems, a Graphical Temporal Data Analysis (GTDA) framework is developed in this paper to tackle anomaly detection in temporal data. It can not only convert the original one-dimensional data format into an image format based on maintaining the data time series relationship but also control the size of the generated image. Besides, it can also change the data distribution by oversampling to eliminate its influence. Specifically, the framework is divided into two steps:\n1) Primarily, we propose the Series-to-Image (S2I) method. A time-valued rectilinear coordinate system is established to reflect the temporal data features directly from the image. Moreover, the proposed method can control the size of image by controlling the range of the horizontal and vertical coordinates, avoiding the problem of difficult training of the network caused by too large generated images. 2) Additionally, to alleviate the problem of uneven data distribution, we propose Cluster-based Resampling approach using Euclidean Distance (CRD) and Variance-Based Loss (VBL). CRD identifies samples with similar characteristics between two classes by clustering, and then over-samples these samples to enable the network to better distinguish these data. At the same time, the Variance-Based Loss method is proposed to fine-tune the decision boundary as well as to stabilize the training. We argue that the network has worse aggregation ability for the class with higher variance (the distribution of values calculated by the softmax function after network inference). Therefore, greater weights are given to such class.\nIn summary, our main contributions are: " }, { "figure_ref": [], "heading": "II. RELATED WORK A. Anomaly Detection", "publication_ref": [ "b21", "b24", "b25", "b27", "b28", "b30", "b31", "b34", "b21", "b24", "b25", "b27", "b28", "b30", "b31", "b36", "b37", "b45" ], "table_ref": [], "text": "The mainstream flight-related anomaly detection methods are currently clustering-based [22]- [25], neighborhood-based [26]- [28], regression-based [29]- [31] and classificationbased [32]- [35] methods. 
Clustering-based [22]- [25] and neighborhood-based [26]- [28] methods require mining and exploiting relationships between data and require a high level of expert domain knowledge. Regression-based methods [29]- [31] detect anomalies by fitting to serial data and detecting anomalies ground on the residuals between inferred and actual values. But the regression-based method is difficult to adapt to the dynamically changing parameters of the aircraft during operation. Classification models [32]- [37] can also perform anomaly detection. Nonetheless, the model performance significantly degrades when an imbalanced dataset is involved. Recently, Numerous studies [38]- [46] attempted to alleviate the mentioned problem by designing balanced sampling methods or balanced loss functions, such as oversampling or undersampling the data." }, { "figure_ref": [], "heading": "B. Imaging Temporal Data", "publication_ref": [ "b18", "b46", "b47", "b19", "b19" ], "table_ref": [], "text": "Encouraged by the great success of CV in classification tasks, some works attempt to visualize temporal data and then use the network architecture of CV to solve temporal tasks. In speech recognition systems, speech signals from one-dimensional data are converted to two-dimensional data using MFCCs or PLPs [19]. RP [47] can reveal the internal structure of time series and analyse temporal periodicity, nonsmoothness, etc. Short Time Fourier Transform [48] splits the time domain signal into multiple segments through a sliding window and then performs an fast Fourier transform (FFT) on each segment to generate the time-frequency spectral information of the signal. GAFs [20] convert the temporal information from a right angle coordinate system to a polar coordinate system, using angles or angular differences to represent the time domain information. The time-insensitive Markovian transfer matrix is taken into account for temporal information to propose MTFs [20]. Unlike the above methods, the proposed module is more flexible and practical in imaging ultra-long temporal data and can better combine the advantages of classification models on intuitive images." }, { "figure_ref": [], "heading": "C. Imbalanced Learning", "publication_ref": [ "b37", "b38", "b39", "b48", "b49", "b50", "b53", "b54", "b56", "b40", "b45", "b40", "b41" ], "table_ref": [], "text": "Imbalanced learning alleviates the problem of model performance degradation on imbalanced datasets from two perspectives: re-sampling and re-weighting. Resampling methods focus on adding or subtracting samples from the training set. The random undersampling approach (RUS) [38] discards the majority class samples to equilibrate the loss. The SBC approach [39], [40] determines the majority number per cluster by clustering the samples of each cluster's majority class by selecting them in a different way. SMOTE [49] is established on the KNN to manually synthesize the majority class examples. The Borderline-SOMTE [50] algorithm solves the problem of sample overlap in the sample generation process of the SMOTE algorithm. There is also some works on generating pseudo-minority class samples based on VAE [51]- [54] and GAN [55]- [57]. Reweighting methods focus on optimizing the weights of different classes. Most works [41]- [46] change the weights according to the sample number per class. The weights of loss functions in diverse classes designed by [41], [42] are in inverse ratio to the corresponding sample number. 
Different from existing approach, we calculate the weights for each class adaptively by computing the variance of the samples inferred from the network. It can dynamically express the degree of aggregation of the network for each class of samples." }, { "figure_ref": [], "heading": "III. METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "A. Overview", "publication_ref": [], "table_ref": [], "text": "In this section, we develop a Graphical Temporal Data Analysis (GTDA) framework. First, a graphical approach, named S2I (Section III.B), is proposed to convert one-dimensional temporal data into images. Then, a resampling method CRD (Section III.C) is designed to find suitable minority class samples for oversampling by clustering, to alleviate the above problems and to be able to achieve coarse tuning of decision boundary. In addition to this, we fine-tune the bounds by VBL (Section III.D), which characterizes the degree of aggregation of the model by variance for each category. Fig. 2 illustrate the overall framework." }, { "figure_ref": [], "heading": "B. Series-to-Image (S2I)", "publication_ref": [], "table_ref": [], "text": "In this section, we propose Series-to-Image (S2I) to transform one-dimensional temporal data into images for feeding into a general architecture for CV. This is because in practical scenarios, such as during flight test, anomalies span a large time series and are difficult to be handled by NLP-based methods such as RNN and LSTM. Therefore, a framework 2 is proposed that allows the model to encode the temporal data in the same way as processing images and flexibly introduces various CV-based classification and anomaly detection modules to improve the model's capability to model semantic or contextual knowledge on a large scale range. Besides, we want to take advantage of the CV-based classification methods on intuitive images and try to understand the abnormalities implicitly embedded in the data changes from the waveform graph perspective in order to visualize the value shifts from the time dimension.. To this end, a waveform graph criterion called S2I is designed for temporal data to image. We explore the effectiveness of the image-based data approach through CV techniques, starting from several major parameters that affect waveform graphs. This approach has two advantages: 1) Fully exploiting the role of CV-based modules in advanced semantic parsing and global semantic coverage, and 2) Effectively using classification tasks and anomaly detection methods in CV." }, { "figure_ref": [], "heading": "C. Cluster-based Resampling Approach using Euclidean Distance (CRD)", "publication_ref": [ "b38", "b38" ], "table_ref": [], "text": "Due to the extreme imbalanced distribution between classes, the Cluster-based Resampling approach using Euclidean Distance (CRD) method is proposed to alleviate the imbalance in the training datasets distribution by oversampling the minority class samples. We first cluster all the training samples into some clusters. According to [39], the samples in different clusters have different features and they have similar features when they are in the one cluster. So if there are more majority classes in a cluster, that cluster has more majority class features. 
However, unlike [39], which retains more prominent features when under-sampling, we believe that such clusters, which with far more negative samples than positive, or conversely, already have distinct negative or positive sample features, and therefore do not need to retain more information when oversampling or under-sampling. With this in mind, we propose CRD.\nLet the number of samples in the training set be N , where the number of majority (negative) samples is N M A and the number of minority (positive) samples is N M I . we first cluster the training set into k clusters. N i M A and N i M I denote the number of negative and positive class samples in the i th cluster, respectively. Therefore, the ratio of negative class to positive in the i th cluster is N i M A /N i M I . Assume that the ratio of the negative and positive sample number after generation is set to m : 1, then the formula is as follows: \nNN i M I = N i M A m × 1 k -1 × k i=1 (1 - N i M A /N i M I k j=1 N j M A /N j M I ),(1) where\nN i M A /N i M I k j=1 N j M A /N j M I\nis designed to reduce the oversampling frequency of the proportionally larger clusters and increase the oversampling frequency of the positive samples in the proportionally smaller clusters. The value of k i=1 (1 -\nN i M A /N i M I k j=1 N j M A /N j M I\n) can be calculated to be k -1 and therefore multiplied by 1 k-1 , making the whole coefficient considered as a weighting.\nIn the process of clustering, the number of samples in each cluster can be determined by using formula 1. And in the concrete implementation, the Euclidean distance is used to calculation the distance between any two samples." }, { "figure_ref": [], "heading": "D. Variance-Based Loss (VBL)", "publication_ref": [ "b44", "b57", "b59" ], "table_ref": [], "text": "We then propose a Variance-Based Loss (VBL) to equilibrate the loss during training by adaptively attaching weights to each class. The design of VBL is considered from two aspects: 1) In the object classification, cross-entropy loss is a common and effective function for balanced datasets. But in imbalanced datasets, the small number of abnormal samples and the equiprobability of sample selection in the training process make the minority class contribute less to the loss function, which eventually leads the model to tend to predict all samples to be predicted as majority class. 2) Although CRD pulls the decision boundary back between the two classes and moves it away from the minority class by oversampling the minority class more, the decision boundary is still sometimes 9 Generate (X, y) ← ROS(X , y , u, NN M I ); 10 return (X, y). difficult to train. This is because the redundancy of the data generated during the execution of CRD still causes the model to be easily over-fitted to the minority class.\nTo stabilize the training process, some recent works [45], [58]- [60] add weights to the corresponding losses by the number of samples from different classes. However, as mentioned in Section II.D, the sample number is hardly effective to characterize the degree of training of the samples in the training process. Unlike previous approaches, VBL characterizes the degree of aggregation of the network for each class by the variance inferred from the model. 
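Before VBL is detailed, the following sketch summarises the CRD procedure above: Euclidean k-means over the training set, then per-cluster random oversampling of the minority (positive) class towards an m:1 majority/minority ratio. The exact per-cluster quota of Eq. (1), which additionally reweights clusters by their share of the overall majority/minority ratios, is simplified here to a per-cluster m:1 target, and scikit-learn's KMeans stands in for the clustering step.

```python
import numpy as np
from sklearn.cluster import KMeans

def crd_oversample(X, y, k=10, m=1.0, seed=0):
    """Simplified CRD sketch: cluster with Euclidean k-means, then random-oversample
    minority samples (y == 1) inside each cluster towards an m:1 majority/minority
    ratio. Eq. (1)'s cluster reweighting is omitted for brevity."""
    rng = np.random.default_rng(seed)
    clusters = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    X_parts, y_parts = [X], [y]
    for c in range(k):
        idx = np.where(clusters == c)[0]
        pos = idx[y[idx] == 1]
        n_maj = int(np.sum(y[idx] == 0))
        extra = int(n_maj / m) - len(pos)
        if len(pos) > 0 and extra > 0:
            pick = rng.choice(pos, size=extra, replace=True)
            X_parts.append(X[pick])
            y_parts.append(y[pick])
    return np.concatenate(X_parts), np.concatenate(y_parts)
```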
{ "figure_ref": [], "heading": "D. Variance-Based Loss (VBL)", "publication_ref": [ "b44", "b57", "b59" ], "table_ref": [], "text": "We then propose a Variance-Based Loss (VBL) to equilibrate the loss during training by adaptively attaching weights to each class. The design of VBL is motivated by two observations: 1) In object classification, the cross-entropy loss is a common and effective function for balanced datasets. However, on imbalanced datasets, the small number of abnormal samples and the equiprobable sample selection during training make the minority class contribute less to the loss function, which eventually leads the model to predict almost all samples as the majority class. 2) Although CRD pulls the decision boundary back between the two classes and moves it away from the minority class by oversampling the minority class more, the decision boundary is still sometimes difficult to train. This is because the redundancy of the data generated during the execution of CRD still causes the model to be easily over-fitted to the minority class.

To stabilize the training process, some recent works [45], [58]-[60] weight the corresponding losses by the number of samples in each class. However, as mentioned in Section II.D, the sample number can hardly characterize how well each class is trained during the training process. Unlike previous approaches, VBL characterizes the degree of aggregation of the network for each class by the variance inferred from the model. Specifically, if a class has a large variance inferred by the model, it means that the network is not aggregating this class well enough, and we want VBL to give more weight to this class so that the decision boundary is pushed away from this class and closer to the other class. To implement this idea, we add variance-related adaptive weights to the loss during the training process.

Formally, let $F$ represent a mapping from an input sample $x$ to the prediction after model inference, generating the output $z$. The loss $L(z, y)$ in Equation 2 attaches a class-dependent weight $\omega$ to the cross-entropy term, where $\omega$ represents the weight corresponding to the ground-truth label. The weights are variance-dependent, and the computation of the weights is proportional to the variance. According to the previous analysis, the loss is inversely related to the variance, so the loss function needs to be inversely proportional to the weights. On this basis, the weight can be given by formula 3:

$$\omega_y = \begin{cases} \omega_P, & y = N \\ \omega_N, & y = P. \end{cases} \quad (3)$$

In Equation 2, $L(z, y)$ adaptively adjusts the weight of each class based on the variance generated by each class during network training. The aim is to give more weight to classes with poor model aggregation; that is, the class with larger variance has more weight and the class with better aggregation has less weight. $\omega_y$ can be expressed by formula 4:

$$\omega_{y,n} = \alpha_n \omega_{y,n-1} + (1 - \alpha_n) V_{y,n}, \quad (4)$$

where $\omega_y$ is given by a recursive formula and $n$ denotes the recursion step. This is because the variance $V_y$ needs to be accumulated during the training process. According to the law of large numbers, enough samples are needed to fit the overall distribution, so $\omega_y$ cannot be directly replaced by $V_y$. $\alpha$ is a linear decay factor. The calculation of $V_y$ requires the mean value $A_y$. $V_{y,n}$, $A_{y,n}$ and $\alpha_n$ are given by:

$$V_{y,n} = \frac{n-1}{n^2}\left(S(z_y) - A_{y,n-1}\right)^2 + \frac{n-1}{n} V_{y,n-1}, \quad (5)$$

$$A_{y,n} = A_{y,n-1} + \frac{S(z_y) - A_{y,n-1}}{n}, \quad (6)$$

$$\alpha_n = \alpha_{n-1} - \gamma, \quad (7)$$

where $S(z_y)$ in Equation 6 denotes the Softmax, which converts the model outputs $z_y$ into a prediction probability distribution, and $\gamma$ in Equation 7 is a constant hyperparameter. Algorithm 2 (Algorithm to realize VBL; Input: label $y$ and outputs $z$; Output: loss) starts as follows: 1 Initialization: 2 $\gamma \leftarrow$ batchsize$/N$, $A \leftarrow 0$, $V \leftarrow 0$, 3 $n \leftarrow 0$, $\alpha \leftarrow 1$, $\omega \leftarrow [1, 1]$; 4 if ..." },
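The VBL bookkeeping can be sketched as follows in PyTorch. The running mean and variance of each class's softmax score are updated with the recursions in Equations (5)-(7), the weights follow Equations (3)-(4), and the weighted term is applied to a standard cross-entropy, which is one plausible reading of Equation (2); per-sample updating and the other details are assumptions of this sketch rather than the exact Algorithm 2.

```python
import torch
import torch.nn.functional as F

class VarianceBasedLoss:
    """Sketch of VBL for binary classification (class 0 = negative, 1 = positive)."""

    def __init__(self, gamma):
        self.gamma = gamma              # linear decay step, e.g. batchsize / N
        self.alpha = 1.0
        self.n = torch.zeros(2)         # per-class update counts
        self.mean = torch.zeros(2)      # A_y, running mean of softmax scores, Eq. (6)
        self.var = torch.zeros(2)       # V_y, running variance, Eq. (5)
        self.omega = torch.ones(2)      # omega_y, Eq. (4)

    def __call__(self, logits, targets):
        with torch.no_grad():
            probs = F.softmax(logits, dim=1).detach().cpu()
            t = targets.detach().cpu()
            for y in (0, 1):
                for s in probs[t == y, y]:          # one sample at a time
                    self.n[y] += 1
                    n = self.n[y]
                    delta = s - self.mean[y]
                    self.var[y] = (n - 1) / n**2 * delta**2 + (n - 1) / n * self.var[y]
                    self.mean[y] += delta / n
                self.omega[y] = self.alpha * self.omega[y] + (1 - self.alpha) * self.var[y]  # Eq. (4)
            self.alpha = max(self.alpha - self.gamma, 0.0)                                   # Eq. (7)
            # Eq. (3): a sample of one class is weighted by the other class's omega.
            weights = self.omega.flip(0)[t].to(logits.device)
        return (weights * F.cross_entropy(logits, targets, reduction="none")).mean()
```

In a training loop this object would simply replace the plain cross-entropy criterion and be called once per batch.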
{ "figure_ref": [], "heading": "IV. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "This section is divided into four subsections: experimental setup, ablation experiments, experimental results, and experimental analysis. In the experimental setup, we introduce the evaluation metrics, the datasets and the implementation details. A full-scale ablation experiment then validates the efficacy of the proposed modules. The experimental results of GTDA are then presented on the four selected datasets. Finally, in the experimental analysis subsection, this paper analyzes the effectiveness of GTDA based on extensive experimental comparisons." }, { "figure_ref": [], "heading": "A. Experimental Setup", "publication_ref": [ "b60", "b61", "b61" ], "table_ref": [ "tab_3", "tab_3", "tab_4" ], "text": "Evaluation Metrics. Following the wide range of evaluation metrics used in object classification, performance is reported with four metrics, namely accuracy, precision, recall, and F1-score. These metrics are calculated using a confusion matrix, which is defined in Table II. AC and PC in Table II indicate Actual Condition and Predicted Condition. TP and TN are the test results that correctly predict positive and negative samples, while FN and FP denote the test results that incorrectly predict the positive and negative samples, respectively. The relevant indicators are thus given by Equation 8:

$$Precision = \frac{TP}{TP + FP}, \quad Recall = \frac{TP}{TP + FN}, \quad Acc = \frac{TP + TN}{TP + TN + FP + FN}, \quad F1\text{-}score = \frac{2 \times Precision \times Recall}{Precision + Recall}. \quad (8)$$

Datasets. The Flights dataset is selected in this paper; it collects a large amount of ultra-long temporal flight test data. It contains two classes, a minority (positive) class and a majority (negative) class, and consists of 700 samples (350 training and 350 testing samples). There are 310 negative and 40 positive samples in the training set, and 309 negative and 41 positive samples in the test set. Among all the samples, the minimum, maximum and average sample lengths are 24,628, 477,888 and 254,076, respectively. In addition, three datasets from the UCR Archive [61] are selected to verify GTDA's efficiency, namely Earthquakes, HandOutlines, and Herring. The details of these datasets are listed in Table III.

Implementation Details. ResNet-34 [62] is chosen as the backbone for this task because the network architecture is not the focus of our study and, on the other hand, ResNet-34 [62] is used by many classification networks. In addition, we set the same hyperparameters for all experiments. Specifically, the learning rate is set to 0.0002 and the number of epochs to 50. The relevant parameters for CRD, S2I and VBL are elaborated in the ablation experiments section." }, { "figure_ref": [], "heading": "B. Ablation Study", "publication_ref": [ "b61", "b9" ], "table_ref": [], "text": "We perform ablation experiments in two ways. First, ignoring resampling as well as reweighting, i.e., without CRD and VBL, we verify the effects of different parameters on S2I and then compare the effects of S2I on the experiments. Second, on the basis of S2I, we use S2I+ResNet-34 [62] to fully explore the effects of CRD and VBL on model performance.

The Influence of S2I Parameters. The scale and type of the curve, and whether or not the data is normalized, are all taken into account. Specifically, we use the pyplot functions in Matplotlib to set the width of the sensor curve in the image, with the widths set to {0.5, 1.0, 1.5, 2.0, 2.5}. The type of curve is either line or point. Similarly, the point size takes values in {0.5, 1.0, 1.5, 2.0, 2.5}, using the pyplot parameter named markersize. The data is normalized using Min-Max Normalization, which scales the values of each feature point to [0, 1]. For each sample, the value of the $k$-th feature point is given by the following equation:

$$\hat{x}_{ik} = \frac{x_{ik} - \min\{x_i\}}{\max\{x_i\} - \min\{x_i\}}, \quad (9)$$

where $\max\{x_i\}$ and $\min\{x_i\}$ denote the maximum and minimum values of all feature points in the $i$-th sample, respectively. Equation 9 maps $x_{ik}$ to [0, 1]. We denote the scale of the curve as the set A, where A = {0.5, 1.0, 1.5, 2.0, 2.5}. Similarly, the type set B = {line, point} and the normalization operation C = {Normal, non-Normal} are defined. Thus, all configurations of S2I are given by formula 10:

$$A \times B \times C = \{(x, y, z) \mid x \in A, y \in B, z \in C\}, \quad (10)$$

where $\times$ denotes the Cartesian product. The optional parameter details are listed in Fig. 5.
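The configuration sweep of Equations (9) and (10) can be enumerated directly, as in the sketch below; the render_fn argument stands for an S2I-style rendering helper such as the one sketched in Section III.B, and its signature is an assumption of this illustration.

```python
from itertools import product
import numpy as np

SCALES = (0.5, 1.0, 1.5, 2.0, 2.5)          # set A: line width / marker size
CURVE_TYPES = ("line", "point")             # set B
NORMALIZATION = ("Normal", "non-Normal")    # set C

def min_max(x):
    """Eq. (9): scale one sample's feature points to [0, 1]."""
    x = np.asarray(x, dtype=np.float32)
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def s2i_configurations():
    """Enumerate A x B x C from Eq. (10): 5 x 2 x 2 = 20 settings."""
    return list(product(SCALES, CURVE_TYPES, NORMALIZATION))

def render_all(series, render_fn):
    """Render one series under every S2I configuration for the parameter study."""
    images = {}
    for scale, ctype, norm in s2i_configurations():
        data = min_max(series) if norm == "Normal" else np.asarray(series, dtype=np.float32)
        images[(scale, ctype, norm)] = render_fn(data, scale=scale, curve_type=ctype)
    return images
```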
As shown in Fig. 6, we conduct experiments with different parameters and evaluate them with the metrics defined in Equation 8. The vertical axis presents the metric values on the Flights dataset. The horizontal axis presents the different scales of the curve generated from the temporal data after S2I. In the same way, the shape and color of each icon correspond to the curve type and the normalization operation, respectively. Considering the four evaluation metrics together, the best-performing (x, y, z) configuration can be identified from Fig. 6 and is used in the subsequent experiments.

CRD and VBL analysis. Table V describes the detection results of each component with the S2I module using ResNet-34 as the backbone. For a fair comparison, we trained the backbone using the standard Softmax cross-entropy loss and without any resampling method. First, we verify the performance of CRD. CRD improves all evaluation indicators compared to the baseline, i.e., +0.017 for accuracy, +0.213 for precision, +0.219 for recall and +0.239 for F1-score. These results demonstrate that CRD can significantly adjust the decision boundary and help the network train the minority samples better. We then investigate the efficacy of VBL. The recall and F1-score increase by 0.658 and 0.237 compared to the baseline. Analyzing the above comparative data, CRD improves performance from the precision perspective, while VBL concentrates more on the recall of abnormal samples. Next, we verify the effectiveness of combining CRD and VBL on top of S2I and ResNet-34. Specifically, CRD and VBL work collaboratively and improve precision by 0.137, recall by 0.365 and F1-score by 0.289 compared to the baseline. Notably, the combination of CRD and VBL increases precision by 0.150 compared to VBL alone, and recall by 0.146 compared to CRD alone. Therefore, the complete method achieves the best F1-score, i.e., +0.289 over the baseline, +0.05 over CRD, and +0.052 over VBL." }, { "figure_ref": [], "heading": "C. Experimental Results", "publication_ref": [], "table_ref": [], "text": "In this section, we test the performance of CRD and VBL on Flights and three other datasets with ResNet-34 and VGG-16 as baselines. For a fair comparison, all the temporal samples of the four datasets are converted to image data via S2I. Table IV lists the statistical results of the four evaluation indicators. The proposed GTDA denotes the corresponding baseline equipped with CRD and VBL. Compared with the baseline alone, our method performs better. Taking the F1-score as an example, the proposed modules increase it by 0.038, 0.121, 0.012, 0.009, 0.011, and 0.289 compared with the corresponding backbone and dataset, respectively." }, { "figure_ref": [ "fig_8", "fig_8" ], "heading": "D. Experimental Analysis", "publication_ref": [ "b62", "b62", "b62", "b61", "b63" ], "table_ref": [ "tab_5", "tab_5" ], "text": "Qualitative Results. For extremely imbalanced datasets, accuracy cannot effectively evaluate the model's fitting ability. Taking the Flights dataset as an example, the 350 test samples contain 309 negative samples and 41 positive samples, so the imbalance ratio reaches 7.54:1. This means the accuracy is 0.883 even when the model predicts all samples as negative, which is an inaccurate evaluation of the model on an imbalanced dataset. The F1-score takes both precision and recall into account, and its evaluation is fair and effective. The two values bolded in red in Table IV indicate that the backbone has a higher accuracy without CRD and VBL. But this accuracy cannot express the degree of model fit, because both models have a lower F1-score; that is, both models tend to predict positive samples as negative.
Due to the insufficient number of layers of VGGNet-16 [63] and its weak fitting ability, it cannot effectively fit samples of different classes in the Herring dataset, and all samples are predicted to be the majority class. On the Flights dataset, VGGNet-16 [63] still fails to fit, but with the help of the proposed modules (CRD+VBL) it fits efficiently and performs well. Objectively, ResNet-34 [62] has a better fitting ability than VGGNet-16 [63], but in terms of F1-score, ResNet-34 [62] is still 0.236 lower than GTDA.

Embedding Visualization. As shown in Fig. 7, t-SNE [64] is used to visualize the performance of the proposed GTDA on the Flights and HandOutlines datasets. For an efficient comparison, we first visualize all samples of the initial datasets. The "before training" column in Fig. 7 shows that the HandOutlines dataset has more obvious data separability than Flights after t-SNE dimensionality reduction, while the Flights dataset is almost inseparable after dimensionality reduction. Then, the baseline is trained on the datasets after S2I. Finally, we add CRD and VBL to the baseline and visualize these results. On the Flights dataset, the baseline mixes most of the positive samples with some negative samples, which is most obvious in the upper part of the figure, whereas the proposed GTDA concentrates more positive samples in the upper left of the figure. Combined with Table IV, GTDA has a higher F1-score than the baseline. On the HandOutlines dataset, the model trained by GTDA clearly separates positive instances from negative ones after visualization, but the baseline does not." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "This paper focuses on anomaly detection for ultra-long temporal flight test data. To this end, we develop a general framework (GTDA) for studying time-series data using CV-based methods. It contains three modules: an imaging module for temporal data (S2I), a resampling module (CRD) and a reweighting module (VBL). Specifically, S2I converts temporal data into images. CRD oversamples the minority class by clustering to adjust the decision boundary coarsely. VBL adaptively attaches different weights to each class to adjust the intensity of its effect on the decision boundary. Extensive experiments demonstrate GTDA's effectiveness in ultra-long temporal data anomaly detection. Besides, CRD and VBL promote the precision and recall of the model, respectively, and their synergy improves the F1-score. In future work, we will concentrate on exploring the role of cross-modal flight test data in anomaly detection, and further investigate how to extract effective features from multi-modal flight test data." } ]
Anomaly detection in temporal sensor data under aviation scenarios is a practical but challenging task: 1) it is difficult to extract temporally correlated contextual information from very long sequences; 2) anomalous data are rare in the time series, causing a normal/abnormal imbalance that makes the detector's classification degenerate or even fail. To remedy the aforementioned problems, we propose a Graphical Temporal Data Analysis (GTDA) framework. It consists of three modules, named Series-to-Image (S2I), Cluster-based Resampling Approach using Euclidean Distance (CRD) and Variance-Based Loss (VBL). Specifically, to better extract global information from sensor time series, S2I converts the data into curve images that expose abnormalities in the data changes. CRD and VBL balance the classification to mitigate the unequal distribution of classes: CRD extracts minority samples with features similar to majority samples by clustering and over-samples them, and VBL fine-tunes the decision boundary by balancing the fitting degree of the network to each class. Ablation experiments on the Flights dataset indicate the effectiveness of CRD and VBL on precision and recall, respectively. Extensive experiments demonstrate the synergistic advantages of CRD and VBL on F1-score on Flights and three other temporal datasets.
Imbalanced Aircraft Data Anomaly Detection
[ { "figure_caption": "Fig. 1 :1Fig. 1: Four exemplars generated by the imaging technique from the Flights dataset. (a) and (b) are normal data (negative or majority class samples), (c) and (d) are abnormal data (positive or minority class samples).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: The flowchart of the proposed GTDA framework, which contains three modules: 1) CRD is designed to balance datasets by changing the data distribution. 2) S2I is used to imaging the temporal data. 3) VBL weights the loss function based on the data distribution after the classifier model inference.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: CRD. Oversample positive samples close to the decision boundary to improve the model's ability to discriminate similar samples from different classes.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "mindicates the sample number in the positive class in the training set after oversampling, and k j=1 N j M A /N j M I indicates the total proportion of majority and minority classes in each cluster in the original training set. Clusters with larger proportions, i.e. clusters with more majority or minority class features, are easier to classify during training. Clusters with smaller proportions indicate that the positive and negative samples have similar features, and also indicate that these samples are close to the decision boundary. By oversampling the positive examples of these clusters, the network is more likely to extract their features and distinguish them.. Therefore, 1 -", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 :1Algorithm to realize CRD. Input: (X , y ), m, and k Output: (X, y)1 Initialization: m ← 1, k ← 6 2 # Cluster the X into k clusters. 3 Cluster (X , u) ← k-means(X ,k); 4 # Calculate N M A and N M I in each cluster. 5 Calculate N M A , N M I ← cal 1(X , y , u); 6 # Calculate NN M I in each cluster after CRD. 7 Calculate NN M I ← cal 2(N M A , N M I , k); 8 # Generate the new balanced training dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: VBL. Modifying the intensity of the decision boundary adjustment adaptively according to the fitting degree of the model to each class.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :Fig. 6 :56Fig. 5: Setting of S2I parameters and images generated under specific parameters. (a) The process of Cartesian product about the curve scale A, type B and the normalization operation C. × indicates Cartesian Product. (0.5, line, Normal), (1.5, point, Normal) and (2.5, line, non-Normal) in (b), (c) and (d), respectively.", "figure_data": "", "figure_id": "fig_6", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "(a). Further, Fig. 5 (b), (c), and (d) are drawn to more intuitively distinguish between the different operations. More specifically, (0.5, line, Normal) is set in the Fig. 5 (b). The parameters of Fig. 5 (c) and (d) are set to (1.5, point, Normal) and (2.5, line, non-Normal), respectively.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 7 :7Fig. 
7: The visualization of Flights and HandOutlines datasets with t-SNE, and results of the baseline and the proposed GTDA in these datasets.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Functions or variables in Alg.1 and Alg.2.", "figure_data": "Functions&VariablesExplanationXoriginal training datasetyoriginal ground truth labelXtraining dataset after CRDyground truth label after CRDucluster labelk-meansk-means algorithmcal 1calculate the number of samples per clustercal 2calculate NN M I by ( 1 )ROSRandom Over-Sampling [38] operationnrecursion timesPprobabilityS()softmax operationω convertconvert ω by ( 3 )ω updateupdate ω by ( 4 )variance updateupdate V by ( 5 )mean updateupdate A by ( 6 )α updateupdate α by ( 7 )loss computecompute loss by ( 2 )", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Confusion Matrix", "figure_data": "PCPositiveNegativeACTrueTrue positive (TP)False negative (FN)FalseFalse positive (FP)True negative (TN)", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Statistics of the three temporal datasets and the Flights dataset", "figure_data": "NameTypeLengthTrainTestClassEarthquakesSensor5123221392HandOutlinesImage2, 7091, 0003702HerringImage51264642FlightsSensor254, 0763503502", "figure_id": "tab_4", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "The performance of typical classification methods and the proposed GTDA on the four datasets. All experiments use S2I to convert temporal data to images.", "figure_data": "MethodBackboneS2ICRDVBLEarthquakesHerringAccPrecisionRecallF1-scoreAccPrecisionRecallF1-scoreVGG-16 [63]VGG-160.7050.3330.1710.226Does not convergeGTDAVGG-160.6470.3250.3710.347Does not convergeResNet-34 [62]ResNet-340.6190.3040.4000.3460.6250.5330.6150.571GTDAResNet-340.6760.3680.4000.3840.6410.5520.6150.582MethodBackboneS2ICRDVBLHandOutlinesFlightsAccPrecisionRecallF1-scoreAccPrecisionRecallF1-scoreVGG-16 [63]VGG-160.9300.9530.9370.945Does not convergeGTDAVGG-160.9430.9390.9740.9560.8310.3330.4390.379ResNet-34 [62]ResNet-340.9350.9610.9360.9480.8630.2670.0980.143GTDAResNet-340.9490.9540.9660.9600.8570.4040.4630.432", "figure_id": "tab_5", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "The effectiveness of the CRD and VBL on Flights dataset", "figure_data": "CRDVBLAccPrecisionRecallF1-score0.8630.2670.0980.1430.8800.4800.3170.3820.7110.2540.7560.3800.8570.4040.4630.432", "figure_id": "tab_6", "figure_label": "V", "figure_type": "table" } ]
Hao Yang; Junyu Gao; Yuan Yuan; Xuelong Li
[ { "authors": "H Liu; T Ma; F L Lewis; Y Wan", "journal": "IEEE transactions on cybernetics", "ref_id": "b0", "title": "Robust formation control for multiple quadrotors with nonlinearities and disturbances", "year": "2018" }, { "authors": "Y Wang; Z.-Y Ru; K Wang; P.-Q Huang", "journal": "IEEE transactions on cybernetics", "ref_id": "b1", "title": "Joint deployment and task scheduling optimization for large-scale mobile users in multi-uavenabled mobile edge computing", "year": "2019" }, { "authors": "X Zhou; X Yu; K Guo; S Zhou; L Guo; Y Zhang; X Peng", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b2", "title": "Safety flight control design of a quadrotor uav with capability analysis", "year": "2021" }, { "authors": "J Bu; R Sun; H Bai; R Xu; F Xie; Y Zhang; W Y Ochieng", "journal": "IET Radar, Sonar & Navigation", "ref_id": "b3", "title": "Integrated method for the uav navigation sensor anomaly detection", "year": "2017" }, { "authors": "D J Allerton; H Jia", "journal": "The Journal of Navigation", "ref_id": "b4", "title": "A review of multisensor fusion methodologies for aircraft navigation systems", "year": "2005" }, { "authors": "B M De Silva; J Callaham; J Jonker; N Goebel; J Klemisch; D Mcdonald; N Hicks; J N Kutz; S L Brunton; A Y Aravkin", "journal": "", "ref_id": "b5", "title": "Physics-informed machine learning for sensor fault detection with flight test data", "year": "2020" }, { "authors": "D Tian; C Gong; M Gong; Y Wei; X Feng", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b6", "title": "Modeling cardinality in image hashing", "year": "2021" }, { "authors": "T G Dietterich", "journal": "Neural computation", "ref_id": "b7", "title": "Approximate statistical tests for comparing supervised classification learning algorithms", "year": "1998" }, { "authors": "M Goldstein; A Dengel", "journal": "", "ref_id": "b8", "title": "Histogram-based outlier score (hbos): A fast unsupervised anomaly detection algorithm", "year": "2012" }, { "authors": "H Iiduka", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b9", "title": "Appropriate learning rates of adaptive learning rate optimization algorithms for training deep neural networks", "year": "2021" }, { "authors": "M Hosseinzadeh; A M Rahmani; B Vo; M Bidaki; M Masdari; M Zangakani", "journal": "Soft Computing", "ref_id": "b10", "title": "Improving security using svm-based anomaly detection: issues and challenges", "year": "2021" }, { "authors": "R Perdisci; G Gu; W Lee", "journal": "IEEE", "ref_id": "b11", "title": "Using an ensemble of one-class svm classifiers to harden payload-based anomaly detection systems", "year": "2006" }, { "authors": "D Saraswat; P Bhattacharya; M Zuhair; A Verma; A Kumar", "journal": "IEEE", "ref_id": "b12", "title": "Ansmart: A svm-based anomaly detection scheme via system profiling in smart grids", "year": "2021" }, { "authors": "F Nie; W Zhu; X Li", "journal": "Neurocomputing", "ref_id": "b13", "title": "Decision tree svm: An extension of linear svm for non-linear classification", "year": "2020" }, { "authors": "D Park; Y Hoshi; C C Kemp", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b14", "title": "A multimodal anomaly detector for robot-assisted feeding using an lstm-based variational autoencoder", "year": "2018" }, { "authors": "H Deng; X Li", "journal": "", "ref_id": "b15", "title": "Anomaly detection via reverse distillation from one-class embedding", "year": "2022-06" }, { "authors": "W Lin; J Gao; Q Wang; X Li", "journal": "Neurocomputing", "ref_id": 
"b16", "title": "Learning to detect anomaly events in crowd scenes from synthetic data", "year": "2021" }, { "authors": "J Gao; M Gong; X Li", "journal": "", "ref_id": "b17", "title": "Audio-visual representation learning for anomaly events detection in crowds", "year": "2021" }, { "authors": "H Hermansky", "journal": "the Journal of the Acoustical Society of America", "ref_id": "b18", "title": "Perceptual linear predictive (plp) analysis of speech", "year": "1990" }, { "authors": "Z Wang; T Oates", "journal": "", "ref_id": "b19", "title": "Imaging time-series to improve classification and imputation", "year": "2015" }, { "authors": "N Marwan; N Wessel; U Meyerfeldt; A Schirdewan; J Kurths", "journal": "Physical review E", "ref_id": "b20", "title": "Recurrence-plot-based measures of complexity and their application to heart-rate-variability data", "year": "2002" }, { "authors": "A Marcos Alvarez; M Yamada; A Kimura; T Iwata", "journal": "", "ref_id": "b21", "title": "Clusteringbased anomaly detection in multi-view data", "year": "2013" }, { "authors": "G Pu; L Wang; J Shen; F Dong", "journal": "Tsinghua Science and Technology", "ref_id": "b22", "title": "A hybrid unsupervised clustering-based anomaly detection method", "year": "2020" }, { "authors": "I Kiss; B Genge; P Haller; G Sebestyén", "journal": "IEEE", "ref_id": "b23", "title": "Data clustering-based anomaly detection in industrial control systems", "year": "2014" }, { "authors": "J Li; H Izakian; W Pedrycz; I Jamal", "journal": "Applied Soft Computing", "ref_id": "b24", "title": "Clustering-based anomaly detection in multivariate time series data", "year": "2021" }, { "authors": "A Ghosh; P Gudipati", "journal": "IEEE", "ref_id": "b25", "title": "Anomaly detection in web graphs using vertex neighbourhood based signature similarity methods", "year": "2016" }, { "authors": "M M Breunig; H.-P Kriegel; R T Ng; J Sander", "journal": "", "ref_id": "b26", "title": "Lof: identifying density-based local outliers", "year": "2000" }, { "authors": "M Ester; H.-P Kriegel; J Sander; X Xu", "journal": "", "ref_id": "b27", "title": "A density-based algorithm for discovering clusters in large spatial databases with noise", "year": "1996" }, { "authors": "X Liu; P S Nielsen", "journal": "", "ref_id": "b28", "title": "Regression-based online anomaly detection for smart grid data", "year": "2016" }, { "authors": "K.-W Cheng; Y.-T Chen; W.-H Fang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b29", "title": "Gaussian process regression-based video anomaly detection and localization with hierarchical feature representation", "year": "2015" }, { "authors": "D Y Oh; I D Yun", "journal": "Sensors", "ref_id": "b30", "title": "Residual error based anomaly detection using auto-encoder in smd machine sound", "year": "2018" }, { "authors": "L Bergman; Y Hoshen", "journal": "", "ref_id": "b31", "title": "Classification-based anomaly detection for general data", "year": "2020" }, { "authors": "L Ruff; R Vandermeulen; N Goernitz; L Deecke; S A Siddiqui; A Binder; E Müller; M Kloft", "journal": "PMLR", "ref_id": "b32", "title": "Deep one-class classification", "year": "2018" }, { "authors": "I Golan; R El-Yaniv", "journal": "Advances in neural information processing systems", "ref_id": "b33", "title": "Deep anomaly detection using geometric transformations", "year": "2018" }, { "authors": "T Han; J Gao; Y Yuan; Q Wang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b34", "title": "Unsupervised semantic aggregation and 
deformable template matching for semi-supervised learning", "year": "2020" }, { "authors": "Q Wang; W Huang; Z Xiong; X Li", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b35", "title": "Looking closer at the scene: Multiscale representation learning for remote sensing image scene classification", "year": "2020" }, { "authors": "F Nie; L Tian; R Wang; X Li", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b36", "title": "Multiview semi-supervised learning model for image classification", "year": "2019" }, { "authors": "J Prusa; T M Khoshgoftaar; D J Dittman; A Napolitano", "journal": "IEEE", "ref_id": "b37", "title": "Using random undersampling to alleviate class imbalance on tweet sentiment data", "year": "2015" }, { "authors": "S.-J Yen; Y.-S Lee", "journal": "Expert Systems with Applications", "ref_id": "b38", "title": "Cluster-based under-sampling approaches for imbalanced data distributions", "year": "2009" }, { "authors": "", "journal": "Springer", "ref_id": "b39", "title": "Under-sampling approaches for improving prediction of the minority class in an imbalanced dataset", "year": "2006" }, { "authors": "K Morik; P Brockhausen; T Joachims", "journal": "", "ref_id": "b40", "title": "Combining statistical learning with a knowledge-based approach: a case study in intensive care monitoring", "year": "1999" }, { "authors": "Y Xie; C F Manski", "journal": "Sociological Methods & Research", "ref_id": "b41", "title": "The logit model and response-based samples", "year": "1989" }, { "authors": "K Cao; C Wei; A Gaidon; N Arechiga; T Ma", "journal": "Advances in neural information processing systems", "ref_id": "b42", "title": "Learning imbalanced datasets with label-distribution-aware margin loss", "year": "2019" }, { "authors": "A K Menon; S Jayasumana; A S Rawat; H Jain; A Veit; S Kumar", "journal": "", "ref_id": "b43", "title": "Long-tail learning via logit adjustment", "year": "2020" }, { "authors": "J Tan; C Wang; B Li; Q Li; W Ouyang; C Yin; J Yan", "journal": "", "ref_id": "b44", "title": "Equalization loss for long-tailed object recognition", "year": "2020" }, { "authors": "J Wu; L Song; T Wang; Q Zhang; J Yuan", "journal": "", "ref_id": "b45", "title": "Forest r-cnn: Largevocabulary long-tailed object detection and instance segmentation", "year": "2020" }, { "authors": "J.-P Eckmann; S O Kamphorst; D Ruelle", "journal": "World Scientific Series on Nonlinear Science Series A", "ref_id": "b46", "title": "Recurrence plots of dynamical systems", "year": "1995" }, { "authors": "M Parchami; W.-P Zhu; B Champagne; E Plourde", "journal": "IEEE Circuits and Systems Magazine", "ref_id": "b47", "title": "Recent developments in speech enhancement in the short-time fourier transform domain", "year": "2016" }, { "authors": "N V Chawla; K W Bowyer; L O Hall; W P Kegelmeyer", "journal": "Journal of artificial intelligence research", "ref_id": "b48", "title": "Smote: synthetic minority over-sampling technique", "year": "2002" }, { "authors": "H Han; W.-Y Wang; B.-H Mao", "journal": "Springer", "ref_id": "b49", "title": "Borderline-smote: a new oversampling method in imbalanced data sets learning", "year": "2005" }, { "authors": "O Fabius; J R Van Amersfoort", "journal": "", "ref_id": "b50", "title": "Variational recurrent autoencoders", "year": "2014" }, { "authors": "J An; S Cho", "journal": "Special Lecture on IE", "ref_id": "b51", "title": "Variational autoencoder based anomaly detection using reconstruction probability", "year": 
"2015" }, { "authors": "H Khalid; S S Woo", "journal": "", "ref_id": "b52", "title": "Oc-fakedect: Classifying deepfakes using one-class variational autoencoder", "year": "2020" }, { "authors": "Y Zhou; X Liang; W Zhang; L Zhang; X Song", "journal": "Neurocomputing", "ref_id": "b53", "title": "Vae-based deep svdd for anomaly detection", "year": "2021" }, { "authors": "J Kim; K Jeong; H Choi; K Seo", "journal": "Springer", "ref_id": "b54", "title": "Gan-based anomaly detection in imbalance problems", "year": "2020" }, { "authors": "H Zenati; C S Foo; B Lecouat; G Manek; V R Chandrasekhar", "journal": "", "ref_id": "b55", "title": "Efficient gan-based anomaly detection", "year": "2018" }, { "authors": "W Jiang; Y Hong; B Zhou; X He; C Cheng", "journal": "IEEE Access", "ref_id": "b56", "title": "A gan-based anomaly detection approach for imbalanced industrial time series", "year": "2019" }, { "authors": "Y Li; T Wang; B Kang; S Tang; C Wang; J Li; J Feng", "journal": "", "ref_id": "b57", "title": "Overcoming classifier imbalance for long-tail object detection with balanced group softmax", "year": "2020" }, { "authors": "J Ren; C Yu; X Ma; H Zhao; S Yi", "journal": "Advances in neural information processing systems", "ref_id": "b58", "title": "Balanced meta-softmax for long-tailed visual recognition", "year": "2020" }, { "authors": "J Wang; W Zhang; Y Zang; Y Cao; J Pang; T Gong; K Chen; Z Liu; C C Loy; D Lin", "journal": "", "ref_id": "b59", "title": "Seesaw loss for long-tailed instance segmentation", "year": "2021" }, { "authors": "H A Dau; A Bagnall; K Kamgar; C.-C M Yeh; Y Zhu; S Gharghabi; C A Ratanamahatana; E Keogh", "journal": "IEEE/CAA Journal of Automatica Sinica", "ref_id": "b60", "title": "The ucr time series archive", "year": "2019" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b61", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "K Simonyan; A Zisserman", "journal": "", "ref_id": "b62", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "L Van Der Maaten; G Hinton", "journal": "Journal of machine learning research", "ref_id": "b63", "title": "Visualizing data using t-sne", "year": "2008" } ]
[ { "formula_coordinates": [ 4, 48.96, 246.25, 251.06, 52.45 ], "formula_id": "formula_0", "formula_text": "NN i M I = N i M A m × 1 k -1 × k i=1 (1 - N i M A /N i M I k j=1 N j M A /N j M I ),(1) where" }, { "formula_coordinates": [ 4, 152.02, 419.52, 52.52, 19.61 ], "formula_id": "formula_1", "formula_text": "N i M A /N i M I k j=1 N j M A /N j M I" }, { "formula_coordinates": [ 4, 105.75, 471.79, 52.53, 19.61 ], "formula_id": "formula_2", "formula_text": "N i M A /N i M I k j=1 N j M A /N j M I" }, { "formula_coordinates": [ 5, 127.95, 443.36, 172.08, 24 ], "formula_id": "formula_4", "formula_text": "ω y = ω P , y = N ω N , y = P .(3)" }, { "formula_coordinates": [ 5, 103.8, 558.14, 196.22, 9.65 ], "formula_id": "formula_5", "formula_text": "ω y,n = α n ω y,n-1 + (1 -α n )V y,n ,(4)" }, { "formula_coordinates": [ 5, 66.22, 677.03, 233.8, 22.31 ], "formula_id": "formula_6", "formula_text": "V y,n = n -1 n 2 (S(z y ) -A y,n-1 ) 2 + n -1 n V y,n-1 ,(5)" }, { "formula_coordinates": [ 5, 101.64, 707.23, 198.38, 22.31 ], "formula_id": "formula_7", "formula_text": "A y,n = A y,n-1 + S(z y ) -A y,n-1 n ,(6)" }, { "formula_coordinates": [ 5, 140.63, 739.08, 159.39, 9.65 ], "formula_id": "formula_8", "formula_text": "α n = α n-1 -γ,(7)" }, { "formula_coordinates": [ 5, 313.47, 83.98, 133.66, 57.13 ], "formula_id": "formula_9", "formula_text": "loss 1 Initialization: 2 γ ← batchsize N , A ← 0, V ← 0, 3 n ← 0, α ← 1, ω ← [1, 1]. 4 if" }, { "formula_coordinates": [ 6, 90.5, 494.1, 209.52, 70.82 ], "formula_id": "formula_10", "formula_text": "Recall = T P T P + F N , Acc = T P + T N T P + T N + F P + F N , F 1 -score = 2 × P recision × Recall P recision + Recall .(8)" }, { "formula_coordinates": [ 7, 115.2, 410.85, 184.82, 23.23 ], "formula_id": "formula_11", "formula_text": "xik = x ik -min{x i } max{x i } -min{x i } ,(9)" }, { "formula_coordinates": [ 7, 71.59, 561.53, 228.43, 9.3 ], "formula_id": "formula_12", "formula_text": "A × B × C = {(x, y, z)|x ∈ A, y ∈ B, z ∈ C},(10)" } ]
10.1007/978-3-319-16220-1_8
2023-05-17
[ { "figure_ref": [ "fig_8", "fig_4" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b3", "b4", "b6", "b7", "b8", "b9", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b9", "b20" ], "table_ref": [], "text": "P RECISION agriculture is essential to address the increasing global population and the corresponding demand for a 70% increase in agricultural production by 2050 [1]. The challenge lies in managing limited cultivation land, water scarcity, and the effects of climate change on productivity. One critical aspect of precision agriculture is the effective control of weeds that negatively impact crop growth and yields by competing for resources and interfering with crop growth through the release of chemicals [2]- [4].\nRecent advances in deep learning have revolutionized the field of computer vision, with Convolutional Neural Networks (CNNs) and transformers becoming the backbone of numerous state-of-the-art models [5]- [7]. However, their performance relies heavily on the quality and diversity of training data [8], [9], emphasizing the importance of comprehensive agricultural datasets for model development [10]. But the agricultural domain often suffers from the deficiency of task-specific data [10], [11]. Which can result in insufficient data variety, overfitting, inadequate representation of real-world challenges, and reduced model robustness. These limitations hinder the model's ability to generalize and accurately recognize crops and weeds in diverse real-world situations. To overcome these issues, researchers employ techniques like data augmentation [12], [13], transfer learning [14] , or synthetic data generation [15], although these approaches may not always achieve the same performance level as models trained on larger, more diverse datasets [16]. Transfer learning (fine-tuning) [17] is a common approach for training deep learning models in agriculture, as it involves using pretrained weights from other tasks (e.g., ImageNet) to address data deficiency [18]. Pretrained weights from ImageNet [19] and COCO [20] are commonly used but are less suitable for domain-specific agricultural tasks due to their generic content [10], [21]. Thus absence of a centralized benchmark repository for agriculture-specific datasets hinders the development of computer-aided precision agriculture (CAPA) systems.\nIn this study, we introduce and evaluate the crop weed recognition dataset (CWD30) dataset, a large-scale and diverse collection of various crops and weed images that captures the complexities and challenges of real-world precision agriculture scenarios. The CWD30 dataset comprises a collection of 219,770 images that encompass 10 crop classes and 20 weed classes. These images capture various growth stages, multiple viewing angles, and diverse environmental conditions. Figure 1 shows some image samples, while Figure 2 displays the number of images per category. The CWD30 dataset addresses the significant intra-class difference and large inter-species similarity of multiple crop and weed plants. We train various deep learning models, including CNNs and transformer-based architectures, on the CWD30 dataset to assess their perfor-Fig. 1. 
Crop and Weed image samples from CWD30 dataset, captured at different life cycle stages, under varying environment and from different viewing angles.Key elements in the images are highlighted: pink-bordered images represent similarities at a macro class level (crop vs weed); orange boxes indicate the variability within a single weed species due to environmental factors such as indoor vs outdoor settings and soil type; images encased in red and brown borders demonstrate visually similar crop and weed classes; images marked with black dashed lines represent weeds cultivated in a laboratory setting; small inset boxes on each image provide information about the weather conditions and camera angle and plant age at time of capture. mance and investigate the impact of pretraining. Furthermore, we analyze the structure of the feature embeddings obtained by these models and compare their performance on downstream tasks, such as pixel-level crop weed recognition In summary, building upon the aforementioned challenges and limitations we make the following main contributions:\n• We present the crop-weed dataset (CWD30), which, to the best of our knowledge, is the first truly holistic, largescale crop weed recognition dataset available to date. • Proposed dataset encompasses a wide range of plant growth stages, i.e., from seedlings to fully mature plants. This extensive coverage of growth stages ensures that the CWD30 dataset captures the various morphological changes and developmental stages plants undergo throughout their life cycle. By incorporating these diverse growth stages, the dataset provides a more comprehensive representation of real-world agricultural scenarios. Consequently, deep learning models trained on this dataset can better adapt to the inherent variability in plant appearances and growth stages, Figure 7a shows a few samples of plants at various growth stages. • The CWD30 dataset offers a unique advantage by including multi-view images, captured at various angles. This comprehensive representation of plants account for various viewpoints and lighting conditions, which enhances the dataset's ability to model real-world situations. The multi-view images enable the development of more robust and generalizable deep learning models, as they allow the models to learn from a broader range of visual features and better understand the complexities and variations commonly found in real-field settings (see section III for details). • Compared to existing agricultural datasets that focus on specific plant parts like branches or leaves, the proposed CWD30 dataset offers high-resolution images of entire plants in various growth stages and viewpoints. This comprehensive nature of the CWD30 dataset allows for the generation of simpler, plant-part-specific datasets by cropping its high-resolution images. As a result, the CWD30 dataset can be considered a more versatile and complete resource compared to existing datasets. This dataset contributes to overcoming the limitations of previous datasets and advances the field of precision agriculture.\n• Additionally, we demonstrate that models pretrained on the CWD30 dataset consistently outperform their ImageNet-1K pretrained counterparts, yielding more meaningful and robust feature representations. This improvement, in turn, enhances the performance of state-ofthe-art models on popular downstream agricultural tasks (see section V for details). 
These contributions can further advance the research and development of reliable CAPA systems.\nThe rest of this article unfolds as follows: Section II provides a review of related literature and relevant datasets. Section III explains the development of the CWD30 dataset, its unique characteristics, and draws comparisons with other agricultural datasets. The experimental setup is outlined in Section IV. Following this, Section V delves into the analysis of experimental results and the inherent advantages offered by the CWD30 dataset. Finally, we wrap up the article in the conclusion." }, { "figure_ref": [], "heading": "II. RELATED WORKS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Crop Weed Recognition", "publication_ref": [ "b7", "b21", "b4", "b22", "b23", "b38", "b39", "b40", "b21", "b41", "b44" ], "table_ref": [], "text": "Crop-weed recognition is vital in CAPA systems for efficient and sustainable farming practices. Reliable recognition and differentiation allow for effective weed management and optimal crop growth, reducing chemical usage and minimizing environmental impact [8], [22]. It also helps farmers oversee their crops' health, enabling prompt response and lowering the possibility of crop loss from weed infestations [5], [23]. However, these systems face limitations due to the reliance on small datasets [24], resulting in reduced model robustness, overfitting, and inadequate representation of real-world challenges.\nSeveral studies have shown the potential of deep learning techniques in addressing key components and challenges in developing CAPA systems, such as unmanned weed detection [39], fertilization [40], irrigation, and phenotyping [41]. Kamilaris et al. [22] conducted experiments that showed deep learning outperforming traditional methods. Westwood et al. [42] discussed the potential of deep learning-based plant classification for unmanned weed management. Wang [45] identified research gaps, such as a lack of substantial crop-weed datasets and generalized models and concluded that methods like data augmentation and transfer learning might not always produce results on par with models trained on more substantial, diverse datasets. To address these limitations and challenges, further research is needed to improve the accuracy and robustness of CAPA systems. Considering the identified research gaps and challenges, this work presents the CWD30 dataset, specifically designed to address the limitations of existing agricultural datasets. Our aim is to facilitate the development of accurate and reliable CAPA systems, ultimately enhancing the effectiveness and sustainability of precision agriculture practices." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "B. Related Datasets", "publication_ref": [ "b24", "b37", "b25", "b35", "b24", "b37", "b36", "b25", "b50", "b51", "b52" ], "table_ref": [], "text": "Here we provide an overview of several related agricultural datasets that have been previously proposed for cropweed recognition and other agricultural tasks [25]- [38]. These datasets, while valuable, have certain limitations that the CWD30 dataset aims to address.\nPlant Seedling:The Plant Seedlings Dataset [26] features approximately 960 unique plants from 11 species at various growth stages. It consists of annotated RGB images with a resolution of around 10 pixels per mm. Three public versions of the dataset are available: original, cropped, and segmented. 
For comparison in this study, we use the cropped plants v2 version, which contains 5,539 images of 12 different species. The dataset is imbalanced, with some classes having up to 654 samples (chickweed) and others as few as 221 (wheat).

The dataset was collected over 20 days (roughly 3 weeks) at 2-to-3-day intervals in an indoor setting. Plants were grown in a styrofoam box, and images were captured using a fixed overhead camera setup. This database was recorded at the Aarhus University Flakkebjerg Research station as part of a collaboration between the University of Southern Denmark and Aarhus University.

CNU: This weeds dataset from Chonnam National University (CNU) in the Republic of Korea [36] consists of 208,477 images featuring 21 species. Captured on farms and fields using high-definition cameras, the images encompass various parts of weeds, including flowers, leaves and fruits. A visual comparison between the CNU dataset and the CWD30 dataset is illustrated in Figure 3. However, unlike the CWD30 dataset, the CNU dataset does not encompass growth stages and multiple viewing angles. The CNU dataset is imbalanced, with over 24,300 images of shaggy soldier and only about 800 images of Spanish needles.

Deep Weeds: The Deep Weeds [25] dataset consists of 17,509 low-resolution images of herbaceous rangeland weeds from 9 species. This dataset features a minimum of 1,009 images and a maximum of 9,016 images per category.

IP102: Wu et al. [38] developed the IP102 dataset to further insect pest recognition research in computer vision. They initially gathered over 300,000 images from popular search engines, which were then labeled by volunteers to ensure relevance to insect pests. Following a data cleaning process, the IP102 dataset consisted of about 75,000 images representing 102 species of common crop insect pests. The dataset also captures various growth stages of some insect pest species.
By addressing the domainspecific challenges of real-field agricultural environments and providing a diverse, varied, and extensive collection of images, CWD30 not only advances research in the field, but also enhances data efficiency and performance in a wide range of downstream agricultural tasks.Table I presents the statistical information for various agriculture-related datasets." }, { "figure_ref": [], "heading": "III. DATA COLLECTION, PREPROCESSING AND PROPERTIES", "publication_ref": [], "table_ref": [], "text": "In this section, we provide a detailed explanation of the collection process, preprocessing, and properties of the proposed CWD30 dataset." }, { "figure_ref": [ "fig_1", "fig_8" ], "heading": "A. Taxonomic System Establishment", "publication_ref": [ "b45", "b46", "b47", "b49" ], "table_ref": [ "tab_2" ], "text": "We developed a hierarchical taxonomic system for the CWD30 dataset in collaboration with several agricultural experts from the Rural Development Authority (RDA) in the Republic of Korea. We discussed the most common weed species that affect economically significant crops globally [46], [47]. A summary of these weeds, the crops they impact, and the countries where they are prevalent is provided in Table II. We ultimately chose to collect data on approximately 20 of the most problematic weed species worldwide. The selection of the 10 crops included in the CWD30 dataset was based on their share in global production and regional importance [48]- [50], ensuring the dataset's relevance and applicability in realworld precision agriculture scenarios. Table III indicates that these crops have considerable shares of global production, with percentages ranging from 39.7% to 87.5%. By incorporating crops with substantial importance across various countries, the CWD30 dataset establishes a taxonomy system that addresses the needs of diverse agricultural environments and promotes research in crop recognition and management.\nFor weed species not native to Korea, the RDA cultivated them in pots within their facility, as shown in Figure 1 (dashed black borders). As for the selected crops, they were divided into two subcategories based on their primary commercial value: economic crops (EC) and field crops (FC). Field crops include staples such as corn and millet, while economic crops encompass legumes (e.g., beans) and oilseeds (e.g., sesame). The resulting hierarchical structure is illustrated in Figure 4. Each crop is assigned both a micro and macro-class based on its properties, whereas weeds are only assigned a microclass, such as grasses, broad leaves, or sedges. In the CWD30 dataset, we also include a hold-out test set consisting of 23,502 mixed crop and weed (MCW) images, captured both indoors and outdoors, to facilitate the validation of developed models, see Figure 2. We have included a comprehensive table in the appendix of this paper, providing a detailed taxonomy for each plant species within the CWD30 dataset which explains the hierarchical classification, right from the domain, kingdom, and phylum, down to the order, family, genus, and species of each plant." }, { "figure_ref": [], "heading": "B. Data Collection", "publication_ref": [], "table_ref": [], "text": "To assemble a benchmark database, we formed five teams: four dedicated to collecting images in farms and fields, and one focused on gathering images from RDA's research facility. Each team was composed of three students from our institute and one field expert. 
The image collection devices provided to each team varied, including three Canon-SX740 HS, three Canon EOS-200D, three Android phone-based cameras, three iPhone-based cameras, and one DJI Mavic Pro 2.\nEach team was tasked with capturing images of two crops and four weeds twice a week. The full dataset is collected over a span of three years from 2020 to 2022. Since image collection is a manual process, the data recorded by different team members varied in quality, perspective, height, sensor type, and species. To ensure diverse data collection, we shuffled the teams monthly and assigned them to collect images of different crops and weeds. This approach helped us obtain a diverse dataset that covers a wide spectrum of real-world challenges and domain shifts, stemming from different sensor types, field environments, and fields of view. Figure 1 shows samples of the collected images." }, { "figure_ref": [ "fig_8", "fig_2" ], "heading": "C. Data Filtering, Labelling and Distribution", "publication_ref": [], "table_ref": [], "text": "The entire data construction process spanned three years. Alongside image collection, five experts reviewed each image to ensure label accuracy monthly. They then removed blurry and noisy images to maintain a clean dataset. The resulting CWD30 dataset comprises 219,778 images, 10 crop types, and 20 weed species. The distribution of each species is depicted in Figure 2. The minimum number of images per species is 210, while the maximum is 12,782. This unbalanced distribution reflects real-world scenarios where it is challenging to obtain data samples for certain classes. In our case, this occurred for weed species that were difficult to cultivate in Korea's weather conditions. As for labeling, each file is saved with a unique naming format, an example of which can be seen in Figure 5." }, { "figure_ref": [ "fig_8" ], "heading": "D. Data Splits", "publication_ref": [ "b53" ], "table_ref": [], "text": "The CWD30 dataset comprises 219,778 images and 30 plant species. To ensure more reliable test results, we employed a K-fold validation method with K=3, guaranteeing enough samples for each category in the testing set [54]. We divided the data into three randomized folds for training (74,724), validation (72,526), and testing (72,526), adhering to a 0.33:0.33:0.34 split ratio. For each fold, we partitioned every plant species into three sections, taking care to include an equal proportion of the smallest class within each section (refer to Figure 2). The training, validation, and testing sets were split at the micro-class level." }, { "figure_ref": [ "fig_3", "fig_4", "fig_4" ], "heading": "E. Viewing Angles and Growth Stages", "publication_ref": [], "table_ref": [], "text": "Our proposed CWD30 dataset stands out from previous datasets due to its unique and beneficial properties, with three prominent features: (i) images captured from multiple angles, (ii) images taken at various growth stages and under varying weather conditions, and (iii) full plant images instead of just plant parts like leaves or branches. These characteristics enable deep learning models to learn more robust and comprehensive features for enhanced recognition, differentiation, and feature extraction.\nCapturing plant images from different angles for deep learning models results in robust feature learning, improved occlusion handling, scale and rotation invariance, and better management of lighting and shadow variations. 
This leads to more accurate and reliable CAPA systems that perform well in real-world agricultural environments. Figure 6 depicts a visual representation of the various angles used for image collection.\nFurthermore, the growing interest in plant phenomics and the use of image-based digital phenotyping systems to measure morphological traits has led to increased efforts to bridge the genotyping-phenotyping gap. However, research in this area is limited, mainly due to the lack of available datasets providing whole-plant level information rather than specific plant parts, such as leaves, nodes, stems, and branches. The CWD30 dataset, which includes full plant images from multiple view- ing angles and at different growth stages, can accelerate our understanding of genotype-phenotype relationships. It can also assist plant scientists and breeders in developing advanced phenotyping systems that offer more detailed phenotypic information about plants. Figure 7a displays randomly selected samples of crops and weeds at different life cycle stages, with images captured at a 90-degree angle from the plant. The graph in Figure 7b show the distribution of images across each growing stage." }, { "figure_ref": [ "fig_5" ], "heading": "F. Comparison with Other Datasets", "publication_ref": [], "table_ref": [], "text": "In this section, we compare the CWD30 dataset with several existing datasets related to crop-weed recognition. Our dataset stands out as a more holistic, domain-adverse, versatile, and diverse dataset that provides a comprehensive solution to crop-weed discrimination. Furthermore, it classifies weeds into major families, such as grasses, sedges, and broad leaves, and further into specific weed sub-categories. To the best of our knowledge, CWD30 is the first dataset of its kind in the field of practical crop-weed discrimination.\nThe PDD271 dataset contains close-up images of only diseased plant parts, the Deep Weeds dataset has low-resolution images of roadside weeds, and the Plant Seedling dataset consists of early-stage weeds grown in lab trays. The most comparable dataset in this field is the CNU weed dataset, which focuses on field environments but features simplified representations of plants, i.e., zoomed in part of plants.\nExisting data sets' shortcomings can be summarized as follows:\n1) Simplified representation: By focusing on specific plant parts, such as leaves or branches, the data becomes less complex and fails to represent real-field challenges. In contrast, the CWD30 dataset addresses these limitations with the following inherent properties:\n1) Comprehensive representation: Full-plant images provide a holistic view, capturing various aspects of the crops and weeds. 2) Varied environments: Capturing plants in both indoor and outdoor settings enable the dataset to cover a broader range of conditions and will enhance the model's generalizability. 3) Multiple angles: Images taken from different angles allow models to learn robust features and improve occlusion handling, rotation invariance, and scale invariance. 4) Different growth stages: Capturing images at various growth stages helps models recognize crops and weeds at any stage of their life cycle, resulting in more accurate and reliable CAPA systems. 5) Complexity: Increased variability and complexity make the images more challenging to analyze. 
6) Larger dataset size: The proposed dataset is one of the largest real-image datasets to date in the field of precision agriculture.

By addressing domain-specific challenges in real-field agricultural environments and providing a diverse, varied, and extensive collection of images, CWD30 advances research in the field and enhances data efficiency and performance in a wide range of downstream agricultural tasks.

An additional advantage of the CWD30 dataset is its versatility, which allows it to encompass various existing agricultural datasets through simple image processing operations. By applying random cropping, downsampling, foreground segmentation, or thresholding to the images in the CWD30 dataset, one can create subsets that resemble other datasets in the field. An example of this process is shown in Figure 8. This demonstrates that the CWD30 dataset can be considered a comprehensive and unified source of agricultural data, with other datasets effectively serving as subsets of CWD30. This versatility not only highlights the extensive nature of the CWD30 dataset but also supports its potential for advancing research and improving performance in a wide range of agricultural tasks."
}, { "figure_ref": [ "fig_6" ], "heading": "G. Data Imbalance Ratio", "publication_ref": [ "b61" ], "table_ref": [], "text": "A dataset's imbalance ratio (IR) refers to the degree of disparity between the number of samples in different classes [62]. In the context of deep learning, the imbalance ratio can have significant effects on model performance. Although low data imbalance ratios in datasets like MNIST and ImageNet-1K are generally preferred for deep learning models, as they promote balanced class representation and accurate performance, these datasets do not always represent real-world situations where data samples for some classes are harder to obtain.

In contrast, high data imbalance ratios, found in datasets such as CNU, CWD30, and DeepWeeds, can pose challenges for deep learning models as they may lead to overfitting and poor generalization. Models trained on highly imbalanced datasets can become biased towards majority classes, resulting in decreased performance for minority classes. However, one key advantage of high imbalance ratios is their closer correspondence to real-world situations, particularly in complex recognition tasks like precision agriculture, where some classes naturally have fewer available samples. While these imbalanced datasets present challenges, they also offer a more realistic depiction of real-world scenarios, pushing deep learning models to adapt and improve their performance in diverse and unevenly distributed data conditions. Figure 9 shows the imbalance ratio of related datasets.

To the best of our knowledge, the proposed CWD30 dataset offers several distinctive features not found in previous datasets, as highlighted in earlier sub-sections. These features can bridge the genotyping-phenotyping gap, enhance the robustness and reliability of deep learning systems, and expand their range of applications."
}, { "figure_ref": [], "heading": "IV. EXPERIMENTS AND EVALUATION", "publication_ref": [], "table_ref": [], "text": "We conducted a comprehensive experimental evaluation of the CWD30 dataset, focusing on classification performance using deep convolutional and transformer-based architectures. Additionally, we examine the influence of CWD30 pretrained networks on downstream precision agriculture tasks, including semantic segmentation."
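As a concrete footnote to the imbalance discussion above: assuming the common definition of IR as the ratio between the largest and the smallest class size, the per-species extremes reported for CWD30 (12,782 and 210 images) give an IR of roughly 61. The snippet below is a minimal sketch of that computation; the toy label list is a placeholder.

```python
# Minimal sketch: imbalance ratio as (largest class size) / (smallest class size).
from collections import Counter

def imbalance_ratio(labels):
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# Worked example with the per-species extremes reported for CWD30.
print(12782 / 210)                       # ~60.9
print(imbalance_ratio(["a", "a", "b"]))  # 2.0 on a toy label list
```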
}, { "figure_ref": [], "heading": "A. Experimental Setup", "publication_ref": [], "table_ref": [], "text": "In our experiments all networks' layers are fine-tuned using an AdamW optimizer with a minibatch size of 32 and an initial learning rate of 6e-5. We employ a cosine decay policy for reducing the learning rate and incorporate a dropout value of 0.2, along with basic data augmentations, to prevent overfitting. While the deep models' fundamental architectures remain unchanged, the last fully connected layer is adapted to match the number of target classification classes. Each network is trained for 50 epochs across all datasets, and the reported results represent the average of three runs. Input images are resized to 224 x 224 pixels. Our deep feature-based experiments are implemented using PyTorch and performed on an NVIDIA Titan RTX-3090 GPU with 24 GB of onboard memory. " }, { "figure_ref": [], "heading": "B. Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "To objectively assess models trained on the CWD30 dataset, we employ widely accepted evaluation metrics for comprehensive comparisons. Given the dataset's imbalanced class distribution, we utilize the following metrics for better performance assessment:\n• Per-class Mean Accuracy (Acc) calculates the average of individual class mean accuracies, providing a balanced performance evaluation, especially for imbalanced datasets like CWD30. • F1-Score is the harmonic mean of precision (the ratio of true positive predictions to the sum of true positive and false positive predictions) and recall (the ratio of true positive predictions to the sum of true positive and false negative predictions), offering a single value representing the model's overall performance while accounting for false positive and false negative errors. • For downstream tasks like semantic segmentation, we use mean intersection over union (mIoU), which evaluates the overlap between predicted and ground truth segments. By examining these metrics, researchers can identify the most promising approaches to guide future developments in precision agriculture and the development of CAPA systems." }, { "figure_ref": [ "fig_6" ], "heading": "V. RESULTS AND DISCUSSION", "publication_ref": [ "b54", "b60", "b57", "b59", "b60" ], "table_ref": [], "text": "In this section, we present the classification results for various deep learning models trained on the CWD30 dataset. We compare the models [55]- [61] based on their F1-Score and per-class mean accuracy (Acc) when trained from scratch and when pretrained on the ImageNet-1K dataset. The results are summarized in the table IV. The results reveal that EfficientNetv2-M [58] is the best-performing CNN architecture when trained from scratch, with the highest F1-Score (82.37) and accuracy (87.06). Pretraining on ImageNet-1K consistently improves the performance of all models. Among transformer-based models, SwinViT [60] achieves the highest accuracy (88.71), and MaxViT [61] obtains the highest F1-Score (82.43). Generally, more complex models like EfficientNetv2-M and MaxViT outperform less complex counterparts, as their increased capacity better captures and represents the nuances in the CWD30 dataset.\nMoreover, transformer-based models like SwinViT and MaxViT demonstrate superior performance compared to their CNN counterparts despite having fewer parameters and a smaller memory footprint (forward and backward pass). 
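For reference, the two headline classification metrics used throughout these tables, per-class mean accuracy and the macro-averaged F1-Score, can be computed from a model's predictions as in the minimal sketch below; the use of scikit-learn and the toy labels are illustrative assumptions rather than the authors' exact implementation.

```python
# Minimal sketch of the two headline metrics: per-class mean accuracy
# (the unweighted average of per-class recalls) and macro F1.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

def per_class_mean_accuracy(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)            # rows: true, cols: predicted
    per_class_recall = cm.diagonal() / cm.sum(axis=1)
    return float(per_class_recall.mean())

def macro_f1(y_true, y_pred):
    return f1_score(y_true, y_pred, average="macro")

# Toy example with three classes.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(per_class_mean_accuracy(y_true, y_pred), macro_f1(y_true, y_pred))
```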
The strong showing of SwinViT and MaxViT underscores the potential of transformer architectures for handling the diverse and complex patterns in the CWD30 dataset. The self-attention mechanism in transformers may allow them to capture long-range dependencies and fine-grained patterns more effectively than traditional convolutional layers.

Additionally, we compare the model parameters and memory footprint against the final output feature embeddings generated by the model just before the linear classification layers, as shown in Figure 9. Intriguingly, MaxViT, which outputs the fewest feature embeddings (512), still outperforms all other models. This finding is significant because lower-dimensional feature embeddings offer practical advantages for real-world applications, especially in resource-constrained environments. For instance, in precision agriculture, heavy GPUs like the RTX-3090 may not be suitable for field deployment due to their large size and power consumption. Instead, smaller embedded systems like NVIDIA Jetson boards are commonly used, which have limited memory and computational resources. By employing deep learning models with lower-dimensional embeddings, fewer parameters, and a smaller memory footprint, these systems can efficiently process and analyze data, making them more suitable for real-world applications. The diverse and sizable CWD30 dataset is essential for the development of robust and reliable CAPA systems, as it offers a rich source of real-world precision agriculture data for training deep, data-hungry models. By focusing on the quality of the dataset and addressing the practical constraints of real-world deployments, researchers can ensure that deep learning models are capable of handling the inherent variability and imbalances in agricultural settings, ultimately making them more efficient, generalizable, and suitable for a wide range of applications, including field deployment."
}, { "figure_ref": [ "fig_9", "fig_9" ], "heading": "A. Further Analysis", "publication_ref": [ "b24", "b25", "b33", "b37", "b65" ], "table_ref": [ "tab_5" ], "text": "To further evaluate the performance enhancements offered by using the CWD30 dataset for pretraining and fine-tuning on tasks with limited samples, we tested multiple publicly available benchmark agricultural datasets [25], [26], [34], [38] for robust feature extraction and compared the results with models pretrained on the ImageNet-1K dataset. Detailed information about these datasets is provided in Section II. For each dataset, we adhere to the testing and data split settings outlined in their original papers, while maintaining the same network training settings as described in the previous subsection. The results are summarized in Table V. Across all datasets, MaxViT achieved the highest per-class mean accuracy despite producing the fewest output feature embeddings, while pretraining on the CWD30 dataset consistently improves the performance of all tested architectures on all datasets.

For better understanding and comparison, we extract high-dimensional feature embeddings (features of the second-to-last layer) from the best-performing model, i.e., MaxViT, on the test images of all datasets. The compactness and expressiveness of these feature embeddings facilitate the development of efficient and accurate algorithms for various applications, including CAPA systems. We perform t-SNE [66] visualization on these feature embeddings.
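A minimal sketch of this kind of embedding extraction and projection is shown below; the ResNet-50 backbone is only an illustrative stand-in for the fine-tuned MaxViT, and the t-SNE settings are assumptions rather than the authors' exact configuration.

```python
# Minimal sketch: extract penultimate-layer features and project them to 2-D
# with t-SNE for visualization (stand-in backbone, random placeholder images).
import torch
import torch.nn as nn
from torchvision import models
from sklearn.manifold import TSNE

backbone = models.resnet50(weights=None)
backbone.fc = nn.Identity()              # keep the 2048-d penultimate features
backbone.eval()

images = torch.randn(64, 3, 224, 224)    # placeholder batch of test images
with torch.no_grad():
    feats = backbone(images).numpy()     # shape: (64, 2048)

emb_2d = TSNE(n_components=2, perplexity=20, init="pca",
              random_state=0).fit_transform(feats)
print(emb_2d.shape)                      # (64, 2) points ready for scatter-plotting
```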
t-SNE effectively projects high-dimensional feature embeddings onto a two-dimensional space while preserving the local structure and relationships within the data. By plotting t-SNE visualizations, we can assess the separability and distribution of the data in the reduced space, as well as the quality of the learned feature representations.

Our results reveal that models pretrained on the CWD30 dataset produce more distinct and well-separated clusters in the t-SNE plots when fine-tuned on various public datasets compared to ImageNet pretrained models. The t-SNE plots for CWD30 and ImageNet pretrained MaxViT models on publicly available datasets are displayed in Figure 11. From Figure 11, it is evident that CWD30-pretrained models learn more meaningful and robust feature representations, as the clusters in these plots are better defined and distinct, with points belonging to the same cluster positioned closer together and clear separation between clusters. This ultimately leads to improved performance during fine-tuning and on downstream tasks (see Section V.B)."
}, { "figure_ref": [], "heading": "B. Performance on Downstream Tasks", "publication_ref": [ "b64", "b62", "b63", "b69" ], "table_ref": [ "tab_5" ], "text": "To evaluate the effectiveness of the enhanced feature representations obtained by CWD30 pretraining on downstream tasks, we assess several state-of-the-art segmentation models for pixel-level crop-weed recognition. We use three publicly available crop-weed datasets: CarrotWeed [65], SugarBeet [63], and BeanWeed [64]. Sample images from each dataset, along with their corresponding segmentation labels, are shown in Figure 12. The quantitative results are summarized in Table VI.

Throughout the experiments, it is evident that pretraining architecture backbones with CWD30 provides a clear advantage over ImageNet-1K pretrained backbones. Although the performance difference may not appear substantial when examining Table VI, it becomes more apparent when analyzing the learning curves of both setups. The learning curves of the best-performing SegNext [70] model are shown in Figure 13. From the plots, it can be seen that the difference between ImageNet and CWD30 initialization is significant at the 10th epoch, where the CWD30-initialized model already reaches performance close to its final convergence value. In contrast, for ImageNet-initialized models, it takes about 50 epochs to achieve similar performance.

The findings in this section underscore the importance of employing a comprehensive agricultural dataset like CWD30 for pretraining deep learning models. By utilizing the rich and diverse data offered by CWD30, researchers can develop efficient and generalizable deep learning models that are more suitable for a wide range of applications, including precision agriculture."
}, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In conclusion, this paper presents the CWD30 dataset, a comprehensive, holistic, large-scale, and diverse crop-weed recognition dataset tailored for precision agriculture. With over 219,770 high-resolution images of 20 weed species and 10 crop species, the dataset spans various growth stages, multiple viewing angles, and diverse environmental conditions. The hierarchical taxonomy of CWD30 facilitates the development of accurate, robust, and generalizable deep learning models for crop-weed recognition.
Our extensive baseline experiments demonstrate the challenges and opportunities presented by the CWD30 dataset."
}, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the Agricultural Science and Technology Development Cooperation Research Program (PJ015720) and Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2019R1A6A1A09031717 and NRF-2019R1A2C1011297)."
}, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b66", "b54", "b67", "b54" ], "table_ref": [], "text": "Method | Backbone | SugarBeet (ImageNet-1k / CWD-30) | CarrotWeed (ImageNet-1k / CWD-30) | BeanWeed (ImageNet-1k / CWD-30)
U-Net [67] | ResNet-101 [55] | 80.96 / 85.47 | 75.47 / 78.32 | 69.67 / 72.49
DeepLabv3+ [68] | ResNet-101 [55] |" } ]
The growing demand for precision agriculture necessitates efficient and accurate crop-weed recognition and classification systems. Current datasets often lack the sample size, diversity, and hierarchical structure needed to develop robust deep learning models for discriminating crops and weeds in agricultural fields. Moreover, the similar external structure and phenomics of crops and weeds complicate recognition tasks. To address these issues, we present the CWD30 dataset, a large-scale, diverse, holistic, and hierarchical dataset tailored for crop-weed recognition tasks in precision agriculture. CWD30 comprises over 219,770 high-resolution images of 20 weed species and 10 crop species, encompassing various growth stages, multiple viewing angles, and environmental conditions. The images were collected from diverse agricultural fields across different geographic locations and seasons, ensuring a representative dataset. The dataset's hierarchical taxonomy enables fine-grained classification and facilitates the development of more accurate, robust, and generalizable deep learning models. We conduct extensive baseline experiments to validate the efficacy of the CWD30 dataset. Our experiments reveal that the dataset poses significant challenges due to intra-class variations, inter-class similarities, and data imbalance. Additionally, we demonstrate that minor training modifications like using CWD30 pretrained backbones can significantly enhance model performance and reduce convergence time, saving training resources on several downstream tasks. These challenges provide valuable insights and opportunities for future research in crop-weed detection, fine-grained classification, and imbalanced learning. We believe that the CWD30 dataset will serve as a benchmark for evaluating crop-weed recognition algorithms, promoting advancements in precision agriculture, and fostering collaboration among researchers in the field. The data is available at: https://github.com/Mr-TalhaIlyas/CWD30
CWD30: A Comprehensive and Holistic Dataset for Crop Weed Recognition in Precision Agriculture
[ { "figure_caption": "Fig. 3 .3Fig. 3. Visual comparison of CWD30 dataset with other related datasets.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Taxonomy of CWD30 dataset. Showcasing the hierarchical organization of crop and weed species included in the dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Schematic representation of the file naming convention in the CWD30 dataset, with each segment separated by \" \" indicating specific information about the image, such as species, growth stage, camera angle, and unique ID.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Illustration of camera placement for capturing images at various angles, along with sample images captured at those angles under different weather conditions.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. (a) A visual representation of plant growth stages, spanning an 8-week period from seedling to maturity, showcasing the developmental progression, changes in color, shape and texture of the plant over time. (b)Radar graph illustrating the distribution of images in the CWD30 dataset across each growing stage during the 8-week period.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Data imbalance ratio (IR) of proposed dataset in comparison with other datasets.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Illustration how simple image processing techniques can transform CWD30 dataset into related subsets, emphasizing CWD30 as a comprehensive superset.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .10Fig. 10. Graph comparing deep learning models in terms of parameters (in million), feature embeddings (no. of features), and forward and backward pass sizes (in megabytes), highlighting the trade-offs among the models. Best viewed in color.", "figure_data": "", "figure_id": "fig_7", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "2 )2Limited scope: Images of specific plant parts may not capture the full characteristics of a plant, leading to less accurate recognition systems. 3) Restricted environments: Capturing images in specific fields may limit the model's ability to generalize to other settings or conditions. 4) Less robust features: The absence of multiple angles and growth stages may result in less robust feature learning and hinder the model's ability to handle occlusions, rotations, and scale variations. 5) Smaller dataset size: Most existing precision agricultural datasets have a limited number of images, hindering the development of more advanced deep learning-based systems.", "figure_data": "", "figure_id": "fig_8", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 11 .11Fig. 11. 2D t-SNE feature embeddings visualization comparing best performing deep learning model (i.e., MaxViT) with pretrained weights from ImageNet and CWD30, on various agricultural datasets. 
Highlighting the improved cluster patterns and separation achieved using the CWD30 pretrained network.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 12 .12Fig. 12. Sample images from (a) SugarBeet [63], (b) BeanWeed [64] and (c) CarrotWeed [65] datasets.", "figure_data": "", "figure_id": "fig_10", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "model are shown in Figure 13. These curves demonstrate that initializing experiments with weights obtained from training on more relevant datasets (i.e., agricultural data) results in faster convergence and stable training.", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "et al. highlighted the main challenges in differentiating weed and", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Fig. 2. A . [43] and Khan et al. [44] emphasized the importance of combining spectral and spatial characteristics for remote sensing and ground-based weed identification approaches. Hasan et al. [8] conducted a comprehensive survey of deep learning techniques for weed detection and presented a taxonomy of deep learning techniques. However, recent studies by Moazzam et al. [4] and Coleman et al.TABLE ICOMPARATIVE ANALYSIS OF CHARACTERISTICS.THE SYMBOL HH, DM, AND VM CORRESPOND TO '˜' INDICATES AN APPROXIMATE VALUE. VARIOUS AGRICULTURAL DATASETS: KEY ATTRIBUTES ANDHANDHELD, DEVICE MOUNTED, AND VEHICLE MOUNTED CAMERAS, RESPECTIVELY.Acquisition Location Platform Avg. Image Resolution Multi View Growth Stages Availability Image Content Coverage Environment Background Dataset # Images # Cat.Tripod / Roadside Full plant 256x256 public weeds outdoor complex Deep Weeds [25] 17,509 9Overhead Trays Full plant No No ∼355x355 public weeds indoor simple Plant Seedling [26] 5,539 12Camera tray single leaf 6000x4000 public fruits indoor simple Fruit Leaf [27] 4,503 122048x1368 public Fruits, crops indoor simple PDDB [28] 46,409 56Lab Single leaf No No 224x224 public corn outdoor Simple Corn2022 [29] 7,701 4224x224 private wheat outdoor simple LWDCD2020 [30] 12,160 10Handheld Lab Single leaf 256x256 No No public ∼1070x907 RGB Single leaf Camera Single branch 300x300 private 800x600 public fruits, crops indoor simple fruits, crops outdoor complex rice outdoor simple cassava outdoor complex Plant Village [31] Plant Doc [32] RiceLeaf [33] CLD [34] 54,309 38 2,598 17 5,932 4 15,000 6Farmland Single leaf No No 4000x2672 public frutis outdoor simple AppleLeaf [35] 23,249 6Single branch -private weeds outdoor complex CNU [36] 208,477 21Single leaf 256x256 private fruits, crops, vegetables outdoor Simple PDD271 [37] 220,592 271Search Farmland, sketch, Single pest ∼525x413 Engines drawings on leaf No No private crop pests Simple / complex Simple IP102 [38] 75,222 102HH /DM / Simple / VM / Overhead Farmland, Pots Full plant ∼4032x3024 Yes Yes public crops, weeds complex camera Simple / complex CWD30 219,778 30", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "PRODUCTION SHARE ,IN MILLION METRIC TONS (M), OF THE 10 CROP SPECIES INCLUDED IN THE CWD30 DATASET FOR THE YEAR 2020 TO 2021, ACROSS VARIOUS COUNTRIES, EMPHASIZING THEIR SIGNIFICANCE AND CONTRIBUTION TO WORLDWIDE AGRICULTURAL PRODUCTION [48]-[50].", 
"figure_data": "CountryCornFoxtail Millet Great Millet Proso MilletBeanGreen Gram PeanutRed Bean SesameUnited States358.4M-9.7M---2.79M--China260.8M6.5M-1.8M-0.6M17.9M2.2M-Brazil81M---4.2M----India31.651M6M2.2M6.5M2M6.7M-0.8MNigeria12.4-7.1M---4.23M--Myanmar----3.9M0.9M--0.6MRussia13.87--1.1M-----", "figure_id": "tab_2", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "PERFORMANCE OF VARIOUS DEEP LEARNING MODELS ON THE CWD30 DATASET, COMPARING RESULTS OBTAINED FROM RANDOM INITIALIZATION AND IMAGENET INITIALIZATION.", "figure_data": "Typ.MethodsScratch F1 AccImageNet-1K F1 AccResNet-101 [55]76.3880.17 83.83 88.66CNNResNext-101 [56] MobileNetv3-L [57]79.76 74.6781.36 84.03 89.06 78.95 81.80 86.29EfficientNetv2-M [58] 87.3783.06 84.91 90.79ViT [59]78.9083.43 84.08 87.84Trans.SwinViT [60]81.5387.59 83.70 88.71MaxViT [61]82.2487.08 82.43 91.45", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "COMPARISON OF DEEP LEARNING MODELS USING PRETRAINED WEIGHTS FROM IMAGENET AND CWD30, HIGHLIGHTING THE IMPACT OF DATASET-SPECIFIC PRETRAINING ON MODEL PERFORMANCE.", "figure_data": "Typ.MethodsDeep Weeds [25] ImageNet-1k CWD-30Plant Seedlings [26] ImageNet-1k CWD-30 ImageNet-1k CWD-30 ImageNet-1k CWD-30 Cassava Plant [34] IP 102 [38]ResNet-101 [55]91.1395.0890.1496.2764.8271.4460.3466.87CNNResNext-101 [56] MobileNetv3-L [57]90.70 89.0895.87 94.6292.46 88.4397.79 96.5465.01 66.3473.22 71.1762.13 61.0867.90 64.53EfficientNetv2-M [58]91.3995.7890.8597.1861.1369.3460.8668.29ViT [59]86.2590.1891.4195.3958.2461.3259.7768.46Trans.SwinViT [60]88.8396.7093.2498.0673.8378.6659.1168.67MaxViT [61]87.7997.0492.4797.8971.5579.5460.5169.36", "figure_id": "tab_5", "figure_label": "V", "figure_type": "table" } ]
Talha Ilyas; Dewa Made; Sri Arsa; Khubaib Ahmad; Yong Chae Jeong; Okjae Won; Jong Hoon Lee; Hyongsuk Kim
[ { "authors": "P Radoglou-Grammatikis; P Sarigiannidis; T Lagkas; I Moscholios", "journal": "Computer Networks", "ref_id": "b0", "title": "A compilation of uav applications for precision agriculture", "year": "2020" }, { "authors": "N Iqbal; S Manalil; B S Chauhan; S W Adkins", "journal": "Archives of Agronomy and Soil Science", "ref_id": "b1", "title": "Investigation of alternate herbicides for effective weed management in glyphosatetolerant cotton", "year": "2019" }, { "authors": "D Patel; B Kumbhar", "journal": "Journal Pharmaceutical Science and Bioscientific Research (JPSBR)", "ref_id": "b2", "title": "Weed and its management: A major threats to crop economy", "year": "2016" }, { "authors": "S I Moazzam; U S Khan; M I Tiwana; J Iqbal; W S Qureshi; S I Shah", "journal": "IEEE", "ref_id": "b3", "title": "A review of application of deep learning for weeds and crops classification in agriculture", "year": "2019" }, { "authors": "T Ilyas; H Jin; M I Siddique; S J Lee; H Kim; L Chua", "journal": "Frontiers in Plant Science", "ref_id": "b4", "title": "Diana: A deep learning-based paprika plant disease and pest phenotyping system with disease severity analysis", "year": "2022" }, { "authors": "O Elsherbiny; L Zhou; L Feng; Z Qiu", "journal": "Remote Sensing", "ref_id": "b5", "title": "Integration of visible and thermal imagery with an artificial neural network approach for robust forecasting of canopy water content in rice", "year": "2021" }, { "authors": "I Sa; Z Chen; M Popović; R Khanna; F Liebisch; J Nieto; R Siegwart", "journal": "IEEE robotics and automation letters", "ref_id": "b6", "title": "weednet: Dense semantic weed classification using multispectral images and mav for smart farming", "year": "2017" }, { "authors": "A M Hasan; F Sohel; D Diepeveen; H Laga; M G Jones", "journal": "Computers and Electronics in Agriculture", "ref_id": "b7", "title": "A survey of deep learning techniques for weed detection from images", "year": "2021" }, { "authors": "Y Bai; J Mei; A L Yuille; C Xie", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b8", "title": "Are transformers more robust than cnns?", "year": "2021" }, { "authors": "A Joshi; D Guevara; M Earles", "journal": "", "ref_id": "b9", "title": "Standardizing and centralizing datasets to enable efficient training of agricultural deep learning models", "year": "2022" }, { "authors": "C Shorten; T M Khoshgoftaar", "journal": "Journal of big data", "ref_id": "b10", "title": "A survey on image data augmentation for deep learning", "year": "2019" }, { "authors": "D Su; H Kong; Y Qiao; S Sukkarieh", "journal": "Computers and Electronics in Agriculture", "ref_id": "b11", "title": "Data augmentation for deep learning based semantic segmentation and crop-weed classification in agricultural robotics", "year": "2021" }, { "authors": "C Shorten; T M Khoshgoftaar", "journal": "Journal of big data", "ref_id": "b12", "title": "A survey on image data augmentation for deep learning", "year": "2019" }, { "authors": "B Espejo-Garcia; N Mylonas; L Athanasakos; S Fountas; I Vasilakoglou", "journal": "Computers and Electronics in Agriculture", "ref_id": "b13", "title": "Towards weeds identification assistance through transfer learning", "year": "2020" }, { "authors": "Q H Cap; H Uga; S Kagiwada; H Iyatomi", "journal": "IEEE Transactions on Automation Science and Engineering", "ref_id": "b14", "title": "Leafgan: An effective data augmentation method for practical plant disease diagnosis", "year": "2020" }, { "authors": "T Moon; J E 
Son", "journal": "Computers and Electronics in Agriculture", "ref_id": "b15", "title": "Knowledge transfer for adapting pre-trained deep neural models to predict different greenhouse environments based on a low quantity of data", "year": "2021" }, { "authors": "S J Pan; Q Yang", "journal": "IEEE Transactions on knowledge and data engineering", "ref_id": "b16", "title": "A survey on transfer learning", "year": "2010" }, { "authors": "O Antonijević; S Jelić; B Bajat; M Kilibarda", "journal": "Journal of Big Data", "ref_id": "b17", "title": "Transfer learning approach based on satellite image time series for the crop classification problem", "year": "2023" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "Ieee", "ref_id": "b18", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "T.-Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "Springer", "ref_id": "b19", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Z ; Al Sahili; M Awad", "journal": "", "ref_id": "b20", "title": "Convolutional neural networks and deep learning for crop improvement and production", "year": "2023" }, { "authors": "A Kamilaris; F X Prenafeta-Boldú", "journal": "Computers and electronics in agriculture", "ref_id": "b21", "title": "Deep learning in agriculture: A survey", "year": "2018" }, { "authors": "A Fuentes; S Yoon; D S Park", "journal": "Frontiers in Plant Science", "ref_id": "b22", "title": "Deep learning-based phenotyping system with glocal description of plant anomalies and symptoms", "year": "2019" }, { "authors": "M Rahnemoonfar; C Sheppard", "journal": "Sensors", "ref_id": "b23", "title": "Deep count: fruit counting based on deep simulated learning", "year": "2017" }, { "authors": "A Olsen; D A Konovalov; B Philippa; P Ridd; J C Wood; J Johns; W Banks; B Girgenti; O Kenny; J Whinney", "journal": "Scientific reports", "ref_id": "b24", "title": "Deepweeds: A multiclass weed species image dataset for deep learning", "year": "2019" }, { "authors": "T M Giselsson; R N Jørgensen; P K Jensen; M Dyrmann; H S Midtiby", "journal": "", "ref_id": "b25", "title": "A public image database for benchmark of plant seedling classification algorithms", "year": "2017" }, { "authors": "S S Chouhan; U P Singh; A Kaul; S Jain", "journal": "IEEE", "ref_id": "b26", "title": "A data repository of leaf images: Practice towards plant conservation with plant pathology", "year": "2019" }, { "authors": "J G A Barbedo", "journal": "Biosystems Engineering", "ref_id": "b27", "title": "Plant disease identification from individual lesions and spots using deep learning", "year": "2019" }, { "authors": "X Qian; C Zhang; L Chen; K Li", "journal": "Frontiers in Plant Science", "ref_id": "b28", "title": "Deep learning-based identification of maize leaf diseases is improved by an attention mechanism: Self-attention", "year": "2022" }, { "authors": "L Goyal; C M Sharma; A Singh; P K Singh", "journal": "Informatics in Medicine Unlocked", "ref_id": "b29", "title": "Leaf and spike wheat disease detection & classification using an improved deep convolutional architecture", "year": "2021" }, { "authors": "D Hughes; M Salathé", "journal": "", "ref_id": "b30", "title": "An open access repository of images on plant health to enable the development of mobile disease diagnostics", "year": "2015" }, { "authors": "D Singh; N Jain; P Jain; P Kayal; S Kumawat; N Batra", "journal": "", "ref_id": "b31", "title": 
"Plantdoc: A dataset for visual plant disease detection", "year": "2020" }, { "authors": "P K Sethy; N K Barpanda; A K Rath; S K Behera", "journal": "Computers and Electronics in Agriculture", "ref_id": "b32", "title": "Deep feature based rice leaf disease identification using support vector machine", "year": "2020" }, { "authors": "H Ayu; A Surtono; D Apriyanto", "journal": "Journal of Physics: Conference Series", "ref_id": "b33", "title": "Deep learning for detection cassava leaf disease", "year": "2021" }, { "authors": "R Thapa; N Snavely; S Belongie; A Khan", "journal": "", "ref_id": "b34", "title": "The plant pathology 2020 challenge dataset to classify foliar disease of apples", "year": "2020" }, { "authors": "V H Trong; Y Gwang-Hyun; D T Vu; K Jin-Young", "journal": "Computers and Electronics in Agriculture", "ref_id": "b35", "title": "Late fusion of multimodal deep neural networks for weeds classification", "year": "2020" }, { "authors": "X Liu; W Min; S Mei; L Wang; S Jiang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b36", "title": "Plant disease recognition: A large-scale benchmark dataset and a visual region and loss reweighting approach", "year": "2021" }, { "authors": "X Wu; C Zhan; Y.-K Lai; M.-M Cheng; J Yang", "journal": "", "ref_id": "b37", "title": "Ip102: A largescale benchmark dataset for insect pest recognition", "year": "2019" }, { "authors": "D M S Arsa; T Ilyas; S.-H Park; O Won; H Kim", "journal": "Computers and Electronics in Agriculture", "ref_id": "b38", "title": "Eco-friendly weeding through precise detection of growing points via efficient multibranch convolutional neural networks", "year": "2023" }, { "authors": "H Escalante; S Rodríguez-Sánchez; M Jiménez-Lizárraga; A Morales-Reyes; J De; La Calleja; R Vazquez", "journal": "International journal of remote sensing", "ref_id": "b39", "title": "Barley yield and fertilization analysis from uav imagery: a deep learning approach", "year": "2019" }, { "authors": "J Yi; L Krusenbaum; P Unger; H Hüging; S J Seidel; G Schaaf; J Gall", "journal": "Sensors", "ref_id": "b40", "title": "Deep learning for non-invasive diagnosis of nutrient deficiencies in sugar beet using rgb images", "year": "2020" }, { "authors": "J H Westwood; R Charudattan; S O Duke; S A Fennimore; P Marrone; D C Slaughter; C Swanton; R Zollinger", "journal": "Weed science", "ref_id": "b41", "title": "Weed management in 2050: Perspectives on the future of weed science", "year": "2018" }, { "authors": "A Wang; W Zhang; X Wei", "journal": "Computers and electronics in agriculture", "ref_id": "b42", "title": "A review on weed detection using ground-based machine vision and image processing techniques", "year": "2019" }, { "authors": "A Khan; A D Vibhute; S Mali; C Patil", "journal": "Ecological Informatics", "ref_id": "b43", "title": "A systematic review on hyperspectral imaging technology with a machine and deep learning methodology for agricultural applications", "year": "2022" }, { "authors": "G R Coleman; W Salter", "journal": "AoB Plants", "ref_id": "b44", "title": "More eyes on the prize: open-source data, software and hardware for advancing plant science through collaboration", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b45", "title": "Plants database", "year": "2023-05-09" }, { "authors": "", "journal": "Weed surveys", "ref_id": "b46", "title": "", "year": "2023-05-09" }, { "authors": "", "journal": "", "ref_id": "b47", "title": "Agriculture production data", "year": "2023-05-09" }, { "authors": "", 
"journal": "", "ref_id": "b48", "title": "World agricultural production", "year": "2023-05-09" }, { "authors": "B Leff; N Ramankutty; J A Foley", "journal": "Global biogeochemical cycles", "ref_id": "b49", "title": "Geographic distribution of major crops across the world", "year": "2004" }, { "authors": "Y Lu; S Young", "journal": "Computers and Electronics in Agriculture", "ref_id": "b50", "title": "A survey of public datasets for computer vision tasks in precision agriculture", "year": "2020" }, { "authors": "W Coudron; A Gobin; C Boeckaert; T De Cuypere; P Lootens; S Pollet; K Verheyen; P De Frenne; T De Swaef", "journal": "Computers and Electronics in Agriculture", "ref_id": "b51", "title": "Data collection design for calibration of crop models using practical identifiability analysis", "year": "2021" }, { "authors": "W Coudron; P De Frenne; K Verheyen; A Gobin; C Boeckaert; T De Cuypere; P Lootens; S Pollet; T De Swaef", "journal": "Frontiers in Plant Science", "ref_id": "b52", "title": "Usefulness of cultivar-level calibration of aquacrop for vegetables depends on the crop and data availability", "year": "2023" }, { "authors": "S Yadav; S Shukla", "journal": "IEEE", "ref_id": "b53", "title": "Analysis of k-fold cross-validation over holdout validation on colossal datasets for quality classification", "year": "2016" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b54", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "S Xie; R Girshick; P Dollár; Z Tu; K He", "journal": "", "ref_id": "b55", "title": "Aggregated residual transformations for deep neural networks", "year": "2017" }, { "authors": "A Howard; M Sandler; G Chu; L.-C Chen; B Chen; M Tan; W Wang; Y Zhu; R Pang; V Vasudevan", "journal": "", "ref_id": "b56", "title": "Searching for mobilenetv3", "year": "2019" }, { "authors": "M Tan; Q Le", "journal": "PMLR", "ref_id": "b57", "title": "Efficientnetv2: Smaller models and faster training", "year": "2021" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly", "journal": "", "ref_id": "b58", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Z Liu; Y Lin; Y Cao; H Hu; Y Wei; Z Zhang; S Lin; B Guo", "journal": "", "ref_id": "b59", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Z Tu; H Talebi; H Zhang; F Yang; P Milanfar; A Bovik; Y Li", "journal": "Springer", "ref_id": "b60", "title": "Maxvit: Multi-axis vision transformer", "year": "2022" }, { "authors": "J M Johnson; T M Khoshgoftaar", "journal": "Journal of Big Data", "ref_id": "b61", "title": "Survey on deep learning with class imbalance", "year": "2019" }, { "authors": "N Chebrolu; P Lottes; A Schaefer; W Winterhalter; W Burgard; C Stachniss", "journal": "The International Journal of Robotics Research", "ref_id": "b62", "title": "Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields", "year": "2017" }, { "authors": "T Ilyas; H Kim; J Lee; O Won; Y Jeong", "journal": "", "ref_id": "b63", "title": "Adaptive deep learning for crop weed discrimination in unseen fields", "year": "2023" }, { "authors": "S Haug; J Ostermann", "journal": "", "ref_id": "b64", "title": "A crop/weed field image dataset for the evaluation of computer vision based precision agriculture tasks", "year": "2015" }, { "authors": "T T Cai; R 
Ma", "journal": "The Journal of Machine Learning Research", "ref_id": "b65", "title": "Theoretical foundations of t-sne for visualizing high-dimensional clustered data", "year": "2022" }, { "authors": "O Ronneberger; P Fischer; T Brox", "journal": "Springer", "ref_id": "b66", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "L.-C Chen; G Papandreou; F Schroff; H Adam", "journal": "", "ref_id": "b67", "title": "Rethinking atrous convolution for semantic image segmentation", "year": "2017" }, { "authors": "Y Yuan; X Chen; J Wang", "journal": "Springer", "ref_id": "b68", "title": "Object-contextual representations for semantic segmentation", "year": "2020" }, { "authors": "M.-H Guo; C.-Z Lu; Q Hou; Z Liu; M.-M Cheng; S.-M Hu", "journal": "", "ref_id": "b69", "title": "Segnext: Rethinking convolutional attention design for semantic segmentation", "year": "2022" } ]
[ { "formula_coordinates": [ 6, 95.4, 455.28, 417.49, 30.87 ], "formula_id": "formula_0", "formula_text": "Japan - - - - - - - 0.2M - South Korea - - - - - - - 0.1M - Sudan - - - - - - - - 1.1M" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b5", "b12", "b14", "b15", "b16", "b17", "b13", "b6", "b6", "b2", "b18", "b19" ], "table_ref": [], "text": "Screening for colorectal cancer is highly effective, as early detection is within reach, making this disease one of the most preventable. Today's standard of care screening method is optical colonoscopy, which searches the colon for mucosal abnormalities, such as polyps. However, performing a thorough examination of the entire colon surface using optical colonoscopy is challenging, which may lead to a lower polyp detection rate. Recent studies have shown that approximately 25% of polyps are routinely missed during colonoscopies [1].\nThe success (diagnostic accuracy) of a colonoscopy procedure is highly operator dependent. It varies based on the performing physician skills, experience, vigilance, fatigue, and more. To ensure high procedure quality, various quality metrics are measured and monitored. E.g., the Withdrawal Time (time from the colonoscope reaching cecum to removal of the instrument from the patient) metric was shown to be highly correlated to Adenoma Detection Rate (ADR) [6,13,15,16,17,18]. Another quality metric -Cecal Intubation Rate (proportion of colonoscopies in which the cecum is intubated) -is considered important to ensure good colon coverage.\nMost of these existing metrics are relatively easy to compute, but can provide only limited data on the quality of a specific procedure, and are typically used aggregatively for multiple sessions. Some studies [14] suggest that there are other factors that impact the polyp detection rate. For example, one may wish to distinguish between a good and bad colonoscope motion patterns, or assess the style of the examination. The hypothesis is that a better inspection style yields more informative visual input, which results in a better diagnostic accuracy.\nIn this work we propose a novel quantitative quality metric for colonoscopy, based on the automatic analysis of the induced video feed. This metric is computed locally in time, measuring how informative and helpful for colon inspection a local video segment is. As this instantaneous quality is very subjective and difficult to formulate, human annotation is problematic and ill-defined. Instead, we let an ML model build a meaningful visual data representation in a fully unsupervised way, and use it to construct a metric highly correlated with the clinical outcome. First, we learn visual representations of colonoscopy video frames using contrastive self-supervised learning. Then, we perform cluster analysis on these representations and construct a learned aggregation of these cluster assignments, bearing a strong correlation with polyp detection, which can serve as an indicator for \"good-quality\" video segments.\nWhile the proposed approach resembles the one proposed in [7], the addressed problems are markedly different, as [7] does phase detection in colonoscopy. There are other works aiming to learn frame representations in colonoscopy videos, However, those descriptors are usually associated with polyps, and used for polyp related tasks -tracking, re-identification [3,19], optical biopsy [20], etc.\nBy measuring the duration of good quality video segments over the withdrawal phase of the procedure, we derive a new offline colonoscopy quality metric. We show that this measure is strongly correlated to the Polyps Per Colonoscopy (PPC) quality metric. 
Moreover, we show how the real-time measurement of the quality of a colonoscopy procedure can be used to evaluate the likelihood of detecting a polyp at any specific point in time during the procedure."
}, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Our goal is to learn a colonoscopy quality metric through the identification of temporal intervals in which effective polyp detection is possible. We start by learning the colonoscopy video frame embedding using self-supervised learning, followed by a cluster analysis. Using those clusters, we learn a \"good\" frame classifier, which then serves as the basis for both global (offline) and local (online) quality metrics. The end-to-end framework is described in the following sections, and illustrated in Fig. 1."
}, { "figure_ref": [ "fig_0" ], "heading": "Frame Encoding", "publication_ref": [ "b3", "b1", "b3", "b8", "b9", "b10", "b4" ], "table_ref": [], "text": "We start by learning visual representations of colonoscopy frames using contrastive learning. We use SimCLR [4], which maximizes the agreement between representations of two randomly augmented versions of the same frame, while pushing away the representations of other frames (see Fig. 1). Specifically, frame $x_i$ is randomly augmented, resulting in two correlated views, $x_i^1$ and $x_i^2$, considered as a positive pair. These views are fed to an encoder $f_\theta(\cdot)$ and projection layer $g_\phi(\cdot)$, yielding the embedding vector $z_i^a = g_\phi(f_\theta(x_i^a))$ $(a = 1, 2)$. Given a batch of $N$ frames, the contrastive loss referring to the $i$-th frame is given by
$$\ell(z_i^1, z_i^2) = -\log \frac{\exp\left(\mathrm{sim}(z_i^1, z_i^2)/\tau\right)}{\sum_{k \neq i} \sum_{a=1}^{2} \sum_{b=1}^{2} \exp\left(\mathrm{sim}(z_i^a, z_k^b)/\tau\right)}, \tag{1}$$
where $\tau$ is a temperature parameter, and sim is the cosine similarity defined as $\mathrm{sim}(u, v) = u^T v / (\|u\|\,\|v\|)$. We use ResNet-RS50 [2] for the encoder and a simple MLP with one hidden layer for the projection layer, as suggested in [4].

Our training data consists of 1M frames randomly sampled from 2500 colonoscopy videos. Since the designed metric is supposed to be used for predicting the chance of detecting a polyp, it is not expected to be used on frames where the polyp is detected or treated. Therefore, we exclude such frames from the training set by detecting them automatically using off-the-shelf polyp and surgical tool detectors [9,10,11].

For augmentation we use standard geometric transformations (resize, rotation, translation), color jitter, and Cutout [5] with Gaussian noise filling.
Fig. 2: T-SNE plot of frame embeddings. K-means clusters are color coded."
}, { "figure_ref": [], "heading": "Frame Clustering", "publication_ref": [ "b11" ], "table_ref": [], "text": "The second step in our scheme is clustering the learned representations $f_\theta(x_i)$ into $K (= 10)$ clusters using k-means [12]. While standard k-means makes a hard assignment of each frame to its corresponding cluster, we use a soft alternative based on the distances between the frame descriptor and the cluster centers. Namely, we define the probability of the $i$-th frame to belong to the $k$-th cluster by
$$r_{i,k} = \mathrm{Prob}(f_\theta(x_i) \in k) \propto \left(\frac{1}{\|f_\theta(x_i) - c_k\|_2^2}\right)^{\alpha} \quad \text{for } k = 1, 2, \ldots, K, \tag{2}$$
where $\{c_k\}_{k=1}^{K}$ are the cluster centers, $\alpha = 16$, and the $\{r_{i,k}\}_{k=1}^{K}$ are normalized to sum to 1. Figure 2 shows the t-SNE projection of frame embeddings with k-means clusters color coded. Interestingly, the samples are clustered into relatively compact, meaningful groups.
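The soft assignment of Eq. (2) is straightforward to implement; the sketch below is a minimal NumPy version in which the array shapes, the random toy inputs, and the small numerical guard are assumptions rather than the authors' exact code.

```python
# Minimal sketch of the soft cluster assignment of Eq. (2):
# membership weight ~ (1 / squared distance to the cluster center) ** alpha,
# normalized so that each frame's weights sum to 1.
import numpy as np

def soft_assignments(features, centers, alpha=16):
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    w = (1.0 / np.maximum(d2, 1e-12)) ** alpha   # guard against zero distance
    return w / w.sum(axis=1, keepdims=True)

# Toy usage: 5 frame embeddings and K=10 cluster centers in a 128-d space.
rng = np.random.default_rng(0)
r = soft_assignments(rng.normal(size=(5, 128)), rng.normal(size=(10, 128)))
print(r.shape, r.sum(axis=1))  # (5, 10), each row sums to 1
```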
Figure 3 presents a random selection of frames from each cluster. One can see that clusters 1, 2 and 7 contain inside-body informative frames. In contrast, clusters 0, 3, 4, 5, 6, 8 and 9 contain non-informative outside-body and inside-body frames. Please see the SM for more visual examples."
}, { "figure_ref": [], "heading": "Online (Local) Quality Metric", "publication_ref": [ "b7" ], "table_ref": [], "text": "Based on the learned frame embeddings and clusters, we now design an online (local) quality metric. As our objective is to link the visual appearance to polyp detection, we will learn a metric that tries to predict one from the other.
Fig. 3: Clusters visualization. Random selection of frames from each cluster.
Namely, we learn a function $Q(\cdot)$ that maps the appearance of frame $x_i$, encoded by the vector $\{r_{i,k}\}_{k=1}^{K}$ (see Eq. 2), to the chance of detecting a polyp in the following frames. More precisely, we average the $\{r_{i,k}\}_{k=1}^{K}$ over a video segment of 10 sec to get $\{\bar{r}_{i,k}\}_{k=1}^{K}$, and train a binary classifier $Q(\{\bar{r}_{i,k}\}_{k=1}^{K})$ to predict the detection of a polyp in the following 2 sec.

The training set for the classifier is built from a set of 2243 colonoscopy videos annotated for the location of polyps. 1086 intervals of 10 seconds before the appearance of polyps are sampled from the training set as positive samples, and another 1086 random intervals are sampled as negative samples. $Q(\cdot)$ is implemented as a binary classifier with a single linear layer and trained with the Adam optimizer [8] for 500 epochs, using a batch size of 64.

While $Q(\cdot)$ achieves only a mediocre classification (i.e., polyp detection prediction) accuracy of 64% on the test set (indeed, it is very difficult to predict the detection of a polyp when it is not known that the polyp is there), we will show in the following sections that it can still be used as a quality metric."
}, { "figure_ref": [], "heading": "From Quality Metric to the Chance to Detect a Polyp", "publication_ref": [ "b0" ], "table_ref": [], "text": "We would like to assess the chance of detecting a polyp (if it exists) at a certain time point $t$ as a function of the procedure quality $Q$ in the preceding time interval $[t - \Delta t, t]$. Let us denote the event of having a polyp in the colon at time $t$ as $E$ (\"exists\"), and the event of detecting it as $D$ (\"detected\"). For this analysis we will treat the quality metric $Q$ from the previous section as a random variable in the range $[0, 1]$ measuring the quality of the procedure in the time interval $[t - \Delta t, t]$.

We are interested in estimating the following probability:
$$P(D \mid E, Q) = \frac{P(E, Q \mid D)\, P(D)}{P(E, Q)} = \frac{P(Q \mid D)\, P(E \mid Q, D)\, P(D)}{P(E, Q)}, \tag{3}$$
representing the chance of detecting a polyp if it exists as a function of quality. In the above, the first equality uses Bayes' rule, and the second exploits the chain probability relationship. We know that physicians rarely mistake a non-polyp for a polyp, implying that $P(E \mid Q, D) \approx 1$. Then, assuming independence between the existence of the polyp ($E$) and the quality of the procedure ($Q$), Eq. 3 becomes
$$P(D \mid E, Q) \approx \frac{P(Q \mid D)\, P(D)}{P(Q)\, P(E)}. \tag{4}$$
As mentioned above, the incidence of polyp detection false alarms in colonoscopy is negligible, hence the ratio $P(D)/P(E)$ can be interpreted as the average polyp detection rate/sensitivity (PDS). From the literature, we know that the polyp miss-rate in colonoscopy is about 20-25% [1]. Hence, $P(D)/P(E)$ can be approximated as 0.75-0.8, regardless of $Q$.
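To make Eq. (4) concrete, the sketch below combines histogram estimates of $P(Q)$ and $P(Q|D)$ with a detection-rate prior of about 0.8, mirroring the evaluation described later in Section 3.1; the synthetic Beta-distributed quality scores and the clipping step are illustrative assumptions, not the authors' data or code.

```python
# Minimal sketch of Eq. (4): P(D|E,Q) ~= PDS * P(Q|D) / P(Q),
# with both densities approximated by 10-bin histograms over [0, 1].
import numpy as np

def detection_likelihood(q_random, q_before_polyps, pds=0.8, bins=10):
    edges = np.linspace(0.0, 1.0, bins + 1)
    p_q, _ = np.histogram(q_random, bins=edges, density=True)
    p_q_d, _ = np.histogram(q_before_polyps, bins=edges, density=True)
    ratio = np.divide(p_q_d, p_q, out=np.zeros(bins), where=p_q > 0)
    return edges, np.clip(pds * ratio, 0.0, 1.0)   # likelihood per quality bin

# Toy usage with synthetic Q scores: segments preceding polyps are skewed
# towards higher quality than randomly chosen segments.
rng = np.random.default_rng(0)
edges, lik = detection_likelihood(rng.beta(2, 2, 543), rng.beta(4, 2, 543))
print(np.round(lik, 2))
```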
In practice, then, computing $P(D \mid E, Q)$ only requires approximating $P(Q)$ and $P(Q \mid D)$. This can be done empirically by estimating the distribution of $Q$ in random intervals and, for $P(Q \mid D)$, in intervals preceding polyps."
}, { "figure_ref": [], "heading": "Offline Quality Metric (Post-Procedure)", "publication_ref": [], "table_ref": [], "text": "We would like to design an offline quality indicator based on the above online measure $Q$. We define the following quality metric by integrating $Q$ over the entire withdrawal phase,
$$Q_{\mathrm{Offline}} = \sum_{i \in \mathrm{withdrawal}} Q\left(\{\bar{r}_{i,k}\}_{k=1}^{K}\right). \tag{5}$$
3 Experiments"
}, { "figure_ref": [ "fig_2" ], "heading": "Online Quality Metric Evaluation", "publication_ref": [], "table_ref": [], "text": "We would like to evaluate how relevant the proposed online quality metric $Q$ is to the ability to detect polyps. We do that by estimating the likelihood of detecting an existing polyp, $P(D \mid E, Q)$, as a function of $Q$. The higher the correlation between $Q$ and $P(D \mid E, Q)$, the better $Q$ is as a local colonoscopy quality metric.

As discussed above, $P(D \mid E, Q) \propto P(Q \mid D)/P(Q)$. Both $P(Q \mid D)$ and $P(Q)$ can be estimated empirically: for $P(Q)$ we build a 10-bin histogram of $Q$ measured in 543 randomly chosen colonoscopy video segments 10 sec long. The same is done for $P(Q \mid D)$, but with 543 video segments preceding a polyp.

The estimated $P(D \mid E, Q)$ is depicted in Figure 4. As one can see, the proposed quality metric $Q$ correlates very well with the polyp detection sensitivity (PDS). $Q$ can be computed online and provided as real-time feedback to the physician during the procedure."
}, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Offline Quality Metric Evaluation", "publication_ref": [], "table_ref": [], "text": "We would like to evaluate the effectiveness of the proposed offline quality metric $Q_{\mathrm{Offline}}$ in predicting the polyp detection sensitivity.

To do so, we compute $Q_{\mathrm{Offline}}$ for 500 annotated test set colonoscopies. We sort the cases in increasing order of $Q_{\mathrm{Offline}}$ and split them into 5 bins of 100 cases each, from lower $Q_{\mathrm{Offline}}$ to higher. For each bin we compute the average Polyps Per Colonoscopy (PPC) metric. The resulting histogram is shown in Fig. 5 (Left). One can observe a strong correlation between $Q_{\mathrm{Offline}}$ and the PPC metric.

Fig. 5 (Right) shows the distribution of procedures with (red) and without (blue) detected polyps, as a function of $Q_{\mathrm{Offline}}$. One can see that higher $Q_{\mathrm{Offline}}$ values are more likely to correspond to procedures with detected polyps.

The evaluations above suggest that the proposed quality metric $Q_{\mathrm{Offline}}$ is highly correlated to the polyp detection sensitivity (PDS). It is important to note that a high $Q_{\mathrm{Offline}}$ for any specific procedure does not mean that there is a high chance of finding a polyp in that procedure, as we don't know if there are any polyps there and how many. What it does mean is that if there is a polyp, there is a high chance it will be detected."
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We proposed novel online and offline colonoscopy quality metrics, computed based on the visual appearance of frames in colonoscopy video. The quality criteria for the visual appearance were automatically learned by an ML model in an unsupervised way.

Using a Bayesian approach, we developed a technique for estimating the likelihood of detecting an existing polyp as a function of the proposed local quality metric.
We used this likelihood estimation to demonstrate the correlation between the local quality metric and the polyp detection sensitivity. The proposed local metric can be computed online to provide real-time quality feedback to the performing physician.

Integrating the local metric over the withdrawal phase yields a global, offline quality metric. We show that the offline metric is highly correlated to the standard Polyps Per Colonoscopy (PPC) quality metric.

As the next step, we would like to estimate the impact of the proposed real-time quality feedback on the quality of the procedure, e.g., by measuring its impact on the Adenoma Detection Rate (ADR) in a prospective study." } ]
Colonoscopy is the standard of care technique for detecting and removing polyps for the prevention of colorectal cancer. Nevertheless, gastroenterologists (GI) routinely miss approximately 25% of polyps during colonoscopies. These misses are highly operator dependent, influenced by the physician skills, experience, vigilance, and fatigue. Standard quality metrics, such as Withdrawal Time or Cecal Intubation Rate, have been shown to be well correlated with Adenoma Detection Rate (ADR). However, those metrics are limited in their ability to assess the quality of a specific procedure, and they do not address quality aspects related to the style or technique of the examination. In this work we design novel online and offline quality metrics, based on visual appearance quality criteria learned by an ML model in an unsupervised way. Furthermore, we evaluate the likelihood of detecting an existing polyp as a function of quality and use it to demonstrate high correlation of the proposed metric to polyp detection sensitivity. The proposed online quality metric can be used to provide real time quality feedback to the performing GI. By integrating the local metric over the withdrawal phase, we build a global, offline quality metric, which is shown to be highly correlated to the standard Polyp Per Colonoscopy (PPC) quality metric.
Semi-supervised Quality Evaluation of Colonoscopy Procedures
[ { "figure_caption": "ClassifierFig. 1 :1Fig. 1: Method overview. (Left) Two augmented views for each frame are used to train the encoder and the projection head using contrastive learning. (Right top) Feature representations are directly clustered into semantically meaningful groups using K-means. (Right middle) Learning clusters' associations. (Right bottom) At inference time, cluster attributes are leveraged for quality metric evaluation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: The likelihood of detecting an existing polyp in a short video segment as a function of local quality metric Q.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: Q Offline during the withdrawal phase. (Left) The relationship between the proposed offline quality measure and the actual number of polyps detected, when Q Offline observations are divided into five equal-sized groups. (Right) Procedures with high Q Offline values are likely to have polyps.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" } ]
I Kligvasser; G Leifman; R Goldenberg; E Rivlin; M Elad (Verily)
[ { "authors": "S B Ahn; D S Han; J H Bae; T J Byun; J P Kim; C S Eun", "journal": "Gut and liver", "ref_id": "b0", "title": "The miss rate for colorectal adenoma determined by quality-adjusted, back-to-back colonoscopies", "year": "2012" }, { "authors": "I Bello; W Fedus; X Du; E D Cubuk; A Srinivas; T Y Lin; J Shlens; B Zoph", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b1", "title": "Revisiting resnets: Improved training and scaling strategies", "year": "2021" }, { "authors": "C Biffi; P Salvagnini; N N Dinh; C Hassan; P Sharma; A Cherubini", "journal": "NPJ digital medicine", "ref_id": "b2", "title": "A novel ai device for real-time optical characterization of colorectal polyps", "year": "2022" }, { "authors": "T Chen; S Kornblith; M Norouzi; G Hinton", "journal": "PMLR", "ref_id": "b3", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "T Devries; G W Taylor", "journal": "", "ref_id": "b4", "title": "Improved regularization of convolutional neural networks with cutout", "year": "2017" }, { "authors": "H Fatima; D K Rex; R Rothstein; E Rahmani; O Nehme; J Dewitt; D Helper; A Toor; S Bensen", "journal": "Clinical Gastroenterology and Hepatology", "ref_id": "b5", "title": "Cecal insertion and withdrawal times with wide-angle versus standard colonoscopes: a randomized controlled trial", "year": "2008" }, { "authors": "O Kelner; O Weinstein; E Rivlin; R Goldenberg", "journal": "", "ref_id": "b6", "title": "Motion-based weak supervision for video parsing with application to colonoscopy", "year": "2022" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b7", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "J Lachter; S C Schlachter; R S Plowman; R Goldenberg; Y Raz; N Rabani; N Aizenberg; A Suissa; E Rivlin", "journal": "", "ref_id": "b8", "title": "Novel artificial intelligence-enabled deep learning system to enhance adenoma detection: a prospective randomized controlled study", "year": "2023" }, { "authors": "G Leifman; A Aides; T Golany; D Freedman; E Rivlin", "journal": "IEEE", "ref_id": "b9", "title": "Pixel-accurate segmentation of surgical tools based on bounding box annotations", "year": "2022" }, { "authors": "D M Livovsky; D Veikherman; T Golany; A Aides; V Dashinsky; N Rabani; D B Shimol; Y Blau; L Katzir; I Shimshoni", "journal": "Gastrointestinal Endoscopy", "ref_id": "b10", "title": "Detection of elusive polyps using a large-scale artificial intelligence system (with videos)", "year": "2021" }, { "authors": "S Lloyd", "journal": "IEEE transactions on information theory", "ref_id": "b11", "title": "Least squares quantization in pcm", "year": "1982" }, { "authors": "W Sanchez; G C Harewood; B T Petersen", "journal": "Official journal of the American College of Gastroenterology| ACG", "ref_id": "b12", "title": "Evaluation of polyp detection in relation to procedure time of screening or surveillance colonoscopy", "year": "2004" }, { "authors": "M S Sawhney; M S Cury; N Neeman; L H Ngo; J M Lewis; R Chuttani; D K Pleskow; M D Aronson", "journal": "Gastroenterology", "ref_id": "b13", "title": "Effect of institution-wide policy of colonoscopy withdrawal time more than 7 minutes on polyp detection", "year": "2008" }, { "authors": "A Shaukat; T S Rector; T R Church; F A Lederle; A S Kim; J M Rank; J I Allen", "journal": "Gastroenterology", "ref_id": "b14", "title": "Longer withdrawal time is associated with a reduced incidence of interval cancer 
after screening colonoscopy", "year": "2015" }, { "authors": "R Shine; A Bui; A Burgess", "journal": "ANZ journal of surgery", "ref_id": "b15", "title": "Quality indicators in colonoscopy: an evolving paradigm", "year": "2020" }, { "authors": "D T Simmons; G C Harewood; T H Baron; B T Petersen; K K Wang; F Boyd-Enders; B J Ott", "journal": "Alimentary pharmacology & therapeutics", "ref_id": "b16", "title": "Impact of endoscopist withdrawal speed on polyp yield: implications for optimal colonoscopy withdrawal time", "year": "2006" }, { "authors": "S R Vavricka; M C Sulz; L Degen; R Rechner; M Manz; L Biedermann; C Beglinger; S Peter; E Safroneeva; G Rogler", "journal": "Endoscopy", "ref_id": "b17", "title": "Monitoring colonoscopy withdrawal time significantly improves the adenoma detection rate and the performance of endoscopists", "year": "2016" }, { "authors": "T Yu; N Lin; X Zhang; Y Pan; H Hu; W Zheng; J Liu; W Hu; H Duan; J Si", "journal": "Artificial Intelligence in Medicine", "ref_id": "b18", "title": "An end-to-end tracking method for polyp detectors in colonoscopy videos", "year": "2022" }, { "authors": "Q E Van Der Zander; R M Schreuder; R Fonollà; T Scheeve; F Van Der Sommen; B Winkens; P Aepli; B Hayee; A B Pischel; M Stefanovic", "journal": "Endoscopy", "ref_id": "b19", "title": "Optical diagnosis of colorectal polyp images using a newly developed computer-aided diagnosis system (cadx) compared with intuitive optical diagnosis", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 193.79, 483.6, 286.8, 27.88 ], "formula_id": "formula_0", "formula_text": "(z 1 i , z 2 i ) = -log exp(sim(z 1 i , z 2 i )/τ ) k =i 2 a=1 2 b=1 exp(sim(z a i , z b k )/τ ) ,(1)" }, { "formula_coordinates": [ 4, 445.85, 150.39, 12.28, 111.8 ], "formula_id": "formula_1", "formula_text": "#0 #1 #2 #3 #4 #5 #6 #7 #8 #9" }, { "formula_coordinates": [ 4, 152.92, 446.35, 327.67, 27.59 ], "formula_id": "formula_2", "formula_text": "r i,k = P rob(f θ (x i ) ∈ k) ∼ 1 f θ (x i ) -c k 2 2 α for k = 1, 2, . . . , K,(2)" }, { "formula_coordinates": [ 5, 146.01, 126, 325.28, 7.92 ], "formula_id": "formula_3", "formula_text": "#0 #1 #2 #3 #4 #5 #6 #7 #8 #9" }, { "formula_coordinates": [ 5, 178.64, 645.03, 301.95, 22.31 ], "formula_id": "formula_4", "formula_text": "P (D|E, Q) = P (E, Q|D)P (D) P (E, Q) = P (Q|D)P (E|Q, D)P (D) P (E, Q) ,(3)" }, { "formula_coordinates": [ 6, 246.94, 191.54, 233.65, 22.31 ], "formula_id": "formula_5", "formula_text": "P (D|E, Q) ≈ P (Q|D)P (D) P (Q)P (E)(4)" }, { "formula_coordinates": [ 6, 229.87, 412.72, 250.73, 22.17 ], "formula_id": "formula_6", "formula_text": "Q Offline = i∈withdrawal Q {r i,k } K k=1 .(5)" } ]
10.1145/nnnnnnn.nnnnnnn
2023-05-17
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b87", "b106", "b129", "b131", "b168", "b169", "b197", "b216", "b218", "b75", "b130", "b180", "b179", "b91", "b233" ], "table_ref": [], "text": "Reinforcement Learning (RL) is extensively explored due to its tremendous potential in solving sequence decision tasks [88,107,129,131,168,169,197,216,218]. Kaelbling et al. pointed out in 1996 [76] that RL will be widely used in game playing and robotics. Mnih et al. [130] propose Deep Reinforcement Learning (DRL) to combine reinforcement learning with reasoning ability and Deep Learning (RL) with representative capacity, and the performance of the trained agent outperformed that of human players in various Atari games. Silver et al. use RL to solve Go games in 2007 [180] and propose AlphaGo leveraging deep neural networks and Monte Carlo tree search in 2016 [179]. In robotics, DRL also achieves outstanding developments such as quadrupedal movement [92,233]. The latest ChatGPT is well-known worldwide and makes use of RL-related technology. In the 20 " }, { "figure_ref": [ "fig_0" ], "heading": "Work", "publication_ref": [ "b51", "b77", "b219", "b19", "b242", "b254", "b36", "b38", "b61", "b86", "b201", "b247", "b225", "b75", "b225", "b189" ], "table_ref": [ "tab_0" ], "text": "Scope SARL MARL Applications Trustworthy Human [52,78,219] [102] Future Internet [136] [213] Model-based [20] Large Population [242] Decentralized [254] Communication [37] Causal [39] Safety [62] Robustness [87,201,247] Generalization [225] Comprehensive Brief Ours Brief Comprehensive years since DRL was proposed, there has been a continuous rise in research interest in games and robotics. Visionary applications of RL are summarized in [76].\nMulti-Agent Reinforcement Learning (MARL) research is advancing significantly based on the issues of poor scalability and non-stationary and has shown remarkable success in a range of applications. We summarize the relevant research on MARL in nine domains, involved in engineering and science.\nHowever, despite the impressive achievements, it is still necessary to construct trustworthy MARL to apply it to real-world tasks better. Consequently, one of the most critical topics we need to focus on in the next 10 to 20 years is how to establish a trustworthy MARL. As stated in [225], the intrinsic safety, robustness, and generalization of RL still need to improve, making it challenging to realize accurate general intelligence. While it mainly focuses on the single-agent domain. Compared to Single-agent Reinforcement Learning (SARL), MARL requires consideration not only of individual policy trustworthiness but also of the reliability of team interaction policies. As the number of agents increases, the complexity of team policies also increases, which increases the difficulty of researching trustworthy MARL. Currently, there is a portion of research on trustworthy MARL, but it is still in the early stages. To promote the development of this field, we conduct a comprehensive investigation of trustworthy multi-agent reinforcement learning from four aspects, including safety, robustness, generalization, and learning with ethical constraints.\nBy integrating human aspects, it is necessary to take into consideration not just agent collaboration but also the interaction between intelligent physical information systems and human civilization. 
In relation to MARL for human-machine interaction, we present four challenges: non-Markovian due to human intervention, diversity of human behavior, complex heterogeneity, and scalability of multi-human and multi-machines.\nThe difference between this paper and other related reviews are listed in Table 1. The outline of this paper is shown in Fig. 1. The rest of this survey is organized as follows. In Section 2, we give a relevant definition of MARL and summarize typical research methods. Section 3 shows the specific application scenarios of MARL. Section 4 summarizes the definition, related research, and limitations of trustworthy MARL. In Section 5, we point out the challenges faced by human-compatible MARL. Section 6 concludes the whole paper.\nwhere 𝑅 (𝑠 𝑡 , 𝑎 𝑡 , 𝑠 𝑡 +1 ) is an immediate reward environment returned when the agent executes action 𝑎 𝑡 at time step 𝑡 to make the state transit from 𝑠 𝑡 to 𝑠 𝑡 +1 . Many techniques to solve MDP are divided into value-based and policy-based methods. The most popular value-based method is Q-Learning [189] which approximates the optimal Q-function 𝑄 * by Q and updates its value via TD as follows, Q (𝑠 𝑡 , 𝑎 𝑡 ) ← Q (𝑠 𝑡 , 𝑎 𝑡 ) + 𝛼 𝑅 𝑡 + 𝛾 𝑚𝑎𝑥 learning cooperation and learning communication according to whether communication between agents is involved in the execution process." }, { "figure_ref": [], "heading": "Learning cooperation.", "publication_ref": [ "b161", "b181", "b188", "b191", "b208", "b113", "b118", "b144", "b164", "b191", "b188", "b161", "b181", "b125", "b208", "b118", "b62", "b113", "b164", "b144", "b227", "b31", "b32", "b243" ], "table_ref": [], "text": "The typical approach for learning cooperation involves centralized training and decentralized execution (CTDE), utilizing global or communication information during the training process while only using the observation information of the current agent during the execution phase. It also includes value-based [161,181,188,191,208] and policy-based [114,119,144,164] MARL methods.\nValue-based MARL: The updated rule of Eq. ( 3) is suitable for the multi-agent scenario:\nQ𝑖 (𝑠 𝑡 , 𝒂 𝑡 ) ← Q𝑖 (𝑠 𝑡 , 𝒂 𝑡 ) + 𝛼 𝑅 𝑖 + 𝑚𝑎𝑥\n𝑎 𝑖 ∈A 𝑖 𝛾 { Q 𝑗 (𝑠 𝑡 , 𝒂 𝑡 )} 𝑗 ∈ {1,...,𝑁 } -Q𝑖 (𝑠 𝑡 , 𝒂 𝑡 ) .(9)\nTampuu et al. [191] first extend the DQN to the multi-agent scenario equipping an independent DQN for each agent, i.e., only considering the agent's interaction with the environment. The experimental results demonstrate that this fully distributed training can produce good results for simple MAS but that it is difficult to converge for complex tasks and that there is a credit assignment issue. Sunehag et al. [188] overcome these issues by introducing a Value Decomposition Network (VDN) based on the CTDE. An optimal linear-valued decomposition is trained from the team reward function with VDN, and during execution, each agent uses an implicit value function based only on partial observations to make decisions. However, this decomposition is linear and can only apply to small-scale scenarios. Rashid et al. [161] use an end-to-end Q-Mixing Network (QMIX) to train decentralized policies following the advantages of VND. QMIX is a complex non-linear network that constrains the joint Q-function monotonic on the Q-function of each agent. This ensures the consistency of centralized and decentralized policies and simplifies the solution for maximizing the joint action-value function in offline policy learning. Son et al. 
[181] develop an innovative MARL factorization technique called QTRAN that eliminates the structural restriction and uses a novel technique to convert the initial joint action-value function into a simple decomposition function.\nAlthough the decomposition of QTRAN is more complex computationally, it covers a broader range of MARL activities as compared to VDN and QMIX. An approximate QTRAN performs hard in complex domains with online data collecting and requires two extra soft regularizations [125]. As a result, effective scalability is still a challenge for cooperative MARL. Wang et al. [208] use a duplex dueling network structure (QPLEX) to decompose the joint action-value function into an action-value function for each agent to address this challenge. It is made easier to learn actionvalue functions with a linear decomposition structure by reformulating the Individual-Global-Max (IGM) consistency as a restriction on the value range of the advantage function, which is a strong scalability value-based MARL technique.\nPolicy-based MARL: The state of the environment is determined by the action of all agents in the multi-agent scenario. The value-based method is challenging to train due to the unstable environment, and the variance of the policy-based method gets more prominent as the number of agents increases. Lowe et al. [119] proposed a variant of the actor-critic method in a multi-agent scenario -multi-agent deep deterministic policy gradient (MADDPG), which considers the action strategies of other agents in the process of reinforcement learning training for each agent, and Only individual information is considered during the testing phase. The multi-agent deterministic policy gradient can be written as\n▽ 𝜃 𝑖 J 𝑖 (𝜃 ) = E 𝑠∼𝜇 𝝅 𝜃 𝑖 ▽ 𝜃 𝑖 𝑙𝑜𝑔 𝜋 𝜃 𝑖 𝑎 𝑖 |𝑠 ▽ 𝑎 𝑖 𝑄 𝜋 𝜃 𝑖 (𝑠, 𝒂) | 𝒂=𝝅 𝜃 (𝑠) .(10)\nHowever, as the number of agents increases, the estimation error in the critic network also increases, making it difficult to scale MADDPG to larger environments. To address this limitation, researchers have proposed attention mechanisms that allow agents to focus dynamically on relevant information.\nFor example, the MAAC [63], G2ANet [114] and HAMA [164] algorithms use graph structures to model agent relationships and employ attention mechanisms to weigh their relevance. This approach has shown promising results in environments with a large number of agents. Another challenge in MAS is the need to adapt to changes in collaborative policies. The FACMAC algorithm [144] addresses this issue by incorporating a centralized strategy gradient estimation to optimize joint action spaces. This method has been shown to outperform MADDPG and QMIX in environments with large-scale continuous actions. Mean-Field-based MARL: The above methods are all based on the CTDE training framework, effectively addressing the problem of non-Markovian environments in fully decentralized training frameworks and the problem of high computational complexity in fully centralized training frameworks. However, existing MARL methods are usually limited to a small number of agents, and scalability remains a challenging issue. Yang et al. [227] propose mean-field reinforcement learning (MFRL), which approximates the interaction between individuals as the interaction between individuals and the average effect of the whole group or neighboring individuals and the convergence of Nash equilibrium solutions is analyzed. Ganapathi et al. [32] extended MFRL to multiple types of domains and proposed the MTMFQ method. 
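As a concrete illustration of the mean-field idea just described, where each agent's Q-function conditions on its own action and on the mean (one-hot averaged) action of its neighbours, the following is a minimal PyTorch sketch. It is not the MFRL or MTMFQ implementation; the network sizes, the Boltzmann temperature, and all names are assumptions made for illustration.

```python
# Minimal sketch (illustrative, not the MFRL/MTMFQ code): a per-agent Q-function that
# takes the agent's observation together with the mean action of its neighbours.
# Network sizes, the Boltzmann temperature and all names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldQ(nn.Module):
    """Q_i(s, a_i, mean_a): scores every own action given obs and neighbours' mean action."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, obs, mean_action):
        return self.net(torch.cat([obs, mean_action], dim=-1))

def mean_field_td_loss(q_net, target_net, batch, gamma=0.95, temperature=1.0):
    """One TD step: the target uses a Boltzmann average of the target Q at the next
    state, conditioned on the previous mean action (the usual mean-field approximation)."""
    obs, act, rew, next_obs, mean_act, next_mean_act = batch
    q_all = q_net(obs, mean_act)                              # [B, n_actions]
    q_taken = q_all.gather(1, act.unsqueeze(1)).squeeze(1)    # Q_i(s, a_i, mean_a)
    with torch.no_grad():
        next_q = target_net(next_obs, next_mean_act)          # [B, n_actions]
        pi = F.softmax(next_q / temperature, dim=-1)          # Boltzmann policy
        v_mf = (pi * next_q).sum(dim=-1)                      # mean-field value of s'
        target = rew + gamma * v_mf
    return F.mse_loss(q_taken, target)

# The mean action fed to each agent is simply the average of its neighbours'
# one-hot actions, e.g. mean_act = neighbour_one_hots.mean(dim=1).
```

The multi-type extension of MTMFQ, discussed next, relaxes the single-population assumption behind this update.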
Multiple types relax a core assumption in mean-field games, which is that all agents in the environment are using almost identical strategies and have the same goals. Then they further relaxed the assumption of MFRL and extended it to partially observable domains, assuming that agents can only observe information from a fixed neighborhood or from other agents based on random distances [33]. Zhang et al. [243] apply mean-field theory to the value function decomposition-based MARL framework and proposed the MFVDN method, which solves the problems of homogenous agents, limited representation, and inability to execute with local information decentralized in MFRL." }, { "figure_ref": [], "heading": "Learning communication.", "publication_ref": [ "b254", "b27", "b85", "b54", "b43", "b184", "b146", "b66", "b211", "b118", "b255", "b232", "b250", "b7", "b131", "b60", "b87", "b251", "b129", "b23", "b137" ], "table_ref": [], "text": "The purpose of learning communication is for agents to learn when, with which agents, and what information to communicate, which can be categorized as reinforced and differentiable according to [254].\nReinforced: Foerster et al. [28] use DQN with a recurrent network to handle partial observability called RIAL. Kilinc et al. [86] improve a DDPG algorithm enhanced by a communication medium including a concurrent learning mechanism that allows agents to decide if their private observations need to be shared with others. To maximize communication efficiency, Huang et al. [55] propose a network named ETCNet, that uses RL to find the optimal communication protocol within bandwidth constraints. The bandwidth is minimized due to messages being sent only when necessary. Gupta et al. [44] introduce a central agent observing every observation with multiple agents only receiving local observations and no communication. The central agent determines the message each agent needs to make better decisions based on global observations, avoiding central solving of the entire problem.\nDifferentiable: Sukhbaatar et al. [184] develop a neural model CommNet that lets the agents communicate continuously for fully cooperative tasks. Agents learn both their policy and communication way during training. To maintain effective communication, Peng et al. [146] propose a multi-agent Bidirectionally-Coordinated Network (BiCNet) with a vectorized actor-critic formulation. They demonstrate that BiCNet can learn advanced coordination methods without supervision. To learn abstract representations of the interaction of agents, Jiang et al. [67] propose graph convolution RL that leverages graph convolution to adapt to the underlying dynamics of the graph, with relation kernels capturing the interaction of agents. Wang et al. [211] devise a novel approach entitled IMAC, which addresses the challenges of constrained-bandwidth communication in MARL. IMAC optimizes resource usage, minimizes needless connections, and allows smooth communication protocols and schedules. It uses low-entropy messages that stick to bandwidth limits and MADDPG-based [119] MA [255] Game theoretic MA Auto-driving [232] Dynamic coordination graph MA [250] Auto-driving simulation platform MA [8] DQN-based [131] MA [61] AC-based [88] SA [251] AC-based [129] MA merges the information bottleneck principle with a weight-based scheduler to produce a practical protocol. Using an attention mechanism is insufficient as it overlooks dynamic communication and the correlation between agents' connections. To tackle this issue, Du et al. 
[24] propose a method that utilizes a normalizing flow to encode the correlation between agents' interactions, allowing for direct learning of the dynamic communication topology. This methodology proves effective in cooperative navigation and adaptive traffic control tasks. Niu et al. [137] leverage a graph-attention mechanism to determine the most pertinent agent of messages and the most suitable means of delivery.\nOverall, these algorithms aim to improve the scalability and non-stationary of MAS, allowing agents to learn from the experiences of other agents and achieve better performance in complex environments." }, { "figure_ref": [], "heading": "APPLICATIONS OF MULTI-AGENT REINFORCEMENT LEARNING", "publication_ref": [], "table_ref": [], "text": "Through MARL, agents are able to learn and communicate with each other, thereby achieving more efficient task completion and better decision-making results. This method is widely used in engineering and science, for example, in smart transportation, unmanned aerial vehicles, intelligent information system, public health and intelligent medical diagnosis, smart manufacturing, financial trade, network security, smart education, and RL for science." }, { "figure_ref": [], "heading": "Smart Transportation", "publication_ref": [ "b98", "b220", "b255" ], "table_ref": [ "tab_1" ], "text": "Smart transportation makes use of advanced technologies like the Internet of Things (IoT) and AI to increase safety, improve transportation efficiency, and reduce its negative environmental effects. In MARL-based smart transportation, we describe two known scenarios: traffic light control and auto-driving and present the role of humans in these intelligent systems. The correspondence between this application and RL methods is shown in Table 2.\nTraffic light control: Li et al. [99] use DQN to obtain the optimal policy in sight of the variety of the control action and the state and demonstrate the potential of DRL in traffic light control. However, the control of traffic lights needs to consider the situation of multiple intersections. Wu et al. [220] combine MADDPG with Long-short-term Memory (LSTM) for multi-intersection traffic light control coordination. The use of LSTM is appropriate to address the environmental instability forced on by partial observable states. They take into account both the cars and the pedestrians waiting to cross the street. Zhu et al. [255] propose a Bi-hierarchical Game-theoretic (BHGT) to solve network-wide traffic signal control problems. They evaluate the state of the network-wide traffic based on the collection data of trips. The experiment shows that BHGT efficiently reduces the network-wide travel delay. " }, { "figure_ref": [], "heading": "Applications", "publication_ref": [], "table_ref": [], "text": "Papers Methods SA/MA" }, { "figure_ref": [], "heading": "Unmanned Aerial Vehicles", "publication_ref": [ "b124", "b131", "b207", "b106", "b210", "b118", "b148", "b131", "b74", "b131", "b204", "b168", "b134", "b131", "b65", "b73", "b62", "b177", "b118", "b232", "b250", "b7", "b60", "b251" ], "table_ref": [], "text": "Cluster control [124] DQN-based [131] SA [207] DDPG-based [107] MA [210] MADDPG-based [119] MA\nEnvironmental monitoring [148] DQN-based [131] MA [75] DQN-based [131] MA [204] TRPO-based [168] MA [134] DQN-based [131] MA Collaborative transportation [66,74] MAAC-based [63] MA [177] MADDPG-based [119] MA Auto-driving: Chao et al. 
[232] simulate the dynamic topography during vehicle interactions using a dynamic coordination graph and put forward two fundamental learning strategies to coordinate the driving actions for a fleet of vehicles. Additionally, they propose a number of extension mechanisms in order to adapt to the complex scenario with any number of vehicles. Zhou et al. [250] build an autonomous driving simulation platform to realize more realistic and diverse interactions. Bhalla et al. [8] propose two novel centralized training based on DQN and a memory block to execute decentralized, which achieve better cumulative reward in autonomous driving. Huang et al. [61] propose a sample efficient DRL framework including imitative expert priors and RL. The agent learns expert policy from the prior human knowledge and is guided by minimizing the KL divergence between the policy of the agent and the imitative expert. Zhou et al. [251] propose a MARL framework composed of a brand-new local reward and scheme for sharing parameters for lane-changing decision makings.\nAs a system that involves both physical and digital components, it requires the active participation and cooperation of humans to achieve its full potential. Humans play a crucial role in the operation and management of systems for transportation, from designing and building infrastructure to using and maintaining vehicles to making decisions about routing and scheduling. Thus, the success of smart transportation ultimately depends on how well it can integrate and leverage the capabilities of both humans and machines in a seamless and effective manner. However, the current state of research on MARL-based smart transportation is without adequately address the decision priority between human control and intelligent algorithms. Given the continuously evolving nature of both human behavior and city traffic, situations such as traffic accidents and surges in vehicles can make it challenging to manage traffic jams solely through traffic signal control. In such scenarios, human intervention becomes necessary. Similarly, in instances where self-driving cars encounter hazardous situations that were not anticipated during training, relinquishing control to the human driver is critical. Defining the optimal decision priority between humans and agents remains an unresolved issue." }, { "figure_ref": [], "heading": "Unmanned Aerial Vehicles", "publication_ref": [ "b124", "b158", "b207", "b210", "b222", "b223", "b224", "b74", "b134", "b148", "b204", "b65", "b73", "b177", "b124", "b207", "b210", "b148", "b74", "b204", "b134", "b73", "b65", "b177", "b174" ], "table_ref": [ "tab_2" ], "text": "In MARL-based Unmanned Aerial Vehicles (UAVs) applications, we describe three known scenarios: cluster control [124,158,207,210,[222][223][224], environmental monitoring [75,134,148,204], and collaborative transportation [66,74,177]. The correspondence between this application and RL methods is shown in Table 3.\nCluster control: Maciel-Pearson et al. [124] make use of DRL to improve the ability of UAVs to automatically navigate when the environments are various. The approach uses a double stateinput strategy that combines positional information with feature maps from the current scene. This approach is tested and shown to outperform other DQN variants and has the ability to navigate through multiple unknown environments and extreme weather conditions. A two-stage RL method is proposed by Wang et al. 
[207] for multi-UAV collision avoidance to address the issues of high variance and low reproducibility, where supervised training is in the first stage and policy gradient is in the next stage. Wang et al. [210] propose a trajectory control method according to MARL, which introduced a low-complexity approach to optimize the offloading decisions of the user equipment given the trajectories of UAVs. The results show that the proposed approach has promising performance.\nEnvironmental monitoring: Pham et al. [148] propose a distributed MARL algorithm to achieve complete coverage of an unfamiliar area while minimizing overlapping fields of view. Julian and Kochenderfer [75] present two DRL approaches for controlling teams of UAVs to monitor wildfires. The approaches accommodate the problem with uncertainty and high dimensionality and allow the UAV to accurately track the wildfire expansions and outperform existing controllers. The approaches scale with different numbers of UAVs and generalize to various wildfire shapes. Walker et al. [204] propose a method for indoor target-finding by combining Partially Observable MDP (POMDP) and DRL. The framework consists of two stages: planning and control. Global planning is done using an online POMDP solver, while local control is done using Deep RL. Mou et al. [134] propose a hierarchical UAV swarm architecture based on the DRL algorithm for solving the 3D irregular terrain surface coverage problem. A geometric approach is used to divide the 3D terrain surface into weighted 2D patches. A coverage trajectory algorithm is designed for low-level follower UAVs to achieve specific coverage tasks within patches. For high-level leader UAVs, a swarm DQN algorithm is proposed to choose patches, which integrates Convolutional Neural Networks (CNNs) and mean embedding methods to address communication limitations.\nCollaborative transportation: Jeon et al. [74] design a UAV logistics delivery service environment using Unity to evaluate MADRL-based models, and Jo [66] propose a fusion-multi-actorattention-critic (F-MAAC) model based on the MAAC. It is shown from the results that F-MAAC outperformed MAAC in terms of the total number of deliveries completed during a specific period and the total number of deliveries completed over the same distance. Our previous work [177] develops a virtual platform for multi-UAVs collaborative transport using AirSim [174] and proposed recurrent-MADDPG with domain randomization technique to achieve MARL sim2real transfer.\nBy utilizing MARL, UAV systems can make autonomous decisions and collaborations in various scenarios, leading to more efficient task completion. However, existing works do not consider the command and interaction between ground workstations and operators for UAV systems, and the robustness and safety of MARL are deficient. When a UAV encounters interference and cannot make the correct decisions, it can cause serious harm to human society. Considering the interaction between intelligent UAV systems and humans to achieve more efficient and safer UAV systems is one of the goals in future 10-20 years." 
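As a hedged illustration of the domain-randomization technique mentioned for the collaborative-transport work above, the following sketch wraps a hypothetical multi-UAV simulator so that its physical parameters are re-sampled at every episode reset; the environment interface, parameter names, and ranges are assumptions, not the actual platform API.

```python
# Minimal sketch of the domain-randomization idea for sim2real transfer: physical
# parameters of the simulator are re-sampled at every reset so the learned multi-UAV
# policy does not overfit one dynamics setting.
# The environment API and the parameter names/ranges are assumptions for illustration.
import random

class DomainRandomizationWrapper:
    def __init__(self, env, ranges=None):
        self.env = env
        # Hypothetical physics parameters and sampling ranges.
        self.ranges = ranges or {
            "mass_scale":   (0.8, 1.2),
            "wind_speed":   (0.0, 5.0),
            "motor_lag":    (0.00, 0.05),
            "sensor_noise": (0.00, 0.02),
        }

    def reset(self, **kwargs):
        params = {k: random.uniform(lo, hi) for k, (lo, hi) in self.ranges.items()}
        # Assumes the underlying simulator exposes a way to set physics parameters.
        if hasattr(self.env, "set_physics_params"):
            self.env.set_physics_params(**params)
        return self.env.reset(**kwargs)

    def step(self, actions):
        return self.env.step(actions)

# Usage: env = DomainRandomizationWrapper(MultiUAVTransportEnv())  # hypothetical env;
# the MARL trainer (e.g. a recurrent actor-critic) then interacts with the wrapper
# exactly as it would with the original environment.
```

Re-sampling the dynamics at reset time is intended to let a policy trained purely in simulation tolerate the mismatch it will meet on real vehicles.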
}, { "figure_ref": [], "heading": "Intelligent Information System", "publication_ref": [ "b12", "b82", "b97", "b103", "b120", "b183", "b195", "b226", "b25", "b103", "b178", "b39", "b50", "b71", "b231", "b245", "b39", "b191", "b97", "b226", "b120", "b12", "b183", "b82", "b103", "b140", "b93", "b178", "b25", "b131", "b50", "b245", "b39", "b71", "b231" ], "table_ref": [ "tab_3" ], "text": "MARL has tremendous potential for applications in intelligent information systems, including natural language processing (NLP) [13,83,98,104,120,183,195,226], programming generation [26,104,178], and recommender systems [40,51,72,231,245]. Techniques based on SARL have been studied in NLP and programming generation, and we will summarize these studies and point out the significant advantages of MARL in these applications. The correspondence between this application and RL methods is shown in Table 4. Learning communication MA [40] IQL-based [191] MA Natural language processing: Li et al. [98] describe how RL can be applied to chatbot dialogue generation to predict the impact of current actions on future rewards. By utilizing a policy gradient approach to optimize long-term rewards defined by the developer, the model learns all possible strategies for speaking in an infinite action space, resulting in more interactive and consistent conversation generation for chatbots. Yang et al. [226] combine multi-task learning and RL to present a personalized dialog system called MRPDG. Three kinds of rewards are used to guide the model to produce highly rewarded dialogs. In order to address the problems of sparse rewards and few successful dialogues, Lu et al. [120] propose two complex methods for hindsight experience replay. During the RL training process, chatbot agents can be made to generate more authentic dialogues by introducing human-relevant evaluation metrics. Chen et al. [13] present a framework called \"companion teaching\" in which a human teacher guides the machine in real-time during the learning process and uses example actions of the teacher to improve policy learning. Su et al. [183] present two approaches to address the challenge of measuring rewards in real-world dialogue system applications. Keneshloo et al. [83] use RL to solve the effects of sequence-to-sequence exposure bias and inconsistency between training and test measurements. Li et al. [104] propose a new framework that includes a generator and an evaluator for learning from data. The generator is a learning model for paragraph generation, and the evaluator is a matching model used to provide a reward function for RL. Large language models may produce fake or useless outputs. Ouyang et al. [140] introduce RL with human feedback to fine-tune GPT-3 to reduce unwanted outputs and propose a language model called instructGPT. They believe that it is important to use human feedback to make the output of large language models close to human intention.\nProgramming generation: Le et al. [94] design a program synthesis framework called CodeRL that uses pre-trained language models and RL to generate programs.Shojaee et al. [178] integrate a pre-trained programming language model with PPO to optimize the model through execution feedback and present a new code generation framework called PPOCoder. Software testing is essential for quality assurance but expensive and time-consuming. Esnaashari et al. 
[26] propose a new method using a memetic algorithm with RL as a local search that outperforms traditional evolutionary or heuristic algorithms in speed, coverage, and evaluation.\nMARL has advantages over SARL in NLP and programming generation due to its stronger collaboration ability and adaptability. In NLP, MARL can be used for tasks such as chatbots and text translation. In these tasks, multiple agents can work together to learn the knowledge and skills DQN-based [131] MA of a conversational system, thereby improving its performance and interaction experience. For programming generation, MARL is usually more suitable for scenarios that require the generation of complex systems or large-scale software. This is because in MARL, each agent can be responsible for generating a part of the code, and the whole system can be built through collaboration. This approach can improve the efficiency and quality of the generated code and can reduce the repetition and error rate of the code.\nRecommender system: He et al. [51] propose a MARL method with communication restrictions to address sub-optimal global strategies due to the lack of cooperation among optimization teams. Zhang et al. [245] propose a novel dynamic, collaborative recommendation method utilizing MARL for recommending academic collaborators, optimizing collaborator selection from different similarity measures. To improve communication efficiency on Twitter-like social networking, Gui et al. [40] propose a MARL by combining dozens of more historical tweets to choose a set of users. Jin et al. [72] propose a method for optimizing bids using MARL to achieve specific goals, such as maximizing revenue and return on investment for real-time advertising. The method uses a clustering approach to assign strategic bidding agents to each advertiser cluster and proposes a practical distributed coordinated multi-agent bidding to balance competition and cooperation among advertisers. Li and Tong [231] propose a social MARL framework named MATR, where one agent captures the dynamic preferences of users while the other exploits social networks to reduce data sparsity and cold starts. The state representation module aims to learn from social networks and user rating matrices, using trust inference and feature aggregation modeling to optimize the use of social networks.\nMARL has many advantages in intelligent information processing, but the lack of robustness and transparency prevents MARL decisions from being trusted by humans. In order to apply MARL to the real world, it is first necessary to improve its trustworthiness, and in addition, RL with human feedback needs to be further considered to make the generated language more realistic, the programming more efficient, and the recommended content more attractive." }, { "figure_ref": [], "heading": "Public Health and Intelligent Medical Diagnosis", "publication_ref": [ "b83", "b84", "b89", "b249", "b64", "b13", "b1", "b105", "b122", "b248", "b202", "b95", "b202", "b79", "b108", "b109", "b193", "b169", "b72", "b106", "b152", "b153", "b169", "b240", "b131", "b126", "b191", "b0", "b169", "b192", "b131", "b88", "b118", "b92", "b87", "b182", "b118", "b163", "b169", "b159" ], "table_ref": [ "tab_4" ], "text": "MARL is widely explored and applied in public health and intelligent medical diagnosis. For example, MARL can be applied in COVID-19 prediction and management, medical image processing, and disease diagnosis to improve disease prevention, diagnosis, and treatment efficiency and accuracy. 
The correspondence between this application and RL methods is shown in Table 5.\nCOVID-19 prediction and diagnosis: Khalilpourazari et al. [84,85] present the Hybrid Qlearning-based algorithm (HQLA) as a solution to predict the COVID-19 pandemic. HQLA accurately reflects the future trend in France and Quebec, Canada. Furthermore, their analysis also provides critical insights into pandemic growth and factors that policymakers should consider when making social measures. Kumar et al. [90] utilize two learning algorithms, DL and RL, to forecast COVID-19, where LSTM is used to forecasts newly affected individuals, losses, and cures in the coming days, and DQN is suggested for optimizing predictive outcomes based on symptoms. Zheng et al. [249] propose developing MDPs to model the oxygen flow trajectory and health outcomes of COVID-19 patients. Using Deep Deterministic Policy Gradient (DDPG), an optimal oxygen control policy is obtained for each patient, resulting in a reduced mortality rate.\nRegarding the prediction and diagnosis of COVID-19, existing studies are based on SARL. compared with SARL, MARL can be responsible for different tasks, such as virus transmission model prediction and clinical diagnosis, separately and then complete the task through communication and collaboration. In addition, the COVID-19 epidemic develops rapidly and is influenced by multiple factors, and MARL can better handle the uncertainty and complexity. Therefore, we believe that MARL has excellent potential in this area.\nMedical image processing: X-ray images have become crucial for expediting the diagnostics of COVID-19. Jalali et al. [65] propose an ensemble of CNNs to differentiate COVID-19 patients from non-patients according to an automated X-ray image. The selective ensemble approach utilizes DQN to heighten model accuracy while reducing the required classifiers. Chen et al. [14] suggest an RL-based detection framework to quickly and effectively diagnose COVID-19. They build a mixed loss, enabling efficient detection of the virus. Additionally, they propose a prediction framework that allows for integrating multiple detection frameworks through parameter sharing. This allows for the prediction of disease progression without the need for additional training. Allioui et al. [2] develop a new method for more efficient automatic image segmentation that employs MARL. This approach addresses mask extraction difficulties and uses a modified version of the DQN to identify masks in CT images of COVID-19 patients. MARL can be used for interactive image segmentation, where each voxel is an agent with a shared behavior policy to reduce exploration space and dependence among voxels. [106] is for the field of medical image segmentation, considering clinical criteria, using MARL to solve the problem, reducing the exploration space, and using a sharing strategy to capture the dependencies between pixels; While [122] is for interactive image segmentation, using MDP and MARL models to model iterative segmentation, introducing a boundary-based reward function to update the segmentation strategy. Zheng et al. [248] use a MARL approach to prostate localization in Magnetic Resonance (MR) images. They create a communication environment by sharing convolutions and maintaining independent action policy via distinct fully connected layers for each agent. Anatomical landmark detection is crucial in medical image analysis. Vlontzos et al. [202] present a novel approach using MARL to detect multiple landmarks simultaneously. 
This theory suggests that the positioning of anatomical landmarks in human anatomy is interdependent and not random. It can accommodate 𝐾 agents to detect 𝐾 different landmarks with implicit intercommunication. Leroy et al. [96] develop a communicative MARL framework, aiding in detecting landmarks in MR images. In contrast to [202], agent communication is explicit. Kasseroller et al. [80] propose a solution to the long inference time caused by DQN-based methods being limited to a discrete action space. They recommend using a continuous action space to allow the agent to move smoothly in any direction with varying step sizes, resulting in fewer required steps and increased landmark identification accuracy.\nDisease diagnosis: Ling et al. [109,110] propose an RL-based method to improve clinical diagnostic inferencing. This approach can extract clinical concepts, integrate external evidence, and identify accurate diagnoses, which is especially beneficial in cases with limited annotated data. The system uses a DQN architecture and a reward function to optimize accuracy during training. Tang et al. [193] introduce a new neural symptom checker that employs an ensemble model. They incorporate an RL framework to develop inquiry and diagnosis policies as MDPs without using PPO-based [169] MA [73] DDPG-based [107] MA [152,153] PPO-based [169] MA [240] DQN-based [131] MA [126] IQL-based [191] MA Industrial robots [1] PPO-based [169] MA [192] DQN-based [131] MA [89] MADDPG-based [119] MA [93] AC-based [88] MA Preventive maintenance [182] MADDPG-based [119] MA [163] PPO-based [169] MA previous approximation methods. Furthermore, they develop a model for each anatomical section reflective of the practices of various hospital departments. This new approach offers improved user experience and significant enhancements in disease prediction accuracy over current models. Rajesh et al. [159] created the IMRLDPTR system, which uses mobile agents to collect data from multiple sources and generates rule sets for different disease categories. MARL has many benefits in public health and intelligent medical diagnosis, such as the ability to handle highly complex tasks and to consider the interaction of multiple factors and variables. However, MARL also has some drawbacks, such as low transparency of the learning process and decision results, making it difficult to understand the decision process and behavior of the model. In addition, the robustness of MARL is poor, and the decisions are sensitive to perturbations. Therefore, the above drawbacks must be addressed when applying MARL to this field." }, { "figure_ref": [], "heading": "Smart Manufacturing", "publication_ref": [ "b96", "b212", "b246", "b169", "b72", "b152", "b153", "b240", "b126", "b0", "b192", "b88", "b92", "b182", "b163" ], "table_ref": [ "tab_5" ], "text": "Smart manufacturing is the integration of advanced technologies, e.g., IoT, AI, and so on, into the manufacturing process to optimize the production process. As for smart manufacturing, MARL is a promising approach. In the context of smart manufacturing, MARL can be utilized as a tool for production scheduling, shop industrial robot control, quality control, and equipment maintenance to achieve an intelligent and efficient production process [97]. The correspondence between this application and RL methods is shown in Table 6.\nJob shop scheduling is a key challenge in smart manufacturing because it involves complex decision-making processes and resource allocation problems. 
Traditional approaches are usually based on rules or static algorithms, but these approaches frequently fall short of adjusting to the changing production environment. In recent years, MARL has been introduced to job shop scheduling to improve the efficiency and accuracy of shop floor task scheduling by learning and adapting strategies from a progressively changing environment. In the resource preemption that addresses the high-dimensional action space problem. A MARL algorithm for job scheduling is proposed in [212]. In the algorithm, the environment is modeled as a Markov decision process which is decentralized and partially observable. And every job is regarded as an agent which selects the available robot. Zhang et al. [246] propose a multi-agent manufacturing system for efficient and autonomous personalized order processing in a changeable workshop environment. The manufacturing equipment is built as an agent with an AI scheduler, which generates excellent production strategies in sight of the workshop state and is periodically trained through the PPO algorithm [169]. This algorithm can tackle resource or task disturbances and obtain solutions that satisfy different performance metrics. Jing et al. [73] address the flexible job shop scheduling issues by utilizing a graph-based MARL with centralized learning decentralized execution. The approach uses a directed acyclic graph to simulate the flexible job shop scheduling issues and predicts the connection probability among edges to adjust the scheduling strategy. Popper et al. [152,153] use MARL to deal with the issues of flexible job shop scheduling with multiple objectives. Zhang et al. [240] propose a new model called DeepMAG for flexible job shop scheduling according to MARL. DeepMAG provides each machine and job with an agent, and they work together to find the best action. In Industry 4.0, a user-friendly MARL tool for the job shop scheduling problem is designed in [126], which provides users with the chance to communicate with the learning algorithms. Users can either maintain the optimal schedule produced by Q-Learning or change it to meet constraints.\nIndustrial robots have a growing amount of influence on industrial manufacturing. However, with the increasing complexity of production tasks, it is often difficult for individual robots to complete tasks effectively. MARL is widely used in smart manufacturing robots. Agrawal et al. [1] propose a framework based on MARL that integrates job scheduling and navigation control for an autonomous mobile robot-operated shop floor. To address the challenge of increasing demands for customization and rapid product iterations, Tan et al. [192] propose a multi-agent model for the industrial robot assembly process, and the communication of agents which have real-time data acquisition and fusion is studied. Besides, they also propose an excellent algorithm for planning and scheduling industrial robot assembly using a MARL approach. Krnjaic et al. [89] use MARL to optimize order-picking systems in commercial warehouses. The goal is to improve efficiency and flexibility while minimizing resource constraints. The MARL framework is applicable to various configurations of warehouses and allows agents to learn how to cooperate optimally with one another. Lan et al. 
[93] explore the use of MARL to optimize coordination in a multi-robot pickand-place system for smart manufacturing.\nPreventive maintenance: With the increasing scale and productivity of the manufacturing industry, how to design useful preventive maintenance strategies to guarantee the steady operation of production systems has become a vital issue in the manufacturing field. The MARL approach has provided a new idea to address this issue. Due to the problem of action space explosion, traditional RL methods are difficult to be applied directly. Therefore, [182] adopts a MARL-based approach in a manufacturing system to model every machine as a collaborative intelligence and implements adaptive learning through the multi-agent value decomposition Actor-Critic algorithm to obtain an efficient and cost-reasonable preventive maintenance strategy. [163] present a multi-agent approach using RL to coordinate maintenance scheduling and dynamically assign tasks to technicians with various skills under the uncertainty of multiple machine failures.\nMARL shows potential applications in smart manufacturing and achieves some stunning results. However, this approach has challenges in scalability and is difficult to scale to situations with a high number of agents. It also suffers from poor generalization, which makes it difficult to be applied well to real scenarios. In addition, smart manufacturing is a task that involves humancomputer interaction, so human behavior and human-computer priority switching need to be considered when applying MARL. All these factors need to be fully considered when designing and implementing MARL algorithms to ensure the reliability and applicability of the models." }, { "figure_ref": [], "heading": "Financial Trade", "publication_ref": [], "table_ref": [], "text": "Financial trading is a challenging activity that requires fast judgment and adjustment to continuously changing market conditions. Single-agent approaches and DL techniques from the past are no longer adequate to meet market expectations. MARL offers a fresh idea for tackling the difficulties in " }, { "figure_ref": [], "heading": "Applications", "publication_ref": [], "table_ref": [], "text": "Papers Methods SA/MA" }, { "figure_ref": [], "heading": "Financial Trade", "publication_ref": [ "b123", "b150", "b118", "b59", "b94", "b175", "b131", "b156", "b227", "b157", "b118", "b143", "b131", "b78", "b197", "b6", "b106", "b5", "b33", "b169", "b48", "b131", "b59", "b94", "b123", "b150", "b175", "b78", "b143", "b156", "b157", "b5", "b33", "b48", "b150", "b94", "b59", "b123", "b175", "b156", "b157", "b227", "b131", "b132", "b90", "b102", "b186", "b190", "b131", "b145", "b118", "b135", "b87", "b197", "b118", "b143", "b78", "b197", "b6", "b5", "b33", "b48" ], "table_ref": [ "tab_6" ], "text": "Portfolio management [123,150] MADDPG-based [119] MA [60,95,175] DQN-based [131] MA Trading strategy optimization [156] MFRL-based [227] MA [157] MADDPG-based [119] MA [143] DQN-based [131] MA [79] Double-Q-based [197] MA [7] DDPG-based [107] MA\nRisk management [6] Multi-agent System MA [34] PPO-based [169] MA [49] DQN-based [131] MA financial trade by combining collaboration and competition among various agents. 
We summarize the applications of MARL in financial trade from the perspectives of portfolio management [60,95,123,150,175], trading strategy optimization [79,143,156,157], and risk management [6,34,49].\nThe correspondence between this application and RL methods is shown in Table 7.\nPortfolio management: In portfolio management, MARL can help investors better optimize asset allocation and improve returns. Multiple agents make investment decisions and are trained to achieve optimal investment portfolios and returns. For a portfolio of 10 equities on the Vietnam stock market, Pham et al. [150] use MARL to create an automatic hedging strategy. They develop a simulator including transaction fees, taxes, and settlement dates for training the RL agent. The agent can get knowledge of trading and hedging to minimize losses and maximize earnings. It also protected portfolios and generated positive profits in case of a systematic market collapse. Lee et al. [95] propose a new investment strategy called a MARL-based portfolio management system (MAPS) that uses a cooperative system of independent \"investor\" agents to create a diversified portfolio. The agents are trained to act in a variety of ways and maximize their return using a thought-out loss function. To address the scalability and re-usability in RL-based portfolio management, Huang and Tanaka [60] propose a MARL-based system with Evolving Agent Module (EAM) and the Strategic Agent Module (SAM). EAM generates signal-comprised information for a particular asset using a DQN agent. In contrast, SAM uses a PPO agent for portfolio optimization by connecting to multiple EAMs to reallocate corresponding assets. Ma et al. [123] introduce a new MARL for optimizing financial portfolio management. The algorithm employs two agents to study the best trading policy for two distinct categories of stock trends, with a trend consistency factor that takes into account the consistency of stocks within a portfolio. Besides, the reward function now includes a novel TC regularization, which is based on the trend consistency factor value. The algorithm dynamically alternates between the two agents in order to obtain the best portfolio strategy based on the state of the market. Shavandi and Khedmati [175] propose a MARL framework that leverages the collective intelligence of expert traders on various periods. The DQN and a hierarchical structure are used in the framework to train the agents.\nTrading strategy optimization: In the financial markets, developing an effective trading strategy is always a challenging issue. Traditionally, trading strategies are usually designed by individuals or teams based on their experience and skills, but there are many limitations in this approach. With the continuous advance in AI methods, MARL is widely applied in the optimization of trading strategies. It allows multiple agents to collaborate and compete to learn and improve strategies, leading to better trading results. [156] and [157] worked by Qiu et al use MFRL [227] and MADDPG DQN-based [131] SA [132] SARSA-based [91] MA\nResource optimization [103,186,190] DQN-based [131] MA [145] MADDPG-based [119] MA [135] Double-Q, AC [88,197] MA [119] to optimize energy trading and market strategies, respectively. Patel [143] applies MARL to place limit orders to optimize market-making. The MARL framework consists of a macro-agent that decides whether to buy, sell, or hold an asset and a micro-agent that places limited orders within the order book. 
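To make the macro/micro decomposition described above concrete, here is a minimal, assumption-laden sketch (not the cited implementation): a macro agent chooses the trading direction with a small epsilon-greedy Q-table over a coarse trend feature, and a micro agent converts that decision into a limit-order placement. The state feature, action sets, and hyper-parameters are illustrative only.

```python
# Minimal sketch of a macro/micro agent split for market making (illustrative only):
# the macro agent picks buy/sell/hold via a tiny epsilon-greedy Q-table over a coarse
# trend feature; the micro agent turns that decision into a limit-order price.
import random
from collections import defaultdict

MACRO_ACTIONS = ["buy", "sell", "hold"]

class MacroAgent:
    def __init__(self, epsilon=0.1, alpha=0.1, gamma=0.99):
        self.q = defaultdict(lambda: [0.0] * len(MACRO_ACTIONS))
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def act(self, trend):                      # trend in {-1, 0, +1}, an assumed feature
        if random.random() < self.epsilon:
            return random.randrange(len(MACRO_ACTIONS))
        return max(range(len(MACRO_ACTIONS)), key=lambda a: self.q[trend][a])

    def update(self, trend, action, reward, next_trend):
        best_next = max(self.q[next_trend])
        td = reward + self.gamma * best_next - self.q[trend][action]
        self.q[trend][action] += self.alpha * td

class MicroAgent:
    def place_order(self, macro_action, best_bid, best_ask, depth_ticks=1, tick=0.01):
        if MACRO_ACTIONS[macro_action] == "hold":
            return None
        if MACRO_ACTIONS[macro_action] == "buy":
            return {"side": "buy", "price": round(best_bid - depth_ticks * tick, 2)}
        return {"side": "sell", "price": round(best_ask + depth_ticks * tick, 2)}

macro, micro = MacroAgent(), MicroAgent()
a = macro.act(trend=+1)
print(micro.place_order(a, best_bid=99.98, best_ask=100.02))
```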
A model-free method is proposed by Karpe et al. [79]. It uses the Double Deep Q-Learning algorithm [197], which is trained in a multi-agent realistic market simulation environment. The approach involves configuring a historical order book simulation environment with multiple agents and evaluating the simulation with real market data. Bao [7] proposes a MARL method to formulate stock trading strategies for investment banks with multiple clients. The method aims to balance revenue and fairness among clients with different order sizes and requirements.\nThe proposed scheme uses RL to adapt trading strategies to complex market environments and uses MAS to optimize individual revenues. Risk management: Risk management is always a crucial part of business and organization management. Compared with traditional SARL, MARL can help enterprises and organizations better manage risk and reduce potential losses and risks. Bajo et al. [6] discuss the need for innovative tools to help small to medium enterprises predict risks and manage inefficiencies and create a multi-agent system that uses advanced reasoning to detect situations with risks and offer decision support. Ganesh et al. [34] use a simulation to study how RL can be utilized for training market maker agents in a dealer market. The RL agent learns to manage inventory and adapt to market price trends while also learning about the pricing strategies of its competitors. They also propose and test reward formulations to create risk-averse RL-based market makers. He et al. [49] propose a new approach to train a trading agent using RL by using a multi-agent virtual market model consisting of multiple generative adversarial networks. The model creates simulated market data that takes into account how the action of the agent affects the state of the market. A backtest of the China Shanghai Shenzhen 300 stock index futures in 2019 shows that the trained agent has a 12 percent higher profit and a low risk of loss." }, { "figure_ref": [], "heading": "Network Security", "publication_ref": [ "b53", "b117", "b117", "b132", "b172", "b173", "b102", "b135", "b145", "b186", "b190", "b69", "b172", "b173", "b53", "b132", "b90", "b117", "b16", "b190", "b186", "b145", "b118", "b102", "b135" ], "table_ref": [ "tab_7", "tab_8" ], "text": "Network security is an important issue facing society today, where attackers use various techniques and means to compromise computer systems and networks, threatening the security of individuals, organizations, and nations. MARL is a promising approach that can be used in the field of network security, with major applications in intrusion detection [54,118,118,132,172,173] and network resource optimization [103,135,145,186,190]. The correspondence between this application and RL methods is shown in Table 8.\nIntrusion detection: Intrusion detection is one of the critical aspects to protect network security [70]. However, traditional intrusion detection systems may have limitations in the face of complex and variable network attacks. MARL is an effective solution that can be used to enhance the accuracy and robustness of intrusion detection through collaborative learning and mutual communication. Servin and Kudenko [172] present a MARL-based intrusion detection method that enables the identification and prediction of normal and abnormal states in a network through learning and interaction between distributed sensors and decision-making intelligence. Sethi et al. 
[173] propose an intrusion detection system based on MARL with attention mechanisms for efficient detection and classification of advanced network attacks. A DRL algorithm is proposed by Hsu and Matsuoka [54] for anomaly-based network intrusion detection systems, which can update itself to detect new types of network traffic behavior. The system is tested on two benchmark datasets and a real campus network log and compared to three classic machine learning methods and two related published results. The model is capable of processing large amounts of network traffic in real time. Safa and Ridha [132] propose a new adversarial MARL approach based on Deep SARSA [91] for intrusion detection in dynamic environments. The proposed algorithm addresses the problem of imbalanced datasets by improving the detection of minority classes, which can improve classifier performance. Louati et al. [118] propose an intelligent and distributed intrusion detection system using a MAS based on parallel ML algorithms. Chowdhary et al. [17] propose a MARL framework for an adversarial game in a software-defined network-managed cloud environment. This model takes into account the dynamic nature of the network and minimizes the impact on service availability.\nResource optimization: Suzuki and Harada [190] propose a safe MARL method to optimize network resources efficiently even during significant changes in network demands. This method uses DRL algorithms to learn the relationship between network demand patterns and optimal allocation in advance. Safety considerations and multi-agent techniques are developed to reduce constraint violations and improve scalability, respectively. Sun et al. [186] propose a dynamic controller workload balancing scheme based on MARL to address the time-consuming and under-performing behavior of iterative optimization algorithms. Peng and Shen [145] explore multi-dimensional resource management for UAVs in vehicular networks; the problem is formulated as a distributed optimization problem that can be addressed by the MADDPG method [119]. Li et al. [103] propose a MARL approach to address resource-balancing challenges within complex transportation networks. Traditional solutions leveraging combinatorial optimization face challenges due to high complexity, uncertainty, and non-convex business constraints. The proposed approach introduces a cooperative mechanism for state and reward design, resulting in more efficient and effective transportation. Naderializadeh et al. [135] propose a distributed resource management and interference mitigation mechanism for wireless networks using MARL. In the network, each transmitter is equipped with a DRL agent responsible for selecting the user to serve and determining the transmission power to utilize, based on delayed observations from its associated users and neighboring agents.\nMARL has excellent potential in the field of network security, especially when dealing with complex network attacks and defense strategies. However, MARL still has some shortcomings in the network security domain. One of the main problems is insufficient training data; another is model performance. The behaviors of attackers are usually covert and small in number, so obtaining reliable training data is a challenge. In addition, attackers may use adversarial samples to spoof MARL models, leading to model failure. Therefore, it is necessary to address the robustness and generalization problems of MARL in addition to improving its performance.
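To ground the distributed resource-management setting described above (each transmitter independently choosing a user to serve and a power level from local, delayed observations), the following is a minimal illustrative sketch. It is not the implementation of [135]; the observation contents, the (user, power-level) action encoding, and the random values standing in for a trained Q-network are assumptions.

```python
# Minimal, hypothetical sketch of per-transmitter decision making for distributed
# resource management. Observation shapes, the (user, power-level) action encoding,
# and the random Q-values below are placeholders for a trained network.
import numpy as np

N_TRANSMITTERS, N_USERS, N_POWER_LEVELS = 3, 4, 5

def local_observation(tx_id: int) -> np.ndarray:
    """Delayed measurements from associated users and neighboring agents (random here)."""
    rng = np.random.default_rng(tx_id)
    return rng.random(N_USERS + N_TRANSMITTERS - 1)

def select_action(q_values: np.ndarray, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice over one agent's joint (user, power level) action set."""
    if np.random.rand() < epsilon:
        return int(np.random.randint(q_values.size))
    return int(np.argmax(q_values))

# One decision round: every transmitter acts independently on its own observation.
for tx in range(N_TRANSMITTERS):
    obs = local_observation(tx)
    q_values = np.random.rand(N_USERS * N_POWER_LEVELS)  # in practice Q(obs) from a trained network
    action = select_action(q_values)
    user, power_level = divmod(action, N_POWER_LEVELS)
    print(f"transmitter {tx}: serve user {user} at power level {power_level}")
```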
The correspondence between this application and RL methods is shown in Table 9." }, { "figure_ref": [], "heading": "Smart Education", "publication_ref": [ "b194", "b47", "b111", "b30" ], "table_ref": [], "text": "Smart education uses the IoT and AI to digitize learning processes and offer individualized learning experiences and support depending on the learning styles and features of specific students. Sensors can be used to capture students' learning behaviors and data. Communication enables real-time [194] create an adaptive tutoring game that allows students to personalize their learning without the guidance of teachers. In order to optimize student learning, this system uses a Petri net graph structure to monitor students' progress in the game and an RL agent to adaptively change system behavior in response to student performance. Then they apply Petri Nets and hierarchical reinforcement learning algorithm to personalized student assistance based on the above game [48]. The algorithm can assist teachers in giving students in-game instruction and feedback that is specifically tailored to them, allowing them to gradually master complex knowledge and skills by breaking down the tasks in games into several stages. The algorithm can help educators provide customized support and feedback to students in games and gradually master complex knowledge and skills by dividing the tasks in games into multiple levels. [112] and [31] both monitor student learning progress using data gathered by sensors and offer students personalized learning advice using RL techniques. Smart Education based on MARL can enhance teaching efficiency, save time, and ultimately, better learning outcomes for students. However, the collection of daily behavioral data from students is required by smart education, which presents privacy concerns. Additionally, since the core of intelligent education is human, its purpose is to assist teachers in teaching and students in learning. As a result, it necessitates prioritizing switching according to different scenarios, such as when there are discrepancies between the assessment of the teacher and AI for the level of knowledge mastery. Improper prioritization switching may lead to reduced educational effectiveness and poor student experiences. Therefore, how to conduct reasonable prioritization switching is a problem that needs to be explored." }, { "figure_ref": [], "heading": "RL for Science", "publication_ref": [ "b127", "b171", "b21", "b4", "b37", "b121", "b24" ], "table_ref": [ "tab_8" ], "text": "Recently, AI for science has been a popular topic, and AI is highly regarded as a critical tool in achieving scientific progress [127]. RL has demonstrated significant scientific potential, with particular promise in chemistry, physics, and materials research. RL has proven instrumental in solving challenges like exploring uncharted physical phenomena. The correspondence between this application and RL methods is shown in Table 9. Seo et al. [171] utilize RL to control feedforward 𝛽 in the KSTAR tokamak. Degrave et al. [22] introduce an innovative RL approach that enables the magnetic control system of a tokamak fusion device to learn autonomously, achieving precise control over various plasma configurations, significantly reducing design efforts, and representing a pioneering application of RL to the fusion domain. Bae et al. [5] introduce a scientific MARL Safe MARL Optimization [38][111] [121] Formal methods [25][176] Fig. 2. 
Categories of Safety in MARL (SciMARL) for discovering wall models in turbulent flow simulations, dramatically reducing computational cost while reproducing key flow quantities and offering unprecedented capabilities for simulating turbulent flows. RL scientific research offers more possibilities, and we believe that RL will have a wider range of scientific applications in the future." }, { "figure_ref": [], "heading": "VISIONARY PROSPECTS", "publication_ref": [], "table_ref": [], "text": "Although MARL has shown superior performance in many domains, some issues, such as safety, robustness, and generalization, limit the application of MARL in the real world. We believe that maximizing the superiority of MARL in practical applications in the future needs to first address these issues and need to consider the moral constraints of human society. This section reviews the current state of research in four areas: safety, robustness, generalization, and ethical constraints, and discusses the gaps that need to be addressed in future research." }, { "figure_ref": [], "heading": "Safety of Multi-agent Reinforcement Learning", "publication_ref": [ "b34", "b38", "b225", "b37" ], "table_ref": [], "text": "The increasing popularity of MARL has brought attention to the need to ensure the safety of these systems. In MARL, the actions of one agent can potentially cause harm to the task or other agents involved. Therefore, there is a pressing need to develop safe MARL approaches. To achieve safety in MARL, one common approach is to add constraints to the training process. By incorporating safety constraints, agents are encouraged to avoid unsafe actions that could lead to task failure or harm to other agents. There have been numerous reviews on the safety of RL, as summarized in [35], [39], and [225]. However, there is currently no systematic review of the safety of MARL, and there is relatively little research on this topic. In this section, we give a definition of safe MARL which is used in [38]." }, { "figure_ref": [], "heading": "Definition 3 (Safe MARL).", "publication_ref": [ "b37", "b24", "b176" ], "table_ref": [], "text": "A multi-agent constrained stochastic game can be modeled as the tuple\n𝑁 , S, A 1 , • • • , A 𝑁 , 𝑅, C 1 , • • • , C 𝑁 , 𝒄 1 , • • • , 𝒄 𝑁 , 𝑝, 𝛾 , where 𝑅 : S × A 1 × • • • × A 𝑁 × S → R is the joint reward function, C 𝑖 = {𝐶 𝑖 𝑗 } 𝑖 ≤𝑁 1≤ 𝑗 ≤𝑚 𝑖 is a set of cost function of agent 𝑖 (𝑚 𝑖 is the number of cost functions of agent 𝑖), 𝐶 𝑖 𝑗 : S × A 1 × • • • × A 𝑁 × S → R is the cost function, and 𝒄 𝑖 = {𝑐 𝑖 𝑗 } 𝑖 ≤𝑁 1≤ 𝑗 ≤𝑚 𝑖 ∈ R is cost-constraining values.\nThe goal of agents is to maximize the expected total reward while trying to satisfy the safety constraint of each agent,\nJ (𝝅) = E 𝝅 ∞ ∑︁ 𝑡 =0 𝛾 𝑡 𝑅 (𝑠 𝑡 , 𝒂 𝑡 , 𝑠 𝑡 +1 |𝑠 0 = 𝑠) , 𝑠.𝑡 .J 𝑖 𝑗 (𝝅) = E 𝝅 ∞ ∑︁ 𝑡 =0 𝛾 𝑡 𝐶 𝑖 𝑗 (𝑠 𝑡 , 𝒂 𝑡 , 𝑠 𝑡 +1 |𝑠 0 = 𝑠) ≤ 𝑐 𝑖 𝑗 , ∀𝑗 = 1, • • • , 𝑚 𝑖 . (11\n)\nWe then summarize relevant research from two perspectives: optimization and formal methods, as shown in Fig. 2.\n4.1.1 Optimization. Gu et al. [38] introduce Multi-Agent Constrained Policy Optimization (MACPO) and MAPPO-Lagrangian to devise safety MARL algorithms. These algorithms aim to meet safety constraints while concurrently enhancing rewards by integrating theories from constrained policy optimization and multi-agent trust region learning, yielding strong theoretical guarantees. 
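To make the constrained objective in Eq. (11) more concrete, the following is a minimal sketch of the Lagrangian relaxation that underlies approaches such as MAPPO-Lagrangian: each cost constraint receives a multiplier, the policy objective becomes reward minus weighted constraint violations, and the multipliers are updated by projected dual ascent. This is a generic illustration under simplifying assumptions (scalar return estimates for a single agent), not the MACPO/MAPPO-Lagrangian implementation of [38].

```python
# Generic Lagrangian relaxation of a constrained objective like Eq. (11).
# Scalar return estimates stand in for the Monte Carlo / critic estimates used in practice.
import numpy as np

def lagrangian_objective(reward_return, cost_returns, cost_limits, lambdas):
    """Per-agent scalar objective: expected reward minus penalty on constraint violations."""
    violation = np.asarray(cost_returns) - np.asarray(cost_limits)
    return reward_return - float(np.dot(lambdas, violation))

def update_multipliers(lambdas, cost_returns, cost_limits, lr=0.05):
    """Dual ascent: grow a multiplier while its constraint is violated; keep it non-negative."""
    violation = np.asarray(cost_returns) - np.asarray(cost_limits)
    return np.maximum(0.0, np.asarray(lambdas) + lr * violation)

# Toy numbers: one agent with two cost constraints (c^i_1 = 1.0, c^i_2 = 0.5).
lam = np.zeros(2)
reward_return = 10.0                   # estimated discounted reward return J(pi)
cost_returns = np.array([1.4, 0.3])    # estimated discounted cost returns J^i_j(pi)
cost_limits = np.array([1.0, 0.5])     # cost-constraining values c^i_j
for _ in range(100):
    objective = lagrangian_objective(reward_return, cost_returns, cost_limits, lam)
    lam = update_multipliers(lam, cost_returns, cost_limits)
print(lam)  # the multiplier of the violated constraint grows; the satisfied one stays at 0
```

In an actual algorithm, the reward and cost returns would be re-estimated after every policy update, and the policy parameters would be updated to increase the penalized objective, so that constraint violations are progressively priced out of the learned behavior.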
Furthermore, the authors have established a benchmark suite, Safe Multi-Agent MuJoCo, to evaluate the efficacy of their approaches, which exhibit performance levels comparable to baselines and persistently comply with safety constraints. [25]: centralized shielding monitors actions of all agents and corrects unsafe actions, while factored shielding uses multiple shields to monitor subsets of agents concurrently. Both approaches ensure safety without sacrificing policy quality, but factored shielding is larger numbers of agents. Sheebaelhamd et al. [176] improve the MADDPG framework for multi-agent control problems with safety constraints. A safety mechanism is integrated into the deep policy network to avoid in-feasibility problems in the action correction step, which guarantee constraint satisfaction using exact penalty functions. Empirical results show that this approach reduces constraint violations, enabling safety during learning." }, { "figure_ref": [], "heading": "Limitations of current methods.", "publication_ref": [], "table_ref": [], "text": "Although there has been some progress in researching the safety of MARL, there are still some limitations. First, the existing approach to MARL safety is designed for small numbers of agents and may not be applicable to large-scale systems. Second, most existing research on MARL safety assumes that the environment is static and unchanging. In real-world applications, however, the environment is often dynamic and unpredictable, which can pose additional safety risks. Finally, in order to apply MARL to human society, it is necessary to add constraints to protect human safety. Furthermore, human interactions lead to a non-Markov environment. Hence, MARL which accounts for the safety of large-scale human society, is a challenging and significant research direction for the future." }, { "figure_ref": [], "heading": "Robustness of Multi-agent Reinforcement Learning", "publication_ref": [ "b35", "b57", "b68", "b70", "b142" ], "table_ref": [], "text": "The robustness of DL in classification tasks has a series of studies [36,58,69,71,142]. RL is a sequential decision problem, where misclassification at a one-time step is not equivalent to expecting the minimum reward. In MARL, a decision failure of any agent can lead to team task failure, which makes the study of robustness MARL challenging. Furthermore, MARL faces various challenges in real-world applications, such as uncertainty in the environment, uncertainty policies of other agents, and sensor noise. All these factors may cause the trained models to perform poorly or fail. Therefore, it is crucial to improve the robustness of MARL, which will help ensure that the" }, { "figure_ref": [], "heading": "Robustness in MARL", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Test", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Train", "publication_ref": [ "b14", "b214", "b40", "b99", "b149", "b46", "b206", "b253", "b28", "b100", "b185", "b221", "b215", "b241", "b187", "b253", "b241" ], "table_ref": [], "text": "Backdoor attack [15][209] [214] Adversarial attack [41][57] [100][108] [149] State observations [47][50] [206] [253] Actions [29][56] [101][151] [185] Data poisoning [221] Rewards and models [215][221] [241] Communications [187] Adversarial policy [42][43] Fig. 3. Categories of Robustness in MARL models can operate stably and reliably in various situations. The following are related definitions of robust MARL. 
We use the definition of [253] and [241].\nDefinition 4 (Robustness against state observations perturbation). A state-adversarial stochastic game can be defined as a tuple S, A 1 , . . . , A 𝑁 , B 1 , . . . , B 𝑀 , 𝑅 1 , . . . , 𝑅 𝑁 , 𝑝, 𝛾 where B 𝑗 is the uncertainty set of adversarial states of agent 𝑗, and 𝑀 is the number of attacked agents such that 𝑀 ≤ 𝑁 ." }, { "figure_ref": [], "heading": "Given the joint policy 𝝅", "publication_ref": [], "table_ref": [], "text": ": S → PD A 1 × • • • × A 𝑁 and the joint adversarial perturbation 𝒗 : S → B 1 × • • • × B 𝑀 ,\nThe Bellman equation with fixed 𝝅 and 𝒗 is as follows,\nV 𝑖 * (𝑠) = 𝑚𝑎𝑥 𝜋 𝑖 ( • |𝑠) 𝑚𝑖𝑛 𝑣 ∑︁ 𝒂 ∈A 1 ו••×A 𝑁 𝝅 (𝒂|𝑠, 𝒗 (𝑠)) ∑︁ 𝑠 ′ ∈S 𝑝 (𝑠 ′ |𝑠, 𝒂) 𝑅 𝑖 (𝑠, 𝒂, 𝑠 ′ ) + 𝛾 V 𝑖 * (𝑠 ′ ) ,(12)\nDefinition 5 (Robustness against model perturbation). A model-adversarial stochastic game can be defined as the tuple 𝑁 , S, A 1 , • • • , A 𝑁 , R1 , • • • , R𝑁 , p, 𝛾 , where R𝑖 and p are the uncertainty sets of reward functions and transition probabilities, respectively. The Bellman-type equation is as follows:\nV 𝑖 * (𝑠) = 𝑚𝑎𝑥 𝜋 𝑖 ( • |𝑠) 𝑚𝑖𝑛 R𝑖 ∈ R𝑖 , p ∈ p ∑︁ 𝒂 ∈A 1 ו••×A 𝑁 𝝅 (𝒂|𝑠) ∑︁ 𝑠 ′ ∈S p (𝑠 ′ |𝑠, 𝒂) R𝑖 (𝑠, 𝒂, 𝑠 ′ ) + 𝛾 V 𝑖 * (𝑠 ′ )(13)\nCurrently, research on the robustness of MARL is being pursued from both attacks and defense. Attacks research aims to identify stronger perturbations to test the robustness of MARL models, while defense aims to develop MARL algorithms that are robust to perturbations." }, { "figure_ref": [], "heading": "Testing. :", "publication_ref": [ "b221", "b107", "b40", "b56", "b149", "b99", "b14", "b188", "b161", "b214", "b253", "b227", "b46", "b49", "b206", "b28", "b100", "b118", "b100", "b185", "b185", "b151", "b55", "b215", "b241", "b221", "b41", "b42", "b196", "b238", "b104", "b44", "b187" ], "table_ref": [], "text": "As shown in Fig. 3, similar to DL, the robustness testing methods for MARL can be classified into three categories: adversarial attacks, backdoor attacks, and data poisoning.\nData poisoning: Wu et al. [221] discuss how an attacker can modify rewards in a dataset used for offline MARL to encourage each agent to adopt a harmful target policy with minimal modifications. The attacker can establish the target policy as a Markov perfect dominant strategy equilibrium, which is a strategy that rational agents will adopt. The article explores the effectiveness of attacks on various MARL agents and their cost compared to separate single-agent attacks. It also examines the relationship between dataset structure and attack cost and highlights the need for future research on defense in offline MARL.\nAdversarial attacks: Lin et al. [108] show that Cooperative MARL (c-MARL) is vulnerable to attacks on a single agent. By manipulating agent observations, the attacker reduces the overall team reward. The proposed attack strategy involves training a policy network to induce the victim agent to take an incorrect action and utilizing targeted adversarial attack methods to compel the agent to take that action. Experiments demonstrate a significant reduction in team reward and winning rate. Guo et al. [41] discuss the potential vulnerabilities of c-MARL algorithms to adversarial attacks and the importance of testing their robustness before deployment in safety-critical applications. The authors propose MARLSafe, a comprehensive testing framework that considers state, action, and reward robustness to address this. 
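As a concrete picture of what a state-observation robustness test involves, the sketch below perturbs the observations of a chosen set of agents inside an epsilon-ball (worst of k random perturbations) and records the drop in team reward, in the spirit of Definition 4. It is a generic illustration only, not the MARLSafe procedure of [41]; the toy task, the copy policy, and the attack budget are all assumptions.

```python
# Generic, toy illustration of observation-robustness testing (not MARLSafe [41]).
# Two agents should jointly output TARGET; an attacker perturbs agent 0's observation
# within an epsilon-ball, and we report the worst team reward over k random trials.
import numpy as np

TARGET = np.array([1.0, -1.0])

def policy(obs):                      # stands in for a trained per-agent policy
    return obs

def team_reward(actions):             # higher is better; 0 is optimal
    joint = np.concatenate(actions)
    return -float(np.sum((joint - TARGET) ** 2))

def worst_case_reward(true_obs, epsilon, attacked, k=16):
    worst = None
    for _ in range(k):                # worst of k random perturbations in the epsilon-ball
        obs = [o + np.random.uniform(-epsilon, epsilon, o.shape) if i in attacked else o
               for i, o in enumerate(true_obs)]
        reward = team_reward([policy(o) for o in obs])
        worst = reward if worst is None else min(worst, reward)
    return worst

true_obs = [np.array([1.0]), np.array([-1.0])]
clean = team_reward([policy(o) for o in true_obs])
perturbed = worst_case_reward(true_obs, epsilon=0.5, attacked={0})
print(f"clean return {clean:.3f} vs. worst-case perturbed return {perturbed:.3f}")
```

Stronger evaluations replace the random search with gradient-based or learned attacker policies, as in the works discussed in this subsection.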
Experimental results on the SMAC environment demonstrate the low robustness of advanced c-MARL algorithms in all aspects. Hu and Zhang [57] propose a sparse adversarial attack on c-MARL systems to test their robustness. The attack is trained using MARL with regularization and is shown to significantly decrease performance when only a few agents are attacked at a few timesteps. This highlights the need for more robust cMARL algorithms. Pham et al. [149] introduce a novel model-based approach for evaluating the robustness of c-MARL agents against adversarial states. They demonstrate the superiority of their approach over existing baselines by crafting more robust adversarial state perturbations and employing a victim-agent selection strategy. Through experiments on multi-agent MuJoCo benchmarks, they demonstrate that the approach is effective by achieving a reduction in total team rewards. Li et al. [100] propose the Adversarial Minority Influence attack, which introduces an adversarial agent that influences other cooperative victims to achieve worst-case cooperation. The attack addresses the complexity and cooperative nature of c-MARL by characterizing and maximizing the influence from the adversary to the victims. The proposed approach is demonstrated to be superior to existing methods in various simulation environments.\nBackdoor attack: Chen et al. [15] introduce a novel backdoor attack framework, known as MARNet, which is specifically designed for c-MARL scenarios. MARNet comprises three primary modules: trigger design, action poisoning, and reward hacking, all of which work together to manipulate the actions and rewards of poisoned agents. The framework is evaluated on two popular c-MARL algorithms, VDN [188] and QMIX [161], in two commonly used c-MARL games. The experimental results demonstrate that MARNet outperforms baselines from SARL backdoor attacks, reducing the utility under attack by up to 100%. Although fine-tuning is employed as a defense mechanism against MARNet, it is not entirely effective in eliminating the impact of the attack. Wang et al. [214] investigate research on the backdoor attack for DRL-based Autonomous Vehicles (AVs) controllers. They develop a trigger based on traffic physics principles. Experiments are conducted on both single-lane and two-lane circuits, and they demonstrate that the attack can cause a crash or congestion when triggered while maintaining normal operating performance. These findings underscore the importance of robust security measures in AVs controller design. Wang et al. [209] examine backdoor attacks in MARL systems and put forward a technique called BACKDOORL to detect and prevent such attacks. State Observations: In our previous work [253], we combine a policy gradient function and an action loss function, along with a regularized action loss term, to develop a new objective function for training actors in mean-field actor-critic reinforcement learning [227] that improves its robustness. Furthermore, we define State-Adversarial Stochastic Game (SASG) and discuss its properties. Due to the traditional solution concepts do not always exist in SASG, [47] and [50] introduce a new solution concept called robust agent policy and develop a Robust Multi-Agent Adversarial Actor-Critic (RMA3C) algorithm to learn robust policies for MARL agents. Wang et al. [206] propose a training framework for c-MARL to address the weakness of agents to adversarial attacks. 
The framework generates adversarial attacks on agent observations to help them learn a robust cooperative policy. The attacker selects an agent to attack and outputs a perturbation vector. The victim policy is then trained against the attacker. Experimental results demonstrate that the generated attacks improve the robustness against observation perturbations.\nActions: Foerster et al. [29] consider how the policies adopted by different agents in the environment interact with each other and affect the learning process of all agents. They propose Learning with Opponent-Learning Awareness (LOLA), a framework that takes into account the influence of one agent's policy on the expected parameter update of the other agents through a specific term. The method leads to the emergence of cooperation in the iterated prisoner's dilemma and convergence to the Nash equilibrium in repeated matching pennies. An extension of the policy gradient estimator enables efficient computation of LOLA, making it suitable for large parameter and input spaces with nonlinear function approximators. Li et al. [101] design MiniMax Multi-agent Deep Deterministic Policy Gradient (M3DDPG) to train MARL agents with continuous actions that can handle robustness issues in complex environments. M3DDPG adds a minimax component to MADDPG [119] and employs multi-agent adversarial learning (MAAL) to optimize the learning objective. Through experimental evaluation in four multi-agent environments, the proposed algorithm surpasses existing baselines in terms of performance. Based on [101], Sun et al. [185] replace MAAL with the convex relaxation of neural networks to overcome its computational difficulty; this enables robust interaction with agents that have significantly different behaviors and yields a certified bound for the original optimization problem. Phan et al. [151] propose a value decomposition scheme, RADAR, that trains competing teams of varying sizes to improve resilience against arbitrary agent changes. By doing so, RADAR offers a more versatile and flexible approach to MARL that can adapt to changing agent behavior and system conditions. According to [56], in order to enhance robustness, non-adversarial agents should collaborate and make decisions based on correlated equilibrium rather than acting independently. The authors introduce new approaches to encourage agents to learn and follow a correlated equilibrium while maintaining the benefits of decentralized execution.\nRewards and models: Wang and Zou [215] propose a sample-based approach to estimate the uncertainty set of a misspecified MDP in model-free robust RL. They develop robust Q-learning and robust TDC algorithms that converge to optimal or stationary points without additional conditions on the discount factor. The algorithms also have convergence rates similar to their vanilla counterparts and can be extended to other RL algorithms. Zhang et al. [241] focus on the problem of MARL in situations where there is uncertainty in the model, such as inaccurate knowledge of reward functions. They model this as a robust Markov game, where agents aim to find policies that lead to equilibrium points that are robust to model uncertainty.
They present a novel solution concept known as robust Nash equilibrium and a Q-learning algorithm that guarantees convergence. Additionally, policy gradients are derived, and an actor-critic algorithm that uses function approximation is developed to effectively tackle large state-action spaces. Wu et al. [221] introduce linear programs that can efficiently address the attack problem and analyze the connection between the characteristics of datasets and the minimal attack cost.\nAdversarial policy: Guo et al. [42] propose Backdoor Detection in MARL systems, using Policy Cleanse to detect and mitigate Trojan agents and their trigger actions. Besides, they also design a machine unlearning-based approach to effectively mitigate the detected backdoors. In contrast to previous techniques that rely on the zero-sum assumption, the recent work by Guo et al. [43] proposes a novel approach that resets the optimization objective and employs a new surrogate Generalization in MARL Multi-tasks transfer Sim2Real Others [196][239] Hierarchies [11][199] [238] Meta-learning [105] Decentralized Learning [139][236] Domain randomization [10][177] Real data [45] Fig. 4. Categories of Generalization in MARL optimization function. This method has been shown through experiments to significantly enhance the ability of adversarial agents to exploit weaknesses in a given game and take advantage of any inherent unfairness in the game mechanics. Moreover, agents that are trained adversarially against this approach have demonstrated a greater level of resistance against adversarial attacks. Overall, these findings suggest that the proposed approach represents a promising direction for improving the robustness and fairness of game-playing AI agents.\nCommunication: A certifiable defense mechanism is proposed by Sun et al. [187], which employs a message-ensemble policy to merge several message sets with random ablations. Theoretical analysis indicates that this mechanism can withstand various adversarial communications." }, { "figure_ref": [], "heading": "Limitations of current methods.", "publication_ref": [], "table_ref": [], "text": "Current research on MARL robustness leaves much to be desired. First, the recent research only focuses on one of the states, actions, or policies and needs to consider the robustness of a combination of multiple aspects. Second, there needs to be more team robustness evaluation metrics. It is insufficient to test the robustness of MARL only by way of attacks because it can only cover some possible perturbations. In addition, existing studies tend to ignore the impact of non-cooperative and malicious behaviors caused by human factors on robustness, which is also an issue that needs further research. Therefore, further in-depth integration of robust MARL with multiple perturbation types, verifiable robustness evaluation metrics, and robustness with human intervention must be considered in the future." }, { "figure_ref": [], "heading": "Generalization of Multi-agent Reinforcement Learning", "publication_ref": [ "b86", "b201", "b225", "b247", "b133", "b160", "b165", "b81", "b167", "b237", "b2", "b26", "b76" ], "table_ref": [], "text": "Within the domain of MARL, generalization pertains to the capacity of agents to transfer their learned knowledge and skills from a specific environment or scenario to novel and diverse ones without necessitating significant modifications or retraining. Several surveys have investigated generalization in RL [87,201,225,247]. 
In the generalization of SARL, various techniques such as domain randomization [133,160,165], causal inference [82,167,237], and meta-learning [3,27,77] have been employed to address generalization issues. However, compared to single-agent settings, research on the generalization of MARL remains relatively scarce. In this regard, we provide an overview of pertinent work from two perspectives, namely multi-task learning, and sim2real, as shown in Fig. 4." }, { "figure_ref": [], "heading": "4.3.1", "publication_ref": [ "b199", "b10", "b238", "b104", "b139", "b236" ], "table_ref": [], "text": "Multi-tasks transfer. The goal of multi-task learning is to improve the generalization ability of a model by incorporating knowledge from related tasks as a form of inductive bias. In order to learn shared and task-specific representations and improve overall performance and efficiency in complicated domains entails training a model to carry out several tasks at once.\nHierarchies: To address the issue of generalization to unknown opponents in multi-agent games, Vezhnevets et al. [199] propose a hierarchical agent architecture grounded in game theory, which enables credit assignment across hierarchy levels and achieves better generalization to unseen opponents than conventional baselines. Carion et al. [11] propose a structured prediction method to assign agents to tasks that uses coordination inference procedures and scoring models. Zhang et al. [238] propose an offline multi-task collaborative reinforcement learning algorithm called ODIS, which is able to extract universal coordination skills from offline multi-task data, enabling better generalization in handling multi-task coordination problems. Specifically, the ODIS algorithm has a two-step process for improving the generalization and performance of c-MARL tasks. First, it extracts coordination skills from offline data that are applicable across different tasks. It then uses these skills to differentiate between different agent behaviors. Second, it trains a coordination policy that selects the most effective coordination skills using the CTDE paradigm. The effectiveness of ODIS is demonstrated in experiments where it significantly improves generalization to unseen tasks, achieving superior performance in various cooperative MARL benchmarks. Importantly, the ODIS algorithm achieves these results using only limited sources of offline data.\nMeta-learning: Liang et al. [105] present a Self-adaptive Meta-learning (SAML) framework that employs gradient-based methods to combine individual task policies into a unified policy capable of adapting to new tasks. Experimental results demonstrate that SAML outperforms baseline methods in terms of efficiency and continuous adaptation.\nDecentralized learning: Omidshafiei et al. [139] tackle the challenge of multi-task MARL with partial observability and limited communication. They introduce a decentralized single-task learning approach that can be synthesized into a unified policy for multiple correlated tasks without the need for explicit indication of task identity. The work by Zeng et al. [236] presents a novel mathematical framework for addressing multi-task RL problems using a policy gradient method. Specifically, the authors propose a decentralized entropy-regularized policy gradient method for solving these problems. The efficacy of the proposed method is evaluated through experimental results on both small-scale and large-scale multi-task RL problems. 
The findings demonstrate that the proposed approach offers promising performance for tackling complex multi-task RL problems." }, { "figure_ref": [], "heading": "Sim2Real.", "publication_ref": [ "b9", "b177", "b44" ], "table_ref": [], "text": "To train MARL agents, simulations are often used due to their efficiency and ease of implementation. However, a significant challenge arises when attempting to transfer policies learned in simulation to the real world, as differences between the two environments can lead to a performance gap. To address this issue, researchers have been investigating methods for Sim2Real transfer, which aim to minimize the performance gap between simulation and the real world. These methods typically involve fine-tuning policies in the real world, using domain randomization to increase the generalization of policies learned in simulation, or combining real data to achieve better results.\nDomain randomization: Candela et al. [10] create a simulation platform for autonomous driving and use the MAPPO with domain randomization to enable the transfer of policies from simulation to reality. In our previous work [177], we developed a simulation platform for multi-UAV transport, utilizing domain randomization to facilitate the transfer from simulation to reality. Additionally, we formulated a non-stationary variant of Markov games and established the efficacy of RNNs in addressing non-stationary Markov games.\nReal data: Gurevich et al. [45] present a novel approach for implementing homogeneous MAS by transferring data between real and simulated robots. Their method involves designing a deep neural network architecture called CR-Net, which can simulate the motion of individual robots in this system. To train the CR-Net in a simulated environment, they generate synthetic data using a generative model trained on real data from one robot. The effectiveness of their approach is validated by testing the RL models trained using this method on real ground and underwater vehicles, which showed successful policy transfer from simulation to reality." }, { "figure_ref": [], "heading": "Others.", "publication_ref": [ "b196", "b239" ], "table_ref": [], "text": "The generalization to unexplored state-action pairs is considered in [196], which uses tensors of low CP-rank to model the transition and reward functions. Zhang et al. [239] propose a novel multi-task actor-critic paradigm based on a share critic with knowledge transfer to solve heterogeneous state-action learning problems. Current research in multi-agent learning has mainly focused on generalization in the context of cyber-physical systems, which considers the abstraction of agents to unknown agents and the differences between virtual and real-world environments. However, the functionality of human social systems is multifaceted, and human behavior is highly diverse, making the consideration of the generalization of interactions with humans a crucial research area for MAS. For instance, in intelligent transportation systems, traffic signal control algorithms based on MARL must generalize over different cities and time periods. Similarly, in smart education, personalized education assistance based on MARL needs to consider individual differences in living environments and personality traits to develop tailored learning plans for students. Hence, MARL which accounts for the generalization of human behavior, is a promising and challenging research direction for the future." 
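Returning to the Sim2Real techniques above, the core of domain randomization is simply re-sampling simulator parameters before every training episode so that the learned multi-agent policy cannot overfit a single simulated world. The sketch below shows that loop; the parameter names, ranges, and the `make_env`/`learner_update` placeholders are illustrative assumptions rather than the settings of [10] or [177].

```python
# Illustrative domain-randomization loop for Sim2Real transfer of a MARL policy.
# Parameter names and ranges are assumptions; `make_env` and `learner_update`
# are placeholders for a concrete simulator and MARL learner.
import random

RANDOMIZATION_RANGES = {
    "mass_kg":       (0.8, 1.2),   # payload / vehicle mass
    "motor_gain":    (0.9, 1.1),   # actuator strength
    "sensor_noise":  (0.0, 0.05),  # std of additive observation noise
    "comm_delay_ms": (0.0, 50.0),  # inter-agent communication latency
}

def sample_domain_params() -> dict:
    """Draw one random physical configuration for the next training episode."""
    return {name: random.uniform(low, high)
            for name, (low, high) in RANDOMIZATION_RANGES.items()}

def train(num_episodes, make_env, learner_update):
    for _ in range(num_episodes):
        params = sample_domain_params()   # new physics for every episode
        env = make_env(params)
        trajectory = env.rollout()        # one multi-agent episode in the randomized env
        learner_update(trajectory)

print(sample_domain_params())             # e.g. {'mass_kg': 1.07, 'motor_gain': 0.93, ...}
```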
}, { "figure_ref": [ "fig_4" ], "heading": "Learning with Ethical Constraint", "publication_ref": [ "b3" ], "table_ref": [], "text": "As AI technology continues to evolve, it is increasingly important to consider the ethical implications of AI systems [4]. MARL systems involve the interaction of multiple agents whose actions can have significant real-world. Therefore, it is critical to ensure that the design and training of MARL systems take ethical considerations into account. We summarize research related to the ethical constraints of MARL in terms of privacy protection, fairness, and transparency, as shown in Fig. 5." }, { "figure_ref": [], "heading": "Privacy protection.", "publication_ref": [ "b20", "b198", "b205", "b200", "b17", "b166", "b112", "b141", "b252", "b29", "b154", "b115", "b229", "b234", "b15", "b228", "b230", "b147", "b63", "b217", "b11", "b162", "b22", "b67", "b256", "b58" ], "table_ref": [], "text": "Privacy protection is a long-standing issue extensively discussed in machine learning. Some of the main topics and techniques studied in this area include differential privacy, federated learning, cryptography, trusted execution environments, and ML-specific approaches [21]. The research on privacy protection in RL is still in its early stages. We outline relevant studies in the following areas: the privacy of state and action, environment, reward function, and MARL scenario.\nState and action: Venkitasubramaniam [198] proposes an MDP to explore the development of controller actions while satisfying privacy requirements. They analyze the balance between the achievable privacy level and system utility using analytical methods. The optimization problem is formulated as a Bellman equation which owns the convex reward functions for a certain category of MDPs, and as a POMDP with belief-dependent rewards for the general MDP with privacy constraints. Differentially private algorithms are used in protecting reward information by Wang et al . [205] for RL in continuous spaces. The authors propose a method for protecting the value function approximator, which is realized by incorporating functional noise iterative into the training. They provide rigorous privacy guarantees and gain insight into the approximate optimality of the algorithm. Experiments show improvement over existing approaches. Vietri et al. [200] the privacy problem for episodic RL. Chowdhury et al. [18] design two frameworks, i.e., policy optimization and value iteration, and not only consider the JDP but Local Differential Privacy (LDP) in finite horizon tabular MDP to minimize regret. The previous text describes the use of differential privacy as a means of protecting sensitive user data in RL. There are also other methods of protecting user privacy, such as cryptographic techniques. Sakuma et al. [166] use a homomorphic encryption algorithm to realize the privacy protection of distributed RL. They divide private information based on time and observation, design a sarsa privacy protection method based on random actions for these two division methods, and extend these to Q-learning based on greedy and 𝜖-greedy action selections. A new privacy-preserving RL method is proposed by Liu et al. [113] named Preyer to provide treatment options for patients while protecting their privacy. Preyer is composed of an innovative encrypted data format, a secure mechanism for plaintext length management, and a privacy-preserving RL with experience replay.\nEnvironment: Pan et al. 
[141] first investigate the privacy in RL environment. They propose two methods based on genetic algorithms and shadow policies, respectively. Zhou [252] first discusses how to achieve privacy protection in finite-horizon MDPs, which have large state and action spaces. The author proposes two privacy-preserving RL algorithms according to value iteration and policy optimization and proves that they can achieve sub-linear regret performance while ensuring privacy protection.\nReward: Fu et al. [30] and Prakash et al. [154] investigate the problem of how to preserve the privacy of reward functions in reinforcement learning by employing adversarial reward and inverse reinforcement learning techniques. Liu et al. [116] studies privacy-preserving RL using dissimulation to hide the true reward function. Two models are presented and evaluated through computational and human experiments, showing that resulting policies are deceptive and make it more difficult for observers to determine the true reward function.\nMARL: The differential privacy is used by Ye et al. [229] in the field of multi-agent planning for the first time to protect agent privacy. Based on differential privacy, they propose a new strong privacy-preserving planning method, which can not only ensure strong privacy but also control communication overhead. Yuan et al. [234] delve into the issue of integrating Cooperative Intelligence (CI) to enhance the efficiency of communication networks, which is hampered by privacy concerns and practical limitations in communication. In response, the authors present a Privacy-preserving scheme based on MARL (PP-MARL) that employs a HE-friendly architecture. Experiment results indicate that PP-MARL exhibits better performance in privacy protection and reduced overhead compared to state-of-the-art approaches. Nonetheless, preserving privacy in CI-enabled communication networks remains a formidable challenge, especially when the number of agents involved is subject to variation or the system scales up. The research on privacy protection for MARL is still limited, and some studies have explored the use of differential privacy techniques to enhance the performance of MARL [16,228] or against malicious advise [230]. 4.4.2 Fairness. Fairness in machine learning refers to the concern that machine learning models and algorithms should not discriminate or create bias against certain groups of people based on their protected characteristics, such as race, gender, age, religion, etc. The review paper [147] provides a comprehensive summary of existing techniques. However, research on fairness in RL is still limited, and we provide an overview from both single-agent and multi-agent perspectives.\nSARL: Jabbari et al. [64] first consider the fairness in RL and demonstrate that an algorithm conforming to their fairness constraint requires an exponential amount of time to achieve a nontrivial approximation to the optimal policy. To overcome this challenge, they introduce a polynomial time algorithm that satisfies an approximate form of the fairness constraint. Weng et al. [217] address the issue of complete unfairness for some users or stakeholders by using a social welfare function encoded with fairness. Chen et al. [12] introduce a novel approach to incorporate fairness in actor-critic RL for network optimization problems. 
By considering the shape of the fairness utility function and past reward statistics, their proposed algorithm adjusts the rewards using a weight factor that is dependent on both of these factors. Ren et al. [162] propose a novel framework to obtain optimum and relative fairness solutions in space applications, including a new image quality representation method, a finite MDP model, and an algorithm based on RL. Deng et al. [23] propose an RL algorithm that enforces stepwise fairness constraints to ensure group fairness at every time step.\nMARL: Jiang and Lu [68] propose a hierarchical RL model, named FEN, which is aimed at both obtaining fairness and efficiency objectives. FEN decomposes fairness for each agent and utilizes a structure with a high-level controller and multiple sub-policies to avoid multi-objective conflict. The study by Zimmer et al. [256] also focuses on the two aspects of fairness and efficiency. They propose a generic neural network architecture to address this problem, which consists of two sub-networks specifically designed to consider the two aspects of fairness and can be implemented in centralized training and decentralized execution or fully decentralized MARL settings. In multi-intersection scenarios, Huang et al. [59] propose a novel fairness-aware model-based MARL (FM2Light) to deal with unfair control with superior reward design." }, { "figure_ref": [], "heading": "Transparency.", "publication_ref": [ "b155", "b203", "b52", "b138", "b244", "b155", "b203", "b80", "b116", "b128", "b235" ], "table_ref": [], "text": "Transparency is essential for building reliable MARL decision systems. Decisionmaking interactions among multiple agents are very complex and difficult to understand and explain. Without a transparent understanding of the interactions and decision-making processes among agents, the reliability and trustworthiness of the system are affected. Therefore, studying the transparency of MARL is an important direction. We summarize it in terms of both explainability and interpretability.\nExplainability refers to the ability of a model in machine learning to provide a rationale for its outputs that can be easily comprehended and trusted by humans [155,203]. Heuillet et al. [53] use a game theory concept of shapley values to explain the contribution of one agent in MARL and use Monte Carlo sampling to approximate shapley values to overcome the high overhead. This method provides an explanation for the model but can not give the precise reason why the action is taken by the agent. Ohana et al. [138] also use shapley values to understand the model behavior and explain local feature contributions. Zhang et al. [244] propose a framework composed of a variational autoencoder and graph neural networks to encode the interactions between pairs of agents.\nInterpretability refers to the ability of a human to understand and explain the inner workings of a machine learning model [155,203]. Kazhdan et al. [81] develop a library named MARLeME which uses symbolic models to improve the interpretability of MARL. It can be employed across a broad spectrum of existing MARL systems and has potential applications in safety-critical domains. Liu et al. [117] propose a novel interpretable architecture based on soft decision trees with recurrent structure. Milani et al. [128] propose two frameworks (IVIPER and MAVIPER) to extract interpretable coordination policies of MARL in sight of the decision tree. Zabounidis et al. 
[235] incorporate interpretable concepts from domain experts into MARL models trained. This approach improves interpretability, allows experts to understand which high-level concepts are used by the policy, and intervenes to improve performance.\nMARL for decision transparency involves not only the transparency of single-agent decisions but also the study of complex interactions among multiple agents. Currently, although there have some related research works, it is still relatively small, and more research works are needed to explore how to make MARL more transparent for better application to real-world problems." }, { "figure_ref": [], "heading": "CHALLENGES ON HUMAN-COMPATIBLE MULTI-AGENT REINFORCEMENT LEARNING", "publication_ref": [ "b8", "b114" ], "table_ref": [], "text": "The Human-Cyber-Physical System (HCPS) is developed based on the Cyber-Physical System (CPS) and integrates computer science, automatic technology, communication science, etc [9,115]. The applications of MARL summarized in Section 3 of this paper are typical of HCPS. Humans are seen as an essential component of HCPS. Therefore, the design of MARL algorithms needs to take into account the human factor. In addition to the challenges of scalability and non-stationary, MARL in HCPS faces many additional challenges due to the interactions between humans, physical systems, and computer systems." }, { "figure_ref": [], "heading": "Non-stationarity due to Human Intervention", "publication_ref": [], "table_ref": [], "text": "Non-stationarity refers to the dynamic changes in the environment or the behavior of agents over time. The existing MARL is based on SG, where the number of agents constant during the training process. Currently, research on non-stationarity in MARL is limited to the CPS level, only considering the non-stationarity caused by changes in agent policies on the overall environment[]. However, in HCPS, humans interact continuously with the CPS, and human behavior can affect the dynamic changes in the CPS system. In addition, the reward function for MARL agents is defined by human experts. Human needs will change with social progress, and the reward function for MARL agents will change accordingly. This is also an essential factor causing non-stationarity in HCPS. How to design stable MARL algorithms against human intervention is a vital challenge." }, { "figure_ref": [], "heading": "Diversity of Human Behavior", "publication_ref": [], "table_ref": [], "text": "Human behavior is diverse due to the influence of different geographies, cultures, and beliefs.\nIn HCPS, MARL needs to model human behavior in order to better achieve intelligence in interaction with humans. The quality of understanding human behavior predominantly affects the user experience of CPS. For example, in intelligent education, MARL agents need to understand student behavior well to better recommend personalized services for different students. However, the diversity of behavior makes this process very challenging. The current research for modeling human behavior is limited to the societal level and only takes into account human behavior, not the possible influence of machine intelligence on human behavior. How to consider the influence of machines on human behavior in the process of modeling human behavior is a significant challenge." 
}, { "figure_ref": [], "heading": "Complex Heterogeneity of HCPS", "publication_ref": [], "table_ref": [], "text": "The complexity of HCPS manifests itself in various aspects, including human heterogeneity, physical system heterogeneity, cyber system heterogeneity, and temporal heterogeneity. Human heterogeneity refers to the diversity of human behavior and the different roles played by humans in systems with different functions. Physical system heterogeneity refers to the variety of sensors used, such as GPS and cameras in UAV transportation systems. Cyber system heterogeneity is composed of various software, hardware, and algorithms, which require the integration of multiple intelligent algorithms due to the complexity of multi-source information and multi-task decision-making. This cannot be achieved by a single end-to-end algorithm. Finally, temporal heterogeneity is when making decisions; MARL agents require defining different time intervals based on the actual situation at each time step. How to design MARL algorithms to handle the decision-making process of complex heterogeneous HCPS is an enormous challenge." }, { "figure_ref": [], "heading": "Scalability of Multi-human and Multi-machine", "publication_ref": [], "table_ref": [], "text": "HCPS is a complex system of multi-human and multi-machine coexistence. Thus, MARL used for intelligent decision-making should have strong scalability, and the agent here should have a broad concept that includes both humans and intelligent machines. However, as the number of agents increases, the joint action space of agents grows exponentially, which makes the scalability of MARL algorithms poor. Existing research only focuses on the scalability of the number of machines without considering human factors. Designing scalable multi-agent reinforcement learning algorithms that are suitable for complex and heterogeneous HCPS is a significant challenge." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "This paper summarizes the fundamental methods of MARL and reviews its relevant research in various fields, such as smart transportation, unmanned aerial vehicles, intelligent information system, public health and intelligent medical diagnosis, smart manufacturing, financial trade, network security, smart education, and RL for science. In order to better serve human society, it is necessary to develop a trustworthy MARL. Therefore, we define trustworthy MARL from the perspectives of safety, robustness, generalization, and ethical constraints and summarize the current research and limitations in these areas. Finally, we discuss the additional challenges when considering HCPS in MARL, which is crucial for its practical application in human society. We hope this paper can provide a comprehensive review of various research approaches and application scenarios, encouraging and promoting the application of MARL in human societies for better service to humans." } ]
Multi-agent reinforcement learning (MARL) is a widely used Artificial Intelligence (AI) technique. However, current studies and applications need to address its scalability, non-stationarity, and trustworthiness. This paper aims to review methods and applications and point out research trends and visionary prospects for the next decade. First, this paper summarizes the basic methods and application scenarios of MARL. Second, this paper outlines the corresponding research methods and their limitations regarding safety, robustness, generalization, and ethical constraints that need to be addressed in the practical applications of MARL. In particular, we believe that trustworthy MARL will become a hot research topic in the next decade. In addition, we suggest that considering human interaction is essential for the practical application of MARL in various societies. Therefore, this paper also analyzes the challenges that arise when MARL is applied to human-machine interaction.
Multi-Agent Reinforcement Learning: Methods, Applications, Visionary Prospects, and Challenges
[ { "figure_caption": ", Vol. 1 ,1No. 1, Article . Publication date: May 2023.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": ",Vol. 1, No. 1, Article . Publication date: May 2023.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ", Vol. 1 ,1No. 1, Article . Publication date: May 2023.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "4. 2 . 222Training. : Robustness testing and training in MARL are still in the early stages of research. Therefore, we summarize robustness training methods from five aspects: state observation, action, reward and model, adversarial policy, and communication.", "figure_data": "", "figure_id": "fig_3", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Categories of MARL with Ethical Constraint", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "The difference between this paper and other related reviews.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Correspondence between smart transportation and RL methods.", "figure_data": "ApplicationsPapersMethodsSA/MATraffic light control[99] [220]DQN-based [131]SASmartTransportation", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Correspondence between unmanned aerial vehicles and RL methods.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Correspondence between intelligent information system and RL methods.", "figure_data": "ApplicationsPapersMethodsSA/MA[98]REINFORCE-based [218]SAIntelligent Information SystemNatural language processing Programming generation[226] [120] [83] [104] [140] [94] [178] [26]AC-based [88] DQN-based [131] REINFORCE,AC,DQN [88, 131, 218] AC-based [88] PPO-based [169] REINFORCE [218] PPO-based [169] DQN-based [131]SA SA SA SA SA SA SA SARecommender system[51, 72, 231] [245]MADDPG-based [119]MA", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Correspondence between public health and intelligent medical diagnosis and RL methods.", "figure_data": "ApplicationsPapersMethodsSA/MACOVID-19[84, 85, 90] [249]DQN-based [131] DDPG-based [107]SA SAPublic Health and Intelligent Medical DiagnosisMedical image processing[14, 65] [2, 96, 202, 202, 248] DQN-based [131] DQN-based [131] [106, 122] A3C-based [129] [80] AC-based [88]SA MA MA MADisease diagnosis[109, 110, 193] [159]DQN-based [131]SA", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Correspondence between smart manufacturing and RL methods.", "figure_data": "ApplicationsPapersMethodsSA/MA[212]QMIX-based [161]MA[246]Job shop schedulingSmart Manufacturing", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Correspondence between financial trade and RL methods.", "figure_data": "", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Correspondence between network security and RL methods.", "figure_data": "ApplicationsPapersMethodsSA/MAIntrusion detection[17, 118, 172, 173] [54]DQN-based [131]MANetwork Security", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Correspondence between smart education and 
science and RL methods. and teachers, as well as collaborative learning among students. AI can be used to analyze learning behavior, offer personalized learning, and evaluate teaching. Scene reconstruction, experiment simulation, and remote teaching are made easier by virtual reality technology. In MARL-based smart education, we summarize the existing techniques[31,48,112,194]. Education 4.0 intends to incorporate AI technology into each stage of student self-regulated learning to increase interest and effectiveness during the process[19,46,170]. Tang and Hare", "figure_data": "ApplicationsPapersMethodsSA/MASmart Education[48, 194] DQN-based [131] [31, 112] DQN-based [131]SA SA[171]DDPG-based [107]SARL for Science[22]AC-based [88]SA[5]PPO-based [169]MAinteraction between students", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Lu et al. [121] propose a method called Safe Decentralized Policy Gradient (Safe Dec-PG) to solve a distributed RL problem where agents work together with safety constraints. The method is decentralized and considers coupled safety constraints while ensuring a measurable convergence rate. It can also solve other decentralized optimization problems.Liu et al. [111] propose a novel algorithm CMIX that can be used for MARL in a partially observable environment with constraints on both peak and average reward. CMIX enables CTDE and outperforms existing algorithms in maximizing the global reward function subject to constraints. The algorithm is evaluated on two scenarios, including a blocker game and a vehicular network routing problem, demonstrating its ability to satisfy both peak and average constraints, which has not been achieved before in a CTDE learning algorithm.", "figure_data": "", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" }, { "figure_caption": "use the notion of Joint Differential Privacy (JDP) and a private optimism-based learning method to address", "figure_data": "", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" } ]
Ziyuan Zhou; Guanjun Liu; Ying Tang
[ { "authors": "Akash Agrawal; Sung Jun Won; Tushar Sharma; Mayuri Deshpande; Christopher Mccomb", "journal": "Proceedings of the Design Society", "ref_id": "b0", "title": "A MULTI-AGENT REINFORCEMENT LEARNING FRAMEWORK FOR INTELLIGENT MANUFACTURING WITH AUTONOMOUS MOBILE ROBOTS", "year": "2021" }, { "authors": "Hanane Allioui; Abed Mazin; Narjes Mohammed; Belal Benameur; Karrar Al-Khateeb; Begonya Hameed Abdulkareem; Robertas Garcia-Zapirain; Rytis Damaševičius; Maskeliūnas", "journal": "Journal of Personalized Medicine", "ref_id": "b1", "title": "A Multi-Agent Deep Reinforcement Learning Approach for Enhancement of COVID-19 CT Image Segmentation", "year": "2022" }, { "authors": "Karol Arndt; Murtaza Hazara; Ali Ghadirzadeh; Ville Kyrki", "journal": "", "ref_id": "b2", "title": "Meta Reinforcement Learning for Simto-real Domain Adaptation", "year": "2020" }, { "authors": "Mona Ashok; Rohit Madan; Anton Joha; Uthayasankar Sivarajah", "journal": "International Journal of Information Management", "ref_id": "b3", "title": "Ethical framework for Artificial Intelligence and Digital technologies", "year": "2022" }, { "authors": "H ; Jane Bae; Petros Koumoutsakos", "journal": "Nature Communications", "ref_id": "b4", "title": "Scientific multi-agent reinforcement learning for wall-models of turbulent flows", "year": "2022-03-17" }, { "authors": "Javier Bajo; L María; Juan F De Borrajo; Juan M Paz; María A Corchado; Pellicer", "journal": "Expert Systems with Applications", "ref_id": "b5", "title": "A multi-agent system for web-based risk management in small and medium business", "year": "2012" }, { "authors": "Wenhang Bao", "journal": "", "ref_id": "b6", "title": "Fairness in multi-agent reinforcement learning for stock trading", "year": "2019" }, { "authors": "Sushrut Bhalla; Sriram Ganapathi Subramanian; Mark Crowley", "journal": "Springer International Publishing", "ref_id": "b7", "title": "Deep Multi Agent Reinforcement Learning for Autonomous Driving", "year": "2020" }, { "authors": "Alexandros Bousdekis; Dimitris Apostolou; Gregoris Mentzas", "journal": "Manufacturing Letters", "ref_id": "b8", "title": "A human cyber physical system framework for operator 4.0 -artificial intelligence symbiosis", "year": "2020" }, { "authors": "Eduardo Candela; Leandro Parada; Luis Marques; Tiberiu-Andrei Georgescu; Yiannis Demiris; Panagiotis Angeloudis", "journal": "", "ref_id": "b9", "title": "Transferring Multi-Agent Reinforcement Learning Policies for Autonomous Driving using Simto-Real", "year": "2022" }, { "authors": "Nicolas Carion; Nicolas Usunier; Gabriel Synnaeve; Alessandro Lazaric", "journal": "Curran Associates, Inc", "ref_id": "b10", "title": "A Structured Prediction Approach for Generalization in Cooperative Multi-Agent Reinforcement Learning", "year": "2019" }, { "authors": "Jingdi Chen; Yimeng Wang; Tian Lan", "journal": "", "ref_id": "b11", "title": "Bringing Fairness to Actor-Critic Reinforcement Learning for Network Utility Optimization", "year": "2021" }, { "authors": "Lu Chen; Runzhe Yang; Cheng Chang; Zihao Ye; Xiang Zhou; Kai Yu", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "On-line Dialogue Policy Learning with Companion Teaching", "year": "2017" }, { "authors": "Siying Chen; Minghui Liu; Pan Deng; Jiali Deng; Yi Yuan; Xuan Cheng; Tianshu Xie; Libo Xie; Wei Zhang; Haigang Gong; Xiaomin Wang; Lifeng Xu; Hong Pu; Ming Liu", "journal": "IEEE Journal of Biomedical and Health Informatics", "ref_id": "b13", "title": "Reinforcement Learning Based 
Diagnosis and Prediction for COVID-19 by Optimizing a Mixed Cost Function From CT Images", "year": "2022" }, { "authors": "Yanjiao Chen; Zhicong Zheng; Xueluan Gong", "journal": "", "ref_id": "b14", "title": "MARNET: Backdoor Attacks against Value-Decomposition Multi-Agent Reinforcement Learning", "year": "2022" }, { "authors": "Zishuo Cheng; Dayong Ye; Tianqing Zhu; Wanlei Zhou; Philip S Yu; Congcong Zhu", "journal": "International Journal of Intelligent Systems", "ref_id": "b15", "title": "Multi-agent reinforcement learning via knowledge transfer with differentially private noise", "year": "2022" }, { "authors": "Ankur Chowdhary; Dijiang Huang; Abdulhakim Sabur; Neha Vadnere; Myong Kang; Bruce Montrose", "journal": "", "ref_id": "b16", "title": "SDN-based Moving Target Defense using Multi-agent Reinforcement Learning", "year": "2021" }, { "authors": "Sayak Ray; Chowdhury ; Xingyu Zhou", "journal": "", "ref_id": "b17", "title": "Differentially Private Regret Minimization in Episodic Markov Decision Processes", "year": "2022-06" }, { "authors": "Monica Ionita; Ciolacu ; Leon Binder; Heribert Popp", "journal": "", "ref_id": "b18", "title": "Enabling IoT in Education 4.0 with BioSensors from Wearables and Artificial Intelligence", "year": "2019" }, { "authors": "Kai Cui; Anam Tahir; Gizem Ekinci; Ahmed Elshamanhory; Yannick Eich; Mengguang Li; Heinz Koeppl", "journal": "", "ref_id": "b19", "title": "A Survey on Large-Population Systems and Scalable Multi-Agent Reinforcement Learning", "year": "2022" }, { "authors": "Emiliano De; Cristofaro ", "journal": "", "ref_id": "b20", "title": "An overview of privacy in machine learning", "year": "2020" }, { "authors": "Jonas Degrave; Federico Felici; Jonas Buchli; Michael Neunert; Brendan Tracey; Francesco Carpanese; Timo Ewalds; Roland Hafner; Abbas Abdolmaleki; Diego De Las Casas; Craig Donner; Leslie Fritz; Cristian Galperti; Andrea Huber; James Keeling; Maria Tsimpoukelli; Jackie Kay; Antoine Merle; Jean-Marc Moret; Seb Noury; Federico Pesamosca; David Pfau; Olivier Sauter; Cristian Sommariva; Stefano Coda; Basil Duval; Ambrogio Fasoli; Pushmeet Kohli; Koray Kavukcuoglu; Demis Hassabis; Martin Riedmiller", "journal": "Nature", "ref_id": "b21", "title": "Magnetic control of tokamak plasmas through deep reinforcement learning", "year": "2022-02-01" }, { "authors": "Zhun Deng; He Sun; Steven Zhiwei; Linjun Wu; David C Zhang; Parkes", "journal": "", "ref_id": "b22", "title": "Reinforcement Learning with Stepwise Fairness Constraints", "year": "2022" }, { "authors": "Yali Du; Bo Liu; Vincent Moens; Ziqi Liu; Zhicheng Ren; Jun Wang; Xu Chen; Haifeng Zhang", "journal": "International Foundation for Autonomous Agents and Multiagent Systems", "ref_id": "b23", "title": "Learning Correlated Communication Topology in Multi-Agent Reinforcement Learning", "year": "2021" }, { "authors": "Ingy Elsayed-Aly; Suda Bharadwaj; Christopher Amato; Rüdiger Ehlers; Ufuk Topcu; Lu Feng", "journal": "", "ref_id": "b24", "title": "Safe multi-agent reinforcement learning via shielding", "year": "2021" }, { "authors": "Mehdi Esnaashari; Amir Hossein; Damia ", "journal": "Expert Systems with Applications", "ref_id": "b25", "title": "Automation of software test data generation using genetic algorithm and reinforcement learning", "year": "2021" }, { "authors": "Chelsea Finn; Pieter Abbeel; Sergey Levine", "journal": "PMLR", "ref_id": "b26", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks", "year": "2017" }, { "authors": "Jakob Foerster; Alexandros 
Ioannis; Nando Assael; Shimon De Freitas; Whiteson", "journal": "Curran Associates, Inc", "ref_id": "b27", "title": "Learning to Communicate with Deep Multi-Agent Reinforcement Learning", "year": "2016" }, { "authors": "Jakob N Foerster; Maruan Richard Y Chen; Shimon Al-Shedivat; Pieter Whiteson; Igor Abbeel; Mordatch", "journal": "", "ref_id": "b28", "title": "Learning with opponent-learning awareness", "year": "2017" }, { "authors": "Justin Fu; Katie Luo; Sergey Levine", "journal": "", "ref_id": "b29", "title": "Learning robust rewards with adversarial inverse reinforcement learning", "year": "2017" }, { "authors": "Siyong Fu", "journal": "International Journal of e-Collaboration (IJeC)", "ref_id": "b30", "title": "A Reinforcement Learning-Based Smart Educational Environment for Higher Education", "year": "2022" }, { "authors": "Sriram Ganapathi Subramanian; Pascal Poupart; Matthew E Taylor; Nidhi Hegde", "journal": "International Foundation for Autonomous Agents and Multiagent Systems", "ref_id": "b31", "title": "Multi Type Mean Field Reinforcement Learning", "year": "2020" }, { "authors": "Sriram Ganapathi Subramanian; Matthew E Taylor; Mark Crowley; Pascal Poupart", "journal": "International Foundation for Autonomous Agents and Multiagent Systems", "ref_id": "b32", "title": "Partially Observable Mean Field Reinforcement Learning", "year": "2021" }, { "authors": "Sumitra Ganesh; Nelson Vadori; Mengda Xu; Hua Zheng; Prashant Reddy; Manuela Veloso", "journal": "", "ref_id": "b33", "title": "Reinforcement learning for market making in a multi-agent dealer market", "year": "2019" }, { "authors": "Javier García; Fern Fernández", "journal": "Journal of Machine Learning Research", "ref_id": "b34", "title": "A Comprehensive Survey on Safe Reinforcement Learning", "year": "2015" }, { "authors": "Ian J Goodfellow; Jonathon Shlens; Christian Szegedy", "journal": "", "ref_id": "b35", "title": "Explaining and harnessing adversarial examples", "year": "2014" }, { "authors": "John St; Jonathan Grimbly; Arnu Shock; Pretorius", "journal": "", "ref_id": "b36", "title": "Causal multi-agent reinforcement learning: Review and open problems", "year": "2021" }, { "authors": "Shangding Gu; Jakub Grudzien Kuba; Munning Wen; Ruiqing Chen; Ziyan Wang; Zheng Tian; Jun Wang; Alois Knoll; Yaodong Yang", "journal": "", "ref_id": "b37", "title": "Multi-agent constrained policy optimisation", "year": "2021" }, { "authors": "Shangding Gu; Long Yang; Yali Du; Guang Chen; Florian Walter; Jun Wang; Yaodong Yang; Alois Knoll", "journal": "", "ref_id": "b38", "title": "A review of safe reinforcement learning: Methods, theory and applications", "year": "2022" }, { "authors": "Tao Gui; Peng Liu; Qi Zhang; Liang Zhu; Minlong Peng; Yunhua Zhou; Xuanjing Huang", "journal": "Association for Computing Machinery", "ref_id": "b39", "title": "Mention Recommendation in Twitter with Cooperative Multi-Agent Reinforcement Learning", "year": "2019" }, { "authors": "Jun Guo; Yonghong Chen; Yihang Hao; Zixin Yin; Yin Yu; Simin Li", "journal": "", "ref_id": "b40", "title": "Towards Comprehensive Testing on the Robustness of Cooperative Multi-Agent Reinforcement Learning", "year": "2022" }, { "authors": "Junfeng Guo; Ang Li; Cong Liu", "journal": "", "ref_id": "b41", "title": "Backdoor detection in reinforcement learning", "year": "2022" }, { "authors": "Wenbo Guo; Xian Wu; Sui Huang; Xinyu Xing", "journal": "PMLR", "ref_id": "b42", "title": "Adversarial Policy Learning in Two-player Competitive Games", "year": "2021" }, { "authors": "Nikunj 
Gupta; G Srinivasaraghavan; Swarup Kumar Mohalik; Matthew E Taylor", "journal": "", "ref_id": "b43", "title": "Hammer: Multi-level coordination of reinforcement learning agents via learned messaging", "year": "2021" }, { "authors": "Anton Gurevich; Eran Bamani; Avishai Sintov", "journal": "", "ref_id": "b44", "title": "Real-to-Sim-to-Real: Learning Models for Homogeneous Multi-Agent Systems", "year": "2022" }, { "authors": "Bernhard Haderer; Monica Ciolacu", "journal": "Procedia Computer Science", "ref_id": "b45", "title": "Education 4.0: Artificial Intelligence Assisted Task-and Time Planning System", "year": "2022" }, { "authors": "Songyang Han; Sanbao Su; Sihong He; Shuo Han; Haizhao Yang; Fei Miao", "journal": "", "ref_id": "b46", "title": "What is the Solution for State Adversarial Multi-Agent Reinforcement Learning", "year": "2022" }, { "authors": "Ryan Hare; Ying Tang", "journal": "", "ref_id": "b47", "title": "Petri Nets and Hierarchical Reinforcement Learning for Personalized Student Assistance in Serious Games", "year": "2022" }, { "authors": "Fei-Fan He; Chiao-Ting Chen; Szu-Hao Huang", "journal": "Applied Soft Computing", "ref_id": "b48", "title": "A multi-agent virtual market model for generalization in reinforcement learning based trading strategies", "year": "2023" }, { "authors": "Sihong He; Songyang Han; Sanbao Su; Shuo Han; Shaofeng Zou; Fei Miao", "journal": "", "ref_id": "b49", "title": "Robust Multi-Agent Reinforcement Learning with State Uncertainties", "year": "2023" }, { "authors": "H E Xu; Bo An; Yanghua Li; Haikai Chen; Rundong Wang; Xinrun Wang; Runsheng Yu; Xin Li; Zhirong Wang", "journal": "Association for Computing Machinery", "ref_id": "b50", "title": "Learning to Collaborate in Multi-Module Recommendation via Multi-Agent Reinforcement Learning without Communication", "year": "2020" }, { "authors": "Pablo Hernandez-Leal; Bilal Kartal; Matthew E Taylor", "journal": "learning", "ref_id": "b51", "title": "Is multiagent deep reinforcement learning the answer or the question? 
A brief survey", "year": "2018" }, { "authors": "Alexandre Heuillet; Fabien Couthouis; Natalia Díaz-Rodríguez", "journal": "IEEE Computational Intelligence Magazine", "ref_id": "b52", "title": "Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning With Shapley Values", "year": "2022" }, { "authors": "Ying-Feng Hsu; Morito Matsuoka", "journal": "", "ref_id": "b53", "title": "A Deep Reinforcement Learning Approach for Anomaly Network Intrusion Detection System", "year": "2020" }, { "authors": "Guangzheng Hu; Yuanheng Zhu; Dongbin Zhao; Mengchen Zhao; Jianye Hao", "journal": "", "ref_id": "b54", "title": "Event-triggered multi-agent reinforcement learning with communication under limited-bandwidth constraint", "year": "2020" }, { "authors": "Yizheng Hu; Kun Shao; Dong Li; Jianye Hao; Wulong Liu; Yaodong Yang; Jun Wang; Zhanxing Zhu", "journal": "", "ref_id": "b55", "title": "Robust Multi-Agent Reinforcement Learning Driven by Correlated Equilibrium", "year": "2021" }, { "authors": "Yizheng Hu; Zhihua Zhang", "journal": "", "ref_id": "b56", "title": "Sparse adversarial attack in multi-agent reinforcement learning", "year": "2022" }, { "authors": "Sandy Huang; Nicolas Papernot; Ian Goodfellow; Yan Duan; Pieter Abbeel", "journal": "", "ref_id": "b57", "title": "Adversarial attacks on neural network policies", "year": "2017" }, { "authors": "Xingshuai Huang; Di Wu; Benoit Boulet", "journal": "", "ref_id": "b58", "title": "Fairness-Aware Model-Based Multi-Agent Reinforcement Learning for Traffic Signal Control", "year": "2023" }, { "authors": "Zhenhan Huang; Fumihide Tanaka", "journal": "Plos one", "ref_id": "b59", "title": "MSPM: A modularized and scalable multi-agent reinforcement learningbased system for financial portfolio management", "year": "2022" }, { "authors": "Zhiyu Huang; Jingda Wu; Chen Lv", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b60", "title": "Efficient Deep Reinforcement Learning With Imitative Expert Priors for Autonomous Driving", "year": "2022" }, { "authors": "Inaam Ilahi; Muhammad Usama; Junaid Qadir; Muhammad Umar Janjua; Ala Al-Fuqaha; Dinh Thai Hoang; Dusit Niyato", "journal": "IEEE Transactions on Artificial Intelligence", "ref_id": "b61", "title": "Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning", "year": "2022" }, { "authors": "Shariq Iqbal; Fei Sha", "journal": "PMLR", "ref_id": "b62", "title": "Actor-Attention-Critic for Multi-Agent Reinforcement Learning", "year": "2019" }, { "authors": "Shahin Jabbari; Matthew Joseph; Michael Kearns; Jamie Morgenstern; Aaron Roth", "journal": "PMLR", "ref_id": "b63", "title": "Fairness in Reinforcement Learning", "year": "2017" }, { "authors": "Seyed Mohammad; Jafar Jalali; Milad Ahmadian; Sajad Ahmadian; Abbas Khosravi; Mamoun Alazab; Saeid Nahavandi", "journal": "Applied Soft Computing", "ref_id": "b64", "title": "An oppositional-Cauchy based GSK evolutionary algorithm with a novel deep ensemble reinforcement learning strategy for COVID-19 diagnosis", "year": "2021" }, { "authors": "Sangwoo Jeon; Hoeun Lee; Vishnu Kumar Kaliappan; Tuan ; Anh Nguyen; Hyungeun Jo; Hyeonseo Cho; Dugki Min", "journal": "Energies", "ref_id": "b65", "title": "Multiagent Reinforcement Learning Based on Fusion-Multiactor-Attention-Critic for Multiple-Unmanned-Aerial-Vehicle Navigation Control", "year": "2022" }, { "authors": "Jiechuan Jiang; Chen Dun; Tiejun Huang; Zongqing Lu", "journal": "", "ref_id": "b66", 
"title": "Graph Convolutional Reinforcement Learning", "year": "2020" }, { "authors": "Jiechuan Jiang; Zongqing Lu", "journal": "Curran Associates, Inc", "ref_id": "b67", "title": "Learning Fairness in Multi-Agent Systems", "year": "2019" }, { "authors": "Yang Jiao; Kai Yang; Dongjin Song", "journal": "", "ref_id": "b68", "title": "Distributed Distributionally Robust Optimization with Non-Convex Objectives", "year": "2022" }, { "authors": "Yang Jiao; Kai Yang; Dongjing Song; Dacheng Tao", "journal": "IEEE Transactions on Network Science and Engineering", "ref_id": "b69", "title": "TimeAutoAD: Autonomous Anomaly Detection With Self-Supervised Contrastive Loss for Multivariate Time Series", "year": "2022" }, { "authors": "Yang Jiao; Kai Yang; Tiancheng Wu; Dongjin Song; Chengtao Jian", "journal": "", "ref_id": "b70", "title": "Asynchronous Distributed Bilevel Optimization", "year": "2022" }, { "authors": "Junqi Jin; Chengru Song; Han Li; Kun Gai; Jun Wang; Weinan Zhang", "journal": "Association for Computing Machinery", "ref_id": "b71", "title": "Real-Time Bidding with Multi-Agent Reinforcement Learning in Display Advertising", "year": "2018" }, { "authors": "Xuan Jing; Xifan Yao; Min Liu; Jiajun Zhou", "journal": "Journal of Intelligent Manufacturing", "ref_id": "b72", "title": "Multi-agent reinforcement learning based on graph convolutional network for flexible job shop scheduling", "year": "2022-10-12" }, { "authors": "Hyungeun Jo; Hoeun Lee; Sangwoo Jeon; Vishnu Kumar Kaliappan; Tuan ; Anh Nguyen; Dugki Min; Jae-Woo Lee", "journal": "Springer Nature Singapore", "ref_id": "b73", "title": "Multi-agent Reinforcement Learning-Based UAS Control for Logistics Environments", "year": "2023" }, { "authors": "D Kyle; Julian; J Mykel; Kochenderfer", "journal": "Journal of Guidance, Control, and Dynamics", "ref_id": "b74", "title": "Distributed wildfire surveillance with autonomous aircraft using deep reinforcement learning", "year": "2019" }, { "authors": "Leslie Pack; Kaelbling Michael L Littman; Andrew W Moore", "journal": "Journal of artificial intelligence research", "ref_id": "b75", "title": "Reinforcement learning: A survey", "year": "1996" }, { "authors": "Christos Kaplanis; Murray Shanahan; Claudia Clopath", "journal": "PMLR", "ref_id": "b76", "title": "Continual Reinforcement Learning with Complex Synapses", "year": "2018" }, { "authors": "Sanyam Kapoor", "journal": "", "ref_id": "b77", "title": "Multi-agent reinforcement learning: A report on challenges and approaches", "year": "2018" }, { "authors": "Michaël Karpe; Jin Fang; Zhongyao Ma; Chen Wang", "journal": "Association for Computing Machinery", "ref_id": "b78", "title": "Multi-Agent Reinforcement Learning in a Realistic Limit Order Book Market Simulation", "year": "2021" }, { "authors": "Klemens Kasseroller; Franz Thaler; Christian Payer; Darko Štern", "journal": "Springer International Publishing", "ref_id": "b79", "title": "Collaborative Multi-agent Reinforcement Learning for Landmark Localization Using Continuous Action Space", "year": "2021" }, { "authors": "Dmitry Kazhdan; Zohreh Shams; Pietro Lio", "journal": "", "ref_id": "b80", "title": "MARLeME: A Multi-Agent Reinforcement Learning Model Extraction Library", "year": "2020" }, { "authors": "Nan Rosemary Ke; Olexa Bilaniuk; Anirudh Goyal; Stefan Bauer; Hugo Larochelle; Bernhard Schölkopf; C Michael; Chris Mozer; Yoshua Pal; Bengio", "journal": "", "ref_id": "b81", "title": "Learning neural causal models from unknown interventions", "year": "2019" }, { "authors": "Yaser 
Keneshloo; Tian Shi; Naren Ramakrishnan; Chandan K Reddy", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b82", "title": "Deep Reinforcement Learning for Sequence-to-Sequence Models", "year": "2020" }, { "authors": "Soheyl Khalilpourazari; Hossein Hashemi; Doulabi ", "journal": "", "ref_id": "b83", "title": "Using reinforcement learning to forecast the spread of COVID-19 in France", "year": "2021" }, { "authors": "Soheyl Khalilpourazari; Hossein Hashemi; Doulabi ", "journal": "Annals of Operations Research", "ref_id": "b84", "title": "Designing a hybrid reinforcement learning based algorithm with application in prediction of the COVID-19 pandemic in Quebec", "year": "2022-05-01" }, { "authors": "Ozsel Kilinc; Giovanni Montana", "journal": "", "ref_id": "b85", "title": "Multi-agent deep reinforcement learning with extremely noisy observations", "year": "2018" }, { "authors": "Robert Kirk; Amy Zhang; Edward Grefenstette; Tim Rocktäschel", "journal": "J. Artif. Int. Res", "ref_id": "b86", "title": "A Survey of Zero-Shot Generalisation in Deep Reinforcement Learning", "year": "2023-02" }, { "authors": "Vijay Konda; John Tsitsiklis", "journal": "MIT Press", "ref_id": "b87", "title": "Actor-Critic Algorithms", "year": "1999" }, { "authors": "Aleksandar Krnjaic; Jonathan D Thomas; Georgios Papoudakis; Lukas Schäfer; Peter Börsting; Stefano V Albrecht", "journal": "", "ref_id": "b88", "title": "Scalable Multi-Agent Reinforcement Learning for Warehouse Logistics with Robotic and Human Co-Workers", "year": "2022" }, { "authors": "R Lakshmana Kumar; Firoz Khan; Sadia Din; S Shahab; Amir Band; Ebuka Mosavi; Ibeke", "journal": "Frontiers in Public Health", "ref_id": "b89", "title": "Recurrent Neural Network and Reinforcement Learning Model for COVID-19 Prediction", "year": "2021" }, { "authors": "Valery Kuzmin", "journal": "Citeseer", "ref_id": "b90", "title": "Connectionist Q-learning in robot control task", "year": "2002" }, { "authors": "Hang Lai; Weinan Zhang; Xialin He; Chen Yu; Zheng Tian; Yong Yu; Jun Wang", "journal": "", "ref_id": "b91", "title": "Sim-to-Real Transfer for Quadrupedal Locomotion via Terrain Transformer", "year": "2022" }, { "authors": "Xi Lan; Yuansong Qiao; Brian Lee", "journal": "", "ref_id": "b92", "title": "Towards Pick and Place Multi Robot Coordination Using Multi-agent Deep Reinforcement Learning", "year": "2021" }, { "authors": "Hung Le; Yue Wang; Akhilesh Deepak Gotmare; Silvio Savarese; Steven Chu; Hong Hoi", "journal": "Curran Associates, Inc", "ref_id": "b93", "title": "CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning", "year": "2022" }, { "authors": "Jinho Lee; Raehyun Kim; Seok-Won Yi; Jaewoo Kang", "journal": "", "ref_id": "b94", "title": "MAPS: Multi-Agent reinforcement learning-based Portfolio management System", "year": "2020" }, { "authors": "Guy Leroy; Daniel Rueckert; Amir Alansary", "journal": "Springer International Publishing", "ref_id": "b95", "title": "Communicative Reinforcement Learning Agents for Landmark Detection in Brain Images", "year": "2020" }, { "authors": "Chengxi Li; Pai Zheng; Yue Yin; Baicun Wang; Lihui Wang", "journal": "CIRP Journal of Manufacturing Science and Technology", "ref_id": "b96", "title": "Deep reinforcement learning in smart manufacturing: A review and prospects", "year": "2023" }, { "authors": "Jiwei Li; Will Monroe; Alan Ritter; Michel Galley; Jianfeng Gao; Dan Jurafsky", "journal": "", "ref_id": "b97", "title": "Deep reinforcement learning for 
dialogue generation", "year": "2016" }, { "authors": "Li Li; Yisheng Lv; Fei-Yue Wang", "journal": "IEEE/CAA Journal of Automatica Sinica", "ref_id": "b98", "title": "Traffic signal timing via deep reinforcement learning", "year": "2016" }, { "authors": "Simin Li; Jun Guo; Jingqiao Xiu; Pu Feng; Xin Yu; Jiakai Wang; Aishan Liu; Wenjun Wu; Xianglong Liu", "journal": "", "ref_id": "b99", "title": "Attacking Cooperative Multi-Agent Reinforcement Learning by Adversarial Minority Influence", "year": "2023" }, { "authors": "Shihui Li; Yi Wu; Xinyue Cui; Honghua Dong; Fei Fang; Stuart Russell", "journal": "", "ref_id": "b100", "title": "Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient", "year": "2019-07" }, { "authors": "Tianxu Li; Kun Zhu; Cong Nguyen; Dusit Luong; Qihui Niyato; Yang Wu; Bing Zhang; Chen", "journal": "IEEE Communications Surveys & Tutorials", "ref_id": "b101", "title": "Applications of Multi-Agent Reinforcement Learning in Future Internet: A Comprehensive Survey", "year": "2022" }, { "authors": "Xihan Li; Jia Zhang; Jiang Bian; Yunhai Tong; Tie-Yan Liu", "journal": "", "ref_id": "b102", "title": "A cooperative multi-agent reinforcement learning framework for resource balancing in complex logistics network", "year": "2019" }, { "authors": "Zichao Li; Xin Jiang; Lifeng Shang; Hang Li", "journal": "", "ref_id": "b103", "title": "Paraphrase generation with deep reinforcement learning", "year": "2017" }, { "authors": "Wenqian Liang; Ji Wang; Weidong Bao; Xiaomin Zhu; Qingyong Wang; Beibei Han", "journal": "Complex & Intelligent Systems", "ref_id": "b104", "title": "Continuous self-adaptive optimization to learn multi-task multi-agent", "year": "2022-04" }, { "authors": "Xuan Liao; Wenhao Li; Qisen Xu; Xiangfeng Wang; Bo Jin; Xiaoyun Zhang; Yanfeng Wang; Ya Zhang", "journal": "", "ref_id": "b105", "title": "Iteratively-Refined Interactive 3D Medical Image Segmentation With Multi-Agent Reinforcement Learning", "year": "2020" }, { "authors": "Jonathan J Timothy P Lillicrap; Alexander Hunt; Nicolas Pritzel; Tom Heess; Yuval Erez; David Tassa; Daan Silver; Wierstra", "journal": "", "ref_id": "b106", "title": "Continuous control with deep reinforcement learning", "year": "2015" }, { "authors": "Jieyu Lin; Kristina Dzeparoska; Qian Sai; Alberto Zhang; Nicolas Leon-Garcia; Papernot", "journal": "", "ref_id": "b107", "title": "On the Robustness of Cooperative Multi-Agent Reinforcement Learning", "year": "2020" }, { "authors": "Yuan Ling; Sadid A Hasan; Vivek Datla; Ashequl Qadir; Kathy Lee; Joey Liu; Oladimeji Farri", "journal": "PMLR", "ref_id": "b108", "title": "Diagnostic Inferencing via Improving Clinical Concept Extraction with Deep Reinforcement Learning: A Preliminary Study", "year": "2017-05" }, { "authors": "Yuan Ling; Sadid A Hasan; Vivek Datla; Ashequl Qadir; Kathy Lee; Joey Liu; Oladimeji Farri", "journal": "", "ref_id": "b109", "title": "Learning to Diagnose: Assimilating Clinical Narratives using Deep Reinforcement Learning", "year": "2017" }, { "authors": "Chenyi Liu; Nan Geng; Vaneet Aggarwal; Tian Lan; Yuan Yang; Mingwei Xu", "journal": "Springer International Publishing", "ref_id": "b110", "title": "CMIX: Deep Multi-agent Reinforcement Learning with Peak and Average Constraints", "year": "2021" }, { "authors": "Su Liu; Ye Chen; Hui Huang; Liang Xiao; Xiaojun Hei", "journal": "", "ref_id": "b111", "title": "Towards Smart Educational Recommendations with Reinforcement Learning in Classroom", "year": "2018" }, { "authors": "Ximeng Liu; 
Robert H Deng; Kim-Kwang Raymond Choo; Yang Yang", "journal": "IEEE Transactions on Emerging Topics in Computing", "ref_id": "b112", "title": "Privacy-Preserving Reinforcement Learning Design for Patient-Centric Dynamic Treatment Regimes", "year": "2021" }, { "authors": "Yong Liu; Weixun Wang; Yujing Hu; Jianye Hao; Xingguo Chen; Yang Gao", "journal": "", "ref_id": "b113", "title": "Multi-Agent Game Abstraction via Graph Attention Neural Network", "year": "2020-04" }, { "authors": "Zhiming Liu; Ji Wang", "journal": "Frontiers of Information Technology & Electronic Engineering", "ref_id": "b114", "title": "Human-cyber-physical systems: concepts, challenges, and research opportunities", "year": "2020" }, { "authors": "Zhengshang Liu; Yue Yang; Tim Miller; Peta Masters", "journal": "", "ref_id": "b115", "title": "Deceptive reinforcement learning for privacypreserving planning", "year": "2021" }, { "authors": "Zichuan Liu; Yuanyang Zhu; Zhi Wang; Chunlin Chen", "journal": "", "ref_id": "b116", "title": "MIXRTs: Toward Interpretable Multi-Agent Reinforcement Learning via Mixing Recurrent Soft Decision Trees", "year": "2022" }, { "authors": "Faten Louati; Farah Barika Ktata; Ikram Amous; Ben Amor", "journal": "SENSORNETS", "ref_id": "b117", "title": "A Distributed Intelligent Intrusion Detection System based on Parallel Machine Learning and Big Data Analysis", "year": "2022" }, { "authors": "Ryan Lowe; Y I Wu; Aviv Tamar; Jean Harb; Pieter Openai; Igor Abbeel; Mordatch", "journal": "", "ref_id": "b118", "title": "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments", "year": "2017" }, { "authors": "U Guyon; S Von Luxburg; H Bengio; R Wallach; Fergus", "journal": "Curran Associates, Inc", "ref_id": "b119", "title": "", "year": "" }, { "authors": "Keting Lu; Shiqi Zhang; Xiaoping Chen", "journal": "", "ref_id": "b120", "title": "Goal-Oriented Dialogue Policy Learning from Failures", "year": "2019-07" }, { "authors": "Songtao Lu; Kaiqing Zhang; Tianyi Chen; Tamer Başar; Lior Horesh", "journal": "", "ref_id": "b121", "title": "Decentralized policy gradient descent ascent for safe multi-agent reinforcement learning", "year": "2021" }, { "authors": "Chaofan Ma; Qisen Xu; Xiangfeng Wang; Bo Jin; Xiaoyun Zhang; Yanfeng Wang; Ya Zhang", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b122", "title": "Boundary-Aware Supervoxel-Level Iteratively Refined Interactive 3D Image Segmentation With Multi-Agent Reinforcement Learning", "year": "2021" }, { "authors": "Cong Ma; Jiangshe Zhang; Zongxin Li; Shuang Xu", "journal": "Neural Computing and Applications", "ref_id": "b123", "title": "Multi-agent deep reinforcement learning algorithm with trend consistency regularization for portfolio management", "year": "2023-03-01" }, { "authors": "G Bruna; Letizia Maciel-Pearson; Samet Marchegiani; Amir Akcay; James Atapour-Abarghouei; Toby P Garforth; Breckon", "journal": "", "ref_id": "b124", "title": "Online deep reinforcement learning for autonomous UAV navigation and exploration of outdoor environments", "year": "2019" }, { "authors": "Anuj Mahajan; Tabish Rashid; Mikayel Samvelyan; Shimon Whiteson", "journal": "Curran Associates, Inc", "ref_id": "b125", "title": "MAVEN: Multi-Agent Variational Exploration", "year": "2019" }, { "authors": "Yailen Martínez; Jiménez ; Jessica Coto Palacio; Ann Nowé", "journal": "Springer International Publishing", "ref_id": "b126", "title": "Multi-Agent Reinforcement Learning Tool for Job Shop Scheduling Problems", "year": "2020" }, { "authors": 
"Qinghai Miao; Min Huang; Yisheng Lv; Fei-Yue Wang", "journal": "", "ref_id": "b127", "title": "Parallel Learning between Science for AI and AI for Science: A Brief Overview and Perspective", "year": "2022" }, { "authors": "Stephanie Milani; Zhicheng Zhang; Nicholay Topin; Zheyuan ; Ryan Shi; Charles Kamhoua; Evangelos E Papalexakis; Fei Fang", "journal": "Springer Nature Switzerland", "ref_id": "b128", "title": "MAVIPER: Learning Decision Tree Policies for Interpretable Multi-agent Reinforcement Learning", "year": "2023-05" }, { "authors": "Volodymyr Mnih; Adria Puigdomenech Badia; Mehdi Mirza; Alex Graves; Timothy Lillicrap; Tim Harley; David Silver; Koray Kavukcuoglu", "journal": "PMLR", "ref_id": "b129", "title": "Asynchronous Methods for Deep Reinforcement Learning", "year": "1928" }, { "authors": "Volodymyr Mnih; Koray Kavukcuoglu; David Silver; Alex Graves; Ioannis Antonoglou; Daan Wierstra; Martin Riedmiller", "journal": "", "ref_id": "b130", "title": "Playing atari with deep reinforcement learning", "year": "2013" }, { "authors": "Volodymyr Mnih; Koray Kavukcuoglu; David Silver; Andrei A Rusu; Joel Veness; Marc G Bellemare; Alex Graves; Martin Riedmiller; Andreas K Fidjeland; Georg Ostrovski", "journal": "nature", "ref_id": "b131", "title": "Human-level control through deep reinforcement learning", "year": "2015" }, { "authors": "Safa Mohamed; Ridha Ejbali", "journal": "International Journal on Information Technologies & Security", "ref_id": "b132", "title": "ADVERSARIAL MULTI-AGENT REINFORCEMENT LEARNING ALGORITHM FOR ANOMALY NETWORK INTRUSION DETECTION SYSTEM", "year": "2021" }, { "authors": "Igor Mordatch; Kendall Lowrey; Emanuel Todorov", "journal": "", "ref_id": "b133", "title": "Ensemble-CIO: Full-body dynamic motion planning that transfers to physical humanoids", "year": "2015" }, { "authors": "Zhiyu Mou; Yu Zhang; Feifei Gao; Huangang Wang; Tao Zhang; Zhu Han", "journal": "IEEE Journal on Selected Areas in Communications", "ref_id": "b134", "title": "Deep Reinforcement Learning Based Three-Dimensional Area Coverage With UAV Swarm", "year": "2021" }, { "authors": "Navid Naderializadeh; Jaroslaw J Sydir; Meryem Simsek; Hosein Nikopour", "journal": "IEEE Transactions on Wireless Communications", "ref_id": "b135", "title": "Resource Management in Wireless Networks via Multi-Agent Deep Reinforcement Learning", "year": "2021" }, { "authors": "Thanh Thi Nguyen; Ngoc Duy Nguyen; Saeid Nahavandi", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b136", "title": "Deep Reinforcement Learning for Multiagent Systems: A Review of Challenges, Solutions, and Applications", "year": "2020" }, { "authors": "Yaru Niu; Rohan Paleja; Matthew Gombolay", "journal": "International Foundation for Autonomous Agents and Multiagent Systems", "ref_id": "b137", "title": "Multi-Agent Graph-Attention Communication and Teaming", "year": "2021" }, { "authors": "Jean Jacques Ohana; Steve Ohana; Eric Benhamou; David Saltiel; Beatrice Guez", "journal": "Springer International Publishing", "ref_id": "b138", "title": "Explainable AI (XAI) Models Applied to the Multi-agent Environment of Financial Markets", "year": "2021" }, { "authors": "Shayegan Omidshafiei; Jason Pazis; Christopher Amato; Jonathan P How; John Vian", "journal": "PMLR", "ref_id": "b139", "title": "Deep Decentralized Multi-task Multi-Agent Reinforcement Learning under Partial Observability", "year": "2017" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; 
Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Jan Paul F Christiano; Ryan Leike; Lowe", "journal": "Curran Associates, Inc", "ref_id": "b140", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Xinlei Pan; Weiyao Wang; Xiaoshuai Zhang; Bo Li; Jinfeng Yi; Dawn Song", "journal": "International Foundation for Autonomous Agents and Multiagent Systems", "ref_id": "b141", "title": "How You Act Tells a Lot: Privacy-Leaking Attack on Deep Reinforcement Learning", "year": "2019" }, { "authors": "Nicolas Papernot; Patrick Mcdaniel; Somesh Jha; Matt Fredrikson; Z Berkay Celik; Ananthram Swami", "journal": "", "ref_id": "b142", "title": "The Limitations of Deep Learning in Adversarial Settings", "year": "2016" }, { "authors": "Yagna Patel", "journal": "", "ref_id": "b143", "title": "Optimizing market making using multi-agent reinforcement learning", "year": "2018" }, { "authors": "Bei Peng; Tabish Rashid; Christian Schroeder De Witt; Pierre-Alexandre Kamienny; Philip Torr; Wendelin Boehmer; Shimon Whiteson", "journal": "Curran Associates, Inc", "ref_id": "b144", "title": "FACMAC: Factored Multi-Agent Centralised Policy Gradients", "year": "2021-05" }, { "authors": "Haixia Peng; Xuemin Shen", "journal": "IEEE Journal on Selected Areas in Communications", "ref_id": "b145", "title": "Multi-Agent Reinforcement Learning Based Resource Management in MECand UAV-Assisted Vehicular Networks", "year": "2021" }, { "authors": "Peng Peng; Ying Wen; Yaodong Yang; Quan Yuan; Zhenkun Tang; Haitao Long; Jun Wang", "journal": "", "ref_id": "b146", "title": "Multiagent bidirectionally-coordinated nets: Emergence of human-level coordination in learning to play starcraft combat games", "year": "2017" }, { "authors": "Dana Pessach; Erez Shmueli", "journal": "ACM Comput. Surv", "ref_id": "b147", "title": "A Review on Fairness in Machine Learning", "year": "2022-02" }, { "authors": "Xuan Huy; Hung Manh Pham; David La; Luan Feil-Seifer; Van Nguyen", "journal": "", "ref_id": "b148", "title": "Cooperative and Distributed Reinforcement Learning of Drones for Field Coverage", "year": "2018" }, { "authors": " Nhan H Pham; Jie Lam M Nguyen; Chen; Thanh Hoang; Subhro Lam; Tsui-Wei Das; Weng", "journal": "", "ref_id": "b149", "title": "Evaluating Robustness of Cooperative MARL: A Model-based Approach", "year": "2022" }, { "authors": "Uyen Pham; Quoc Luu; Hien Tran", "journal": "Soft Computing", "ref_id": "b150", "title": "Multi-agent reinforcement learning approach for hedging portfolio problem", "year": "2021-06-01" }, { "authors": "Thomy Phan; Lenz Belzner; Thomas Gabor; Andreas Sedlmeier; Fabian Ritz; Claudia Linnhoff-Popien", "journal": "", "ref_id": "b151", "title": "Resilient Multi-Agent Reinforcement Learning with Adversarial Value Decomposition", "year": "2021-05" }, { "authors": "Jens Popper; William Motsch; Alexander David; Teresa Petzsche; Martin Ruskowski", "journal": "", "ref_id": "b152", "title": "Utilizing Multi-Agent Deep Reinforcement Learning For Flexible Job Shop Scheduling Under Sustainable Viewpoints", "year": "2021" }, { "authors": "Jens Popper; Martin Ruskowski", "journal": "", "ref_id": "b153", "title": "Using Multi-Agent Deep Reinforcement Learning For Flexible Job Shop Scheduling Problems", "year": "2021" }, { "authors": "Kritika Prakash; Fiza Husain; Praveen Paruchuri; Sujit Gujar", "journal": "", "ref_id": "b154", "title": "How Private Is Your RL Policy? 
An Inverse RL Based Analysis Framework", "year": "2022-06" }, { "authors": "Erika Puiutta; M S P Eric; Veith", "journal": "Springer International Publishing", "ref_id": "b155", "title": "Explainable Reinforcement Learning: A Survey", "year": "2020" }, { "authors": "Dawei Qiu; Jianhong Wang; Zihang Dong; Yi Wang; Goran Strbac", "journal": "IEEE Transactions on Power Systems", "ref_id": "b156", "title": "Mean-Field Multi-Agent Reinforcement Learning for Peer-to-Peer Multi-Energy Trading", "year": "2022" }, { "authors": "Dawei Qiu; Jianhong Wang; Junkai Wang; Goran Strbac", "journal": "", "ref_id": "b157", "title": "Multi-Agent Reinforcement Learning for Automated Peer-to-Peer Energy Trading in Double-Side Auction Market", "year": "2021" }, { "authors": "Huaxin Qiu; Haibin Duan", "journal": "Information Sciences", "ref_id": "b158", "title": "A multi-objective pigeon-inspired optimization approach to UAV distributed flocking among obstacles", "year": "2020" }, { "authors": "Thota Radha; Rajesh ; Surendran Rajendran", "journal": "", "ref_id": "b159", "title": "Intelligent Multi-Agent Reinforcement Learning Based Disease Prediction and Treatment Recommendation Model", "year": "2022" }, { "authors": "Aravind Rajeswaran; Sarvjeet Ghotra; Balaraman Ravindran; Sergey Levine", "journal": "", "ref_id": "b160", "title": "Epopt: Learning robust neural network policies using model ensembles", "year": "2016" }, { "authors": "Tabish Rashid; Mikayel Samvelyan; Christian Schroeder De; Gregory Witt; Jakob Farquhar; Shimon Foerster; Whiteson", "journal": "J. Mach. Learn. Res", "ref_id": "b161", "title": "Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning", "year": "2020-01" }, { "authors": "Lili Ren; Xin Ning; Zheng Wang", "journal": "Computers & Industrial Engineering", "ref_id": "b162", "title": "A competitive Markov decision process model and a recursive reinforcement-learning algorithm for fairness scheduling of agile satellites", "year": "2022" }, { "authors": "Marcelo Luis; Ruiz Rodríguez; Sylvain Kubler; Andrea De Giorgio; Maxime Cordy; Jérémy Robert; Yves Le Traon", "journal": "Robotics and Computer-Integrated Manufacturing", "ref_id": "b163", "title": "Multi-agent deep reinforcement learning based Predictive Maintenance on parallel machines", "year": "2022" }, { "authors": "Heechang Ryu; Hayong Shin; Jinkyoo Park", "journal": "", "ref_id": "b164", "title": "Multi-Agent Actor-Critic with Hierarchical Graph Attention Network", "year": "2020-04" }, { "authors": "Fereshteh Sadeghi; Alexander Toshev; Eric Jang; Sergey Levine", "journal": "", "ref_id": "b165", "title": "Sim2Real Viewpoint Invariant Visual Servoing by Recurrent Control", "year": "2018-05" }, { "authors": "Jun Sakuma; Shigenobu Kobayashi; Rebecca N Wright", "journal": "Association for Computing Machinery", "ref_id": "b166", "title": "Privacy-Preserving Reinforcement Learning", "year": "2008" }, { "authors": "Nino Scherrer; Olexa Bilaniuk; Yashas Annadani; Anirudh Goyal; Patrick Schwab; Bernhard Schölkopf; Yoshua Michael C Mozer; Stefan Bengio; Nan Rosemary Bauer; Ke", "journal": "", "ref_id": "b167", "title": "Learning neural causal models with active interventions", "year": "2021" }, { "authors": "John Schulman; Sergey Levine; Pieter Abbeel; Michael Jordan; Philipp Moritz", "journal": "", "ref_id": "b168", "title": "Trust Region Policy Optimization", "year": "2015" }, { "authors": "John Schulman; Filip Wolski; Prafulla Dhariwal; Alec Radford; Oleg Klimov", "journal": "", "ref_id": "b169", "title": "Proximal 
policy optimization algorithms", "year": "2017" }, { "authors": "Clara Schumacher; Dirk Ifenthaler", "journal": "The Internet and Higher Education", "ref_id": "b170", "title": "Investigating prompts for supporting students' self-regulation -A remaining challenge for learning analytics approaches?", "year": "2021" }, { "authors": "Jaemin Seo; Y-S Na; B Kim; C Y Lee; Park; Park; Lee", "journal": "Nuclear Fusion", "ref_id": "b171", "title": "Feedforward beta control in the KSTAR tokamak by deep reinforcement learning", "year": "2021" }, { "authors": "Arturo Servin; Daniel Kudenko", "journal": "Springer", "ref_id": "b172", "title": "Multi-Agent Reinforcement Learning for Intrusion Detection: A Case Study and Evaluation", "year": "2008" }, { "authors": "Kamalakanta Sethi; Y Venu; Rahul Madhav; Padmalochan Kumar; Bera", "journal": "Journal of Information Security and Applications", "ref_id": "b173", "title": "Attention based multi-agent intrusion detection systems using reinforcement learning", "year": "2021" }, { "authors": "Shital Shah; Debadeepta Dey; Chris Lovett; Ashish Kapoor", "journal": "Springer International Publishing", "ref_id": "b174", "title": "AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles", "year": "2018" }, { "authors": "Ali Shavandi; Majid Khedmati", "journal": "Expert Systems with Applications", "ref_id": "b175", "title": "A multi-agent deep reinforcement learning framework for algorithmic trading in financial markets", "year": "2022" }, { "authors": "Ziyad Sheebaelhamd; Konstantinos Zisis; Athina Nisioti; Dimitris Gkouletsos; Dario Pavllo; Jonas Kohler", "journal": "", "ref_id": "b176", "title": "Safe Deep Reinforcement Learning for Multi-Agent Systems with Continuous Action Spaces", "year": "2021" }, { "authors": "Haoran Shi; Guanjun Liu; Kaiwen Zhang; Ziyuan Zhou; Jiacun Wang", "journal": "IEEE Transactions on Systems, Man, and Cybernetics: Systems", "ref_id": "b177", "title": "MARL Sim2real Transfer: Merging Physical Reality With Digital Virtuality in Metaverse", "year": "2022" }, { "authors": "Parshin Shojaee; Aneesh Jain; Sindhu Tipirneni; Chandan K Reddy", "journal": "", "ref_id": "b178", "title": "Execution-based Code Generation using Deep Reinforcement Learning", "year": "2023" }, { "authors": "David Silver; Aja Huang; Chris J Maddison; Arthur Guez; Laurent Sifre; George Van Den; Julian Driessche; Ioannis Schrittwieser; Veda Antonoglou; Marc Panneershelvam; Lanctot", "journal": "nature", "ref_id": "b179", "title": "Mastering the game of Go with deep neural networks and tree search", "year": "2016" }, { "authors": "David Silver; Richard S Sutton; Martin Müller", "journal": "", "ref_id": "b180", "title": "Reinforcement Learning of Local Shape in the Game of Go", "year": "2007" }, { "authors": "Kyunghwan Son; Daewoo Kim; Wan Ju Kang; David Earl Hostallero; Yung Yi", "journal": "PMLR", "ref_id": "b181", "title": "QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning", "year": "2019" }, { "authors": "Jianyu Su; Jing Huang; Stephen Adams; Qing Chang; Peter A Beling", "journal": "Expert Systems with Applications", "ref_id": "b182", "title": "Deep multi-agent reinforcement learning for multi-level preventive maintenance in manufacturing systems", "year": "2022" }, { "authors": "Pei-Hao Su; Milica Gašić; Steve Young", "journal": "Computer Speech & Language", "ref_id": "b183", "title": "Reward estimation for dialogue policy optimisation", "year": "2018" }, { "authors": "Sainbayar Sukhbaatar; Rob 
Fergus", "journal": "Curran Associates, Inc", "ref_id": "b184", "title": "Learning Multiagent Communication with Backpropagation", "year": "2016" }, { "authors": "Chuangchuang Sun; Dong-Ki Kim; Jonathan P How", "journal": "", "ref_id": "b185", "title": "ROMAX: Certifiably Robust Deep Multiagent Reinforcement Learning via Convex Relaxation", "year": "2022" }, { "authors": "Penghao Sun; Zehua Guo; Gang Wang; Julong Lan; Yuxiang Hu", "journal": "Computer Networks", "ref_id": "b186", "title": "MARVEL: Enabling controller load balancing in software-defined networks with multi-agent reinforcement learning", "year": "2020" }, { "authors": "Yanchao Sun; Ruijie Zheng; Parisa Hassanzadeh; Yongyuan Liang; Soheil Feizi; Sumitra Ganesh; Furong Huang", "journal": "", "ref_id": "b187", "title": "Certifiably Robust Policy Learning against Adversarial Multi-Agent Communication", "year": "2023" }, { "authors": "Peter Sunehag; Guy Lever; Audrunas Gruslys; Wojciech ; Marian Czarnecki; Vinicius Zambaldi; Max Jaderberg; Marc Lanctot; Nicolas Sonnerat; Joel Z Leibo; Karl Tuyls; Thore Graepel", "journal": "International Foundation for Autonomous Agents and Multiagent Systems", "ref_id": "b188", "title": "Value-Decomposition Networks For Cooperative Multi-Agent Learning Based On Team Reward", "year": "2018" }, { "authors": "S Richard; Andrew G Sutton; Barto", "journal": "MIT press", "ref_id": "b189", "title": "Reinforcement learning: An introduction", "year": "2018" }, { "authors": "Akito Suzuki; Shigeaki Harada", "journal": "", "ref_id": "b190", "title": "Safe Multi-Agent Deep Reinforcement Learning for Dynamic Virtual Network Allocation", "year": "2020" }, { "authors": "Ardi Tampuu; Tambet Matiisen; Dorian Kodelja; Ilya Kuzovkin; Kristjan Korjus; Juhan Aru; Jaan Aru; Raul Vicente", "journal": "PLOS ONE", "ref_id": "b191", "title": "Multiagent cooperation and competition with deep reinforcement learning", "year": "2017" }, { "authors": "Qingmeng Tan; Yifei Tong; Shaofeng Wu; Dongbo Li", "journal": "The International Journal of Advanced Manufacturing Technology", "ref_id": "b192", "title": "Modeling, planning, and scheduling of shop-floor assembly process with dynamic cyber-physical interactions: a case study for CPS-based smart industrial robot production", "year": "2019-12-01" }, { "authors": "Kai-Fu Tang; Hao-Cheng Kao; Chun-Nan Chou; Edward Y Chang", "journal": "", "ref_id": "b193", "title": "Inquire and diagnose: Neural symptom checking ensemble using deep reinforcement learning", "year": "2016" }, { "authors": "Ying Tang; Ryan Hare", "journal": "ASEE Conferences", "ref_id": "b194", "title": "Evaluation of an AI-assisted Adaptive Educational Game System", "year": "2022" }, { "authors": "Víctor Uc-Cetina; Nicolás Navarro-Guerrero; Anabel Martin-Gonzalez; Cornelius Weber; Stefan Wermter", "journal": "Artificial Intelligence Review", "ref_id": "b195", "title": "Survey on reinforcement learning for language processing", "year": "2023-02-01" }, { "authors": "Pascal Van; Der Vaart; Anuj Mahajan; Shimon Whiteson", "journal": "", "ref_id": "b196", "title": "Model based multi-agent reinforcement learning with tensor decompositions", "year": "2021" }, { "authors": "Hado Van Hasselt; Arthur Guez; David Silver", "journal": "", "ref_id": "b197", "title": "Deep reinforcement learning with double q-learning", "year": "2016" }, { "authors": "Parv Venkitasubramaniam", "journal": "", "ref_id": "b198", "title": "Privacy in stochastic control: A Markov Decision Process perspective", "year": "2013" }, { "authors": "Alexander 
Vezhnevets; Yuhuai Wu; Maria Eckstein; Rémi Leblond; Joel Z Leibo", "journal": "PMLR", "ref_id": "b199", "title": "OPtions as REsponses: Grounding behavioural hierarchies in multi-agent reinforcement learning", "year": "2020" }, { "authors": "Giuseppe Vietri; Borja Balle; Akshay Krishnamurthy; Steven Wu", "journal": "PMLR", "ref_id": "b200", "title": "Private Reinforcement Learning with PAC and Regret Guarantees", "year": "2020" }, { "authors": "Nelson Vithayathil Varghese; Qusay H Mahmoud", "journal": "Electronics", "ref_id": "b201", "title": "A Survey of Multi-Task Deep Reinforcement Learning", "year": "2020" }, { "authors": "Athanasios Vlontzos; Amir Alansary; Konstantinos Kamnitsas; Daniel Rueckert; Bernhard Kainz", "journal": "Springer International Publishing", "ref_id": "b202", "title": "Multiple Landmark Detection Using Multi-agent Reinforcement Learning", "year": "2019" }, { "authors": "George A Vouros", "journal": "ACM Comput. Surv", "ref_id": "b203", "title": "Explainable Deep Reinforcement Learning: State of the Art and Challenges", "year": "2022-12" }, { "authors": "Ory Walker; Fernando Vanegas; Felipe Gonzalez; Sven Koenig", "journal": "", "ref_id": "b204", "title": "Multi-UAV Target-Finding in Simulated Indoor Environments using Deep Reinforcement Learning", "year": "2020-05" }, { "authors": "Baoxiang Wang; Nidhi Hegde", "journal": "Curran Associates, Inc", "ref_id": "b205", "title": "Privacy-Preserving Q-Learning with Functional Noise in Continuous Spaces", "year": "2019" }, { "authors": "Chenghe Wang; Yuhang Ran; Lei Yuan; Yang Yu; Zongzhang Zhang", "journal": "", "ref_id": "b206", "title": "Robust Multi-Agent Reinforcement Learning against Adversaries on Observation", "year": "2023" }, { "authors": "Dawei Wang; Tingxiang Fan; Tao Han; Jia Pan", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b207", "title": "A Two-Stage Reinforcement Learning Approach for Multi-UAV Collision Avoidance Under Imperfect Sensing", "year": "2020" }, { "authors": "Jianhao Wang; Zhizhou Ren; Terry Liu; Yang Yu; Chongjie Zhang", "journal": "", "ref_id": "b208", "title": "{QPLEX}: Duplex Dueling Multi-Agent Q-Learning", "year": "2021" }, { "authors": "Lun Wang; Zaynah Javed; Xian Wu; Wenbo Guo; Xinyu Xing; Dawn Song", "journal": "", "ref_id": "b209", "title": "Backdoorl: Backdoor attack against competitive reinforcement learning", "year": "2021" }, { "authors": "Liang Wang; Kezhi Wang; Cunhua Pan; Wei Xu; Nauman Aslam; Lajos Hanzo", "journal": "IEEE Transactions on Cognitive Communications and Networking", "ref_id": "b210", "title": "Multi-Agent Deep Reinforcement Learning-Based Trajectory Planning for Multi-UAV Assisted Mobile Edge Computing", "year": "2021" }, { "authors": "Rundong Wang; Xu He; Runsheng Yu; Wei Qiu; Bo An; Zinovi Rabinovich", "journal": "PMLR", "ref_id": "b211", "title": "Learning Efficient Multi-agent Communication: An Information Bottleneck Approach", "year": "2020" }, { "authors": "Xiaohan Wang; Lin Zhang; Tingyu Lin; Chun Zhao; Kunyu Wang; Zhen Chen", "journal": "Robotics and Computer-Integrated Manufacturing", "ref_id": "b212", "title": "Solving job scheduling problems in a resource preemption environment with multi-agent reinforcement learning", "year": "2022" }, { "authors": "Xihuai Wang; Zhicheng Zhang; Weinan Zhang", "journal": "", "ref_id": "b213", "title": "Model-based multi-agent reinforcement learning: Recent progress and prospects", "year": "2022" }, { "authors": "Yue Wang; Esha Sarkar; Wenqing Li; Michail Maniatakos; Saif Eddin Jabari", "journal": 
"IEEE Transactions on Information Forensics and Security", "ref_id": "b214", "title": "Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement Learning-Based Traffic Congestion Control Systems", "year": "2021" }, { "authors": "Yue Wang; Shaofeng Zou", "journal": "Curran Associates, Inc", "ref_id": "b215", "title": "Online Robust Reinforcement Learning with Model Uncertainty", "year": "2021" }, { "authors": "Ziyu Wang; Tom Schaul; Matteo Hessel; Hado Hasselt; Marc Lanctot; Nando Freitas", "journal": "PMLR", "ref_id": "b216", "title": "Dueling network architectures for deep reinforcement learning", "year": "1995" }, { "authors": "Paul Weng", "journal": "", "ref_id": "b217", "title": "Fairness in reinforcement learning", "year": "2019" }, { "authors": "Ronald J Williams", "journal": "Springer US", "ref_id": "b218", "title": "Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning", "year": "1992" }, { "authors": "Annie Wong; Thomas Bäck; Anna V Kononova; Aske Plaat", "journal": "Artificial Intelligence Review", "ref_id": "b219", "title": "Deep multiagent reinforcement learning: challenges and directions", "year": "2022-10-19" }, { "authors": "Tong Wu; Pan Zhou; Kai Liu; Yali Yuan; Xiumin Wang; Huawei Huang; Dapeng Oliver Wu", "journal": "IEEE Transactions on Vehicular Technology", "ref_id": "b220", "title": "Multi-Agent Deep Reinforcement Learning for Urban Traffic Light Control in Vehicular Networks", "year": "2020" }, { "authors": "Young Wu; Jermey Mcmahan; Xiaojin Zhu; Qiaomin Xie", "journal": "", "ref_id": "b221", "title": "Reward Poisoning Attacks on Offline Multi-Agent Reinforcement Learning", "year": "2022" }, { "authors": "Cheng Xu; Ming Xu; Chanjuan Yin", "journal": "Computer Communications", "ref_id": "b222", "title": "Optimized multi-UAV cooperative path planning under the complex confrontation environment", "year": "2020" }, { "authors": "D Xu; G Chen", "journal": "The Aeronautical Journal", "ref_id": "b223", "title": "Autonomous and cooperative control of UAV cluster with multi-agent reinforcement learning", "year": "2022" }, { "authors": "Dan Xu; Gang Chen", "journal": "Aerospace Systems", "ref_id": "b224", "title": "The research on intelligent cooperative combat of UAV cluster with multi-agent reinforcement learning", "year": "2022-03-01" }, { "authors": "Mengdi Xu; Zuxin Liu; Peide Huang; Wenhao Ding; Zhepeng Cen; Bo Li; Ding Zhao", "journal": "", "ref_id": "b225", "title": "Trustworthy reinforcement learning against intrinsic vulnerabilities: Robustness, safety, and generalizability", "year": "2022" }, { "authors": "Min Yang; Weiyi Huang; Wenting Tu; Qiang Qu; Ying Shen; Kai Lei", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b226", "title": "Multitask Learning and Reinforcement Learning for Personalized Dialog Generation: An Empirical Study", "year": "2021-05" }, { "authors": "Yaodong Yang; Rui Luo; Minne Li; Ming Zhou; Weinan Zhang; Jun Wang", "journal": "PMLR", "ref_id": "b227", "title": "Mean Field Multi-Agent Reinforcement Learning", "year": "2018" }, { "authors": "Dayong Ye; Tianqing Zhu; Zishuo Cheng; Wanlei Zhou; Philip S Yu", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b228", "title": "Differential Advising in Multiagent Reinforcement Learning", "year": "2022" }, { "authors": "Dayong Ye; Tianqing Zhu; Sheng Shen; Wanlei Zhou; Philip S Yu", "journal": "IEEE Transactions on Dependable and Secure Computing", "ref_id": "b229", "title": "Differentially Private Multi-Agent Planning 
for Logistic-Like Problems", "year": "2022" }, { "authors": "Dayong Ye; Tianqing Zhu; Wanlei Zhou; Philip S Yu", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b230", "title": "Differentially Private Malicious Agent Avoidance in Multiagent Advising Learning", "year": "2020" }, { "authors": "Li Yinggang; Tong Xiangrong", "journal": "", "ref_id": "b231", "title": "Social Recommendation System Based on Multi-agent Deep Reinforcement Learning", "year": "2022" }, { "authors": "Chao Yu; Xin Wang; Xin Xu; Minjie Zhang; Hongwei Ge; Jiankang Ren; Liang Sun; Bingcai Chen; Guozhen Tan", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b232", "title": "Distributed Multiagent Coordinated Learning for Autonomous Driving in Highways Based on Dynamic Coordination Graphs", "year": "2020" }, { "authors": "Chen Yu; Weinan Zhang; Hang Lai; Zheng Tian; Laurent Kneip; Jun Wang", "journal": "", "ref_id": "b233", "title": "Multi-embodiment Legged Robot Control as a Sequence Modeling Problem", "year": "2022" }, { "authors": "Tingting Yuan; Hwei-Ming Chung; Xiaoming Fu", "journal": "", "ref_id": "b234", "title": "PP-MARL: Efficient Privacy-Preserving MARL for Cooperative Intelligence in Communication", "year": "2022" }, { "authors": "Renos Zabounidis; Joseph Campbell; Simon Stepputtis; Dana Hughes; Katia P Sycara", "journal": "PMLR", "ref_id": "b235", "title": "Concept Learning for Interpretable Multi-Agent Reinforcement Learning", "year": "2023" }, { "authors": "Sihan Zeng; Malik Aqeel Anwar; Thinh T Doan; Arijit Raychowdhury; Justin Romberg", "journal": "PMLR", "ref_id": "b236", "title": "A decentralized policy gradient approach to multi-task reinforcement learning", "year": "2021" }, { "authors": "Amy Zhang; Clare Lyle; Shagun Sodhani; Angelos Filos; Marta Kwiatkowska; Joelle Pineau; Yarin Gal; Doina Precup", "journal": "PMLR", "ref_id": "b237", "title": "Invariant Causal Prediction for Block MDPs", "year": "2020" }, { "authors": "Fuxiang Zhang; Chengxing Jia; Yi-Chen Li; Lei Yuan; Yang Yu; Zongzhang Zhang", "journal": "", "ref_id": "b238", "title": "Discovering Generalizable Multi-agent Coordination Skills from Multi-task Offline Data", "year": "2023" }, { "authors": "Gengzhi Zhang; Liang Feng; Yaqing Hou", "journal": "PMLR", "ref_id": "b239", "title": "Multi-task Actor-Critic with Knowledge Transfer via a Shared Critic", "year": "2021" }, { "authors": "Jia-Dong Zhang; Zhixiang He; Chi-Yin Wing-Ho Chan; Chow", "journal": "Knowledge-Based Systems", "ref_id": "b240", "title": "DeepMAG: Deep reinforcement learning with multi-agent graphs for flexible job shop scheduling", "year": "2023" }, { "authors": "Kaiqing Zhang; Tao Sun; Yunzhe Tao; Sahika Genc; Sunil Mallya; Tamer Basar", "journal": "Curran Associates, Inc", "ref_id": "b241", "title": "Robust Multi-Agent Reinforcement Learning with Model Uncertainty", "year": "2020" }, { "authors": "Kaiqing Zhang; Zhuoran Yang; Tamer Başar", "journal": "Frontiers of Information Technology & Electronic Engineering", "ref_id": "b242", "title": "Decentralized multi-agent reinforcement learning with networked agents: recent advances", "year": "2021-06-01" }, { "authors": "Tianhao Zhang; Qiwei Ye; Jiang Bian; Guangming Xie; Tie-Yan Liu", "journal": "", "ref_id": "b243", "title": "MFVFD: A Multi-Agent Q-Learning Approach to Cooperative and Non-Cooperative Tasks", "year": "2021-08" }, { "authors": "Xianjie Zhang; Yu Liu; Xiujuan Xu; Qiong Huang; Hangyu Mao; Anil Carie", "journal": "Neurocomputing", "ref_id": "b244", "title": "Structural 
relational inference actor-critic for multi-agent reinforcement learning", "year": "2021" }, { "authors": "Yang Zhang; Chenwei Zhang; Xiaozhong Liu", "journal": "Association for Computing Machinery", "ref_id": "b245", "title": "Dynamic Scholarly Collaborator Recommendation via Competitive Multi-Agent Reinforcement Learning (RecSys '17)", "year": "2017" }, { "authors": "Yi Zhang; Haihua Zhu; Dunbing Tang; Tong Zhou; Yong Gui", "journal": "Robotics and Computer-Integrated Manufacturing", "ref_id": "b246", "title": "Dynamic job shop scheduling based on deep reinforcement learning for multi-agent manufacturing systems", "year": "2022" }, { "authors": "Wenshuai Zhao; Jorge Peña Queralta; Tomi Westerlund", "journal": "", "ref_id": "b247", "title": "Sim-to-Real Transfer in Deep Reinforcement Learning for Robotics: a Survey", "year": "2020" }, { "authors": "Chenyang Zheng; Xiangyu Si; Lei Sun; Zhang Chen; Linghao Yu; Zhiqiang Tian", "journal": "SPIE", "ref_id": "b248", "title": "Multi-agent reinforcement learning for prostate localization based on multi-scale image representation", "year": "2021" }, { "authors": "Hua Zheng; Jiahao Zhu; Wei Xie; Judy Zhong", "journal": "BMC Medical Informatics and Decision Making", "ref_id": "b249", "title": "Reinforcement learning assisted oxygen therapy for COVID-19 patients under intensive care", "year": "2021-12-17" }, { "authors": "Ming Zhou; Jun Luo; Julian Villella; Yaodong Yang; David Rusu; Jiayu Miao; Weinan Zhang; Montgomery Alban; Iman Fadakar; Zheng Chen; Chongxi Huang; Ying Wen; Kimia Hassanzadeh; Daniel Graves; Zhengbang Zhu; Yihan Ni; Nhat Nguyen; Mohamed Elsayed; Haitham Ammar; Alexander Cowen-Rivers; Sanjeevan Ahilan; Zheng Tian; Daniel Palenicek; Kasra Rezaee; Peyman Yadmellat; Kun Shao; Baokuan Dong Chen; Hongbo Zhang; Jianye Zhang; Wulong Hao; Jun Liu; Wang", "journal": "PMLR", "ref_id": "b250", "title": "SMARTS: An Open-Source Scalable Multi-Agent RL Training School for Autonomous Driving", "year": "2021" }, { "authors": "Wei Zhou; Dong Chen; Jun Yan; Zhaojian Li; Huilin Yin; Wanchen Ge", "journal": "Autonomous Intelligent Systems", "ref_id": "b251", "title": "Multi-agent reinforcement learning for cooperative lane changing of connected and autonomous vehicles in mixed traffic", "year": "2022" }, { "authors": "Xingyu Zhou", "journal": "Proc. ACM Meas. Anal. Comput. Syst", "ref_id": "b252", "title": "Differentially Private Reinforcement Learning with Linear Function Approximation", "year": "2022-02" }, { "authors": "Ziyuan Zhou; Guanjun Liu", "journal": "", "ref_id": "b253", "title": "Romfac: A robust mean-field actor-critic reinforcement learning against adversarial perturbations on states", "year": "2022" }, { "authors": "Changxi Zhu; Mehdi Dastani; Shihan Wang", "journal": "", "ref_id": "b254", "title": "A survey of multi-agent reinforcement learning with communication", "year": "2022" }, { "authors": "Yiting Zhu; Zhaocheng He; Guilong Li", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b255", "title": "A bi-Hierarchical Game-Theoretic Approach for Network-Wide Traffic Signal Control Using Trip-Based Data", "year": "2022" }, { "authors": "Matthieu Zimmer; Claire Glanois; Umer Siddique; Paul Weng", "journal": "PMLR", "ref_id": "b256", "title": "Learning Fair Policies in Decentralized Cooperative Multi-Agent Reinforcement Learning", "year": "2021" } ]
[ { "formula_coordinates": [ 5, 224.93, 197.68, 215.24, 17.08 ], "formula_id": "formula_0", "formula_text": "𝑎 𝑖 ∈A 𝑖 𝛾 { Q 𝑗 (𝑠 𝑡 , 𝒂 𝑡 )} 𝑗 ∈ {1,...,𝑁 } -Q𝑖 (𝑠 𝑡 , 𝒂 𝑡 ) .(9)" }, { "formula_coordinates": [ 5, 116.61, 607.64, 323.56, 12.94 ], "formula_id": "formula_1", "formula_text": "▽ 𝜃 𝑖 J 𝑖 (𝜃 ) = E 𝑠∼𝜇 𝝅 𝜃 𝑖 ▽ 𝜃 𝑖 𝑙𝑜𝑔 𝜋 𝜃 𝑖 𝑎 𝑖 |𝑠 ▽ 𝑎 𝑖 𝑄 𝜋 𝜃 𝑖 (𝑠, 𝒂) | 𝒂=𝝅 𝜃 (𝑠) .(10)" }, { "formula_coordinates": [ 19, 45.46, 481.75, 394.71, 61.28 ], "formula_id": "formula_2", "formula_text": "𝑁 , S, A 1 , • • • , A 𝑁 , 𝑅, C 1 , • • • , C 𝑁 , 𝒄 1 , • • • , 𝒄 𝑁 , 𝑝, 𝛾 , where 𝑅 : S × A 1 × • • • × A 𝑁 × S → R is the joint reward function, C 𝑖 = {𝐶 𝑖 𝑗 } 𝑖 ≤𝑁 1≤ 𝑗 ≤𝑚 𝑖 is a set of cost function of agent 𝑖 (𝑚 𝑖 is the number of cost functions of agent 𝑖), 𝐶 𝑖 𝑗 : S × A 1 × • • • × A 𝑁 × S → R is the cost function, and 𝒄 𝑖 = {𝑐 𝑖 𝑗 } 𝑖 ≤𝑁 1≤ 𝑗 ≤𝑚 𝑖 ∈ R is cost-constraining values." }, { "formula_coordinates": [ 19, 140.54, 579.66, 295.83, 78.58 ], "formula_id": "formula_3", "formula_text": "J (𝝅) = E 𝝅 ∞ ∑︁ 𝑡 =0 𝛾 𝑡 𝑅 (𝑠 𝑡 , 𝒂 𝑡 , 𝑠 𝑡 +1 |𝑠 0 = 𝑠) , 𝑠.𝑡 .J 𝑖 𝑗 (𝝅) = E 𝝅 ∞ ∑︁ 𝑡 =0 𝛾 𝑡 𝐶 𝑖 𝑗 (𝑠 𝑡 , 𝒂 𝑡 , 𝑠 𝑡 +1 |𝑠 0 = 𝑠) ≤ 𝑐 𝑖 𝑗 , ∀𝑗 = 1, • • • , 𝑚 𝑖 . (11" }, { "formula_coordinates": [ 19, 436.37, 615.77, 3.8, 8.84 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 21, 45.58, 325.11, 394.59, 23.12 ], "formula_id": "formula_5", "formula_text": ": S → PD A 1 × • • • × A 𝑁 and the joint adversarial perturbation 𝒗 : S → B 1 × • • • × B 𝑀 ," }, { "formula_coordinates": [ 21, 69.87, 354.25, 370.3, 24.89 ], "formula_id": "formula_6", "formula_text": "V 𝑖 * (𝑠) = 𝑚𝑎𝑥 𝜋 𝑖 ( • |𝑠) 𝑚𝑖𝑛 𝑣 ∑︁ 𝒂 ∈A 1 ו••×A 𝑁 𝝅 (𝒂|𝑠, 𝒗 (𝑠)) ∑︁ 𝑠 ′ ∈S 𝑝 (𝑠 ′ |𝑠, 𝒂) 𝑅 𝑖 (𝑠, 𝒂, 𝑠 ′ ) + 𝛾 V 𝑖 * (𝑠 ′ ) ,(12)" }, { "formula_coordinates": [ 21, 82.72, 458.61, 357.45, 24.89 ], "formula_id": "formula_7", "formula_text": "V 𝑖 * (𝑠) = 𝑚𝑎𝑥 𝜋 𝑖 ( • |𝑠) 𝑚𝑖𝑛 R𝑖 ∈ R𝑖 , p ∈ p ∑︁ 𝒂 ∈A 1 ו••×A 𝑁 𝝅 (𝒂|𝑠) ∑︁ 𝑠 ′ ∈S p (𝑠 ′ |𝑠, 𝒂) R𝑖 (𝑠, 𝒂, 𝑠 ′ ) + 𝛾 V 𝑖 * (𝑠 ′ )(13)" } ]
2024-02-04
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b7", "b8", "b9", "b10", "b13", "b6", "b14", "b6", "b15", "b16", "b5", "b17", "b18" ], "table_ref": [], "text": "E QUIVARIANCE or invariance under some transforma- tions of input is a desired characteristic for many applications including computer vision tasks. To achieve equivariant deep learning, researchers generally adopt two different approaches: data augmentation [1], [2], and group-equivariant network architectures [3]. Data augmentation is an efficient and popular method for enhancing a given model's group equivariance. However, there is no guarantee for group equivariance with respect to unseen data.\nGroup equivariant neural networks are another way to enhance group equivariant deep learning by focusing on the neural network's architecture itself. Convolutional neural networks (CNNs), one of the most widespread deep neural network architectures in computer vision, shows a desirable property of translation equivariance due to its \"sliding window\" strategy inspired by human vision [4], [5]. In recent years, a sheer amount of publications have emerged aiming at developing and applying more advanced group equivariant CNNs to improve CNN's sample efficiency and generalizability [6]- [8]. The concept of group equivariant CNN (G-CNN) was first proposed by Cohen and Welling in [9], which exploited a higher degree of weight sharing by increasing the number of convolutional channels with the periodical rotation of the same convolutional kernel. This idea was further extended in [10] by introducing steerable filters which decomposed the convolutional kernel with an orthogonal basis of roto-reflection groups.\nFollowing the work of rotation equivariant CNN, in recent years, there have been a lot of studies based on filter decomposition for exploring scale equivariant CNN [11]- [14], and scale-rotation equivariant CNN [7], [15]. Attention mechanisms have been introduced in [7], [16] to help better identify optimal filter banks and boost equivariance performance. The idea of group equivariance has also been introduced to transformer networks to improve the transformer's data efficiency. Apart from filter decomposition, more recently, the feature alignment has also proven to be helpful for improving CNN's group equivariance against affine image transforms [17].\nThe existing works for filter-decomposition-based group equivariant CNN all require increasing channel numbers to increase parameter sharing degree, which brings in a heavy computational burden [6] and hence hampers their practical application to complex neural network architectures. Due to the computational burden needed for considering one kind of transform equivariance, the existing works of affine G-CNN are limited to transforms such as scaling, rotation, and reflection. So far, further including the shear transform is rarely considered in the conventional framework of affine G-CNN.\nIn addition, it has been shown that neural networks with greater depth and a larger number of parameters usually have better generalization performance [18], [19]. The heavy computational burden of one single group equivariant layer makes it difficult to apply parameter-sharing G-CNN to large neural network models. 
In this work, we show that the proposed efficient non-parameter-sharing G-CNNs can achieve superior performance to parameter-sharing G-CNNs when combined with advanced neural network architectures.\nIn this paper, we propose an efficient implementation of non-parameter-sharing G-CNNs based on an adaptive aggregation of Monte Carlo augmented decomposed filters. The contribution of this paper is embodied in four aspects:\n• We propose an efficient non-parameter-sharing group equivariant network, which serves as an efficient extension of standard CNN. We give theoretical proof of how the group equivariance is guaranteed with conventional neural network training. • Thanks to the convenience of weighted Monte Carlo (MC) sampling in implementation, our work can consider a more flexible mix of different transforms, we thereby introduce shear transform G-CNN and demonstrate its potential to improve G-CNNs' performance on natural images.\n• Our non-parameter-sharing G-CNNs achieve superior performance to parameter-sharing-based G-CNNs when combined with advanced neural network architectures. Our approach does not increase the computation burden and achieves high parameter and data efficiency compared with standard CNNs. • With a set of suitable filter bases, the proposed networks serve as promising alternatives to standard CNNs for both image classification and image denoising tasks.\nCompared with standard CNNs, the proposed methods are good at exploiting large convolutional kernels efficiently, which helps build an efficient lightweight imagedenoising network. The paper is organized as follows: In the Methods section, we review the general framework of the group-equivariant model and introduce the details of our approach. We show the experimental results and discussions in the Experiments section and conclude the paper in the Conclusion section." }, { "figure_ref": [], "heading": "II. METHODS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. The general framework of group-equivariant model", "publication_ref": [ "b19", "b2", "b2", "b21", "b2" ], "table_ref": [], "text": "Group [20] is a classical mathematical concept, which is defined to be a set with a corresponding binary operation that is associative, contains an identity element, and has an inverse element for each element. In this paper, all the discussed groups are assumed to be locally compact groups. Following [3], we will briefly introduce the definition of group equivariant mapping and group convolution.\n1) Group equivariance : In this paper, we consider a group G for the affine transformations on 2D images R 2 , which can be written as G = R 2 ⋊ A, a semidirect product between the translation group R 2 and another affine transform group A (the general linear group whose group element for 2D images takes the representation of a 2 × 2 matrix). Its group product rule is defined as\ng 1 • g 2 = (x 1 , M (a 1 )) • (x 2 , M (a 2 )) = (x 1 + M (a 1 )x 2 , M (a 1 )M (a 2 )),(1)\nwhere \"•\" denotes the group product operator,\ng 1 = (x 1 , M (a 1 )), g 2 = (x 2 , M (a 2 )) with x 1 , x 2 ∈ R 2 , a 1 , a 2 ∈\nR d , and function M : R d → A with d the number of parameters for the decomposition of the transform matrix. 
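To make the product rule in equation (1) concrete, the short sketch below (our own NumPy illustration, not code from the paper; the helper names compose and act are hypothetical) checks numerically that acting with two group elements in sequence is the same as acting with their product.

```python
import numpy as np

def compose(g1, g2):
    """Product rule of equation (1): (x1, M1) . (x2, M2) = (x1 + M1 x2, M1 M2)."""
    x1, M1 = g1
    x2, M2 = g2
    return (x1 + M1 @ x2, M1 @ M2)

def act(g, p):
    """Action of g = (x, M) on a 2D coordinate p, i.e. p -> M p + x."""
    x, M = g
    return M @ p + x

rng = np.random.default_rng(0)
g1 = (rng.normal(size=2), rng.normal(size=(2, 2)))
g2 = (rng.normal(size=2), rng.normal(size=(2, 2)))
p = rng.normal(size=2)

# Acting with g1 after g2 agrees with acting with the composed element g1 . g2.
print(np.allclose(act(g1, act(g2, p)), act(compose(g1, g2), p)))  # True
```

Any concrete choice of M(a), such as the scaling-rotation-shear decomposition introduced next, can be plugged into this composition rule unchanged.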
Without loss of generality, we can consider the following decomposition of the transform matrix, in particular, let d = 5 and for any a = (α, β, θ, s, r) with α, β, θ, s, r ∈ R, M (a) = R(θ)A 1 (α)A 2 (β)S 1 (s)S 2 (r), where\nS 1 (s) = 1 s 0 1 ,(2)\nS 2 (r) = 1 0 r 1 ,(3)\nA 1 (α) = 2 α 0 0 1 ,(4)\nA 2 (β) = 1 0 0 2 β ,(5)\nR(θ) = cos θ sin θ -sin θ cos θ .(6)\nIt should be noted that the most existing works on G-CNN only consider translation, scaling, rotation, and mirror transforms.\nIn this work, shear transform is included to form a more general case and explore its potential for boosting G-CNN's performance on natural images. For a group element of the affine transformation group g ∈ G, there is a corresponding group action on an index set X , i.e., a transformation T : G × X → X for the index set. And for any g 1 , g 2 ∈ G and x ∈ X , we have\nT (g 1 • g 2 , x) = T (g 1 , T (g 2 , x)).(7)\nThe corresponding transformation T g for any function f : X → C can be further defined as\nT g : f → f ′ where f ′ (T (g, x)) = f (x).\nWith the concept of group and group actions, we can now define the group equivariant map. Suppose we have a function f : X → V to be the input image or feature map of a neural network layer with V as a vector space. Let L V (X ) denote the Banach space of functions f : X → V . Consider a map φ :\nL V1 (X 1 ) → L V2 (X 2 ) between two function spaces L V1 (X 1 ) : {f : X 1 → V 1 } and L V2 (X 2 ) : {f : X 2 → V 2 }. For g ∈ G, we have T g and T ′\ng to be G actions corresponding to set X 1 and X 2 , as well as T g and T ′ g . The map φ is group equivariant if and only if\n∀g ∈ G, φ(T g (f )) = T ′ g (φ(f ))(8)\n2) Group convolution: A standard convolution of functions f with a kernel ψ: R → R, is a translation-equivariant map, which can be written as\n(ψ * f )(x) = ψ(-x + x ′ )f (x ′ )dx ′ ,(9)\nGroup convolution is a generalization of standard convolution by introducing the group operation. The group convolution [3] [21] [22] [7] on a compact group G at group element g is written as\n(ψ * f )(g) = G ψ(g -1 • g ′ )f (g ′ )dµ(g ′ ) (10\n)\nwhere µ is the Haar measure, and f, ψ : G → C. It should be noted that plain convolution is a special case of group convolution when only the translation group is considered (i.e., g -1 = -x; g ′ = x ′ and the \"•\" corresponds to \"+\"). [3] proved that the group convolution defined in equation ( 10) is a group-equivariant map." }, { "figure_ref": [], "heading": "B. Adaptive aggregation of Monte Carlo augmented decomposed filters", "publication_ref": [ "b22", "b9", "b23", "b23", "b24", "b25", "b26", "b28", "b9" ], "table_ref": [], "text": "In a discrete implementation of group convolution, the numerical integration is usually implemented based on the trapezoidal rule [23] using evenly sampled group elements g ′ in equation (10). For each input feature map channel (when considering many different kinds of affine transforms such as scaling, rotation, and mirror), nested integrals are needed, i.e. one nested integral per transform is considered. By this, the approach increases the computation burden exponentially with the number of considered transforms, which leads to the curse of dimensionality [24]. 
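As a concrete illustration of this growth, the sketch below (our own, with arbitrary parameter ranges; the function name M is only a stand-in for the decomposition above) builds M(a) from equations (2)-(6) and contrasts an evenly sampled grid of transforms with a fixed number of Monte Carlo draws.

```python
import numpy as np
from itertools import product

def M(alpha, beta, theta, s, r):
    """M(a) = R(theta) A1(alpha) A2(beta) S1(s) S2(r) from equations (2)-(6)."""
    S1 = np.array([[1.0, s], [0.0, 1.0]])
    S2 = np.array([[1.0, 0.0], [r, 1.0]])
    A1 = np.array([[2.0 ** alpha, 0.0], [0.0, 1.0]])
    A2 = np.array([[1.0, 0.0], [0.0, 2.0 ** beta]])
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return R @ A1 @ A2 @ S1 @ S2

m = 4                                        # grid points per transform parameter
grid_1d = np.linspace(-0.5, 0.5, m)          # illustrative parameter range only
grid = [M(*a) for a in product(grid_1d, repeat=5)]
print(len(grid))                             # m**n = 4**5 = 1024 transform matrices

rng = np.random.default_rng(0)
mc_params = rng.uniform(-0.5, 0.5, size=(16, 5))  # 16 Monte Carlo draws, independent of n
mc = [M(*a) for a in mc_params]
print(len(mc))                               # 16
```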
For example, when we have m different elements per transform and n transforms, this amounts to m n terms to be evaluated.\nTo improve the flexibility of group convolution for the general affine transform group and avoid the curse of dimensionality, in this work, we propose to approximate the multi-dimensional integral over group operations in the group convolution by MC integration.\n1) Monte Carlo integration: MC integration is known to tackle high-dimensional integration with robust convergence independent of the number of dimensions [24]. We consider for brevity only the standard MC variant, being aware that more efficient schemes such as Quasi-MC have the potential to substantially increase the performance further [25], [26].\nFor a multi-dimensional Monte Carlo integral, we have the theorem [27]- [29] as follows, Theorem II.1. Let µ p be a probabilistic measure on (R d , B(R d )), i.e., µ p (R d ) = 1, and B(R d ) denotes the Borel algebra on R d with d the number of dimensions. For f ∈ L 2 (R d , B(R d ), µ p ), we define\nI(f ) = R d f (x)dµ p (x),(11)\nand\nQ N (f ) = 1 N N i=1 f (ξ i ),(12)\nwhere (ξ i ) i∈N is an i.i.d sequence of random variables with distributions µ p . We have\nQ N (f ) → I(f ) when N → +∞.\nFor all N ∈ N, there is\n(E I(f ) -Q N (f ) 2 ) 1/2 = σ(f )/ √ N , (13\n)\nwhere σ 2 (f ) = I(f 2 ) -(I(f )) 2 , and • is the l 2 norm.\nThe Haar measure in (10) can be considered to be a corresponding probabilistic measure µ p . Therefore, it is theoretically justified to apply MC sampling for the discrete implementation of G-CNN." }, { "figure_ref": [ "fig_3" ], "heading": "2) Discrete implementation of G-CNN with MC integration:", "publication_ref": [ "b19", "b19", "b29", "b30", "b31", "b32", "b33" ], "table_ref": [], "text": "In the discrete implementation, we stochastically sample the group operations including, in our example, scaling, rotation, and shear transform. This approach allows a more flexible choice of the number of used transformations and decouples the relationship between the number of output channels and the number of categories of considered transformations.\nSpecifically, when we consider a filter W = w • ψ with a fixed base filter ψ and w the trainable scalar weight, a continuous CNN layer can be written as\nf (l+1) co (x) = ci w (l) co,ci (ψ * f (l) ci )(x) = ci R 2 w (l) co,ci ψ(u -x)f (l) ci (u)du(14)\nA corresponding discrete implementation of the convolutional layer 1 of l-th layer is as below\nf (l+1) co (x) = ci u w (l) co,ci ψ(u -x)f (l) ci (u)(15)\nwhere x, u ∈ R 2 , ψ(•) denotes the spatial convolutional filter function with a domain of translation group\nR 2 , c i ∈ [1, C l ] and c o ∈ [1, C l+1 ]. 
$f^{(l)}_{c_i}(x)$ is the feature map of the $l$-th layer and $w^{(l)}_{c_o,c_i}$ is the filter weight of the $l$-th layer for output channel $c_o$ and input channel $c_i$.

A continuous affine group equivariant CNN can be written as

$$f^{(l+1)}_{c_o}(g) = \sum_{c_i} w^{(l)}_{c_o,c_i}\,(\psi * f^{(l)}_{c_i})(g) = \sum_{c_i} \int_G w^{(l)}_{c_o,c_i}\, \psi(g^{-1} \cdot g')\, f^{(l)}_{c_i}(g')\, d\mu(g') \tag{16}$$

Let $g = (x, M(a))$ and $g' = (u, M(b))$. We can rewrite the Haar integration in a group convolution of the $l$-th layer as

$$f^{(l+1)}_{c_o}(x, a) = \sum_{c_i} \int_{\mathbb{R}^d} \int_{\mathbb{R}^2} w^{(l)}_{c_o,c_i}\, 2^{-\alpha_b - \beta_b}\, \psi\bigl(-x + M(-a)u,\; M(-a)M(b)\bigr)\, f^{(l)}_{c_i}(u, b)\, du\, db \tag{17}$$

where we have the transform parameter vectors $a = [\alpha_a, \beta_a, \theta_a, s_a, r_a]$ and $b = [\alpha_b, \beta_b, \theta_b, s_b, r_b]$. A typical corresponding discrete G-CNN can be written as below:

$$f^{(l+1)}_{c_o}(x, a) = \sum_{c_i} \sum_b \sum_u w^{(l)}_{c_o,c_i}\, 2^{-\alpha_b - \beta_b}\, \psi\bigl(-x + M(-a)u,\; M(-a)M(b)\bigr)\, f^{(l)}_{c_i}(u, b) \tag{18}$$

In particular, the sum over the parameter vector $b$ is a three-layer nested sum corresponding to the nested integrals in the continuous domain, which, as mentioned in previous sections, leads to a heavy computational burden.

The Monte Carlo integration considers $a$ and $b$ as random variables. Suppose their entries are $\alpha = \xi_\alpha$, $\beta = \xi_\beta$, $\theta = \xi_\theta$, $s = \tan(\xi_s)$ and $r = \tan(\xi_r)$, where $\xi_\alpha$, $\xi_\beta$, $\xi_\theta$, $\xi_s$ and $\xi_r$ are uniformly distributed in the ranges $[\eta^1_\alpha, \eta^2_\alpha)$, $[\eta^1_\beta, \eta^2_\beta)$, $[-\eta_\theta, \eta_\theta)$, $[-\eta_s, \eta_s)$ and $[-\eta_r, \eta_r)$, respectively.

Suppose we draw $N'$ samples of $a$ and $N$ samples of $b$, respectively. The nested sum over $b$ then collapses into a one-dimensional sum over $N$ samples for MCG-CNN (Monte Carlo Group-equivariant CNN):

$$f^{(l+1)}_{c_o}(x, a_{n'}) = \sum_{c_i} \sum_n \sum_u w^{(l)}_{c_o,c_i}\, 2^{-\alpha_{b_n} - \beta_{b_n}}\, \psi\bigl(-x + M(-a_{n'})u,\; M(-a_{n'})M(b_n)\bigr)\, f^{(l)}_{c_i}(u, b_n) \tag{19}$$

where $n' \in \{1, \ldots, N'\}$ and $n \in \{1, \ldots, N\}$.

3) Adaptive aggregation of MC-augmented filters: The Monte Carlo approximation of G-CNN allows a flexible choice of the number of sampling points $N$ per trainable weight $w^{(l)}$ independent of the number of dimensions. However, compared with standard CNN, the computational burden of MCG-CNN is still $N$ times larger. To eliminate the difference in computational burden between MCG-CNN and standard CNN, we propose WMCG-CNN (Weighted Monte Carlo Group-equivariant CNN), which reduces the number of transformations per input feature map channel (also per trainable weight) $N$ to 1 and uses filter-weight-wise sampling instead. Specifically, we establish a one-to-one relationship between $b$, $c_o$ and $c_i$, as well as between $a$ and $c_o$, by using $c_o$ and $c_i$ to index $a$ and $b$. Thus we introduce the notation $b_{c_o,c_i}$ and $a_{c_o}$.

In this way, we yield WMCG-CNN with equation (19) simplified into

$$f^{(l+1)}_{c_o}(x, a_{c_o}) = \sum_{c_i} \sum_u w^{(l)}_{c_o,c_i}\, 2^{-\alpha_{b_{c_o,c_i}} - \beta_{b_{c_o,c_i}}}\, \psi\bigl(-x + M(-a_{c_o})u,\; M(-a_{c_o})M(b_{c_o,c_i})\bigr)\, f^{(l)}_{c_i}(u, b_{c_o,c_i}). \tag{20}$$

WMCG-CNN allows us to significantly increase the number of used transformations without increasing the computational burden, which, as shown in the later experiments, helps WMCG-CNN achieve superior performance to traditional discrete G-CNN.

However, due to the changes made in WMCG-CNN, a question arises: under which circumstances can WMCG-CNN still be analogous to continuous G-CNN, as the discrete G-CNN is? Below, we show that random initialization of the trainable weights helps WMCG-CNN remain analogous to continuous G-CNN.

Theorem II.2. Let $f^{(l)}$ be an input feature map of the $l$-th layer with number of channels $C_l$, and for each channel let $N_H$ be the number of spatial sampling points along the vertical direction and $N_W$ the number of spatial sampling points along the horizontal direction.
A WMCG-CNN layer is group equivariant when the width of the CNN $C_l \to \infty$, $N_H \to \infty$, $N_W \to \infty$, and there exists a constant $C < +\infty$ such that $\int_{\mathbb{R}} w\, d\mu_w(w) < C$, with $\mu_w$ a probabilistic measure on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ for the filter weight $w$, which is a random variable.

Proof. To prove the theorem, we take two steps: first, we construct a weighted integration function $I$ and prove it is group equivariant. Then, we show that equation (20) corresponds to the discrete form of $I$.

1) Given $g = (x, M(a_{c_o}))$ and $g' = (u, M(b))$, we define the integration on $\mathbb{R} \times G$ as

$$I(x, a_{c_o}) = \int_{\mathbb{R} \times G} w \cdot \psi(g^{-1} \cdot g')\, f^{(l)}(g')\, d\mu(g')\, d\mu_w(w) = \int_{\mathbb{R}} \int_{\mathbb{R}^d} \int_{\mathbb{R}^2} w\, 2^{-\alpha_b - \beta_b}\, \psi\bigl(-x + M(-a_{c_o})u,\; M(-a_{c_o})M(b)\bigr)\, f^{(l)}(u, b)\, du\, db\, dw \tag{21}$$

Since $\int_{\mathbb{R}} w\, d\mu_w(w) < C$, we have the constant $C_w = \int_{\mathbb{R}} w\, d\mu_w(w)$. Thus

$$I(x, a_{c_o}) = C_w \cdot \int_{\mathbb{R}^d} \int_{\mathbb{R}^2} 2^{-\alpha_b - \beta_b}\, \psi\bigl(-x + M(-a_{c_o})u,\; M(-a_{c_o})M(b)\bigr)\, f^{(l)}(u, b)\, du\, db \tag{22}$$

which is group equivariant.

2) Let $q(x, a_{c_o}, b) = \int_{\mathbb{R}^2} 2^{-\alpha_b - \beta_b}\, \psi\bigl(-x + M(-a_{c_o})u,\; M(-a_{c_o})M(b)\bigr)\, f^{(l)}(u, b)\, du$, so we have

$$I(x, a_{c_o}) = \int_{\mathbb{R}} \int_{\mathbb{R}^d} w\, q(x, a_{c_o}, b)\, db\, dw \tag{23}$$

Now, we consider the transition from continuous to discrete formulations. Since both $w$ and $b$ are independently randomly sampled, with the samples indexed by $c_i$, according to Theorem II.1 we have

$$I(x, a_{c_o}) = \lim_{C_l \to \infty} \frac{1}{C_l} \sum_{c_i} w^{(l)}_{c_o,c_i}\, q(x, a_{c_o}, b_{c_o,c_i}) \tag{24}$$

Since $u$ is sampled based on the trapezoidal rule, we have

$$q(x, a_{c_o}, b_{c_o,c_i}) = \lim_{N_H \to +\infty} \lim_{N_W \to +\infty} \frac{1}{N_H N_W} \sum_u 2^{-\alpha_{b_{c_o,c_i}} - \beta_{b_{c_o,c_i}}}\, \psi\bigl(-x + M(-a_{c_o})u,\; M(-a_{c_o})M(b_{c_o,c_i})\bigr)\, f^{(l)}_{c_i}(u, b_{c_o,c_i}) \tag{25}$$

Meanwhile, we rewrite the corresponding convolution part of the WMCG-CNN equation (20) as

$$f^{(l+1)}_{c_o}(x, a_{c_o}) = \frac{1}{C_l N_H N_W} \sum_{c_i} \sum_u w^{(l)}_{c_o,c_i}\, 2^{-\alpha_{b_{c_o,c_i}} - \beta_{b_{c_o,c_i}}}\, \psi\bigl(-x + M(-a_{c_o})u,\; M(-a_{c_o})M(b_{c_o,c_i})\bigr)\, f^{(l)}_{c_i}(u, b_{c_o,c_i}) \tag{26}$$

where $c_i \in \{1, 2, \ldots, C_l\}$ and $u = (u_1, u_2)$ with $u_1 \in \{1, 2, \ldots, N_H\}$ and $u_2 \in \{1, 2, \ldots, N_W\}$. Here we include the coefficient $\frac{1}{C_l N_H N_W}$ so that $f^{(l+1)}_{c_o}$ is the average of the samples.

Therefore, by combining (24), (25) and (26), we have

$$I(x, a_{c_o}) = \lim_{C_l \to \infty} \lim_{N_H \to +\infty} \lim_{N_W \to +\infty} f^{(l+1)}_{c_o}(x, a_{c_o}) \tag{27}$$

The proof is completed.

As we know, random initialization of trainable weights is a common strategy adopted in most existing state-of-the-art deep learning methods. Theorem II.2 proves that the random weight initialization strategy, together with the MC-augmented filters, can raise the CNN to a good starting point before training with an optimization algorithm, which therefore makes it easier for the network to find the optimal solution. This starting point is a network that approximately satisfies convolutional-layer-wise group equivariance. A necessary condition for an optimal solution is that, in contrast to this approximate convolutional-layer-wise group equivariance, group equivariance is achieved at least approximately at the level of the entire neural network.

From Theorem II.1, we know that the convergence speed of Monte Carlo integration is slow. When the number of samples is small, the variance may not be satisfactory. However, with the weights $w$ as learnable parameters and the samples of transformations fixed, the neural network can learn an optimal weight distribution to improve the group equivariance, which will be shown in the later experiments (Fig. 3). Such a sampling mechanism is thereby similar to that of importance sampling [30].
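To make equation (20) more tangible, the following rough sketch (our own simplification, not the authors' released implementation; the class name WMCGConv2d and all hyperparameters are ours, the $2^{-\alpha_b-\beta_b}$ normalization and the Fourier-Bessel bases of the paper are omitted, and the resampling convention of grid_sample only approximates the one in the text) keeps one randomly transformed copy of a fixed base filter per (output, input) channel pair as a non-trainable buffer and learns only the scalar weight in front of it.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class WMCGConv2d(nn.Module):
    """Sketch of eq. (20): one randomly transformed base filter per (c_o, c_i) pair,
    scaled by a single learnable weight and applied with a standard convolution."""

    def __init__(self, c_in, c_out, k=5, pad=2):
        super().__init__()
        n = c_out * c_in
        yy, xx = torch.meshgrid(torch.linspace(-1, 1, k), torch.linspace(-1, 1, k), indexing="ij")
        psi = (2 - xx**2 - yy**2) * torch.exp(-(xx**2 + yy**2) / 2)  # fixed Mexican-hat-like base
        base = psi.expand(n, 1, k, k)

        # One random transform per filter: rotation, log2 scaling, and shear.
        theta = torch.empty(n).uniform_(-math.pi, math.pi)
        scale = 2.0 ** torch.empty(n).uniform_(0.0, 1.0)
        shear = torch.tan(torch.empty(n).uniform_(-0.25 * math.pi, 0.25 * math.pi))
        R = torch.stack([torch.stack([theta.cos(), theta.sin()], -1),
                         torch.stack([-theta.sin(), theta.cos()], -1)], -2)
        S = torch.zeros(n, 2, 2)
        S[:, 0, 0] = scale
        S[:, 1, 1] = scale
        S[:, 0, 1] = scale * shear
        A = torch.cat([R @ S, torch.zeros(n, 2, 1)], dim=-1)      # (n, 2, 3), no translation

        grid = F.affine_grid(A, (n, 1, k, k), align_corners=False)
        filters = F.grid_sample(base, grid, align_corners=False)  # base filter resampled under M(b)
        self.register_buffer("basis", filters.reshape(c_out, c_in, k, k))       # fixed, not trained
        self.weight = nn.Parameter(torch.randn(c_out, c_in, 1, 1) / (c_in * k))  # learnable scalars w
        self.pad = pad

    def forward(self, x):
        # Weighted, augmented filters fed to an ordinary conv2d.
        return F.conv2d(x, self.weight * self.basis, padding=self.pad)

layer = WMCGConv2d(c_in=8, c_out=16)
out = layer(torch.randn(2, 8, 32, 32))
print(out.shape)  # torch.Size([2, 16, 32, 32])
```

Because the augmented filters stay fixed while only the scalars in front of them are optimized, the learned weights end up playing a role reminiscent of importance-sampling weights over the drawn transformations.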
The difference is that the weight distribution in WMCG-CNN is not manually designed but is learned by iterative data-driven optimization algorithms for neural networks instead.\n4) Filter decomposition and the relationship to traditional CNN filters: In the previous section, we only consider one basis filter function ψ, to increase the expressiveness of networks, we adopt the filter decomposition approach to build convolutional filters by the sum of multiple weighted filter bases. Specifically, we have W co,ci,j the trainable weights, j ∈ [1, K], and K the chosen number of basis functions. In the proposed WMCG-CNN, according to equation ( 20) the WMCG-CNN can be written in a similar way to the standard CNN in equation ( 15) as below:\nf (l+1) (c o , x, a co ) = ci u 2 -α bc o,ci -β bc o,ci W (l) co,ci ( -x + M (-a co )u, M (-a co )M (b co,ci ))f (l) ci (u, b co,ci ),(28)\nIn the practical discrete implementation, the choice of filter basis can be various. For different datasets and different tasks, the optimal filter basis can be different. In the following experiments, we generally adopt two kinds of filter basis: the Fourier-Bessel (FB) basis [31], and the continuous Mexican hat (MH) wavelet basis [32], [33]. The scaling, rotation, and shear transformations are used to augment the FB filters. The MH filters are augmented by scaling, translation, and shear transformations. Supposing any filter basis is a matrix of size k × k, for FB basis, we can have k 2 -1 non-constant bases and a constant scalar basis at the most.\nThe 2D MH filters can be written as\nψ(σ x , σ y , x, y) = 1 2πσxσy [2 -x 2 σ 2 x -y 2 σ 2 y ]e -x 2 2σ 2 x -y 2 2σ 2 y(29)\nThe peak frequency of the MH wavelet function\nf x = 1 √ 2πσx\nand f y = 1 √ 2πσy [34], which can be used for scaling along x or y axis.\nIt should be noted that when using the bases consisting of translation-augmented discrete Dirac delta functions, the proposed methods fall back into standard CNN filters. Figure 1 shows examples of different filter bases.\nIn addition, theoretically, in the training phase, the computational burden of the proposed method is slightly higher than that of the corresponding standard CNN that uses the same size of the convolutional kernel. This is because of the weighted sum of filter bases. Yet, the weighted sum can be pre-calculated with the results stored in memory before doing an inference. Thus, in the inference phase, the network using the proposed method has exactly the same computational complexity as the corresponding standard CNN." }, { "figure_ref": [], "heading": "C. Extending to discrete groups with bootstrap resampling", "publication_ref": [ "b34" ], "table_ref": [], "text": "In the previous sections, we focus on continuous groups and how the weighted G-CNN can be approximated with the discretized implementation assuming an infinite number of filter samples. However, our method can also apply to cases where the number of available group elements is far less than the number of input-output channel pairs (in equation ( 20)). We can use the bootstrap resampling [35] method to make the number of augmented basis samples large enough to match each weighted input-output channel pair." }, { "figure_ref": [], "heading": "D. Integrating WMCG-CNN into the existing state-of-the-art CNN architectures", "publication_ref": [ "b35", "b35", "b36" ], "table_ref": [], "text": "We see that when ψj degenerates to a scalar, i.e. 
a 1×1 base filter, the convolution is obviously exactly group equivariant, while on the other hand, the non-scalar filter ψj requires a huge number of sampling points to approximate the continuous G-CNN. To leverage the advantage of 1 × 1 base filters, one can add 1 × 1-filter-based convolution layers as a secondary adaptive aggregation of features from the output channels of WMCG-CNN. By combining the 1 × 1 layer with the k × k convolution layer into a single unit or block, the total number of considered transformations is increased from C l to C l+1 C l (i.e., the number of all the k × k filters used in the l-th layer) with a relatively small increase of the number of parameters. In addition, the 1×1 CNN layer also helps to enrich the design space for WMCG-CNN, where the use of the small 1×1 kernel helps to achieve a high parameter efficiency given the same level of expressiveness and the same number of parameters [36].\nInterestingly, the secondary aggregation with a cascaded 1 × 1 convolutional layer is intrinsically similar to the bottleneck architecture that is adopted in all the state-of-the-art CNNs derived from ResNet [36]. The only difference is that the bottleneck architecture uses one extra 1 × 1 convolution layer before the k × k convolution layer.\nApart from 1 × 1 layers, we also note that the channel grouping convolution technique 3 proposed in ResNeXt [37] is also a helpful technique for improving CNN's performance.\nThanks to the flexibility of the proposed WMCG-CNN, we can easily combine these techniques with the WMCG-CNN. An example is shown in Fig. 2. Similar blocks but with different filter sizes will be used in later image denoising experiments." }, { "figure_ref": [], "heading": "III. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "We test WMCG-CNN on classification and regression tasks, such as image classification and image denoising. In the image classification part, we also conducted ablation experiments, and compared our method with the parameter-sharing group equivariant methods." }, { "figure_ref": [], "heading": "A. Performance metrics", "publication_ref": [], "table_ref": [], "text": "We adopt the following performance metrics: the number of trainable parameters in million (10 6 ), Params(M); the number of Multiply-Accumulate Operations in giga (10 9 ), MACs(G); the prediction error in percentage, Error(%); mean prediction error on corrupted validation image datasets in percentage, mCE(%); top 1 accuracy in percentage, top-1 acc.(%); top 5 accuracy in percentage, top-5 acc.(%); peak signal-to-noise ratio in dB, PSNR(dB); the degree of parameter-sharing, MACs/Params (G/M).\nIn addition, for the section of the ablation experiments, we define mean group-equivariant error (mGE) according to equation ( 8):\nmGE = E( φ(T g (f )) -T ′ g (φ(f )) )(30)\nwhere for each input image, a random affine transformation g ∈ G is selected with the shear range [-0.0625π, 0.0625π), the scaling range [1.0, 1.1) and rotation angle range [-0.125π, 0.125π)." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "B. Ablation experiments", "publication_ref": [ "b36", "b37", "b37" ], "table_ref": [], "text": "For ablation experiments, we consider a subset of the Im-ageNet1k dataset. ImageNet1k has 1.28 million color images 3 It should be noted that here the channel group is a concept that differs from the transformation group. 
The channel grouping convolution technique divides the input feature map channels into multiple channel groups of the same width to perform convolution operations separately.\nfor 1,000 different classes from WordNet. The validation dataset consists of 50,000 images. For quick experiments, we extract the first 40 classes for ablation experiments (i.e., from class n01440764 to class n01677366), and thus we denote the corresponding datasets as ImageNet40. We scale all the images to 224 × 224 resolution and normalize images in a classic way. The prediction Error (%) is used to measure the classification performance.\nWe use ResNet18, ResNet50, and ResNeXt50 [37] as the baseline networks. We follow the state-of-the-art robust training methods as in [38]. The neural networks are trained for 90 epochs with an initial learning rate of 0.01 following a cosine decay schedule. The Pixmix augmentation technique is used with its default setting as in [38]. Pixmix uses affine transformations (including translation, rotation, and shear transform) as well as other augmentation methods to generate augmented clean images. As for WMCG-CNN, we replace all the hidden non-1×1 CNN layers with the proposed WMCG-CNN layers. By default, the size of FB basis is 5 × 5, the number of basis per filter is 9 (We have the number of bases per filter 9 (the first 9 low frequency Bessel filters), the scaling range is [1.0, 2.0), the rotation angle range [-2π, 2π), and the shear transform angle range [-0.25π, 0.25π). It should be noted that in this work, considering the symmetry of the filter basis, by default, we keep the shear angle of S 2 (r) as zero for simplicity, and the scaling parameters for A 1 (α) and A 2 (β) are of the same value.\nFor simplicity, we name each version of the tested networks with a series of suffixes. Specifically, \"kn\" means the filter size is n × n. In the experiments with shear transforms, we use the suffix \"shear-n s π\" to denote shear transform angle [-n s π, n s π). We change the value of n s from 0.00 to 0.50π to test the effect of the shear transform. In particular, n s = 0.00 means that there is no shear transform applied. The suffix \"shear2-n s π\" means that S 2 (r) is also sampled independently with the range of shear transform angle as [-n s π, n s π). The suffix \"scale2\" means that, different from the default setting, A 1 (α) and A 2 (β) are sampled independently within the default scaling range. The suffix \"FullAug\" means that, different from the default setting, A 1 (α), A 2 (β), and S 2 (r) are all sampled independently within the default range, respectively. The suffix \"WT\" means that the MH wavelet filter basis is used. The suffix \"not\" means that there is no translation augmentation used in the experiments with the MH wavelet. In addition, with ResNet18 as a baseline network, we also tested the conventional parameter-sharing scale-equivariant CNN, and the proposed MC scale-equivariant CNN. The suffix \"scale-n\" means the use of n scaling transformation α = {0, 1/n, 2/n, . . . , (n -1)/n, 1}. The suffix \"MC-scalen\" means using n MC-augmented scaling transformations. The suffix \"MC-affine-n\" means using n affine (scaling-rotationshear) transformations. For the implementation of \"scale-n\", \"MC-scale-n\", and \"MC-affine-n\", we draw n samples of transformation for the input feature map, and we only have 1 sample of transformation for the corresponding output feature map to avoid the computational burden from becoming too heavy. 
\"width1/n\" means that the total width of output feature maps is reduced to 1/n by decreasing the number of channels Figure 3a shows the mGE results for the first hidden CNN layer of ResNet18 and ResNet18-WMCG-shear-0.25π for the first 10-epoch training on ImageNet dataset. We see that compared with ResNet18, WMCG-CNNs start from a lower mGE but continues to converge smoothly. For WMCG network, the larger basis can give a better equivariant error but increases the computational burden. Figure 3b shows that the distribution of the learned weights is centered around zero.\nTable I shows the results of ablation experiments on Im-ageNet40, where the results with respect to Params(M) and MACs(G) are also displayed. We see that the shear transform with a suitable range shear angle is helpful for increasing WMCG-CNN's performance. In all the following experiments, we adopt n s = 0.25 for FB basis and n s = 0.5 for the MH basis by default if not explicitly stated. It should be noted that the MH basis additionally uses translation augmentation that proves to help increase its performance.\nThe results with ResNet18-k5-WMCG-shear2-0.25π, ResNet18-k5-WMCG-scale2-shear-0.25π, and ResNet18-k5-WMCG-FullAug show how a higher degree of augmentation of filter bases affects the performance. It is shown that shear transform augmentation along both directions gives a limited benefit when compared with shear augmentation along a single direction only. Scaling independently along two different directions can reduce the performance, which may be because in neural networks the scaling (such as pooling operations) is usually performed with the same scaling value along the two directions. Full augmentation can further hurt performance, which may corrupt filter information in the discrete implementation.\nAbout the results with different versions of ResNet18-WMCG-k5-nb1, we see the choice of FB basis affects the prediction performance significantly. Low-frequency basis, i.e., Bessel basis of low order, is shown to be more important than high-frequency basis. Therefore, to select a fixed number of bases, we must include the low-order Bessel basis first.\nThe conventional scale-equivariant CNN architecture ResNet18-k5-scale-4 has a decent prediction error. However, the computational burden is extremely high. When we try to reduce the computational burden by decreasing the width of the network to get ResNet18-k5-scale-4-width1/4, the number of trainable parameters is reduced significantly which leads to poorer prediction performance. The MCG-CNN also has a heavy computational burden and is superior to its corresponding G-CNN when we use a larger number of transformations and more transformation types (such as ResNet18-k5-MCaffine-16-width1/16).\nAmong the tested ResNet baseline architectures, the results with ResNet18 give the lowest mean error, which indicates that the deeper models such as ResNet50 and ResNeXt50 suffer from over-fitting because the number of classes is reduced from 1k to 40. However, the WMCG-CNN can reduce the over-fitting consistently for all the considered baseline models. WMCG-CNN versions of ResNet18 yield the best classification performance. Generally, the results on ImageNet40 demonstrate that, with suitable filter bases, WMCG-CNN is superior to standard CNN in sample efficiency, helps avoid over-fitting, and enables a quicker convergence. " }, { "figure_ref": [], "heading": "C. 
Comparison with the state-of-the-art parameter-sharing G-CNNs", "publication_ref": [ "b38", "b14", "b39", "b14", "b40", "b41", "b15", "b14", "b15", "b6", "b15", "b6", "b38", "b36", "b42", "b42" ], "table_ref": [], "text": "We test all the group equivariant networks on two common small-scale datasets, Rotated-Scaled-and-Sheared MNIST (RSS-MNIST), and CIFAR10 [39]. Similar to [15], RSS-MNIST is constructed through randomly rotating (by an angle uniformly distributed on [0, 2π]), shearing (by an angle uniformly distributed on [-π/4, π/4) as well as rescaling (by a uniformly random factor from [0.3, 1]) the original MNIST [40] images. The transformed images are zero-padded back to a size of 28 × 28. We upsize the image to 56 × 56 for better comparison of the models. The CIFAR-10 dataset consists of color images of size 32 × 32 × 3 for 10 classes. There are 50,000 training images and 10,000 testing images.\nAbout experiments on RSS-MNIST dataset, following the test procedure for group equivariant networks described in [15], [41], [42], we generate six independent realizations of augmented data. Each of them is split into three parts: 10,000 images for training, 2,000 for validation, and 50,000 for testing. Adam optimizer is used to train all models for 60 epochs with the batch size set as 128. The initial learning rate is 0.01 and decreases tenfold after 30 epochs. We compare methods with the state-of-the-art parameter-sharing group equivariant network, attentive G-CNN [16] and RST-CNN [15]. The implementation of the proposed method uses the same filter basis as RST-CNN. Since the number of their basis is much limited, we use the bootstrap resampling method to obtain an enough number of bases. We use modified versions of ResNeXt50 as baseline networks. Specifically, we first removed the first pooling layer, and all the convolutional layers with a stride size of two are replaced with a corresponding convolutional layer with a stride size of one concatenated with a max-pooling layer. Then we have ResNeXt50-All7 by replacing all the non-1 × 1 kernels with 7 × 7 kernel. We build ResNext-k15-base by further replacing the 7 × 7 kernels in the first three stages of ResNeXt50-All7 with 15 × 15 kernels. Finally, we get ResNeXt50-WMCG-k15 by replacing all 15 × 15 hidden convolutional layers of ResNext-k15base with the proposed WMCG convolutional layers using bootstrap resampled 15 × 15 basis.\nWe also show comparison results with attentive G-CNN [16] and efficient group equivariant network [7] on CIFAR10 datasets. The networks from [16] are trained in the same way as our methods. Since [7] does not publish their code, we simply refer to the results in their paper directly.\nFor experiments on CIFAR10 dataset [39], we use ResNeXt29 (32 × 4) [37] as the baseline network. We denote \"ResNeXt29-FB-k3-WMCG-nb9\" as the network created by replacing the 3×3 convolution layer with WMCG CNN of the 3×3 FB basis size and each convolutional filter using 9 bases. Empirically, only for the experiments with CIFAR10 dataset, we use scaling range [1.0, 1.5), while in other experiments we keep [1.0, 2.0). Apart from FB basis, we also tested the augmented continuous MH wavelet filter as the basis for WMCG CNN. We denote \"ResNeXt29-WT-k3-WMCG-nb9\" as the network created by replacing the 3 × 3 convolution layer with WMCG CNN of the 3 × 3 MH basis size and each convolutional filter using 9 bases. 
The filter basis is augmented with translation, scaling, and shear transforms.\nAll the CNNs are trained with the same training strategy as in [43]. Specifically, all the networks are trained using an initial learning rate of 0.1 and a cosine learning rate schedule. The optimizer uses stochastic gradient descent with Nesterov momentum and a weight decay of 0.0005. The input images are first pre-augmented with standard random left-right flipping and cropping, and then the Augmix method [43] is applied with its default settings. Augmix uses affine transformations (including translation, rotation, and shear transform) as well as other augmentation methods to generate augmented clean images. We repeat the training-and-testing experiments for six rounds and report the mean results ± standard deviation.\nTable II shows the results of compared G-CNNs. We see that the WMCG networks outperform the parameter-sharing networks with much less computational burden while not increasing the computational burden of standard CNNs. The MACs/Params results show that the proposed method has almost the same level of parameter-sharing as the standard CNNs. We also note that FB basis used to perform well on ImageNet datasets but obtained poor performance on CIFAR10 datasets. On the other hand, the wavelet basis which used to work poorly on ImageNet surpasses FB basis on CIFAR10. " }, { "figure_ref": [], "heading": "D. Image classification experiments on ImageNet benchmark datasets", "publication_ref": [ "b43", "b43", "b44", "b36", "b37", "b37", "b44", "b44", "b44" ], "table_ref": [], "text": "In this section, we test the proposed method on Ima-geNet1k datasets. In addition, we use ImageNet1k-C [44] validation datasets to test neural networks' robustness and generalizability against image corruptions, where 15 diverse corruption types [44] are included for both the ImageNet1k-C validation datasets. Two kinds of training routines are used: robust training strategies with affine transform augmentation included, and the state-of-the-art full training strategy for comparison with ConvNeXt [45].\nWe use ResNeXt50 [37] as the baseline network for the Pixmix-based [38] robust training. We denote \"ResNeXt50-k5-WMCG-nb9\" as the network created by replacing the 3 × 3 convolution layer with WMCG-CNN of the 5 × 5 FB basis size and each convolutional filter using 9 bases. The neural networks are trained with the same strategy in Pixmix [38].\nAll the neural networks are trained from scratch to compare the sample efficiency and convergence speed of different networks.\nIn addition, we test our methods with the recently proposed ConvNeXt network model [45] on ImageNet40 and ImageNet1k datasets. We use ConvNeXt-S as the baseline network. We denote \"ConvNeXt-S-k7-WMCG-nb49\" as the network created by replacing all the 7 × 7 convolution layer with WMCG-CNN of the 7 × 7 FB basis size and each convolutional filter using 49 bases. The training on both datasets is in the same way as described in [45], where the neural networks are trained for 300 epochs using an AdamW optimizer. Similar to [45], the top 1 and top 5 accuracies are considered.\nTable III shows all the results for our image classification experiments. We see that under the robust training strategies, the proposed WMCG-CNNs reduce the classification errors on both clean and corrupted datasets while using the same or smaller number of parameters. It is also noted that a large filter size can help increase the classification precision and robustness of neural networks. 
The WMCG-CNN is good at exploiting large-size filters for boosting the performance. As for the experiment with ConvNeXt, WMCG-CNN improves ConvNeXt-S on both ImageNet40 and ImageNet1k datasets without increasing the number of parameters and the computational burden. It is also noted that shear transform is also helpful for performance boost under the 300-epoch fulltraining routine. " }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "E. Image denoising experiments on simulation and real noisy image sets", "publication_ref": [ "b46", "b47", "b48", "b49", "b49", "b50", "b50", "b50", "b51", "b52", "b54", "b50", "b50", "b48", "b49", "b55", "b56", "b48", "b49", "b54", "b57", "b58", "b57", "b58" ], "table_ref": [ "tab_3", "tab_4", "tab_5" ], "text": "Although it has been shown that in certain cases with known noise levels, the traditional algorithms can surpass CNNs in denoising quality [47], [48], their processing speed is much slower than neural networks. Blind denoising with unknown noise levels is also a more practical scenario in the application. Thus in this paper, we only test the neural networks' performance on blind denoising tasks.\nThe experiments are divided into three parts: grayscale synthetic additive Gaussian noisy image denoising, color synthetic additive Gaussian noisy image denoising, and real-world color noisy image denoising (whose image noise is generated in the camera imaging process). For grayscale image denoising, as in [49], the same 400 180×180 images are used for training. The training images are corrupted by synthetic additive Gaussian noise of noise level (i.e., the standard deviation of noise) σ ∈ [0, 55]. 128 × 3,000 patches of size 50 × 50 are cropped to train the CNN model. For color synthetic noisy image denoising, we follow [50], where the same 400 color images are augmented with Bicubic downscaling, counterclockwise rotation, and horizontal flip. As for real-world noisy images, as in [50], the training dataset consists of 100 512 × 512 JPEG images collected from five digital cameras Canon 80D, Nikon D800, Canon 600D, Sony A7 II and Canon 5D Mark II with an ISO of 800, 1,600, 3,200, 6,400, 12,800 and 25,600.\nFive public test datasets are considered, including the grayscale image datasets Set12 [51], BSD68 [51], the color image datasets CBSD68 [51], Kodak24 [52], and the public real noisy consumer camera image dataset CC [53]. The public CC dataset consists of 15 images that are captured by three different digital cameras: Canon 5D Mark III, Nikon D600, and Nikon D800 with ISO values of 1,600, 3,200, or 6,400.\nThe training images are cropped into 41 × 41 patches for training the networks.\nWe consider one of the most famous denoising CNNs, DnCNN-B [54] [49] as the baseline network for experiments on gray-scale image denoising. We build a brand new denoising network called DnNeXt-B by replacing every plain hidden CNN layer in DnCNN-B with the bottleneck block shown in Fig. 2(b). We further denote \"DnNeXt-B-k5-WMCG-nb9\" as the network created by replacing the hidden 3 × 3 convolution layer in DnNeXt-B with WMCG-CNN of the 5 × 5 FB basis size and each convolutional filter decomposed by 9 bases. Likewise, \"DnNeXt-B-k7-WMCG-nb9\" is a corresponding version with FB basis of size 7 × 7. To emphasize the efficiency of our approach, we also include another Waveletbased denoising CNN, MWDCNN [55] for comparison. We test all the CNNs on the standard grayscale image datasets Set12 [51], and BSD68 [51]. 
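As a rough illustration of how such a denoising block could be assembled (our own guess in the spirit of Fig. 2(b), which is not reproduced here; the channel widths, normalization, residual connection and the use of a plain Conv2d as a stand-in for the WMCG layer are all assumptions), consider:

```python
import torch
import torch.nn as nn

class DnNeXtBlock(nn.Module):
    """Illustrative bottleneck block: a k x k convolution (a WMCG layer in the paper;
    an ordinary grouped Conv2d stands in here) sandwiched by 1 x 1 convolutions."""

    def __init__(self, channels=64, mid=32, k=5, groups=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            # In DnNeXt-WMCG this k x k layer would be the weighted Monte Carlo
            # group convolution built from augmented Fourier-Bessel bases.
            nn.Conv2d(mid, mid, kernel_size=k, padding=k // 2, groups=groups, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            # Secondary adaptive aggregation across the k x k layer's output channels.
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

# A DnCNN-like denoiser would stack such blocks between a head and a tail convolution.
net = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
    *[DnNeXtBlock(64) for _ in range(8)],
    nn.Conv2d(64, 1, 3, padding=1),
)
print(net(torch.randn(1, 1, 41, 41)).shape)  # torch.Size([1, 1, 41, 41])
```

The 1 x 1 convolution after the k x k layer plays the role of the secondary adaptive aggregation over WMCG output channels described in Section II-D.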
The DnCNN, DnNeXt, and DnNeXt-WMCG are trained with the same training strategy as in [49]. We use SGD optimizer with a weight decay of 0.0001, and a momentum of 0.9. The networks are trained for 50 epochs with a batch size of 128. During the 50 epochs of training, the learning rate decreases exponentially from 1.0 × 10 -1 to 1.0 × 10 -4 .\nTable IV shows the denoising results with the metric of peak signal-to-noise ratio (PSNR) on images corrupted by simulated white Gaussian noise of different noise levels. The number of trainable parameters and MACs are also displayed. In particular, for all the calculations of MACs in image-denoising experiments, we assume the input patch size is 3 × 32 × 32 for a fair comparison of computational burden, which differs from the actual case. We find that the proposed DnNeXt and DnNeXt-MCG outperform DnCNN and MWDCNN with a much smaller number of learnable parameters. In addition, the proposed DnNeXt-WMCG achieves the highest average PSNR of all CNNs and yields especially higher PSNR on high noise levels. The larger FB basis helps gain a higher PSNR score on high noise levels, yet may cause poor performance on low noise levels.\nWe consider DudeNet [50], an upgrading of DnCNN as the baseline CNN for the synthetic color noisy image denoising . On synthetic color noisy dataset, we compare our methods with two conventional denoising algorithms CBM3D [56], TID [57], as well as three deep learning methods DnCNN [49], DudeNet [50], and MWDCNN [55]. On the real noisy image set CC, we additionally include two transformer-inspired networks for comparison, i.e., Restormer [58] and NAFNet-width64 [59]. For a fair comparison, we keep using the same training set, batch size, and epoch number. For parameter stabilization, these two networks use their best optimizer setting, respectively. Specifically, as in [58], Restormer uses AdamW optimizer with an initial learning rate 3e-4, betas (0.9, 0.999), weight decay 1e-4, and cosine annealing learning rate scheduler. As in [59], NAFNet uses AdamW optimizer with an initial learning rate 1e-3, betas (0.9, 0.9), weight decay 0, and cosine annealing learning rate scheduler.\nTable V shows the results on the public CBSD68 and Kodak24 color image datasets. Table VI shows the results on the public CC dataset. The average PSNR, the parameter number, and MACs are displayed. Fig. 4 and Fig. 5 show the visual results on CC dataset. On both synthetic and real-world color image denoising experiments, the proposed networks achieve superior performance regarding the average PSNR. DudeNeXt-WMCG gives a competitive visual performance with a lightweight architecture. The performance of Restormer and NAFNet may be further improved with more advanced training and pretraining techniques as well as larger iteration number. Yet our focus is to test the methods' data efficiency and parameter efficiency. We see that the proposed method help achieve a lightweight denoising solution by improving the DudeNeXt's performance without increasing the computational burden, which uses far fewer parameter than transformer-inspired networks." }, { "figure_ref": [ "fig_7" ], "heading": "F. Analysis and discussion", "publication_ref": [ "b60", "b30", "b32", "b61", "b61", "b62", "b24", "b63", "b64" ], "table_ref": [], "text": "The ablation experiments on ImageNet40 demonstrate the sample efficiency of WMCG-CNN for all the tested baseline network architectures including ResNet18, ResNet50, and ResNeXt50. 
We note that the proposed method gives a larger reduction in the Error for ResNet18 and ResNet50 than that for ResNeXt50. This is probably because a larger proportion of learnable parameters in ResNeXt50 lies in 1×1 Conv layers which, as shown in the results, causes heavy overfitting on the small dataset ImageNet40.\nThe comparison experiments with discrete G-CNN, MCG-CNN, and WMCG-CNN on ImageNet prove that the diversity of transformations is helpful for a performance boost. Introducing MC sampling allows us to consider any mix of affine transforms. In the experiments on ImageNet40, we see that the additional use of shear transform with a suitable shear range can consistently improve image classification. Meanwhile, a high degree of shear transform can harm the performance, which is because, in discrete implementation, shear transform leads to compression of information along a certain direction that causes information loss.\nAs we know a rotation transform can be decomposed into three shear transforms [61] as shown in equation (31), by combining one shear transform and one rotation transform, we can get all the possible rotation and shear transform (along horizontal or vertical direction), which thereby greatly increase the diversity of the considered transforms. In addition, shear transforms are common in daily life (as shown in Fig. 6). Obviously, a good machine learning model for classification and regression tasks usually should consider as many such kinds of group equivariance as possible.\nR(θ) = cos θ sin θ -sin θ cos θ = 1 -tan θ 2 0 1 • 1 0 sin θ 1 • 1 -tan θ 2 0 1(31)\nOn the other hand, the shear-transform-augmented convolutional filters can be considered as an example of the classic continuous shear wavelet [33], [62]. The shear wavelet can help achieve a highly sparse representation of multidimensional data [62], which also explains the superior performance it brings to the proposed WMCG-CNN.\nFor the image denoising task, we did not manage to improve CNN's performance against Dirac delta basis on SIDD datasets [63]. There are two possible reasons: 1) The filter basis we are using is not optimal for SIDD datasets. 2) The noise level is very low on the SIDD dataset. The current non-Dirac-delta basis we are using works best with a larger kernel size, which, as shown in the experiments on simulation datasets, does not work well when the noise level is low. We may leave it to future work to find bases superior to the Dirac delta bases on SIDD datasets. Meanwhile, it is possible that we can combine the non-Dirac-delta basis with the Dirac-delta basis into a certain architecture to achieve superior performance. Since it is not the focus of this paper, we will not discuss it here.\nWe also note that in MC integral and stochastic simulation, there are a lot of advanced techniques such as quasi-MC sampling [25], Markov chain MC [64], and multi-level MC [65]. There is a potential that these methods can help improve both MCG-CNN and WMCG-CNN further, and we will study this in future work.\nThe proposed WMCG-CNN shows higher flexibility and controllability than the conventional CNNs. The use of filter decomposition decouples the relationship between the filter size and the number of trainable parameters. For a certain convolutional kernel, the corresponding number of trainable parameters can be as small as only 1, or as large as any integer. In addition, we can choose a certain custom design basis as one likes to control the performance of the network. 
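To illustrate this flexibility, the sketch below (our own; the grid, normalization and augmentation values are illustrative and may differ from the paper's implementation) generates a small custom bank of Mexican hat bases following equation (29), varied in scale and shear.

```python
import numpy as np

def mexican_hat_2d(k=7, sigma_x=1.0, sigma_y=1.0, shear=0.0):
    """2D Mexican hat of equation (29) on a k x k grid, optionally sheared along x."""
    y, x = np.mgrid[-(k // 2):k // 2 + 1, -(k // 2):k // 2 + 1].astype(float)
    x = x + shear * y   # shear the sampling coordinates, cf. S1(s) in equation (2)
    g = (2 - x**2 / sigma_x**2 - y**2 / sigma_y**2) \
        * np.exp(-x**2 / (2 * sigma_x**2) - y**2 / (2 * sigma_y**2))
    return g / (2 * np.pi * sigma_x * sigma_y)

# A small custom bank: a few scales (peak frequency ~ 1 / (sqrt(2) * pi * sigma)) and shears.
bank = [mexican_hat_2d(7, sx, sy, s)
        for sx in (0.8, 1.2) for sy in (0.8, 1.2) for s in (-0.3, 0.0, 0.3)]
print(len(bank), bank[0].shape)  # 12 (7, 7)
```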
For example, in the experiment on the CIFAR10 dataset, we simply choose a single low-frequency wavelet basis and can still get a good result on the CIFAR10 dataset. It is noted that the choice of filter basis can greatly affect the performance of WMCG-CNN. For different datasets, the optimal basis type and basis size can be different." }, { "figure_ref": [], "heading": "IV. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose an efficient and flexible implementation of group-equivariant CNN based on filter-wise weighted Monte Carlo sampling, which allows a higher degree of diversity of transformations for a performance boost. Compared with parameter-sharing G-CNN, with a suitable filter basis, the proposed non-parameter-sharing WMCG-CNN can exploit deeper neural network architectures without causing heavy computational burden and achieves superior performance. The proposed WMCG-CNN is shown to be an efficient generalization of standard CNN. The utility of diverse transformations for tasks on natural images is demonstrated. The proposed WMCG-CNN shows superior efficiency on both image classification and image denoising tasks when using a suitable set of filter bases. It is possible to extend it for other computer vision tasks such as image segmentation and image reconstruction. However, the choice of filter basis is a key point for yielding good performance with the proposed method, which will be studied in future work." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant no. 428149221, by Deutsches Zentrum für Luft-und Raumfahrt e.V. (DLR), Germany under grant no. 01ZZ2105A and no. 01KD2214, and by Fraunhofer Gesellschaft e.V. under grant no. 017-100240/B7-aneg." } ]
Group-equivariant convolutional neural networks (G-CNN) heavily rely on parameter sharing to increase CNN's data efficiency and performance. However, the parameter-sharing strategy greatly increases the computational burden for each added parameter, which hampers its application to deep neural network models. In this paper, we address these problems by proposing a non-parameter-sharing approach for group equivariant neural networks. The proposed methods adaptively aggregate a diverse range of filters by a weighted sum of stochastically augmented decomposed filters. We give a theoretical proof of how group equivariance can be achieved by our methods. Our method applies to both continuous and discrete groups, where the augmentation is implemented using Monte Carlo sampling and bootstrap resampling, respectively. We demonstrate that our methods serve as an efficient extension of the standard CNN. Experiments on group equivariance tests show how our methods can achieve superior performance to parameter-sharing group equivariant networks. Experiments on image classification and image denoising tasks show that in certain scenarios, with a suitable set of filter bases, our method helps improve the performance of standard CNNs and build efficient lightweight image denoising networks. The code will be available at https://github.com/ZhaoWenzhao/MCG CNN.
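To make the core idea of the abstract above concrete, here is a hypothetical, self-contained numpy/scipy sketch (not the authors' released code) of building convolution kernels as weighted sums of stochastically affine-augmented basis filters. All function names, the Mexican-hat basis choice, and the parameter ranges are illustrative assumptions.

```python
# Sketch: each kernel = weighted sum of a few fixed basis filters, where every
# kernel gets its own randomly sampled affine augmentation of the basis.
import numpy as np
from scipy.ndimage import affine_transform

def mexican_hat(k: int, sigma: float = 1.0) -> np.ndarray:
    # simple isotropic "Mexican hat" style basis sampled on a k x k grid
    r = np.linspace(-2, 2, k)
    x, y = np.meshgrid(r, r)
    rho2 = (x**2 + y**2) / sigma**2
    return (2 - rho2) * np.exp(-rho2 / 2)

def random_affine(rng) -> np.ndarray:
    theta = rng.uniform(-np.pi, np.pi)      # rotation
    s = 2.0 ** rng.uniform(-0.5, 0.5)       # isotropic scale
    shear = rng.uniform(-0.25, 0.25)        # mild shear
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    shr = np.array([[1.0, shear], [0.0, 1.0]])
    return s * rot @ shr

def build_kernels(n_kernels: int, k: int = 5, n_bases: int = 3, seed: int = 0):
    rng = np.random.default_rng(seed)
    bases = [mexican_hat(k, sigma=1.0 + 0.5 * i) for i in range(n_bases)]
    center = (k - 1) / 2.0
    kernels = np.zeros((n_kernels, k, k))
    for i in range(n_kernels):
        m = random_affine(rng)
        # warp each basis filter around the kernel centre with the sampled map
        offset = center - m @ np.array([center, center])
        warped = [affine_transform(b, m, offset=offset, order=1) for b in bases]
        weights = rng.normal(size=n_bases)  # stands in for learned weights
        kernels[i] = sum(w * f for w, f in zip(weights, warped))
    return kernels

print(build_kernels(8).shape)  # (8, 5, 5)
```

In an actual network, the per-kernel weights would be the trainable parameters while the sampled augmentations stay fixed, which is what decouples the kernel size from the parameter count.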
Adaptive aggregation of Monte Carlo augmented decomposed filters for efficient group-equivariant convolutional neural network
[ { "figure_caption": "( 17 )17where we have the transform parameter vectors a = [α a , β a , θ a , s a , r a ], and b = [α b , β b , θ b , s b , r b ].", "figure_data": "", "figure_id": "fig_0", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "(x, M (a)) = j w (l) co,ci,j ψj (x, M (a)) with ψj (x, M (a)) an orthogonal basis function with x ∈ R 2 and a ∈ R d the transform parameter vector, w (l)", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 1 .Fig. 2 .12Fig. 1. Examples of filter bases in 1-dimension and 2-dimension space. (a) the discrete Dirac delta basis, (b) the Fourier Bessel basis; (c) the Mexican hat basis.", "figure_data": "", "figure_id": "fig_2", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. (a) The mGEs of the first hidden CNN layer of different residual networks for the first 10-epoch training on ImageNet dataset. (b) The histogram of the learned weights for the FB basis of order 0 in the first hidden CNN layer of ResNet18-k5-WMCG-shear-0.25π.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "and real camera image denoising experiment. We build a new network DudeNeXt by replacing every plain hidden 3 × 3 CNN layer in DudeNet with the bottleneck block shown in Fig. 2(b). We further denote \"DudeNeXt-k5-WMCG-nb9\" as the network created by replacing the hidden 3 × 3 convolution layer in DudeNeXt with WMCG-CNN of the 5 × 5 FB basis size and each convolutional filter decomposed by 9 bases. We follow the same training strategy as in [50]. We use Adam optimizer with an initial learning rate of 1.0 × 10 -3 and a batch size of 128. The networks are trained for 70 epochs. During the 70 epochs of training, the learning rate decreases exponentially from 1.0 × 10 -3 to 1.0 × 10 -5", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Denoising results of a real noisy image by Nikon D800 ISO1600 from CC. (a) noisy image; (b) Dude; (c) MWDCNN; (d) Restormer; (e) NAFNet; (f) DudeNeXt; (g) DudeNeXt-B-k5-WMCG-nb9.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Denoising results of a real noisy image by Nikon D800 ISO6400 from CC. (a) noisy image; (b) Dude; (c) MWDCNN; (d) Restormer; (e) NAFNet; (f) DudeNeXt; (g) DudeNeXt-B-k5-WMCG-nb9.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. An example of shear transformation in real life. The image is from the CBSD432 dataset. 
As shown in the white rectangular box, the horizontal lines of bricks undergo a shear transform along the vertical direction.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "AVERAGE PSNR (DB) OF DIFFERENT METHODS ON THE GRAYSCALE IMAGE DATASETS SET12 AND BSD68 WITH DIFFERENT NOISE LEVELS σ FROM 15 TO 50.", "figure_data": "Params(M)MACs(G)Set12BSD68Averageσ1525355015253550MWDCNN-B [55]5.243.7532.6030.3927.2331.3929.1626.20DnCNN-B [49]0.670.6832.7030.3528.7827.1331.6029.1427.6526.1929.19DnNeXt-B0.640.6632.7630.3828.8627.1831.6529.1827.7026.2429.24DnNeXt-B-k5-WMCG-nb90.641.2632.7430.4128.8927.3031.5629.1627.7226.3129.26DnNeXt-B-k7-WMCG-nb90.642.1732.5730.3928.9627.3731.2129.0627.7426.3329.20", "figure_id": "tab_3", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "AVERAGE PSNR (DB) OF DIFFERENT METHODS ON THE COLOR IMAGE DATASETS CBSD68 AND KODAK24 WITH DIFFERENT NOISE LEVELS σ FROM 15 TO 50.", "figure_data": "Params(M)MACs(G)CBSD68Kodak24Averageσ1525355015253550CBM3D [56]33.5230.7128.8927.3834.2831.6829.9028.4630.60DnCNN [49]0.560.5733.9831.3129.6528.0134.7332.2330.6429.0231.20FFDNet [60]0.850.2233.8031.1829.5727.9634.5532.1130.5628.9931.09DudeNet [50]1.081.1134.0131.3429.7128.0934.8132.2630.6929.1031.25MWDCNN [55]5.253.7634.1831.4529.8128.1334.9132.4030.8729.2631.38MWDCNN-B [55]5.253.7634.1031.4429.8028.1534.8332.3930.8329.2331.35DudeNet-B [50]1.081.1133.9631.3229.6928.0534.7132.2330.6629.0531.21DudeNeXt-B1.071.0434.1531.4629.8028.1234.9032.4030.8129.1731.35DudeNeXt-B-k5-WMCG-nb91.072.0434.1931.5329.9028.2734.9632.4930.9429.3531.45", "figure_id": "tab_4", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "PSNR (DB) OF DIFFERENT METHODS ON THE REAL-WORLD NOISY IMAGE DATASET CC [53] BY CUSTOMER CAMERAS.", "figure_data": "SettingCBM3D [56]TID [57]DnCNN [49]DudeNet [50]MWDCNN [55]Restormer [58]NAFNet-width64 [59]DudeNeXtDudeNeXt-k5DudeNeXt-k5-WMCG-nb9Canon 5D ISO = 320039.7637.2237.2636.6636.9735.7135.9936.6936.6536.9136.4034.5434.1336.7036.0135.9736.9636.4537.0237.1136.3734.2534.0935.0334.8034.7135.1635.0035.2235.15Nikon D600 ISO = 320034.1832.9933.6233.7233.9133.9132.8133.6734.0534.4635.0734.2034.4834.7034.8835.3635.0134.5234.5635.6437.1335.5835.4137.9837.0238.5839.0037.7838.5838.89Nikon D800 ISO=160036.8134.4937.9538.1037.9337.6937.2938.2038.1238.3037.7635.1936.0839.1537.4937.6438.6538.5138.5838.8137.5135.2635.4836.1438.4436.1836.9337.0736.8037.30Nikon D800 ISO=320035.0533.7034.0836.9337.1037.9638.3437.2436.9537.7234.0731.0433.7035.8036.7235.8836.3336.2935.8335.9934.4233.0733.3137.4937.2537.8438.4337.8037.1737.76Nikon D800 ISO=640031.1329.4029.8331.9432.2432.9832.3132.3232.3432.3631.2229.8630.5532.5132.5632.6032.6732.1932.6232.7730.9729.2130.0932.9132.7632.8532.2232.5532.8132.90Average35.1933.3633.8635.7235.7435.7235.8735.7535.8036.14Params (M)0.561.085.2526.11115.981.071.991.07MACs (G)0.571.113.763.411.011.042.042.04", "figure_id": "tab_5", "figure_label": "VI", "figure_type": "table" } ]
Wenzhao Zhao; Barbara D Wichtmann; Steffen Albert; Angelika Maurer; Frank G Zöllner; Ulrike Attenberger; Jürgen Hesser
[ { "authors": "R Wang; R Walters; R Yu", "journal": "", "ref_id": "b0", "title": "Data augmentation vs. equivariant networks: A theory of generalization on dynamics forecasting", "year": "2022" }, { "authors": "F Quiroga; F Ronchetti; L Lanzarini; A F Bariviera", "journal": "Springer", "ref_id": "b1", "title": "Revisiting data augmentation for rotational invariance in convolutional neural networks", "year": "2020" }, { "authors": "R Kondor; S Trivedi", "journal": "PMLR", "ref_id": "b2", "title": "On the generalization of equivariance and convolution in neural networks to the action of compact groups", "year": "2018" }, { "authors": "K Fukushima", "journal": "Biological cybernetics", "ref_id": "b3", "title": "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position", "year": "1980" }, { "authors": "Y Lecun; B Boser; J S Denker; D Henderson; R E Howard; W Hubbard; L D Jackel", "journal": "Neural computation", "ref_id": "b4", "title": "Backpropagation applied to handwritten zip code recognition", "year": "1989" }, { "authors": "P Krüger; H Gottschalk", "journal": "", "ref_id": "b5", "title": "Equivariant and steerable neural networks: A review with special emphasis on the symmetric group", "year": "2023" }, { "authors": "L He; Y Chen; Y Dong; Y Wang; Z Lin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b6", "title": "Efficient equivariant network", "year": "2021" }, { "authors": "C Lyle; M Kwiatkowksa; Y Gal", "journal": "", "ref_id": "b7", "title": "An analysis of the effect of invariance on generalization in neural networks", "year": "2019" }, { "authors": "T Cohen; M Welling", "journal": "PMLR", "ref_id": "b8", "title": "Group equivariant convolutional networks", "year": "2016" }, { "authors": "T S Cohen; M Welling", "journal": "", "ref_id": "b9", "title": "Steerable cnns", "year": "2016" }, { "authors": "I Sosnovik; M Szmaja; A Smeulders", "journal": "", "ref_id": "b10", "title": "Scale-equivariant steerable networks", "year": "2019" }, { "authors": "I Sosnovik; A Moskalev; A Smeulders", "journal": "", "ref_id": "b11", "title": "Disco: accurate discrete scale convolutions", "year": "2021" }, { "authors": "M Sangalli; S Blusseau; S Velasco-Forero; J Angulo", "journal": "Springer", "ref_id": "b12", "title": "Scale equivariant neural networks with morphological scale-spaces", "year": "2021" }, { "authors": "W Zhu; Q Qiu; R Calderbank; G Sapiro; X Cheng", "journal": "Journal of machine learning research", "ref_id": "b13", "title": "Scalingtranslation-equivariant networks with decomposed convolutional filters", "year": "2022" }, { "authors": "L Gao; G Lin; W Zhu", "journal": "", "ref_id": "b14", "title": "Deformation robust roto-scale-translation equivariant cnns", "year": "2021" }, { "authors": "D Romero; E Bekkers; J Tomczak; M Hoogendoorn", "journal": "PMLR", "ref_id": "b15", "title": "Attentive group equivariant convolutional networks", "year": "2020" }, { "authors": "Z Sun; T Blu", "journal": "", "ref_id": "b16", "title": "Empowering networks with scale and rotation equivariance using a similarity convolution", "year": "" }, { "authors": "Z Yang; Y Yu; C You; J Steinhardt; Y Ma", "journal": "PMLR", "ref_id": "b17", "title": "Rethinking biasvariance trade-off for generalization of neural networks", "year": "2020" }, { "authors": "P Nakkiran; G Kaplun; Y Bansal; T Yang; B Barak; I Sutskever", "journal": "Journal of Statistical Mechanics: Theory and Experiment", "ref_id": "b18", "title": "Deep 
double descent: Where bigger models and more data hurt", "year": "2021" }, { "authors": "D S Dummit; R M Foote", "journal": "Wiley Hoboken", "ref_id": "b19", "title": "Abstract algebra", "year": "2004" }, { "authors": "T S Cohen; M Geiger; M Weiler", "journal": "Advances in neural information processing systems", "ref_id": "b20", "title": "A general theory of equivariant cnns on homogeneous spaces", "year": "2019" }, { "authors": "E J Bekkers", "journal": "", "ref_id": "b21", "title": "B-spline cnns on lie groups", "year": "2019" }, { "authors": "K Atkinson", "journal": "John wiley & sons", "ref_id": "b22", "title": "An introduction to numerical analysis", "year": "1991" }, { "authors": "S Weinzierl", "journal": "", "ref_id": "b23", "title": "Introduction to monte carlo methods", "year": "2000" }, { "authors": "R E Caflisch", "journal": "Acta numerica", "ref_id": "b24", "title": "Monte carlo and quasi-monte carlo methods", "year": "1998" }, { "authors": "G P Lepage", "journal": "Journal of Computational Physics", "ref_id": "b25", "title": "A new algorithm for adaptive multidimensional integration", "year": "1978" }, { "authors": "P Przybyłowicz", "journal": "", "ref_id": "b26", "title": "Foundations of monte carlo methods and stochastic simulations-from monte carlo lebesgue integration to weak approximation of sdes", "year": "2022" }, { "authors": "A Kong; P Mccullagh; X.-L Meng; D Nicolae; Z Tan", "journal": "Journal of the Royal Statistical Society: Series B (Statistical Methodology)", "ref_id": "b27", "title": "A theory of statistical models for monte carlo integration", "year": "2003" }, { "authors": "T Kiria; G Pantsulaia", "journal": "Transactions of A. Razmadze Mathematical Institute", "ref_id": "b28", "title": "Calculation of lebesgue integrals by using uniformly distributed sequences", "year": "2016" }, { "authors": "P W Glynn; D L Iglehart", "journal": "Management science", "ref_id": "b29", "title": "Importance sampling for stochastic simulations", "year": "1989" }, { "authors": "Q Qiu; X Cheng; G Sapiro", "journal": "PMLR", "ref_id": "b30", "title": "Dcfnet: Deep neural network with decomposed convolutional filters", "year": "2018" }, { "authors": "H Ryan", "journal": "", "ref_id": "b31", "title": "Ricker, ormsby; klander, bntterwo-a choice of wavelets", "year": "1994" }, { "authors": "J.-P Antoine; P Carrette; R Murenzi; B Piette", "journal": "Signal processing", "ref_id": "b32", "title": "Image analysis with two-dimensional continuous wavelet transform", "year": "1993" }, { "authors": "Y Wang", "journal": "Geophysics", "ref_id": "b33", "title": "Frequencies of the ricker wavelet", "year": "2015" }, { "authors": "T C Hesterberg", "journal": "The american statistician", "ref_id": "b34", "title": "What teachers should know about the bootstrap: Resampling in the undergraduate statistics curriculum", "year": "2015" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b35", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "S Xie; R Girshick; P Dollár; Z Tu; K He", "journal": "", "ref_id": "b36", "title": "Aggregated residual transformations for deep neural networks", "year": "2017" }, { "authors": "D Hendrycks; A Zou; M Mazeika; L Tang; B Li; D Song; J Steinhardt", "journal": "", "ref_id": "b37", "title": "Pixmix: Dreamlike pictures comprehensively improve safety measures", "year": "2022" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b38", "title": "Learning multiple layers of features from tiny 
images", "year": "2009" }, { "authors": "Y Lecun; L Bottou; Y Bengio; P Haffner", "journal": "", "ref_id": "b39", "title": "Gradient-based learning applied to document recognition", "year": "1998" }, { "authors": "D Marcos; B Kellenberger; S Lobry; D Tuia", "journal": "", "ref_id": "b40", "title": "Scale equivariance in cnns with vector fields", "year": "2018" }, { "authors": "R Ghosh; A K Gupta", "journal": "", "ref_id": "b41", "title": "Scale steerable filters for locally scale-invariant convolutional neural networks", "year": "2019" }, { "authors": "D Hendrycks; N Mu; E D Cubuk; B Zoph; J Gilmer; B Lakshminarayanan", "journal": "", "ref_id": "b42", "title": "Augmix: A simple data processing method to improve robustness and uncertainty", "year": "2019" }, { "authors": "D Hendrycks; T Dietterich", "journal": "", "ref_id": "b43", "title": "Benchmarking neural network robustness to common corruptions and perturbations", "year": "2019" }, { "authors": "Z Liu; H Mao; C.-Y Wu; C Feichtenhofer; T Darrell; S Xie", "journal": "", "ref_id": "b44", "title": "A convnet for the 2020s", "year": "2022" }, { "authors": "Z Liu; Y Lin; Y Cao; H Hu; Y Wei; Z Zhang; S Lin; B Guo", "journal": "", "ref_id": "b45", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "W Zhao; Y Lv; Q Liu; B Qin", "journal": "IEEE Access", "ref_id": "b46", "title": "Detail-preserving image denoising via adaptive clustering and progressive pca thresholding", "year": "2018" }, { "authors": "W Zhao; Q Liu; Y Lv; B Qin", "journal": "IEEE Transactions on Image Processing", "ref_id": "b47", "title": "Texture variation adaptive image denoising with nonlocal pca", "year": "2019" }, { "authors": "K Zhang; W Zuo; Y Chen; D Meng; L Zhang", "journal": "IEEE transactions on image processing", "ref_id": "b48", "title": "Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising", "year": "2017" }, { "authors": "C Tian; Y Xu; W Zuo; B Du; C.-W Lin; D Zhang", "journal": "Knowledge-Based Systems", "ref_id": "b49", "title": "Designing and training of a dual cnn for image denoising", "year": "2021" }, { "authors": "H Li; J Cai; T N A Nguyen; J Zheng", "journal": "IEEE", "ref_id": "b50", "title": "A benchmark for semantic image segmentation", "year": "2013" }, { "authors": "R Franzen", "journal": "", "ref_id": "b51", "title": "Kodak lossless true color image suite", "year": "1999" }, { "authors": "S Nam; Y Hwang; Y Matsushita; S J Kim", "journal": "", "ref_id": "b52", "title": "A holistic approach to cross-channel image noise modeling and its application to image denoising", "year": "2016" }, { "authors": "H C Burger; C J Schuler; S Harmeling", "journal": "", "ref_id": "b53", "title": "Image denoising: Can plain neural networks compete with bm3d?", "year": "2012" }, { "authors": "C Tian; M Zheng; W Zuo; B Zhang; Y Zhang; D Zhang", "journal": "Pattern Recognition", "ref_id": "b54", "title": "Multistage image denoising with the wavelet transform", "year": "2023" }, { "authors": "K Dabov; A Foi; V Katkovnik; K Egiazarian", "journal": "IEEE Transactions on image processing", "ref_id": "b55", "title": "Image denoising by sparse 3-d transform-domain collaborative filtering", "year": "2007" }, { "authors": "E Luo; S H Chan; T Q Nguyen", "journal": "IEEE transactions on image processing", "ref_id": "b56", "title": "Adaptive image denoising by targeted databases", "year": "2015" }, { "authors": "S W Zamir; A Arora; S Khan; M Hayat; F S Khan; M.-H Yang", "journal": "", 
"ref_id": "b57", "title": "Restormer: Efficient transformer for high-resolution image restoration", "year": "2022" }, { "authors": "L Chen; X Chu; X Zhang; J Sun", "journal": "Springer", "ref_id": "b58", "title": "Simple baselines for image restoration", "year": "2022" }, { "authors": "K Zhang; W Zuo; L Zhang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b59", "title": "Ffdnet: Toward a fast and flexible solution for cnn-based image denoising", "year": "2018" }, { "authors": "E Andres", "journal": "Springer", "ref_id": "b60", "title": "The quasi-shear rotation", "year": "1996" }, { "authors": "K Guo; G Kutyniok; D Labate", "journal": "", "ref_id": "b61", "title": "Sparse multidimensional representations using anisotropic dilation and shear operators", "year": "2006" }, { "authors": "A Abdelhamed; S Lin; M S Brown", "journal": "", "ref_id": "b62", "title": "A high-quality denoising dataset for smartphone cameras", "year": "2018" }, { "authors": "M K Cowles; B P Carlin", "journal": "Journal of the American Statistical Association", "ref_id": "b63", "title": "Markov chain monte carlo convergence diagnostics: a comparative review", "year": "1996" }, { "authors": "M B Giles", "journal": "Acta numerica", "ref_id": "b64", "title": "Multilevel monte carlo methods", "year": "2015" } ]
[ { "formula_coordinates": [ 2, 98.64, 698.37, 201.44, 22.65 ], "formula_id": "formula_0", "formula_text": "g 1 • g 2 = (x 1 , M (a 1 )) • (x 2 , M (a 2 )) = (x 1 + M (a 1 )x 2 , M (a 1 )M (a 2 )),(1)" }, { "formula_coordinates": [ 2, 48.96, 726.45, 251.1, 29.04 ], "formula_id": "formula_1", "formula_text": "g 1 = (x 1 , M (a 1 )), g 2 = (x 2 , M (a 2 )) with x 1 , x 2 ∈ R 2 , a 1 , a 2 ∈" }, { "formula_coordinates": [ 2, 401.88, 137.13, 161.24, 21.84 ], "formula_id": "formula_2", "formula_text": "S 1 (s) = 1 s 0 1 ,(2)" }, { "formula_coordinates": [ 2, 401.88, 170.25, 161.24, 21.84 ], "formula_id": "formula_3", "formula_text": "S 2 (r) = 1 0 r 1 ,(3)" }, { "formula_coordinates": [ 2, 397.56, 202.01, 165.56, 23.2 ], "formula_id": "formula_4", "formula_text": "A 1 (α) = 2 α 0 0 1 ,(4)" }, { "formula_coordinates": [ 2, 397.8, 236.49, 165.32, 21.96 ], "formula_id": "formula_5", "formula_text": "A 2 (β) = 1 0 0 2 β ,(5)" }, { "formula_coordinates": [ 2, 384.12, 269.61, 179, 29.04 ], "formula_id": "formula_6", "formula_text": "R(θ) = cos θ sin θ -sin θ cos θ .(6)" }, { "formula_coordinates": [ 2, 371.52, 413.01, 191.6, 17.04 ], "formula_id": "formula_7", "formula_text": "T (g 1 • g 2 , x) = T (g 1 , T (g 2 , x)).(7)" }, { "formula_coordinates": [ 2, 312, 443.58, 251.14, 22.54 ], "formula_id": "formula_8", "formula_text": "T g : f → f ′ where f ′ (T (g, x)) = f (x)." }, { "formula_coordinates": [ 2, 312, 527.37, 251.04, 34.53 ], "formula_id": "formula_9", "formula_text": "L V1 (X 1 ) → L V2 (X 2 ) between two function spaces L V1 (X 1 ) : {f : X 1 → V 1 } and L V2 (X 2 ) : {f : X 2 → V 2 }. For g ∈ G, we have T g and T ′" }, { "formula_coordinates": [ 2, 374.64, 593.22, 188.48, 17.67 ], "formula_id": "formula_10", "formula_text": "∀g ∈ G, φ(T g (f )) = T ′ g (φ(f ))(8)" }, { "formula_coordinates": [ 2, 359.64, 656.22, 203.48, 17.79 ], "formula_id": "formula_11", "formula_text": "(ψ * f )(x) = ψ(-x + x ′ )f (x ′ )dx ′ ,(9)" }, { "formula_coordinates": [ 2, 354.48, 734.7, 204.57, 18.97 ], "formula_id": "formula_12", "formula_text": "(ψ * f )(g) = G ψ(g -1 • g ′ )f (g ′ )dµ(g ′ ) (10" }, { "formula_coordinates": [ 2, 559.05, 735.95, 4.19, 9.03 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 3, 124.32, 544.89, 175.88, 18.22 ], "formula_id": "formula_14", "formula_text": "I(f ) = R d f (x)dµ p (x),(11)" }, { "formula_coordinates": [ 3, 126.6, 579.65, 173.6, 31.21 ], "formula_id": "formula_15", "formula_text": "Q N (f ) = 1 N N i=1 f (ξ i ),(12)" }, { "formula_coordinates": [ 3, 158.88, 630.81, 141.21, 17.04 ], "formula_id": "formula_16", "formula_text": "Q N (f ) → I(f ) when N → +∞." }, { "formula_coordinates": [ 3, 94.68, 654.57, 201.33, 25.92 ], "formula_id": "formula_17", "formula_text": "(E I(f ) -Q N (f ) 2 ) 1/2 = σ(f )/ √ N , (13" }, { "formula_coordinates": [ 3, 296.01, 664.07, 4.19, 9.03 ], "formula_id": "formula_18", "formula_text": ")" }, { "formula_coordinates": [ 3, 363.36, 179.33, 199.88, 34.35 ], "formula_id": "formula_19", "formula_text": "f (l+1) co (x) = ci w (l) co,ci (ψ * f (l) ci )(x) = ci R 2 w (l) co,ci ψ(u -x)f (l) ci (u)du(14)" }, { "formula_coordinates": [ 3, 349.92, 240.65, 213.32, 22.57 ], "formula_id": "formula_20", "formula_text": "f (l+1) co (x) = ci u w (l) co,ci ψ(u -x)f (l) ci (u)(15)" }, { "formula_coordinates": [ 3, 312, 280.01, 251.04, 31.36 ], "formula_id": "formula_21", "formula_text": "R 2 , c i ∈ [1, C l ] and c o ∈ [1, C l+1 ]. 
f (l) ci (x)" }, { "formula_coordinates": [ 3, 351.12, 349.49, 212.12, 34.36 ], "formula_id": "formula_22", "formula_text": "(l+1) co (g) = ci w (l) co,ci (ψ * f (l) ci )(g) = ci G w (l) co,ci ψ(g -1 • g ′ )f (l) ci (g ′ )dµ(g ′ )(16)" }, { "formula_coordinates": [ 3, 332.28, 410.33, 193.87, 27.73 ], "formula_id": "formula_23", "formula_text": "f (l+1) co (x, a) = ci R d R 2 w (l) co,ci 2 -α b -β b • ψ(-x + M (-a)u, M (-a)M (b))f (l) ci (u, b)dudb" }, { "formula_coordinates": [ 3, 340.92, 492.05, 222.32, 27.73 ], "formula_id": "formula_24", "formula_text": "f (l+1) co (x, a) = ci b u w (l) co,ci 2 -α b -β b • ψ(-x + M (a)u, M (-a)M (b))f (l) ci (u, b)(18)" }, { "formula_coordinates": [ 3, 312, 607.37, 251.01, 23.89 ], "formula_id": "formula_25", "formula_text": "[η 1 α , η 2 α ), [η 1 β , η 2 β ), [-η θ , η θ ), [-η s , η s ) and [-η r , η r ), respectively." }, { "formula_coordinates": [ 3, 328.92, 683.21, 234.32, 27.97 ], "formula_id": "formula_26", "formula_text": "f (l+1) co (x, a n ′ ) = ci n u w (l) co,ci 2 -α bn -β bn • ψ(-x + M (-a n ′ )u, M (-a n ′ )M (b n ))f (l) ci (u, b n ) (19)" }, { "formula_coordinates": [ 4, 61.2, 279.41, 239, 37.28 ], "formula_id": "formula_27", "formula_text": "f (l+1) co (x, a co ) = ci u w (l) co,ci 2 -α bc o,ci -β bc o,ci • ψ(-x + M (-a co )u, M (-a co )M (b co,ci ))f (l) ci (u, b co,ci ),(20)" }, { "formula_coordinates": [ 4, 54, 650.49, 248.79, 57.76 ], "formula_id": "formula_28", "formula_text": "I(x, a co ) = R×G w • ψ(g -1 • g ′ )f (l) (g ′ )dµ(g ′ )dµ w (w) = R R d R 2 w2 -α b -β b ψ(-x + M (-a co )u, M (-a co )M (b)) f (l) (u, b)dudbdw(21)" }, { "formula_coordinates": [ 4, 312, 56.97, 294.39, 121.66 ], "formula_id": "formula_29", "formula_text": "C w = R wd(w). Thus I(x, a co ) = C w • R d R 2 2 -α b -β b ψ(-x + M (-a co )u, M (-a co )M (b)) f (l) (u, b)dudb (22) which is group equivariant. 2) Let q(x, a co , b) = R 2 2 -α b -β b ψ(-x + M (-a co )u, M (-a co )M (b))f (l) (u, b)du, so we have I(x, a co ) = R R d wq(x, a co , b)dbdw(23)" }, { "formula_coordinates": [ 4, 325.08, 237.17, 238.16, 18.58 ], "formula_id": "formula_30", "formula_text": "I(x, a co ) = lim C l →∞ 1 C l ci w (l) co,ci q(x, a co , b co,ci ) (24)" }, { "formula_coordinates": [ 4, 321.48, 276.09, 241.76, 48.04 ], "formula_id": "formula_31", "formula_text": "q(x, a co , b co,ci ) = lim NH →+∞ lim NW →+∞ 1 NH NW u 2 -α bc o,ci -β bc o,ci • ψ(-x + M (-a co )u, M (-a co )M (b co,ci ))f (l) ci (u, b co,ci ),(25)" }, { "formula_coordinates": [ 4, 312, 355.01, 251.24, 49.88 ], "formula_id": "formula_32", "formula_text": "f (l+1) co (x, a co ) = 1 C l NH NW ci u w (l) co,ci 2 -α bc o ,c i -β bc o,ci • ψ(-x + M (-a co )u, M (-a co )M (b co,ci ))f (l) ci (u, b co,ci ),(26) where" }, { "formula_coordinates": [ 4, 312, 395.25, 251.26, 37.77 ], "formula_id": "formula_33", "formula_text": "c i ∈ {1, 2, . . . , C l }, u = (u 1 , u 2 ) with u 1 ∈ {1, 2, . . . , N H } and u 2 ∈ {1, 2, . . . , N W }. 
Here we include coefficient 1 C l NH NW so that f (l+1) co" }, { "formula_coordinates": [ 4, 324.12, 461.49, 239.12, 29.58 ], "formula_id": "formula_34", "formula_text": "I(x, a co ) = lim C l →∞ lim NH →+∞ lim NW →+∞ f (l+1) co (x, a co )(27)" }, { "formula_coordinates": [ 5, 65.52, 288.77, 234.68, 37.4 ], "formula_id": "formula_35", "formula_text": "f (l+1) (c o , x, a co ) = ci u 2 -α bc o,ci -β bc o,ci W (l) co,ci ( -x + M (-a co )u, M (-a co )M (b co,ci ))f (l) ci (u, b co,ci ),(28)" }, { "formula_coordinates": [ 5, 63.12, 473.56, 237.08, 25.6 ], "formula_id": "formula_36", "formula_text": "ψ(σ x , σ y , x, y) = 1 2πσxσy [2 -x 2 σ 2 x -y 2 σ 2 y ]e -x 2 2σ 2 x -y 2 2σ 2 y(29)" }, { "formula_coordinates": [ 5, 249.96, 501.53, 48.34, 15.37 ], "formula_id": "formula_37", "formula_text": "f x = 1 √ 2πσx" }, { "formula_coordinates": [ 6, 98.64, 579.66, 201.56, 17.67 ], "formula_id": "formula_38", "formula_text": "mGE = E( φ(T g (f )) -T ′ g (φ(f )) )(30)" }, { "formula_coordinates": [ 11, 88.08, 624.17, 212.12, 48.88 ], "formula_id": "formula_39", "formula_text": "R(θ) = cos θ sin θ -sin θ cos θ = 1 -tan θ 2 0 1 • 1 0 sin θ 1 • 1 -tan θ 2 0 1(31)" } ]
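The Monte Carlo estimate Q_N(f) = (1/N) Σ_i f(ξ_i) listed among the formulas above converges to I(f) with a root-mean-square error of σ(f)/√N. The short script below is not from the paper; it simply illustrates this decay on a toy one-dimensional integrand.

```python
# Toy illustration of Monte Carlo integration and its 1/sqrt(N) error decay.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.cos(4.0 * x)      # integrand over the uniform density on [0, 1]
exact = np.sin(4.0) / 4.0          # closed-form value of the integral

for n in [10, 100, 1_000, 10_000, 100_000]:
    xi = rng.uniform(0.0, 1.0, size=n)   # i.i.d. samples from the density
    q_n = f(xi).mean()                   # the MC estimate Q_N(f)
    print(f"N={n:>7d}  Q_N={q_n:+.5f}  |error|={abs(q_n - exact):.5f}")
```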
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b3", "b4", "b3", "b5", "b6" ], "table_ref": [], "text": "In recent years, there have been an increasing interest towards understanding features internally encoded by Convolutional Neural Networks (CNNs) deployed in critical applications, e.g. Covid-19 detection from X-Ray Images [1], pedestrian detection [2], etc.\nThis task of analyzing the features encoded internally in a model has been referred to by the interpretation and explanation terms, interchangeably [3], [4], [5]. While [4] and [5] indicate existing discordant definitions regarding interpretation and explanation in the literature, these works do not elaborate on the differences between them. Moreover, these works follow the common practice of using these terms interchangeably. [4] suggests semi-interpretability as a transition between local interpretability methods, that take as input a single image and justify predictions from it (model explanation); and global interpretability methods, which explain the network/model as a whole (model interpretation). In addition, it considers any feedback related to these tasks as an explanation. In contrast, [6], [7] refer to the same tasks as local explanation and global explanation, respectively.\nAs can be noted, despite surveying a common set of existing methods, these studies consider different perspectives to define the explanation and interpretation tasks. As a result, there is no unified agreement on the exact definition of specific terms which lends itself to confusion. To address this weakness, we will begin by providing a specific definition for model explanation and model interpretation tasks as follows: Definition 1 (Model Interpretation). Given a set of internal representation R from an existing [pretrained] model example x i , a model F , and an output prediction f i produced by the model F , the model explanation task aims at justifying why a given prediction/decision f i has been made for that input by the model. In practice, this relates to indicating what characteristics of the input example were taken into account by the model for making the produced prediction (Fig. 1 " }, { "figure_ref": [], "heading": "Model Interpretation Model Explanation", "publication_ref": [], "table_ref": [], "text": "Dataset" }, { "figure_ref": [], "heading": "down).", "publication_ref": [ "b7", "b8", "b9", "b10", "b11", "b8", "b11", "b6", "b2", "b3", "b4", "b12", "b13", "b3", "b3", "b4", "b3", "b4" ], "table_ref": [], "text": "To date, a considerable number of surveys/taxonomies have been put forward along the research line of model explanation [8], [9], [10], [11], [12]. [9] debates on the definition of explanation along with the evaluation protocols followed by methods from this research line. This is complemented by the proposal of fundamental concepts that could assist their categorization. [12] conducts a meta survey on the existing model explanation surveys. More recently, [7] discussed the shortcomings of existing taxonomies from this research line and proposed a new one.\nIn comparison to the model explanation research line, less attention has been paid to the model interpretation methods [3], [4], [5]. There are also some studies about interpretation of Generative Adversarial Networks (GANs) [13], [14]. In this study, we focus on methods for visual interpretation of convolutional neural networks. 
Despite the fact that these works have investigated some aspects related to the model interpretation task, they suffer from the following weaknesses. The first weakness is the non-standard use of terminology mentioned earlier. As a second weakness, they treat both model explanation and model interpretation methods as interpretation methods, even though these two groups of tasks have different characteristics and goals. Third, a side-effect of covering both explanation and interpretation methods is that these surveys need to rely on very coarse factors in order to position the covered methods with respect to each other. For example, [4] defines local/global interpretability factors indicating whether the method provides its output for a given individual sample or for the dataset as a whole. As another example, [4] and [5] define a factor indicating whether the method is applied during the training phase or after the model has been trained. This factor is referred to by different terminologies, namely passive/active methods [4] and Specific/Agnostic models [5]. Finally, as a fourth weakness, existing surveys consider a small range of interpretation methods. This constitutes a significant gap given the growing number of interpretation-related efforts in recent years.
To address these weaknesses, we propose a framework for categorizing model interpretation methods. The framework introduces a set of six specific factors targeting the interpretation capabilities that these methods provide. These factors are then used as axes for positioning existing interpretation methods. First, among these factors we consider feedback modality, i.e. the means used by interpretation methods to provide feedback on the extracted insights, namely the relevant features. Second, we further analyze the level of semanticity of the provided feedback, a factor usually overlooked in existing surveys. Third, we cover a wider range of interpretation methods, giving attention to the latest methods and diagnosing active research lines. Finally, we conduct a discussion within and across the groups defined by the proposed factors. In doing so, we uncover research gaps in each group as well as in the visual model interpretation research line as a whole. Additionally, we suggest some potential solutions for addressing the identified gaps.
This paper is organized as follows: Section 2 introduces the framework. In this section, a set of six factors is defined, followed by the grouping of the covered interpretation methods based on the proposed factors and their detailed description. Section 3 provides a discussion of the covered model interpretation methods with respect to each defined factor. Furthermore, we touch on the evaluation protocols followed for the validation of each method. Section 4 provides an overview of the surveys in both the model explanation and model interpretation research lines, which are used for positioning our work. Finally, the paper is concluded in Section 5." }, { "figure_ref": [], "heading": "A FRAMEWORK FOR MODEL INTERPRETATION METHODS", "publication_ref": [ "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b14", "b21", "b17", "b22", "b19", "b23", "b15", "b20", "b16", "b18", "b25", "b15", "b26", "b27", "b15", "b16", "b28" ], "table_ref": [], "text": "We begin this section by describing the factors that will serve as axes for the categorization of model interpretation methods.
Then, based on these factors, we gradually categorize the methods. At the end of this section, we summarize the factors of the discussed interpretation methods in Table 1.
Our framework characterizes existing interpretation methods based on the following factors.
• Interpretation Capability Integration. This factor describes the point at which interpretation capabilities are added to a given base model. Two options are possible. On the one hand, interpretation capabilities can be provided after the base model has been trained, i.e. in a Post-Hoc manner [15], [16], [17], [18], [19], [20]. On the other hand, specific mechanisms can be added to the base model at design time, i.e. prior to training, so that the resulting model is interpretable post-training. This produces a model that is interpretable-by-design or inherently interpretable [21], [22], [23], [24], [25].
• Task Specificity. This factor refers to whether the interpretation mechanism depends (task-specific) or does not depend (task-agnostic) on characteristics of the task addressed by the base model. For the classification task, this factor indicates whether the interpretation mechanisms are dependent on each individual class of interest (class-specific) [15], [22], [18], [23], [20], [24] or whether they are general across the dataset (class-agnostic) [16], [21], [17], [19].
• Feedback Semanticity. A concept can be defined as an idea associated with properties that are semantically meaningful for humans [26]. In computer vision problems, this semantically meaningful property can be presented in different forms, such as annotation masks with text labels [16], bounding boxes with assigned captions [27], or part-level annotations [28]. This factor describes whether the feedback provided by a given interpretation method can be associated with a semantically meaningful concept [16], [17], [29].
This also includes whether such meaningful semantics can be assigned/mapped to the internal units of the base model." }, { "figure_ref": [], "heading": "•", "publication_ref": [ "b29", "b14", "b30", "b15", "b16", "b28", "b14", "b15", "b16", "b20", "b21", "b17", "b18", "b22", "b19", "b23", "b31", "b32", "b33", "b14", "b17", "b18", "b19", "b21", "b34", "b35", "b22" ], "table_ref": [], "text": "Annotation Dependency. This factor describes the level of annotation required by the interpretation methods in order to operate. For the case of the image classification task [30], some methods depend on the image-level annotations [15], [31] originally used to train the base model, while others require additional, more detailed pixel-level annotations [16], [17], [29].
• Architecture Coverage. This factor indicates the extent to which the architecture of the base model being interpreted is considered when analyzing the encoded representation. In this regard, interpretation methods may consider the whole architecture [15], [16], [17] or focus only on specific parts of it [21], [22], [18], [19], [23], [20], [24].
• Feedback Modality. This factor describes the modality used by interpretation methods to provide feedback on the insights extracted from the information internally encoded in the base model. This modality can be a quantitative measurement, such as the contribution of the identified relevant features to the model performance [32], or different forms of visualization. We refer to the latter as interpretation visualization.
Examples of different visualizations of the interpretation feedback are synthetic images [33], [34], average of visualizations [15], extracted superpixel [18], image patches [19], [20], heatmap visualization [22], [35], [36], or examples of input images [23].\nIn what follows, using the proposed framework, we first divide existing efforts based on the Interpretation Capability Integration factor, i.e. as either Post-Hoc or Interpretable-by-Design. Then, the rest of the factors will be discussed within each of these two categories." }, { "figure_ref": [], "heading": "Post-Hoc Interpretation Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Class-Specific", "publication_ref": [ "b36", "b37", "b33", "b36", "b37", "b36", "b32", "b36", "b37", "b38", "b39", "b14", "b38", "b14", "b40", "b14", "b40", "b17", "b41", "b42", "b43", "b44", "b14", "b19", "b36", "b37", "b38", "b40", "b17", "b30", "b41", "b43", "b36", "b38", "b14", "b40", "b30", "b19", "b37", "b17", "b41", "b43", "b36", "b37", "b14", "b41", "b38", "b19", "b17", "b40", "b43", "b40", "b14", "b43", "b30", "b37", "b38", "b14", "b30" ], "table_ref": [], "text": "Class-Specific methods provide interpretability, as the name suggests, at the level of classes. Put it differently, the methods identify features in the latent representation for each of the classes of interest. There is a group of post-hoc interpretation methods that apply the approach of internal representation inversion. These methods aim to generate synthetic images from the internal representations to show visually the features encoded by the models. These methods can be classified as Class-Specific [37] [38] or Class-Agnostic [34]. The Class-Agnostic methods will be discussed in Section 2.1.2.\nAn example of the internal representation inversion in Class-Specific category is [37]. The method designs an image reconstruction loss function to reveal class-relevant features learned by a model. We refer to this method as Class Scoring Model. The proposed approach tries iteratively to estimate a natural image from an initially randomized image such that the output score of the given class for that image is maximized. The resulting image depicts content relevant to a target class learned by the model. [38] extended the Class Scoring Model [37] and Feature Inversion [33] (discussed in Sec. 2.1.2) by adding a non-parametric patch prior to their regularization term to improve the reconstructed images. Also, they consider the activations from fully connected layers, while the Feature Inversion and the Class Scoring Model utilize the activations from convolutional filters and output logits in their optimization procedure, respectively. As output, the Class Scoring Model [37] and [38] generate visualizations of internal representations. The visualizations reveal some patterns similar to those present in the dataset seen by the model, which are understandable by humans. However, the visualizations produced by these methods suffer from noise and unclear patterns that lend themselves to confirmation bias and introduce subjectivity.\nAnother example of this category is [39]. It conducts association rule mining via the Apriori algorithm [40], as a means to identify frequent visual patterns encoded within a model. To do so, it utilizes the activations computed by a fully-connected layer from cropped image patches fed to the model. 
These patches are then grouped into two categories: the target class and the background (including patches from other classes) which is followed by the creation of a binary transaction database from them. Each transaction contains the indices of neurons in the fully-connected layer, along with an extra item indicating the index of one of the two categories (binary transaction database). Finally, visual patterns are identified by extracting frequent itemsets of the indices in the fully-connected layer from the transaction dataset.\nAiming to identify internal relevant features of each class of interest, VEBI [15], opposite to [39], utilizes the feature maps produced by all layers of a CNN model. VEBI enables model interpretation through two steps: 1) class-specific relevant feature identification, and 2) visual feedback generation. The identification of relevant features is formulated as a µ-LASSO problem where indicators ω c are obtained for the aggregated internal activations produced by each image x i from a given class c. Visual feedback of these relevant features is produced via average visualizations produced from image crops extracted from regions where the identified relevant features have high activation.\nIn contrast to [15], TCAV [41] proposes Concept Activation Vectors (CAV) that enables model interpretation with a partial coverage of the architecture that defines the base model. To compute CAVs, the dataset is re-grouped to define a binary setting in which one group is composed by images illustrating visual patterns of interest in one class (target class), and the other by a set of random images as other class. Then, a linear classifier is trained to classify the activations of these images as computed by the neurons of a given layer. The resulting classifier, called CAV, is a vector with the same length as the number of neurons in the layer serving as input. This vector highlights the representation of the class of interest in the considered layer. Then, the method uses the CAVs and directional derivatives to calculate the sensitivity of the target class to the CAV. To do so, it computes the difference between output logits of the target class for the original activations and the modified activations, i.e., the summation of the CAV activations and the original activations. This quantitatively measures the sensitivity of the model with respect to the representation of any class in the given layer of interest. Finally, visual feedback is provided via examples of the target class whose activations of the target convolutional layer has higher similarity to the obtained CAV.\nIn contrast to VEBI [15] and similar to TCAV [41], ACE [18] enables model interpretation with a partial coverage of the architecture that defines the base model. It utilizes the k-means algorithm to cluster the feature maps of a given layer for a subset of image superpixels belonging to a given class. The superpixels corresponding to each cluster refer to similar visual patterns depicted in the input images.\nInstead of applying a clustering approach as in ACE, Invertible Concept-based Explanation (ICE) [42] extends ACE by decomposing the feature maps computed from the last convolutional layer via Non-negative Matrix Factorization (NMF) [43]. Following NMF, the feature maps A∈R h×w×d are decomposed into three components namely dimensionreduced feature maps A ∈ h×w×d , dimension reducer matrix V ∈ d ×d and the residual error U . 
During the training phase, the reducer matrix V is trained on the images from a given class in order to decrease the complexity of the feature maps' channels d. Afterwards, the parameters of the reducer matrix are fixed and considered as class-relevant vector bases representing directions for different representations in the latent space. At test time, given computed feature maps for a set of test images, the reducer matrix is applied to generate new feature maps with lower number of channels (i.e., A ∈ h×w×d ) for each feature maps pertaining to an image. Then, Global Average Pooling is applied on each channel of the new feature map and the resulting value is considered as its score. The images with higher scores are selected. Then, for each selected image, the channel with the highest score is chosen and binarized. The binarized map is resized to the size of the image and overlaid over it to illustrate the extracted visual pattern. The fidelity of the learned reducer matrix is evaluated by measuring the effect that using inverted feature maps, computed via NMF, have on classification accuracy. In addition, the consistency of the patterns depicted in the generated visualizations is evaluated in a user study.\nConcept Attribution [44] identifies class-specific internal units in two stages. In the first stage, it learns a global input feature occluder for a given class which changes the prediction on the image with the lowest input feature perturbation. In the second stage, it aims to assign a class-specific weight to each convolutional filter via the obtained global input feature occluder. To do so, it aggregates the difference between activations computed from the original input images and the modified (occluded) ones. These differences are then considered as the weights of their respective convolutional filters. To enable visualization of the filters with the highest score, it utilizes the technique of internal representation visualization [45] to synthesize images which maximise the activations generated by these filters. These synthetic images illustrate the features encoded by the identified filter.\n[31] provides interpretation by extracting class-specific subnetworks which include critical internal units relevant to a given target class. Here we refer to this method as Critical Subnetworks. To do so, the method assigns gates/weights to each internal convolutional filter in the base model. These gates are expected to represent importance weights of those filters for a given class. During the training phase of these gates, the output feature maps are multiplied by their corresponding gate resulting in new feature maps which are passed to the higher layers. The method learns these gates by minimizing binary cross-entropy (BCE) loss such that the extracted subnetwork has an accuracy close to that of original network while it deactivating class-irrelevant filters. In addition, to extract each subnetwork for a given class, a binary dataset is created including images from the target class (i.e., positive class) and images from the rest of the classes (i.e., negative class).\n[20] has proposed a framework, called PACE, which is defined by a set of autoencoders whose latent representation vectors are trained specifically with respect to each class of interest. The encoder components transform the feature maps of the input images computed by a part of the model (i.e., one convolutional layer), into the latent space vectors. 
Then, the decoder components project the vectors back to the space of the convolutional feature map. The learning of relevant features per class occurs in these latent space vectors. This is achieved by measuring the similarity matrices between representations of the latent space vectors and the encoder representations. The similarity matrices w.r.t the learned representations of a given class can be treated as explanation masks to recognize the relevant region in the input images after suitable resizing.\nDiscussion. While VEBI [15] and PACE [20] utilize the images of all classes to identify/learn relevant features in the latent representations, the Class Scoring Model [37], [38], [39], TCAV [41], ACE [18], Critical Subnetworks [31], ICE [42], and Concept Attribution [44] require to be run in separate stages considering image examples from one class at a time.\nIn addition, regarding the Annotation Dependency factor, the Class Scoring Model [37], [39], VEBI [15], TCAV [41], Critical Subnetworks [31] and PACE [20] rely only on imagelevel labels. In contrast, [38], ACE [18], ICE [42], and Concept Attribution [44] are completely independent of any image label and/or pre-defined annotations.\nFurthermore, regarding the Feedback Modality factor, the methods utilize different modalities to enable interpretation of the internally-encoded representations. For example, generating synthetic images depicting the internal representations in the Class Scoring Model [37] and [38], creating average visualizations of image crops w.r.t an internal unit in VEBI [15]. ICE [42] highlights regions of input images using binarized heatmaps. Other works provide visualizations in the form of similar visual patterns in the dataset (i.e. exemplars) such as image patches in [39], PACE [20], clusters of superpixels in ACE [18], synthesised images maximizing the activations in TCAV [41] and Concept Attribution [44]. Also, providing examples of the dataset in TCAV [41] is another modality to provide interpretation feedback. Examples of the feedback modality of each method can be seen in Fig. 3, 5-7.\nRegarding the Feedback Semanticity, none of the methods guarantees that the provided interpretation feedback will have a semantic meaning.\nRegarding the Architecture Coverage factor, while VEBI [15], Concept Attribution [44], and Critical Subnetworks [31] enable interpretation by considering the feature maps of all the convolutional layers, the other reviewed works only consider the feature maps of a small part of the model, thus reducing the level of interpretation of the model. Furthermore, with the exception of [38], [39], VEBI [15], and Critical Subnetworks [31], none of the discussed works are able to link the internal units of the base model with the identified/learned relevant representations, thus reducing their intelligibility." 
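As a concrete illustration of the CAV idea summarized above, the following is a compact, hypothetical sketch (not the original TCAV implementation): a linear classifier separates concept activations from random activations, its normal vector serves as the CAV, and class sensitivity is scored via the directional derivative of the class logit along that vector (the gradient form of the finite-difference test described earlier). The activation and gradient arrays are toy stand-ins for a real network's layer outputs.

```python
# Sketch of CAV computation and a TCAV-style sensitivity score.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """concept_acts, random_acts: (n_images, n_units) activations of one layer."""
    X = np.vstack([concept_acts, random_acts])
    y = np.hstack([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_.ravel()            # normal vector of the decision boundary
    return v / np.linalg.norm(v)

def tcav_score(cav: np.ndarray, logit_grads: np.ndarray) -> float:
    """logit_grads: (n_images, n_units) gradients of the target-class logit
    w.r.t. the same layer's activations; the score is the fraction of images
    whose directional derivative along the CAV is positive."""
    return float(np.mean(logit_grads @ cav > 0))

# toy stand-in data
rng = np.random.default_rng(0)
concept_acts = rng.normal(0.5, 1.0, size=(50, 128))
random_acts = rng.normal(0.0, 1.0, size=(50, 128))
grads = rng.normal(0.1, 1.0, size=(200, 128))
cav = compute_cav(concept_acts, random_acts)
print("TCAV score:", tcav_score(cav, grads))
```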
}, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Class-Agnostic", "publication_ref": [ "b32", "b33", "b32", "b32", "b33", "b32", "b33", "b45", "b46", "b14", "b46", "b31", "b47", "b15", "b16", "b28", "b49", "b50", "b18", "b19", "b51", "b46", "b47", "b33", "b16", "b28", "b31", "b48", "b18", "b51", "b32", "b33", "b36", "b37", "b33", "b16", "b28", "b45", "b18", "b48", "b46", "b47", "b51", "b31", "b15", "b16", "b28", "b45", "b46", "b15", "b16", "b28", "b46", "b45", "b15", "b16" ], "table_ref": [], "text": "In contrast to Class-Specific methods, Class-Agnostic methods enable interpretation by identifying relevant latent elements without exploiting or imposing class-specific constraints.\nEarly works of this category followed the internal representation inversion approach, more specifically Feature inversion [33] and Network Inversion [34]. [33] put forward the feature inversion approach to reconstruct an image from the internal representation of a given convolutional filter. The resulting image aims to reveal features learned by a convolutional filter. To do so, the method applies an internal representation reconstruction loss function which considers the Euclidean distance between internal representations of a given image and the reconstructed one plus a regularizer enforcing a natural image prior. This loss function is minimized using the gradient descent technique.\nDifferent from Feature Inversion [33] which minimizes an internal representation reconstruction error for a specific target image in the dataset, the Network Inversion method [34] minimizes an image reconstruction error. To do so, the loss function in Network Inversion measures the intensity difference between an original image and the reconstructed one by a network. The network is fed by the internal representation of a given convolutional layer and outputs a reconstructed image which is the input to the defined loss function. In this method, the network weights are trained in such a way that the loss error between the original images and their reconstructed counterparts is minimized.\nThe methods explained above namely; Feature Inversion [33] and Network Inversion [34] have studied the encoded features by generating visualizations from inverted internal representations. There are other Class-Agnostic Post-Hoc methods proposing different methodologies to interpret the encoded features. For instance, even thought it is not a model interpretation method per se, the methodology used by [46] for investigating the emergence of object detectors in models is quite related. More concretely, given a convolutional filter, the method, first, collects the images that are highly activated by that convolutional filter. Second, given each of the images in the set, the method randomly selects multiple patches of the images and pushes them through the model. Third, the difference between activations produced by the original image and its patches is computed and considered as discrepancy map. Next, the average of all discrepancy maps are calculated. Finally, a region of interest is highlighted by the average map for all the collected images. Also, in this work, convolutional filters are annotated by a semantic label in a user-based procedure.\nTaking the analysis above a bit deeper, [47] identifies sparse sets of internal units encoding visual attributes. To do so, it utilizes activation maps of all the images in the dataset produced by all the layers of a CNN model. 
Then, it formulates a µ-LASSO problem to learn an indicator ω j for each annotated visual attribute j. The indicator ω j represents the indices of internal units encoding the visual attribute j. Being the method that served as inspiration to VEBI [15], led to both methods having some similarities. However, the main difference between [47] and VEBI, lies in the fact that the former is independent of class labels related to the original classification task. Therefore, it is capable of identifying class-agnostic units which represent visual attributes.\nLinear Probes [32] investigates the dynamics of convolutional layers. Feature maps are extracted from each convolutional layer, and a linear classifier is fitted to each layer separately in order to determine the class label. This is done in order to investigate the linear separability of each layer. By studying the accuracy of each classifier, it was observed that the degree of linear separability of the feature maps of the layers increases as we reach the deeper layers.\nDifferent from the Linear Probes that map an internal representation to a class label, [48] proposed a Reversed Linear Probing approach that maps concept labels into a quantized internal representation. To do so, first, the method utilizes the k-means algorithm to cluster the internal representations of a given convolutional layer, pertained to the input images, in order to quantize the representations into cluster indices. Second, in order to introduce the concepts in the form of discrete attribute vectors (i.e., presence or absence of a concept), the method utilizes a set of pretrained models. The output prediction, the predicted classes, of all the pre-trained models for a given input image are concatenated to form an attribute vector. As a result, there is an attribute vector, as concept label, and a cluster indices vector for each image and its internal representation. Finally, the method trains a linear model to map attribute vectors, as concepts, to cluster indices as the quantized form of the internal representations for the given images. Consequently, it can analyze the concepts encoded in the internal representation of a given input image. Hence, an image is linked with the concepts vector predicted by the linear model for its corresponding quantized representation.\nBoth Linear Probing and Reversed Linear Probing provide interpretation in the form of a set of classifiers to understand the internal representations. However, they are not able to assign a human-understandable concept to the internal units, thus reducing the level to which they enable interpretation of the base models.\n[49] enables interpretation by assigning a quantitative factor, called Selectivity Index, to each convolutional filter in a given pre-trained model. The Selectivity Index is defined based on the activations produced by a convolutional filter for a pre-defined number of cropped images whose activations are higher than other cropped images in the dataset. The patches of the images used for each convolutional filter covers a variety of class labels. Hence, the Selectivity Index can be used to quantitatively analyze each convolutional filter either in a Class-Specific or Class-Agnostic manner. This is done by calculating the relative frequency of each class via activations of its image patches in the set. To measure this frequency, first the relative activation of the image patches w.r.t a filter is obtained. 
It is defined as the ratio between the activation of a patch and the maximum activation computed by the filter over all the patches in the set. Second, a normalized summation of the relative activations of the image patches for each class is computed individually. Finally, the classes whose summed relative frequency is higher than a threshold are selected as class label(s) for the investigated filter. In this line, each internal unit can be linked to either a specific class label or multiple class labels. The average of the extracted patches of the images for each unit is then calculated to show the patterns learned by that convolutional filter. Different from the above-mentioned methods, there is a group of Class-Agnostic Post-Hoc methods that measure the alignment between internal representations and annotation masks. One example of this group is Network Dissection [16]. This method is usually applied in conjunction with the Broden dataset, which is composed of images from several existing datasets. This provides Broden with a vast variety of semantic concepts and their corresponding annotation masks for textures, objects, and object-parts. Using this external dataset, the dissection method measures the alignment between thresholded feature maps computed by the convolutional filters of a given base model and annotation masks from Broden. Then, the semantic label whose annotation masks have the highest overlap with the feature maps is assigned to the filter that produced the activations. As a result, a list is produced indicating the classes/semantic concepts from the Broden dataset that were matched by the internal activations of a given base model.
Inspired by Network Dissection, [17] proposed Net2Vec, a method for quantifying and interpreting the level to which meaningful semantic concepts are encoded by filters in CNNs. Different from Network Dissection, which aims to link a single unit with an annotated semantic concept, in Net2Vec this assignment is done in a many-to-one manner. More specifically, the feature map M_k(x_i) produced by the weighted sum of multiple k filters is linked to a single concept. Compositional Explanations [29] extended Network Dissection to find logical compositions of abstract concepts encoded by the convolutional filters of the last convolutional layer. The intuition is that a convolutional filter may not be just a detector for one concept, but rather a composition of complex detectors characterizing multiple concepts. This is also different from Net2Vec, which finds a combination of convolutional filters encoding only one concept. Compositional Explanations modified the Intersection-over-Union step, used by Network Dissection to measure overlap w.r.t. semantic concepts, to consider logical composition operations such as disjunction (Or), conjunction (And), and negation (Not) across different concepts. The resulting method is different from Network Dissection, where only the logical conjunction (And) between an internal representation and one concept is considered. Therefore, Network Dissection assigns each internal unit to a single concept, while Compositional Explanations assigns a convolutional filter to a composition of pre-defined concepts. Since the problem of finding the best logical composition of concepts requires an exhaustive search, the method utilizes the beam search algorithm to find an approximate solution to the problem.
[19] proposed an interpretation method which is inspired by topic modeling methods [50], [51].
Thus, throughout this paper, we refer to it as Topic-based interpretation. The method from [19] discovers topics within the activations of a given layer which are shared across all the classes of interest. These topics at the same time represent visual patterns in the dataset. However, the visual patterns covered by the identified topics do not necessarily possess a semantic meaning. Thus reducing their interpretation capability.\nSimilar to PACE [20], from the Post-Hoc Class-Specific category, [52] utilizes generative models to provide a level of interpretation for the encoded features. More specifically, it applies a discrete variational autoencoder (DVAE) on the feature maps of a given layer to learn a binary compressed representation that drives the predictions made by the base model. Given the binary nature of the compressed representation, the method applies an intervention mechanism such as a flip on the encoder output to modify the reconstructed image. Then, the originally reconstructed image is qualitatively compared with the modified one to detect whether there is any bias in the representations internally learned by the model. While this method is capable of generating visualizations from learned compressed representations, these visualizations do not necessarily possess a semantic meaning.\nDiscussion. Regarding the Annotation Dependency factor, [47] depends on annotations of visual attributes, while the attributes used in Reversed Linear Probing [48] are in the binary vectors created by concatenating class labels predicted from a set of models. Network Dissection [34], Net2Vec [17], and Compositional Explanations [29] require expensive pixel-level annotations for their respective procedures. This dependency enables these methods to link internal units with a semantic meaning. In contrast, Linear Probes [32], Selectivity Index [49], Topic-based interpretation [19], and [52] utilize image-level class labels in their interpretation procedure.\nRegarding the Feedback Modality factor utilized in the methods, Feature Inversion [33] and Network Inversion [34] are able to generate a synthetic image for each given input image. This is similar to those of Class Scoring Model [37] and [38], which can illustrate the patterns learned by internal units. Unlike these methods, Network Dissection [34], Net2Vec [17], and Compositional Explanation [29] highlight the regions for a set of input images whose activations, computed by the given convolutional filter(s), have the highest overlap with the annotations of a given semantic concept. In addition, [46] and Topic-based interpretation [19] produce visualizations of the image patches corresponding to the learned relevant features. Selectivity Index [49] generates visualizations from the average of images patches corresponding to the identified internal relevant units. On the contrary, instead of producing image patch visualizations, [47], Reversed Linear Probing [48], and [52] present a set of image exemplars containing similar patterns respective to identified internal units, learned clusters and decoders, respectively. Using a different modality, Linear Probes [32] reports the accuracy of the learned classifiers as the feedback of analyzing the linear separability of the internal representations encoded in the layers. 
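To make the linear-probing idea discussed above concrete, the following is a minimal sketch rather than the implementation from [32]: intermediate feature maps of a frozen CNN are pooled into vectors and a separate linear classifier is fitted per layer, whose accuracy serves as a rough proxy for the linear separability of that layer. The toy backbone, the probed layer indices, and the random data are illustrative assumptions.

```python
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a pre-trained, frozen base model (assumption: any frozen CNN works here).
base = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # block 0 ends at index 2
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # block 1 ends at index 5
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),                  # block 2 ends at index 7
).eval()

# Random images/labels stand in for the dataset used to train the base model.
images = torch.randn(200, 3, 32, 32)
labels = torch.randint(0, 5, (200,)).numpy()

def pooled_features(model, x, upto):
    """Run the model up to (and including) module index `upto`
    and global-average-pool the resulting feature map."""
    with torch.no_grad():
        h = x
        for i, m in enumerate(model):
            h = m(h)
            if i == upto:
                break
    return h.mean(dim=(2, 3)).numpy()        # shape (N, channels)

# Fit one linear probe per probed layer and report its (training) accuracy.
for layer_idx in [2, 5, 7]:                   # output indices of the three blocks above
    feats = pooled_features(base, images, layer_idx)
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(f"layer {layer_idx}: probe accuracy = {probe.score(feats, labels):.2f}")
```

In practice a held-out split would be used for the probe accuracy; the pattern of increasing accuracy towards deeper layers is what the probing analysis looks for.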
With the exception of Network Dissection [16], Net2Vec [17], and Compositional Explanation [29], none of the above-mentioned methods guarantee, or provide a quantitative procedure to verify, that the highlighted/extracted regions in the image data necessarily possess a semantic meaning. Examples of the feedback modality of each method can be seen in Figs. 3-7.
Taking the capability of providing semantic feedback (i.e., the Feedback Semanticity factor) into account, it was observed that [46], [47], Network Dissection [16], Net2Vec [17], and Compositional Explanations [29] effectively address the task of linking internal units of a base model with semantic meaning. This capability was not possible in the methods from Sec. 2.1.1. Moreover, apart from these works, none of the other reviewed works in the Post-Hoc Class-Agnostic category are able to associate semantic labels with the internal units of a given pre-trained model, nor to assign semantic labels to the provided feedback, i.e., the generated visualization feedback of the interpretation.
Finally, regarding the Architecture Coverage factor, [47] considers feature maps produced from all convolutional layers to provide interpretation. Along this line, although [46], Network Dissection [16], and Net2Vec [17] provide insights for each convolutional filter and layer, respectively, they have to be run separately on one filter or a small set of filters at a time. A possible reason is that their methodology provides interpretation for limited parts of a model rather than considering the relations among them. In contrast, the other reviewed works in this category cover the representations in the architectures only partially. Hence, these methods are not able to provide a complete interpretation of the base models." }, { "figure_ref": [], "heading": "Interpretable-by-Design Methods", "publication_ref": [], "table_ref": [], "text": "Methods belonging to this category follow the idea of designing learning algorithms so that the resulting model, after training, has specific properties that make it interpretable/explainable without the need of additional post-hoc processes. As before, we group the discussion of methods of this type based on the Task Specificity factor. Then, additional factors are discussed gradually within each group." }, { "figure_ref": [ "fig_5" ], "heading": "Class-Specific", "publication_ref": [ "b52", "b53", "b53", "b54", "b53", "b53", "b54", "b21", "b21", "b22", "b21", "b23", "b34", "b55", "b21", "b20", "b21", "b56", "b34", "b35", "b34", "b52", "b53", "b54", "b21", "b23", "b34", "b35", "b56", "b22", "b21", "b22", "b53", "b21", "b23", "b34", "b35", "b22", "b53", "b54", "b54", "b52", "b53", "b15", "b52", "b53", "b54", "b21", "b23", "b34", "b35", "b22" ], "table_ref": [], "text": "One of the early deep Class-Specific Interpretable-by-Design methods is Capsule Networks, proposed by [53]. Each layer contains groups of neurons called capsules. The activity vector of these capsules encodes spatial information as well as the probability of an object or object-part being present. This is done by introducing an iterative routing-by-agreement mechanism which preserves the spatial relations among encoded features. In this mechanism, lower-level capsules model lower-level features coming from the input and link their outputs to capsules in the following layers.
In order to determine which higher capsule should be routed to, the method defines a weight matrix representing the agreement relation between the lower capsules and the higher ones. Therefore, the lower capsule computes a prediction vector of the higher capsule by multiplying its own output by the weight matrix. Although, in theory, the method is expected to preserve the spatial relations among the encoded features, it does not provide any feedback modality to illustrate the learned spatial relations, nor measuring their alignment with semantic concepts.\nDifferent from Capsule Networks, which introduce a new module/component (capsule layers) for inducing model interpretability, [54] proposes Interpretable Convolutional Filters. This method adds a new term to the training loss, called filter loss, to regularize the representations learned by each convolutional filter in a given layer. This filter loss pushes a convolutional filter to focus its activation on a specific spatial location of a class. To do so, it defines a set of masks with the same spatial resolutions of the feature maps computed by a given filter. Each mask follows a Gaussian distribution in a specific location as the ideal distribution of activations computed by that filter. Then, the filter loss is defined as the negative mutual information between a set of feature maps computed by the filter and the masks of the target class. Minimizing this loss guarantees that the convolutional filter encodes a distinct \"part\" of a class. Moreover, it constrains the activations of a filter to a single part of an object, rather than repetitively appear on different regions covering the object.\nTaking the Interpretable Convolutional Filter [54] into account, [55] enables interpretation of a CNN through the construction of class-specific decision trees along with the modification of the last convolutional layer to focus on the object parts. The decision tree is constructed based on a bottom-up hierarchical clustering approach. Here, we refer to this method as Interpretable CNN Decision Tree. In order to simplify the construction of the tree, the method re-trains the last convolutional layer using the filter loss proposed in [54] prior to following the procedure for constructing the tree. This step fine-tunes the last convolutional layer to recognize specific object parts. Next, the receptive fields of each convolutional filter are computed. Then the part from the images which frequently appears in the receptive filed is considered as the part label for the filter.\nRegarding the construction of the tree, in the first step, each node encodes the specific rationale for the prediction for each image. In the second step, in an iterative manner, two nodes with the highest similarity are merged to create a new node in the second level of the tree. Then, for each new node, a linear loss function is defined to learn a sparse weight vector representing the convolutional filters. The resulting sparse vector shows the contribution of the filters to the predictions at a specific level of the tree for the set of images merged in the newly created node. Therefore, each node of the tree encodes the internal representations of the convolutional layer into elementary concepts of object parts. The method additionally measures the contribution of each of the parts to the prediction at each level of the tree using the learned sparse vector. 
Hence, each node in each level of the tree is considered as a partial description of the decision mode of the CNN on that level of the tree for a given prediction. Finally, given a testing image, the CNN computes an output score for the prediction. Then, the decision tree estimates the decision mode of the prediction at each level. Since each node in the tree encodes a set of convolutional filters such that each one represents a specific part, each estimated decision mode explains the contribution value of the parts, presented at that level, to the prediction. In addition, it is able to highlight the image examples as well as the image patches encoded in each node via visualizations of the receptive fields of the convolutional filters indexed in that node.
Different from Interpretable Convolutional Filter [54] and Interpretable CNN Decision Tree [55], which use a loss term to steer the last convolutional layer to learn interpretable representations, [22] introduced a network architecture which includes a new interpretable module. The architecture, called prototypical-part network (ProtoPNet), includes a prototype layer between the last convolutional layer and the fully connected layers. This layer includes class-specific trainable vectors which are responsible for learning prototypical parts of their target class.
During the training phase, given an input image, the model is able to identify several parts of the input with high similarity to the trainable prototypical vectors of some classes. To do so, the method follows an iterative training algorithm composed of two stages. In the first stage, keeping the classifier parameters fixed, the method jointly optimizes the parameters of the convolutional filters and the prototypical vectors in the prototype layer. The proposed loss function computes the L_2 distance between the feature maps of the last convolutional layer and each prototypical vector in order to cluster the representations of the images around similar prototypical parts of their ground-truth classes. In the second stage, keeping the parameters of the convolutional filters and the prototype layer fixed, the method optimizes the parameters of the classifier in order to classify the input images via the learned prototypical vectors. Moreover, the method can generate a heatmap visualization of the learned prototypical parts for each input image.
Instead of learning prototypical parts of the input images as in ProtoPNet [22], Concept Whitening [23] puts forward a built-in module that is composed of whitening and orthogonal transformations. These operations aim at aligning the latent space of the internal units with similar visual patterns emerging in a predefined set of images. In this method, two types of data are considered for the training phase. First, D = {(x_i, y_i)}_{i=1}^{n} is a dataset that includes n samples and their labels for training the base model. Second, there are m auxiliary datasets D_1, D_2, D_3, ..., D_m (D_m ⊂ D), where each one contains instances that depict a common visual pattern, used for optimizing the orthogonal transformation matrix. Similar to the ProtoPNet training algorithm [22], Concept Whitening has a training algorithm including two stages. In the first stage, the parameters of the base model are optimized.
In the second stage, keeping the base model's parameters fixed, the parameters of the transformation module are optimized to separate the latent spaces of the last convolutional layer such that each direction in the latent space encodes a specific visual part in the dataset.\nInspired from ProtoPNet, [24] combined the prototype layer structure of ProtoPNet and Attention-based MIL pooling. Their method, known as ProtoMIL, is capable of learning representations for a bag of instances. [35] have proposed TesNet architecture to improve the diversity and discriminative properties of the prototype layer in ProtoPNet. To do so, first, an orthonormal constrain is added to the loss function to push prototypical vectors, within prototype layer, from different classes apart from each other. Second, the method applies Projection Metric [56], a distance metric on the Grassmann manifold, to separate the prototypes of each pair of classes. These two constraints help to minimize the correlation between prototypes within and between each pair of classes. This method utilizes as same two-stage training algorithm as that of ProtoPNet [22].\nDifferent from existing case-based and prototype-based methods namely; Case-based reasoning [21], ProtoPNet [22], Proto Tree [57], and TesNet [35], which use spatially rigid prototypes, Deformable ProtoPNet [36] have recently proposed the use of spatially flexible prototypes. This property enables prototypical vectors withing prototype layers to adaptively change their relative spatial positions w.r.t to the input images. As a result, the prototypical vectors will be robust to variations in pose and context, i.e., detect object features with a higher tolerance to spatial transformations, as well as improve the richness of their visualizations. This spatial flexibility property is defined by an offset function which adds some offset to the location of input patches of feature maps, thus enabling each prototypical part to move around when it is applied on a spatial location. Moreover, following a similiar orthogonality loss as in TesNet [35], Deformable ProtoPNet defines an orthogonality loss between all the prototypical vectors within a class to avoid overlapping between them. This is different from TesNet which applies this property among each pair of prototypical vectors in a class-agnostic manner.\nDiscussion. Regarding the Annotation Dependency factor, Capsule Networks [53], Interpretable Convolutional Filter [54], Interpretable CNN Decision Tree [55], Pro-toPNet [22] and its Class-Specific extensions namely, Pro-toMIL [24], Tesnet [35], and Deformable ProtoPNet [36] as well as its Class-Agnostic extension ProtoTree [57] (reviewed in Sec. 2.2.2) depend only on image-level labels. In contrast, CW [23] requires, in addition to the image label, an extra dataset for the procedure of latent space alignment, i.e., training the transformation module.\nFurthermore, it should be noted that ProtoPNet [22] and the extensions mentioned above apply a built-in module after the convolutional layers to provide flexibility in the feature learning process taking place in those layers. In a similar, in the context of capsule networks, specific components are present between capsule layers that determine the way activations are routed across layers. In contrast, CW [23] enforces each convolutional filter of a given layer to align its latent representation to specific input images. 
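As an illustration of the prototype-layer mechanics described above, the sketch below compares every spatial position of a convolutional feature map against a set of trainable prototype vectors via squared L_2 distances, converts the distances into similarity scores, and max-pools them into per-prototype activations fed to a linear classifier. The toy backbone, tensor shapes, and the log-based similarity are assumptions for illustration, not the exact ProtoPNet [22] implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayer(nn.Module):
    """Minimal prototype layer: P prototypes of depth D compared (squared L2
    distance) against every 1x1 spatial position of a D-channel feature map."""
    def __init__(self, num_prototypes=10, depth=64, eps=1e-4):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, depth, 1, 1))
        self.eps = eps

    def forward(self, feats):                                  # feats: (N, D, H, W)
        # ||f - p||^2 = ||f||^2 - 2 f.p + ||p||^2, computed densely with conv2d.
        f_sq = (feats ** 2).sum(dim=1, keepdim=True)                     # (N, 1, H, W)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        cross = F.conv2d(feats, self.prototypes)                         # (N, P, H, W)
        dist = (f_sq - 2 * cross + p_sq).clamp(min=0)
        sim = torch.log((dist + 1) / (dist + self.eps))                  # high when close
        return sim.amax(dim=(2, 3)), dist         # per-prototype activation, distance maps

backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())     # toy stand-in
proto = PrototypeLayer(num_prototypes=10, depth=64)
classifier = nn.Linear(10, 5)                     # 10 prototypes -> 5 classes

x = torch.randn(4, 3, 32, 32)
activations, _ = proto(backbone(x))
logits = classifier(activations)
print(activations.shape, logits.shape)            # torch.Size([4, 10]) torch.Size([4, 5])
```

The distance maps returned alongside the activations are what prototype-based methods upsample and overlay on the input to produce their heatmap feedback.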
Complementary to the previous efforts, Interpretable Convolutional Filter [54] defines a loss term to regularize the representations of convolutional filters, instead of injecting a direct transformation module.
Regarding the Feedback Modality, the output of each prototypical vector in ProtoPNet [22], ProtoMIL [24], TesNet [35], and Deformable ProtoPNet [36] can be visualized for the input images which have the closest patch to those encoded by the prototypical vectors. This visualization can be generated in two forms: image patch and heatmap visualization. The heatmap visualization is generated by superimposing the similarity map computed between a feature map and a prototypical vector on the input image. In addition, these methods highlight the patches of input images whose feature maps have the closest distance (highest similarity) to one of the learned prototypical vectors. Figs. 5-7 show examples of the feedback modality of each of the discussed methods in this category.
In contrast, CW [23] provides only exemplar images, illustrating similar patterns, which produce the highest activation for a given convolutional filter. Moreover, Interpretable Convolutional Filter [54] and Interpretable CNN Decision Tree [55] compute the receptive fields from the spatial locations in the feature maps with the highest activation value of the filters to highlight the visual parts encoded in the filters. This is done in order to show how each convolutional filter has been aligned to a specific visual pattern. Interpretable CNN Decision Tree [55] additionally illustrates examples from the dataset showing similar patterns.
Different from the above, Capsule Networks [53] do not provide any feedback from the learned representations in the capsule layers. A common practice to produce visualizations of the representation is by attaching a decoder and reconstructing the image when a given part of the representation is ablated.
Regarding the Feedback Semanticity, prototype-based methods only generate visualizations of the learned prototypes. With the exception of Interpretable Convolutional Filter [54], which takes advantage of Network Dissection [16] to quantitatively evaluate the semantic meaning of filter representations, there is no guarantee that the extracted patterns align with semantic concepts.
Finally, concerning the Architecture Coverage factor, these methods only provide partial interpretability. The capsule units in the capsule layer in Capsule Networks [53], the interpretable filters in the last convolutional layer in Interpretable Convolutional Filter [54] and Interpretable CNN Decision Tree [55], the prototype layer in ProtoPNet [22], ProtoMIL [24], TesNet [35], and Deformable ProtoPNet [36], as well as the filters wrapped by the transformation module in CW [23], are the only units that become interpretable; the features encoded in the rest of the units that define the architecture (convolutional and fully-connected layers) remain opaque." }, { "figure_ref": [], "heading": "Class-Agnostic", "publication_ref": [ "b20", "b21", "b20", "b57", "b21", "b20", "b21", "b23", "b34", "b35", "b57", "b56", "b21", "b56", "b57", "b20", "b20", "b57", "b56", "b21", "b23", "b34", "b35", "b20", "b57", "b56" ], "table_ref": [], "text": "One example of this category is Case-based reasoning through prototypes, as proposed in [21]. The method utilizes an autoencoder to reduce the dimensionality of the input and to learn useful features for prediction.
This is followed by a prototype layer which introduces a set of prototypical vectors shared across classes. This is opposite to ProtoPNet [22], which aims to learn representations that are very close or identical to an exemplar from the training set. We mentioned ProtoPNet as a Class-Specific Interpretable-by-Design method, since the method enforces the network to learn prototypical parts that are specific to each class by involving the distances to the class-specific prototypical vectors in the classification task. In contrast, in [21] the distances to all the prototypical vectors contribute to the probability prediction for each class. To make the class-specific prototypical vectors shareable among classes, ProtoPShare [58] extends ProtoPNet [22] by introducing a data-dependent similarity metric. This metric identifies similar learned prototypical vectors among classes; this is followed by a pruning step in order to reduce the number of prototypical vectors. More specifically, after the prototype layer is trained, the introduced data-dependent similarity computes the inverse of the distance between the outputs of each pair of prototypical vectors over all training images. Then, one of the similar/closest prototypical vectors is removed, and the weights of the remaining prototypical vectors are shared among both classes. Finally, the classifier part is finetuned.
Different from Case-based reasoning [21], ProtoPNet [22], ProtoMIL [24], TesNet [35], Deformable ProtoPNet [36], and ProtoPShare [58], which use a prototype layer followed by a fully-connected layer as a classifier, ProtoTree [57] utilizes a decision tree located after the final convolutional layer to learn a binary tree classifier of prototypical vectors shared among classes. Each node inside the tree is a trainable prototypical vector which is trained through a prototype learning procedure [22]. Following this procedure, leaf nodes learn class distributions. Hence, a path from the root to a leaf represents the classification rule. Additionally, during the training phase each internal node (a prototypical vector) can generate a visualization for the training input sample which has the highest similarity (i.e., lowest L_2 distance) w.r.t. it. Therefore, ProtoTree is able to provide a hierarchical visualization of the decision-making process followed by the model.
Discussion. Regarding the Annotation Dependency factor, similar to the Class-Specific Interpretable-by-Design methods, these methods are independent of any external annotations, and only rely on class labels.
Regarding the Feedback Modality factor, ProtoTree [57] and ProtoPShare [58] are able to generate heatmap visualizations of the relevant units, i.e. learned prototypical vectors, as well as to extract image patches for the images which have the closest patch to one of the learned prototypical vectors in the tree. In contrast, Case-based reasoning [21] does not provide a feedback visualization from the learned representations. It uses only the reconstructed input images as a means of visualization. Fig. 7 shows examples of the feedback modality in the ProtoTree and ProtoPShare methods.
Regarding the Feedback Semanticity factor, it can be noted that similar to the Class-Specific Interpretable-by-Design methods, Case-based reasoning [21], ProtoPShare [58], and ProtoTree [57] do not guarantee any alignment w.r.t. semantic concepts.
The focus of these methods lies on justifying the prediction made by the model through the representations encoded in the prototype layer.\nFinally, concerning the Architecture Coverage factor, similar to ProtoPNet [22], ProtoMIL [24], TesNet [35], and Deformable ProtoPNet [36] the Class-Agnostic Interpretable-by-Design methods namely, Case-based reasoning [21], ProtoP-Share [58], and ProtoTree [57] still suffer from the weakness of partial interpretability of the base model." }, { "figure_ref": [ "fig_2" ], "heading": "DISCUSSION", "publication_ref": [], "table_ref": [], "text": "In this section, we provide a discussion of the different type of interpretation methods over the defined factors (Table 1). Here, we further extend the focused category-specific discussions provided earlier in Section 2 by addressing the overarching trends observed in Fig. 2. In this figure, the horizontal axis shows the qualitative properties while the vertical axis illustrates the number of works, covered in this study, which fall within a given property. The red and blue color indicate the Post-Hoc and Interpretable-by-Design categories, respectively." }, { "figure_ref": [], "heading": "TABLE 1", "publication_ref": [], "table_ref": [], "text": "An integration of the different aspects of the related works in terms of qualitative properties. The column Feedback Semanticity categorizes different feedback modalities into five categories: (1) Synthetic Images (SI), (2) Image Patches (IP), (3) Exemplar Images (EI), (4) Average Images Patches (AIP), and (5) Heatmap Visualization (HV)." }, { "figure_ref": [ "fig_2" ], "heading": "Interpretation Capability Integration", "publication_ref": [ "b58" ], "table_ref": [], "text": "This factor describes interpretability capabilities are injected at design time, i.e. Interpretable-by-Design, or in a Post-Hoc manner. According to Table 1, the history of methods following a Post-Hoc approach is much older than that of Interpretable-by-Design approaches. Also, as can be seen in Fig. 2, the majority of the reviewed interpretation methods have followed the Post-Hoc approach. Possible reasons for such a trend could be the following. First, the interpretation of deep models was identified as a problem of interest following the seminal work of [59] and the remarkable results it obtained in the ImageNet ILSVRC'12 challenge. As such, initial interpretation efforts were formulated for scenarios where the base models were already in place, i.e after the training phase. Second, the methodology followed by these methods do not need to touch the original model or its training procedure. Hence, these methods do not affect the inner-workings and performance of already pretrained methods. This reduces the design complexity of the algorithms in this approach which makes it simpler than Interpretable-by-Design approaches.\nThe methods following the Post-Hoc modality provide interpretations through a model approximation strategy. This raises questions regarding the fidelity or faithfulness of the provided interpretation feedback. More specifically, this issue is related to the level to which the provided interpretation feedback are faithful to the representations learned by the model. Moreover, due to their characteristic of operating on top of an existing model, Post-Hoc methods tend to require additional computations than their Interpretable-by-Design counterparts. As consequence of these weaknesses, the Interpretable-by-Design research line has recently become very active. 
The methods following this approach are able to reveal, to some level, the inference procedure followed by the model. However, the additional built-in modules and specific representation learning algorithms used by this type of methods increase their design complexity." }, { "figure_ref": [ "fig_2" ], "heading": "Task specificity", "publication_ref": [ "b17", "b30", "b36", "b38", "b41", "b43" ], "table_ref": [], "text": "In this section, we discuss the Task Specificity factor addressed by the model interpretation methods. As can be noted in Figure 2, the number of proposed methods in each of the Class-Specific and Class-Agnostic categories is roughly the same. However, it can be observed that the number of Class-Agnostic Post-Hoc methods is slightly higher than that of Class-Specific Post-Hoc methods. Moreover, we notice an opposite trend for the Interpretable-by-Design category. The reason for such a trend lies in the methodology and goal of each of these categories. The majority of Class-Specific Post-Hoc methods, such as [18], [31], [37], [39], [42], [44], need to be run in separate stages considering a limited set of examples from one class at a time. This makes these methods computationally expensive. In contrast, Class-Agnostic Post-Hoc methods follow algorithms that do not require this split processing of the data.
On the Interpretable-by-Design side, Class-Agnostic methods have the inherent weakness of not being able to directly link the provided interpretation feedback with the classes of interest. To cope with this limitation, the Interpretable-by-Design research line has recently moved towards learning class-relevant interpretable representations. Consequently, this group of methods provides better insights into the learned representations and their relationship w.r.t. the classes of interest. This characteristic is important for fine-grained classification problems where the subtle differences among the categories are of interest." }, { "figure_ref": [ "fig_2" ], "heading": "Annotation dependency and feedback semanticity", "publication_ref": [ "b15", "b16", "b28", "b46", "b47" ], "table_ref": [], "text": "This section investigates the capability of the interpretation methods to provide semantic feedback as well as their dependency on annotations.
According to Fig. 2, the Feedback Semanticity and Annotation Dependency factors follow, more or less, similar trends. There are several observations that can be made from here.
First, while the majority of the visual model interpretation methods are not able to provide semantic feedback, they are also independent of any additional external annotations.
Second, there is a small group of methods that provide semantic feedback. To do so, these methods use additional annotations of different forms. For example, Network Dissection [16], Net2Vec [17], and Compositional Explanation [29] utilize pixel-level annotations for pre-defined object classes in the dataset. [47] and Reversed Linear Probing [48], on the other hand, rely on image-level attribute annotations.
Third, all the Interpretable-by-Design methods are independent of any external annotations. This, in turn, prevents them from providing any semantically-meaningful feedback. Considering this, extending existing methods to provide semantic feedback, and quantitatively evaluating them against the feedback provided by Post-Hoc methods, can be another research problem in the field.
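As a concrete illustration of the annotation-based alignment mentioned above (e.g. in Network Dissection [16]), the sketch below thresholds a filter's upsampled activation map and scores its overlap with a binary concept mask via Intersection-over-Union. The per-image quantile threshold and the toy arrays are simplifying assumptions; they do not reproduce the original dissection procedure, which uses dataset-wide statistics and the Broden annotations.

```python
import torch
import torch.nn.functional as F

def dissection_iou(act_map, concept_mask, quantile=0.995):
    """IoU between a thresholded, upsampled activation map and a binary concept mask.
    act_map:      (h, w) activations of one filter for one image.
    concept_mask: (H, W) {0,1} pixel-level annotation for one concept."""
    H, W = concept_mask.shape
    up = F.interpolate(act_map[None, None], size=(H, W),
                       mode="bilinear", align_corners=False)[0, 0]
    thr = torch.quantile(up.flatten(), quantile)   # simplified per-image threshold
    binarized = up > thr
    inter = (binarized & (concept_mask > 0)).sum().float()
    union = (binarized | (concept_mask > 0)).sum().float()
    return (inter / union.clamp(min=1)).item()

# Toy data standing in for a real feature map and a Broden-style annotation mask.
act = torch.rand(7, 7)
mask = (torch.rand(112, 112) > 0.9).long()
print(f"IoU(filter, concept) = {dissection_iou(act, mask):.3f}")
```

In dissection-style methods, such a score is computed over the whole dataset for every filter-concept pair, and the concept with the highest score is reported for each filter.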
Also, the most recently proposed methods in the Post-Hoc category do not systematically explore the semantics of their provided interpretation. Hence, developing the methods in this category to provide semantic feedback is still an open problem." }, { "figure_ref": [ "fig_4", "fig_5", "fig_2", "fig_2" ], "heading": "Feedback Modality", "publication_ref": [ "b42", "b22", "b37", "b36", "b51", "b33", "b40", "b32", "b43", "b42", "b22", "b42", "b52" ], "table_ref": [], "text": "This factor describes the form of the feedback provided by a given interpretation method. Based on the discussion conducted in Section 2, we can categorize different feedback modalities into five categories: (1) Synthetic Images (SI) (Fig. 3), (2) Image Patches (IP) (Figs. 4 and 5), (3) Exemplar Images (EI) (Fig. 6), (4) Average Image Patches (AIP) (Fig. 7.a and b), and (5) Heatmap Visualization (HV) (Fig. 7.c-f). Statistics on the use of these modalities are presented in Fig. 2. As can be seen, Exemplar Images (EI) and Image Patches (IP) are the most used modalities to visualize the interpretation feedback among the visual model interpretation methods.
Considering the Post-Hoc category, none of the methods produces heatmap visualizations as part of their provided interpretation feedback. Also, while generating Image Patches (IP) is the most used feedback modality, Average Image Patches (AIP) is the second least common feedback modality used by the methods. This suggests that the identified/learned relevant features might not be analyzed with an appropriate level of depth. This observation arises from the difference between these two types of feedback modality. While IP-based feedback usually highlights the patches with the highest response, the AIP counterpart stresses the consistency among the patches.
Regarding the Interpretable-by-Design category, it is noticeable that none of the methods uses Synthetic Images (SI) or Average Image Patches (AIP) as their feedback modality. Using these modalities to shed light on the features learned by the Interpretable-by-Design methods can provide a deeper intuition on them. As can be seen in Fig. 2, the majority of the Interpretable-by-Design methods can generate Image Patches. Therefore, visualizing the average of image patches, for example in prototype-based methods, can reveal whether the learned prototypical vectors consistently align around a specific visual pattern. Moreover, utilizing the activation maximization technique [43] at test time to generate synthetic images whose feature maps have the closest distance (highest similarity) to a given learned prototypical vector can provide a general conceptual intuition on the encoded visual pattern. Furthermore, in the case that a method is not able to generate either image patches or a heatmap visualization, e.g. Concept Whitening [23], the activation maximization technique [43] can be an alternative approach. Concept Whitening [23] tries to separate the directions of different encoded features in the latent space. As mentioned, the method illustrates only exemplar images for each direction in the latent space. Therefore, there is no guarantee that a clear and consistent visual pattern is illustrated in the exemplar images provided as interpretation feedback.
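Under the assumptions below (a toy model, an arbitrarily chosen target filter, and a simple L2 regularizer), the activation maximization technique [43] referenced above can be sketched as follows; this is only an illustration of the general idea, not the procedure of any specific cited method.

```python
import torch
import torch.nn as nn

# Toy frozen model standing in for a trained base model or a prototype-layer output.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_filter = 7                       # which unit/filter to visualize (assumption)
img = torch.zeros(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    act = model(img)[0, target_filter]              # (H, W) activation of the chosen filter
    loss = -act.mean() + 1e-4 * (img ** 2).mean()   # maximize activation, keep image bounded
    loss.backward()
    optimizer.step()

synthetic = img.detach().clamp(-1, 1)   # the "synthetic image" feedback for this unit
print(synthetic.shape, float(model(synthetic)[0, target_filter].mean()))
```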
Therefore, generating synthetic images for each direction can provide a better intuition on the encoded features in each direction of the latent space. This would ease the qualitative assessment of whether the latent space directions have encoded distinctive visual patterns. Similarly, the activation maximization technique [43] can be applied to the trained capsule layers in Capsule Networks [53]. Since Capsule Networks aim at learning class-specific capsules using routing-by-agreement mechanisms, one can generate a synthetic image whose internal representation maximizes the output probability vectors of capsules relevant to a given class. Then, the relationship among the learned features can be evaluated from the synthetic images." }, { "figure_ref": [ "fig_2" ], "heading": "Explanation capability", "publication_ref": [ "b14", "b30", "b14", "b30", "b28", "b59" ], "table_ref": [], "text": "This section discusses whether the insights extracted via interpretation methods could be further exploited for model explanation purposes. More specifically, whether they could be used to justify the prediction made by the model for a specific input. While this capability is not a necessary requirement for model interpretation methods, its existence would further extend the value and applicability of the interpretation method that possesses it. This explanation capability can be in the form of a visualization, e.g. as proposed in [15], [31]. We refer to this visualization modality as explanation feedback, which is based on the relevant units extracted by the interpretation method for each input image with respect to the decision made by the model. This is different from interpretation visualization in the Feedback Modality factor, where the interpretation methods provide interpretation feedback, i.e., visual feedback of the identified/learned relevant features, regardless of the predictions made by the model.
The statistics of the Explanation Capability factor are presented in Fig. 2. From here we can draw the following observations and analysis.
It is noticeable that only two of the Post-Hoc interpretation methods, namely VEBI [15] and Critical Subnetwork [31], are able to explain the predictions of the model based on the insights extracted by the interpretation method.
VEBI generates visual explanations by computing visualizations of the relevant features (identified by the interpretation procedure) related to the class label predicted for a given input.
Critical Subnetwork follows a Grad-CAM-like method [60] to generate visual explanations. Given an extracted Class-Specific subnetwork, Critical Subnetwork computes the gradient of the output of the predicted class only w.r.t. the convolutional filters identified as class-relevant units. Then, the gradient maps pertaining to the class-relevant convolutional filters are considered for visualization.
Regarding Interpretable-by-Design methods, it can be seen that none of them generate visual explanations. Therefore, enhancing these methods with this capability could be a point of action for future work. This capability, specifically for prototype-based methods, enables the discovery of critical learned Class-Specific or Class-Agnostic prototypical vectors which can be used for justifying the decision-making process of a model. For example, gradients of the model output w.r.t. class-specific prototypical vectors can be computed to produce a saliency map that highlights important region(s) in the provided input."
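To illustrate the direction suggested at the end of the previous section, the following hedged sketch backpropagates the predicted-class logit through a toy pipeline (backbone, prototype-like linear scores, classifier) and weights the convolutional feature map by its gradients, Grad-CAM style, to obtain a coarse saliency map. The pipeline, shapes, and weighting scheme are illustrative assumptions rather than an implementation from any of the cited prototype-based methods.

```python
import torch
import torch.nn as nn

# Toy pipeline: backbone -> global pooling -> "prototype-like" linear scores -> class logits.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
proto_scores = nn.Linear(16, 8)          # stands in for per-prototype similarity scores
classifier = nn.Linear(8, 5)

x = torch.randn(1, 3, 32, 32)
feats = backbone(x)                      # (1, 16, 32, 32)
feats.retain_grad()                      # keep gradients w.r.t. the feature map
scores = proto_scores(feats.mean(dim=(2, 3)))
logits = classifier(scores)

pred = logits.argmax(dim=1).item()
logits[0, pred].backward()               # gradient of the predicted-class logit

# Grad-CAM-like map: channel-wise gradient weights applied to the feature map.
weights = feats.grad.mean(dim=(2, 3), keepdim=True)            # (1, 16, 1, 1)
saliency = torch.relu((weights * feats).sum(dim=1)).detach()   # (1, 32, 32)
print(saliency.shape)
```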
}, { "figure_ref": [ "fig_2" ], "heading": "Architecture coverage", "publication_ref": [ "b41", "b34", "b35", "b56", "b38" ], "table_ref": [], "text": "In this section, we focus the discussion on the portion of the architecture from which insights are extracted by interpretation methods. Studies have revealed that different features are encoded in units/neurons located at different levels of the architecture.
Hence, the produced interpretation feedback should ideally be in accordance with features encoded in all the layers or parts of the architecture of the model being interpreted. The statistics illustrated in Fig. 2 show that only a small set of interpretation methods, which are Post-Hoc, provide interpretation by considering all the convolutional layers of a given architecture. Table 1 shows that recent Class-Specific and Class-Agnostic Post-Hoc methods have a partial coverage of the architecture of the base model. Here a common practice is to focus on the last convolutional layer of a given architecture. Hence, the produced interpretation feedback is limited to a small part of the model. Furthermore, none of the Interpretable-by-Design methods consider the representations encoded in all the layers. Here again, a common practice is to focus on the representations encoded at the last convolutional layer. This, in turn, reduces the level of insights they are capable of producing in their interpretation feedback. In this regard, extending Interpretable-by-Design methods, for example by investigating the possibility of applying prototype layers in all the parts of the architecture, can be a potential research direction in this field." }, { "figure_ref": [], "heading": "Evaluation protocol", "publication_ref": [ "b15", "b16", "b28", "b45", "b17", "b19", "b36", "b32", "b33", "b15", "b16", "b28", "b53", "b51", "b40", "b31", "b47", "b22", "b46", "b14", "b30", "b46", "b14", "b14", "b48", "b21", "b56", "b34", "b57", "b23", "b30", "b54", "b18", "b60", "b40", "b17", "b43", "b22", "b61" ], "table_ref": [], "text": "In this section, we discuss the evaluation protocols followed by the discussed methods to assess the performance of the produced interpretation feedback. These protocols cover both qualitative and quantitative evaluation.
According to our study, we have observed that, with the exception of Network Dissection [16] and its extensions, namely Net2Vec [17] and Compositional Explanation [29], the other visual interpretation methods only provide qualitative examples as interpretation feedback (Section 3.4).
In the cases where a quantitative evaluation is conducted, the methods follow different approaches and goals to quantify the produced interpretation feedback.
User-based evaluation is one of the approaches conducted in [46], ACE [18], and PACE [20]. These user studies are based on questionnaires which include questions about the occurrence of a semantic concept in the visual feedback provided by a given interpretation method. Then, a group of users is asked to assign scores to the presented visualizations. These scores aim to indicate the level of agreement of the users w.r.t. the predefined concepts.
Finally, the interpretation capability of a method is quantified by aggregating the collected scores.
A group of methods, namely Class Scoring Model [37], Feature Inversion [33], and Network Inversion [34], report the representation reconstruction error as a quantitative metric to measure the performance of the produced interpretation feedback.
In other cases where the aim is to quantify the semanticity of the internal representation, the alignment between annotation masks and internal activations is measured. This is specifically performed in Network Dissection [16], Net2Vec [17], Compositional Explanation [29], and Interpretable Convolutional Filters [54].
In some methods, such as [52] and TCAV [41], the cosine similarity between the representation internally encoded by the base model and the representation learned/identified by the interpretation methods is measured. In other cases, namely Linear Probing [32], classifiers are trained on the internal representations of each layer, separately. Similarly, in Reversed Linear Probing [48] and CW [23], classifiers are trained on the representations defined by the learned relevant components to measure the separability of the learned representations. Then, the classification accuracy of these two groups of classifiers is compared as part of the evaluation.
Worth noting is that these classifiers are different from the base model being interpreted.
In various cases [47], [15], [31], a quantitative evaluation is conducted to investigate the relevance of the units identified by the interpretation methods. However, such evaluation is designed with different approaches closely related to the proposed methodology. For instance, [47] and VEBI [15] apply a neuron-perturbation approach where the identified relevant units are systematically occluded by zeroing their output. Then, examples are pushed through the perturbed model and the changes in classification accuracy are tracked.
The assumption here is that the occlusion of the relevant units should lead to significant drops in classification performance. Critical Subnetwork [31] and Interpretable CNN Decision Tree [55] compare the output accuracy between the original model and the extracted one, i.e. the extracted subnetwork or constructed decision tree, respectively. In this regard, the topic-based interpretation method [19] proposes the ConceptShAP metric, adapted from Shapley Values [61], to measure the completeness score of the learned topics for a model prediction. TCAV [41] measures the sensitivity of the output logits of the model by considering the difference between the output logit for the original activations and the activations aggregated by the obtained CAV. In this line, ACE [18], Concept Attribution [44], and CW [23] have adapted TCAV to assign an importance score to their produced interpretation feedback w.r.t. the model accuracy.
As can be seen, there is a clear diversity among the evaluation protocols followed by previous efforts. Furthermore, the followed protocols are tailored to the inner-workings of each interpretation method. This makes a uniform quantitative comparison among existing model interpretation methods problematic.
To address this problem, [62] has recently proposed an evaluation protocol that aims at the quantitative comparison of visual model interpretation methods. More specifically, the proposed method measures the alignment between heatmaps produced from relevant units identified by model interpretation methods and additional annotated semantic elements. These annotations have differ-ent levels of semanticity, e.g. objects, parts, colors, textures, and come from the same dataset used to train the model being interpreted." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model Explanation methods", "publication_ref": [ "b7", "b62", "b63", "b3", "b11", "b64" ], "table_ref": [], "text": "Model explanation methods aim to justify the prediction made by a model for a specific input [8], [63], [64]. Up to now, this research line has been significantly explored, thus introducing a wide terminology and a variety of approaches [4]. Recently, [12] conducted a meta-study on the latest 20 most cited taxonomies/surveys, covering up to 50 model explanation methods, in order to highlight the essential aspects of the state-of-the-art in model explanation. Compared to the model explanation research line, a systematic study of model interpretation methods has remained non-existent.\nDifferent from the above mentioned surveys, [65] discusses issues such as faithfulness raising in model explanation methods in high stakes applications. Accordingly, it encourages policy makers towards using interpretable machine learning models instead of following post-hoc explanation procedures. Furthermore, the work investigates some challenges in the design of interpretable machine learning. These challenges include architecture design, optimization algorithm construction, and scarcity of domain experts for the analysis of the feedback provided by interpretable machine learning models in high-stakes applications.\nHere, we have provided a framework to classify visual model interpretation methods according to the defined axes/factors. The proposed framework reveals the strengths and drawbacks of current model interpretation efforts. It also sheds light on possible research gaps in this line of research that can be explored further." }, { "figure_ref": [], "heading": "Model Interpretation Methods", "publication_ref": [ "b2", "b3", "b4", "b3", "b4", "b65" ], "table_ref": [], "text": "Model interpretation methods aim to analyze the internal representations learned by the model. One of the early related taxonomies in this research line was published in 2018 by [3], a review on several research directions in the area of visual interpretability of CNN models. These directions cover some works in the following areas: (1) visualizing the internal representations, (2) diagnosing representation flaws in CNNs, (3) disentangling internal representations into graphical models, (4) developing the architecture of CNNs to include built-in modules, such as R-CNN and Capsule Networks, to process different patterns in the internal representations. Although this was the first time that some of these research lines were reviewed, there are some gaps in that study. First, it is important to note that while it provides insights into the works proposed along each direction, it did not specify which were the factors to be studied over methods nor accomplished a comparison/discussion between works w.r.t their qualitative properties. 
Second, it considers model explanation methods as part of the research line of visual model interpretation. As stated earlier, explanation methods have a different goal than their interpretation counterparts. Third, it covers works along the direction of diagnosing representation flaws in CNNs, such as bias detection in the learned representations. While a very important problem, detecting bias can be considered as a task that can be assisted by model interpretation methods but not a goal of interpretation methods per se.
In another survey in this research line, [4] proposed a taxonomy to categorize methods based on three axes, namely (1) relation between interpretation methods and the model being interpreted, (2) type of explanations provided by the covered methods, and (3) local vs. global interpretability. Most recently, [5] put forward a new survey on model interpretation to classify methods based on three axes: (1) representations of interpretations, e.g., the input feature importance or the influence of the training samples, (2) type of the base model that the interpretation method is used for, e.g., differentiable models, GANs, and NLP models, and (3) relation between interpretation methods and the model being interpreted (similar to [4]).
Although these works have provided a comprehensive survey of existing methods, they suffer from some weaknesses as well. First, from the point of terminology, these works frequently use the terms explanation and interpretation interchangeably, which lends itself to confusion. Second, they cover a wide variety of explanation methods. Moreover, since [5] considers a variety of deep models, it provides only a few examples of interpretation methods for each type of deep model. Hence, there are plenty of visual model interpretation methods that have not been covered by these surveys. Furthermore, it considers the methods using attention mechanisms for explaining deep models as interpretation methods. However, as [66] has shown, purely focusing on attention mechanisms might not be sufficient for this task.
In recent years, research along the visual model interpretation line has gained significant momentum. This, in turn, has led to works introducing new methodologies in the field. Hence, in this work, we have provided a framework with specific factors which could serve as axes for positioning existing and future methods. More importantly, we provide a clear definition for the model interpretation task and cover in our study the methods compatible with it. We also provide inter-category and intra-category discussion in order to give deeper insights into the interpretation capability of existing methods. We expect this could serve as a foundation for the model interpretation research line and could help reveal active and passive areas of research. In this regard, we stress some of the research gaps in each category." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In recent years, research on methods for analyzing the representations internally encoded by Convolutional Neural Networks (CNNs) has increased significantly. This increased interest, next to the continuously growing literature on the model explanation task, has produced the side effect of an increasing number of works with confusing use of terminology, e.g. \"interpretation\" vs \"explanation\". This not only leads to ambiguity and confusion but also hinders the identification of unexplored research areas/problems in the field.
Here, we aimed at making a clear distinction between these two tasks, i.e. model explanation and model interpretation, and conducted a detailed study of works addressing the latter.
A key contribution of our study of interpretation methods is the proposed framework, defined by six qualitative factors, which can serve for the categorization of current and future interpretation methods. Accordingly, this document complements the description of existing interpretation methods with their positioning based on the proposed factors.
Following the proposed framework, we highlighted several directions (e.g. reduced feedback semanticity, partial model coverage, etc.) where research on model interpretation has received low attention. At the same time, we drew several pointers that could be followed to address such weaker directions. Finally, we discussed the evaluation protocols followed by each of the covered methods." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work is supported by the UAntwerp BOF DOCPRO4-NZ Project (id 41612) \"Multimodal Relational Interpretation for Deep Models\"." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b36", "b37", "b38", "b40", "b14", "b17", "b43", "b30", "b41", "b19" ], "table_ref": [], "text": "Method | Interpretation Capability Integration | Task Specificity | Annotation Dependency | Feedback Modality | Feedback Semanticity | Architecture Coverage | Explanation Capability
Class Scoring Model [37] | Post-Hoc | Class-Specific | Independent | SI | No | Partial | No
[38] | Post-Hoc | Class-Specific | Independent | SI | No | Partial | No
[39] | Post-Hoc | Class-Specific | Independent | IP | No | Partial | No
TCAV [41] | Post-Hoc | Class-Specific | Independent | SI | No | All | No
VEBI [15] | Post-Hoc | Class-Specific | Independent | AIP | No | All | Yes
ACE [18] | Post-Hoc | Class-Specific | Independent | IP | No | Partial | No
Concept Attribution [44] | Post-Hoc | Class-Specific | Independent | SI | No | All | No
Critical Subnetworks [31] | Post-Hoc | Class-Specific | Independent | EI | No | All | Yes
ICE [42] | Post-Hoc | Class-Specific | Independent | IP | No | Partial | No
PACE [20] | Post " } ]
With the continued development of Convolutional Neural Networks (CNNs), there is a growing concern regarding the representations that they encode internally. Analyzing these internal representations is referred to as model interpretation. While the task of model explanation, i.e. justifying the predictions of such models, has been studied extensively, the task of model interpretation has received less attention. The aim of this paper is to propose a framework for the study of interpretation methods designed for CNN models trained on visual data. More specifically, we first specify the difference between the interpretation and explanation tasks, which are often considered the same in the literature. Then, we define a set of six specific factors that can be used to characterize interpretation methods. Third, based on these factors, we propose a framework for the positioning of interpretation methods. Our framework highlights that only a small number of the suggested factors, and combinations thereof, have actually been studied, leaving significant areas unexplored. Following the proposed framework, we discuss existing interpretation methods and give some attention to the evaluation protocols followed to validate them. Finally, the paper highlights the capabilities of the methods in producing feedback that enables interpretation and proposes possible research problems arising from the framework.
FICNN: A Framework for the Interpretation of Deep Convolutional Neural Networks
[ { "figure_caption": "Fig. 1 .1Fig. 1. Model Interpretation (top) investigates internal encoded features being critical for model functioning. Model explanation (down) indicates characteristics of an input sample affecting model decisions.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fand training data D, the main goal of the model interpretation task is to determine what a model has actually learned. That is, what informative characteristics or features from the training data D that the model encodes into the internal representation R. In practice, this is related to producing insights into what internal relevant features within R are critical for model functioning (Fig. 1 top). Definition 2 (Model Explanation). Given a specific input arXiv:2305.10121v1 [cs.CV] 17 May 2023", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Statistics of investigated visual model interpretation methods over different proposed qualitative factors.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig.3. Example of Synthetic Images (SI) feedback modality illustrated in a)[38], b) Class Scoring Model[37], c)[52], d) Network Inversion[34], e) TCAV[41], f) Feature Inversion[33], g) Concept Attribution[44].", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Example of Images Patches feedback modality illustrated in a) [46], b) Network Dissection [16], c) Net2Vec [17], and d) Compositional Explanation [29].", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Example of Image Patches feedback modality illustrated in a) Topic-based interpretation [19], b) PACE[20], c) ACE[18], d) ProtoPNet[22], e) Interpretable Convolutional Filter[54], f) Interpretable CNN Decision Tree[55], g) ICE[42], h) TesNet[35], i) Deformable ProtoPNet[36], j) ProtoTree[57], and k)[39].", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .Fig. 7 .67Fig. 6. Example of Images feedback modality illustrated in a) Critical Subnetwork [31], b) [47], c) Reversed linear Probing [48], d) Interpretable CNN Decision Tree[55], and e) Concept Whithening[23].", "figure_data": "", "figure_id": "fig_6", "figure_label": "67", "figure_type": "figure" } ]
Hamed Behzadi-Khormouji; José Oramas
[ { "authors": "G Singh; K.-C Yow", "journal": "IEEE Access", "ref_id": "b0", "title": "An interpretable deep learning model for covid-19 detection with chest x-ray images", "year": "2021" }, { "authors": "P Feifel; F Bonarens; F Koster", "journal": "", "ref_id": "b1", "title": "Reevaluating the safety impact of inherent interpretability on deep neural networks for pedestrian detection", "year": "2021-06" }, { "authors": "Q.-S Zhang; S.-C Zhu", "journal": "Frontiers of Information Technology & Electronic Engineering", "ref_id": "b2", "title": "Visual interpretability for deep learning: a survey", "year": "2018" }, { "authors": "Y Zhang; P Ti Ňo; A Leonardis; K Tang", "journal": "IEEE Transactions on Emerging Topics in Computational Intelligence", "ref_id": "b3", "title": "A survey on neural network interpretability", "year": "2021" }, { "authors": "X Li; H Xiong; X Li; X Wu; X Zhang; J Liu; J Bian; D Dou", "journal": "Knowledge and Information Systems", "ref_id": "b4", "title": "Interpretable deep learning: Interpretation, interpretability, trustworthiness, and beyond", "year": "2022" }, { "authors": "A B Arrieta; N Díaz-Rodríguez; J Del; A Ser; S Bennetot; A Tabik; S Barbado; S García; D Gil-L Ópez; R Molina; Benjamins", "journal": "Information fusion", "ref_id": "b5", "title": "Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai", "year": "2020" }, { "authors": "A Søgaard", "journal": "", "ref_id": "b6", "title": "Shortcomings of interpretability taxonomies for deep neural networks", "year": "2022" }, { "authors": "F Gr Ün; C Rupprecht; N Navab; F Tombari", "journal": "", "ref_id": "b7", "title": "A taxonomy and library for visualizing learned features in convolutional neural networks", "year": "2016" }, { "authors": "L H Gilpin; D Bau; B Z Yuan; A Bajwa; M Specter; L Kagal", "journal": "IEEE", "ref_id": "b8", "title": "Explaining explanations: An overview of interpretability of machine learning", "year": "2018" }, { "authors": "R Guidotti; A Monreale; S Ruggieri; F Turini; F Giannotti; D Pedreschi", "journal": "ACM computing surveys (CSUR)", "ref_id": "b9", "title": "A survey of methods for explaining black box models", "year": "2018" }, { "authors": "H Behzadi-Khormouji; H Rostami", "journal": "Applied Intelligence", "ref_id": "b10", "title": "Fast multi-resolution occlusion: a method for explaining and understanding deep neural networks", "year": "2021" }, { "authors": "G Schwalbe; B Finzel", "journal": "", "ref_id": "b11", "title": "XAI method properties: A (meta-)study", "year": "2021" }, { "authors": "M Kahng; N Thorat; D H Chau; F B Viégas; M Wattenberg", "journal": "IEEE transactions on visualization and computer graphics", "ref_id": "b12", "title": "Gan lab: Understanding complex deep generative models using interactive visual experimentation", "year": "2018" }, { "authors": "D Bau; J.-Y Zhu; H Strobelt; B Zhou; J B Tenenbaum; W T Freeman; A Torralba", "journal": "", "ref_id": "b13", "title": "Gan dissection: Visualizing and understanding generative adversarial networks", "year": "2019" }, { "authors": "J Oramas; K Wang; T Tuytelaars", "journal": "", "ref_id": "b14", "title": "Visual explanation by interpretation: Improving visual feedback capabilities of deep neural networks", "year": "2019" }, { "authors": "B Zhou; D Bau; A Oliva; A Torralba", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b15", "title": "Interpreting deep visual representations via network dissection", 
"year": "2018" }, { "authors": "R Fong; A Vedaldi", "journal": "", "ref_id": "b16", "title": "Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks", "year": "2018" }, { "authors": "A Ghorbani; J Wexler; J Zou; B Kim", "journal": "Advances in Neural Information Processing Systems (NeurIPS)", "ref_id": "b17", "title": "Towards automatic concept-based explanations", "year": "2019" }, { "authors": "C Yeh; B Kim; S Ö Arik; C Li; P Ravikumar; T Pfister", "journal": "", "ref_id": "b18", "title": "On concept-based explanations in deep neural networks", "year": "2020" }, { "authors": "V Kamakshi; U Gupta; N C Krishnan", "journal": "IEEE", "ref_id": "b19", "title": "Pace: Posthoc architecture-agnostic concept extractor for explaining cnns", "year": "2021" }, { "authors": "O Li; H Liu; C Chen; C Rudin", "journal": "", "ref_id": "b20", "title": "Deep learning for casebased reasoning through prototypes: A neural network that explains its predictions", "year": "2018" }, { "authors": "C Chen; O Li; A Barnett; J Su; C Rudin", "journal": "Advances in neural information processing systems (NeurIPS)", "ref_id": "b21", "title": "This looks like that: deep learning for interpretable image recognition", "year": "2019" }, { "authors": "Z Chen; Y Bei; C Rudin", "journal": "Nature Machine Intelligence", "ref_id": "b22", "title": "Concept whitening for interpretable image recognition", "year": "2020" }, { "authors": "D Rymarczyk; A Kaczynska; J Kraus; A Pardyl; B Zielinski", "journal": "", "ref_id": "b23", "title": "Protomil: Multiple instance learning with prototypical parts for fine-grained interpretability", "year": "2021" }, { "authors": "W Xu; Y Xian; J Wang; B Schiele; Z Akata", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b24", "title": "Attribute prototype network for zero-shot learning", "year": "2020" }, { "authors": "J Genone; T Lombrozo", "journal": "Philosophical Psychology", "ref_id": "b25", "title": "Concept possession, experimental semantics, and hybrid theories of reference", "year": "2012" }, { "authors": "T.-Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "Springer", "ref_id": "b26", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "P Welinder; S Branson; T Mita; C Wah; F Schroff; S Belongie; P Perona", "journal": "", "ref_id": "b27", "title": "Caltech-UCSD Birds 200", "year": "2010" }, { "authors": "J Mu; J Andreas", "journal": "Advances in Neural Information Processing Systems(NeurIPS)", "ref_id": "b28", "title": "Compositional explanations of neurons", "year": "2020" }, { "authors": "Y Ma; B Niu; Y Qi", "journal": "SPIE", "ref_id": "b29", "title": "Survey of image classification algorithms based on deep learning", "year": "2021" }, { "authors": "Y Wang; H Su; B Zhang; X Hu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b30", "title": "Interpret neural networks by extracting critical subnetworks", "year": "2020" }, { "authors": "G Alain; Y Bengio", "journal": "", "ref_id": "b31", "title": "Understanding intermediate layers using linear classifier probes", "year": "2016" }, { "authors": "A Mahendran; A Vedaldi", "journal": "", "ref_id": "b32", "title": "Understanding deep image representations by inverting them", "year": "2015" }, { "authors": "A Dosovitskiy; T Brox", "journal": "", "ref_id": "b33", "title": "Inverting visual representations with convolutional networks", "year": "2016" }, { "authors": "J Wang; H Liu; X Wang; L 
Jing", "journal": "", "ref_id": "b34", "title": "Interpretable image recognition by constructing transparent embedding space", "year": "2021" }, { "authors": "J Donnelly; A J Barnett; C Chen", "journal": "", "ref_id": "b35", "title": "Deformable protopnet: An interpretable image classifier using deformable prototypes", "year": "2022" }, { "authors": "K Simonyan; A Vedaldi; A Zisserman", "journal": "", "ref_id": "b36", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "year": "2014" }, { "authors": "D Wei; B Zhou; A Torrabla; W Freeman", "journal": "", "ref_id": "b37", "title": "Understanding intra-class knowledge inside cnn", "year": "2015" }, { "authors": "Y S Li; L Liu; C Shen; A Hengel", "journal": "International Journal of Computer Vision", "ref_id": "b38", "title": "Mining mid-level visual patterns with deep cnn activations", "year": "2017-02" }, { "authors": "R ", "journal": "VLDB", "ref_id": "b39", "title": "Fast algorithms for mining association rules in large databases", "year": "1994" }, { "authors": "B Kim; M Wattenberg; J Gilmer; C Cai; J Wexler; F Viegas", "journal": "PMLR", "ref_id": "b40", "title": "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav)", "year": "2018" }, { "authors": "R Zhang; P Madumal; T Miller; K A Ehinger; B I Rubinstein", "journal": "", "ref_id": "b41", "title": "Invertible concept-based explanations for cnn models with non-negative concept activation vectors", "year": "2021" }, { "authors": "C Olah; A Satyanarayan; I Johnson; S Carter; L Schubert; K Ye; A Mordvintsev", "journal": "Distill", "ref_id": "b42", "title": "The building blocks of interpretability", "year": "2018" }, { "authors": "W Wu; Y Su; X Chen; S Zhao; I King; M R Lyu; Y.-W Tai", "journal": "", "ref_id": "b43", "title": "Towards global explanations of convolutional neural networks with concept attribution", "year": "2020" }, { "authors": "M Alexander; O Christopher; M Tyka", "journal": "", "ref_id": "b44", "title": "Inceptionism: Going deeper into neural networks", "year": "2015-10" }, { "authors": "B Zhou; A Khosla; A Lapedriza; A Oliva; A Torralba", "journal": "", "ref_id": "b45", "title": "Object detectors emerge in deep scene cnns", "year": "2015" }, { "authors": "V Escorcia; J Carlos Niebles; B Ghanem", "journal": "", "ref_id": "b46", "title": "On the relationship between visual attributes and convolutional networks", "year": "2015" }, { "authors": "I Laina; Y M Asano; A Vedaldi", "journal": "", "ref_id": "b47", "title": "Measuring the interpretability of unsupervised representations via quantized reverse probing", "year": "2022" }, { "authors": "I Rafegas; M Vanrell; L A Alexandre; G Arias", "journal": "Pattern Recognition Letters", "ref_id": "b48", "title": "Understanding trained cnns by indexing neuron selectivity", "year": "2020" }, { "authors": "D M Blei; A Y Ng; M I Jordan", "journal": "Journal of machine Learning research", "ref_id": "b49", "title": "Latent dirichlet allocation", "year": "2003-01" }, { "authors": "T L Griffiths; M Steyvers", "journal": "Proceedings of the National academy of Sciences", "ref_id": "b50", "title": "Finding scientific topics", "year": "2004" }, { "authors": "I Gat; G Lorberbom; I Schwartz; T Hazan", "journal": "", "ref_id": "b51", "title": "Latent space explanation by intervention", "year": "2022" }, { "authors": "S Sabour; N Frosst; G E Hinton", "journal": "", "ref_id": "b52", "title": "Dynamic routing between capsules", "year": "2017" }, 
{ "authors": "Q Zhang; Y N Wu; S.-C Zhu", "journal": "", "ref_id": "b53", "title": "Interpretable convolutional neural networks", "year": "2018" }, { "authors": "Q Zhang; Y Yang; H Ma; Y Wu", "journal": "", "ref_id": "b54", "title": "Interpreting cnns via decision trees", "year": "2019" }, { "authors": "M Harandi; C Sanderson; C Shen; B C Lovell", "journal": "", "ref_id": "b55", "title": "Dictionary learning and sparse coding on grassmann manifolds: An extrinsic solution", "year": "2013" }, { "authors": "M Nauta; R Van Bree; C Seifert", "journal": "", "ref_id": "b56", "title": "Neural prototype trees for interpretable fine-grained image recognition", "year": "2021" }, { "authors": "D Rymarczyk; Ł Struski; J Tabor; B Zieli", "journal": "", "ref_id": "b57", "title": "Protopshare: Prototypical parts sharing for similarity discovery in interpretable image classification", "year": "2021" }, { "authors": "A Krizhevsky; I Sutskever; G E Hinton", "journal": "Communications of the ACM", "ref_id": "b58", "title": "Imagenet classification with deep convolutional neural networks", "year": "2017" }, { "authors": "R R Selvaraju; M Cogswell; A Das; R Vedantam; D Parikh; D Batra", "journal": "", "ref_id": "b59", "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "year": "2017" }, { "authors": "L S Shapley", "journal": "Classics in game theory", "ref_id": "b60", "title": "A value for n-person games", "year": "1997" }, { "authors": "H Behzadi-Khormouji; J Oramas", "journal": "", "ref_id": "b61", "title": "A protocol for evaluating model interpretation methods from visual explanations", "year": "2023-01" }, { "authors": "S A Bargal; A Zunino; V Petsiuk; J Zhang; V Murino; S Sclaroff; K Saenko", "journal": "Springer", "ref_id": "b62", "title": "Beyond the visual analysis of deep model saliency", "year": "2020-07-18" }, { "authors": "L A Hendricks; R Hu; T Darrell; Z Akata", "journal": "", "ref_id": "b63", "title": "Grounding visual explanations", "year": "2018" }, { "authors": "C Rudin", "journal": "Nature Machine Intelligence", "ref_id": "b64", "title": "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead", "year": "2019" }, { "authors": "S Jain; B C Wallace", "journal": "", "ref_id": "b65", "title": "Attention is not explanation", "year": "2019" } ]
[]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b4" ], "table_ref": [], "text": "Konkani belongs to the Indo-Aryan branch of the Indo-European family of languages. It is a member of the southern group of Indo-Aryan languages, and is most closely related to Marathi within this group (Miranda, 2018)). Konkani is mainly spoken in Goa and in some parts of the neighbouring states of Maharashtra, Karnataka, and Kerala, where Konkani speakers migrated after the Portuguese arrival in Goa. The 1991 Census of India records the number of Konkani speakers to be 1,760,607 out of which 602,626 (34.2 %) were from Goa,312,618 (17.8 %) were from Maharashtra,706,397 (40.1 %) from Karnataka, and 64,008 (3.6 %) from Kerala (Miranda, 2018). Konkani is written in different scripts in the regions where it is spoken." }, { "figure_ref": [], "heading": "Phonological Features of Konkani", "publication_ref": [ "b6", "b7", "b0", "b4", "b1" ], "table_ref": [ "tab_0", "tab_0" ], "text": "As regards the phonology of Konkani, different scholars have mentioned different numbers of vowels and consonants in the language. Also, there is no consensus on the exact specification of the vowels as regards their place in the vocal tract. Nasalisation is phonemic in Konkani (as shown by i and ii below).\ni hE ta > tSE b h urgE these.mas.pl. he.gen.mas.pl. children.mas.pl. 'These are his (male) children.' ii hẼ ta > tSẼ b h urgẼ this.neut.sg. he.gen.neut.sg. child.neut.sg. 'This is his child.' 1.2. Related Work on Konkani (Sardesai, 1986), (Sardesai, 1993), in her dialect-specific work refers to nine Konkani Vowel phonemes: Front [i, e, E]; Central [1 @, a]; and Back [u, o, O]. The author mentions that all these vowel phonemes can be nasalised. Her work is summarised in Table 1. (Almeida, 1989) makes a reference to eight oral vowels for Konkani: Front [i, e, E] and Back [u, o, O, T, a]. His classification of oral vowels is provided in Table 1. The author mentions that all vowels present in the language can be nasalised. Examples for both oral and nasal vowels are presented by him in his work. The author seems to consider vowel length to be phonemic in Konkani, which is not the case, at least for the Konkani varieties spoken in Goa. (Miranda, 2018) mentions nine Vowel phonemes for Konkani, along with their corresponding nasal counterparts. These are: [i, e, E, @, 2, a, u, o, O]. (Fadte et al., 2022) provide a vowel chart for Konkani based on their acoustic analysis of vowels. They also provide the properties of vowel pairs which have different phonetic realisations but the same written representation in the script. Their vowel classification work is presented in Table 2, which includes equivalent vowels in different scripts, namely Devanagari, Roman and Kannada. Their work also acknowledges that all oral vowels could be nasalised." }, { "figure_ref": [], "heading": "Author and Year", "publication_ref": [ "b6", "b0", "b4", "b1", "b2" ], "table_ref": [], "text": "Vowels Classification (Sardesai, 1986) i, e, E, u, o, O, a, @, 1 Dialectspecific (Almeida, 1989) i, e, E, u, o, O, a, T General (Miranda, 2018) i, e, E, u, o, O, a, @ 2 General (Fadte et al., 2022) i, e, E, u, o, O, a, @, 1 General Table 2: Oral vowels of Konkani positions of the tongue in case of the nasal vowels. They have showed that the position of the tongue is generally lowered for Back vowels, fronted for Low vowels, and raised for Front vowels. (Feng and Castelli, 1996) have presented work on the nasalisation of 11 French vowels. 
They show that the first two resonance frequencies are at about 300 and 1000 Hz.\n(Carignan, 2014) is an acoustic study of three French oralnasal vowel pairs /a//ã/, /E/-/Ẽ/, and /o/-/õ/. His study shows that the oral articulation of French nasal vowels is not arbitrary." }, { "figure_ref": [], "heading": "Objectives of the Present Study", "publication_ref": [ "b6", "b1" ], "table_ref": [], "text": "As mentioned earlier, there are differences among Konkani scholars on the exact number of Vowel phonemes in the language. In the absence of more accurate descriptions of the Vowel system of the language and nasalisation of vowels, this study makes an attempt to target an important aspect of Konkani vowel phonemes namely, their nasalisation using acoustic analysis. For this study, we have taken into consideration the nine vowel phonemes mentioned in the Standard variety of the language which are the same as the ones mentioned in (Sardesai, 1986) and later cited in (Fadte et al., 2022).\nThis work is arranged into four sections. Section 1. -the Introduction section above highlights the linguistic features of the language and states the objective of the study. Section 2. discusses the methodology followed in the experiment.\nThe results of the experiment are presented in section 3.. Section 4. concludes the paper with the scope for further studies." }, { "figure_ref": [], "heading": "Hypothesis", "publication_ref": [], "table_ref": [], "text": "Given that nasalisation is phonemic in the language, each vowel phoneme of the language will have a nasal counterpart.\nIn other words, the status of nasalisation as being phonemic in the language will become more explicit through this study." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [ "b1" ], "table_ref": [], "text": "This section presents the details of the experimental work carried out and the methodology that was used for the experiment. (Fadte et al., 2022)'s methodology was followed for carrying out this experiment." }, { "figure_ref": [], "heading": "Recording Script", "publication_ref": [ "b1" ], "table_ref": [ "tab_1" ], "text": "The recording script of this work was based on the Vowel phonemes mentioned in the classification provided by (Fadte et al., 2022). The Vowel phonemes in the script were arranged according to their classification which was established using the minimal and near-minimal pairs. A Phoneme is the smallest distinctive/contrastive unit in the sound system of a language. It is that unit of sound (a phone) that can distinguish one word from another in a particular language. The inventory of phonemes of a language is created using the minimal pairs (or near-minimal pairs in the absence of minimal pairs) of the language. Minimal pairs are pairs of words or phrases in a particular language that differ in only one phonological element and have distinct meanings. Near minimal pairs are pairs which have one or more additional differences elsewhere in the word besides the crucial position. Thus minimal pairs are an important tool that helps in establishing phonemes of a particular language. Pronunciation of phones is shown using square brackets whereas phonemes established using the minimal pairs are written in between slashes. To give an example, the Konkani words 3, and the entire script can be accessed from here. 1 The recording script consisted of 74 unique sentences, 37 for oral and 37 for nasal vowels, respectively. At least two different sets of minimal pairs were used for each vowel phoneme. 
" }, { "figure_ref": [], "heading": "Speakers' Detail", "publication_ref": [], "table_ref": [], "text": "Three male and three female native speakers of Konkani were selected for this experiment. Speakers belonged to different geographical locations and spoke diverse regional dialects (details of these can be accessed from here 1 ).This ensured that phone variability across regions was captured. All the speakers selected for the recording were literate." }, { "figure_ref": [], "heading": "Data Elicitation and Recording", "publication_ref": [], "table_ref": [], "text": "The reading material consisting of sentences having minimal pairs was provided to the speakers as a printed copy.\n1 https://github.com/shashwatup9k/ dhvani-konkani\nThe speakers were given some time to familiarise themselves with the meaning of the sentences. Then, they were instructed to read the sentences in the most natural way they could. Each sentence in the recording script was pronounced thrice by the speakers. This was done in order to capture any dialectal variation in the pronunciation of the target phonemes. It also helped to detect speaker-specific errors in phone production. The recording was performed in a closed room with less ambient noise. The audio was recorded using a Zoom-H6 recorder at a sampling rate of 48 kHz and was stored in non-lousy WAV format." }, { "figure_ref": [ "fig_1" ], "heading": "Annotation", "publication_ref": [], "table_ref": [], "text": "Phoneme-level annotation of audio data was then carried out. Only the vowel phonemes which were to be used for analysis were annotated. A total of 1135 vowel phones were annotated in the data set. The frequency distribution of phones is presented in Figure 1. " }, { "figure_ref": [], "heading": "Formant Extraction", "publication_ref": [], "table_ref": [], "text": "A Praat script was used to perform formant extraction on annotated data. This script extracts the formant details from the mid-temporal interval of the phoneme, which is stored in a text file. For formant extraction, values for speakers' frequency were set to standard values, i.e. 5 kHz for male and 5.5 kHz for female speakers. The data was stored in a text file and later converted to a CSV file for plotting results and analysis." }, { "figure_ref": [], "heading": "Data Verification", "publication_ref": [], "table_ref": [], "text": "After formant details were extracted, they were plotted using a box plot for verifying visually any outliers that may have occurred due to wrong annotation. This step helped in identifying certain incorrect annotations, which were then corrected. As discussed in section 2.5., formant extraction was performed again, and boxplots were replotted. Box plots of the F1 and F2 formant for male speakers are shown in Figure 3. After the above corrections, a few outliers can still be seen.\nFigure 3: Box plot of F1 and F2 formants for oral and nasal vowels for male speakers Final data verification was done with the help of a linguist who simultaneously listened to the phones and viewed the spectrogram to verify the label given to them. Errors in the phoneme production were noted down and are discussed in detail in section 2.7. below." }, { "figure_ref": [], "heading": "Substitution Analysis", "publication_ref": [], "table_ref": [ "tab_2", "tab_2" ], "text": "The verified data showed some deviations from the expected production of the phonemes. Table 4 presents the phoneme substitution that occurred during the elicitation process. 
All these deviations were not used for the formant analysis.\nThe close-mid central vowel [T] with a frequency of 12 (see Table 4) occurring in place of the schwa ([@]), is its allophone which occurs in the environment wherein it is followed by the open-mid back (rounded) vowel [O] as in the words [g@VO] 'bison' n.mas.sg, [b@rO] 'good' adj.mas.sg. " }, { "figure_ref": [], "heading": "Experimental Results and Analysis", "publication_ref": [ "b3", "b1" ], "table_ref": [ "tab_4", "tab_4" ], "text": "A formant analysis of Vowel phonemes was performed as part of this study. R script was written with the use of the phonR package (McCloy, 2016) to plot the experimental results. F1-F2 plots for oral and nasal vowels of male speakers are presented in Figure 4. A well-defined grouping of vowels in formant space is observed. From Figure 4, it is clearly seen that the Front oral vowels /i/, /e/ and /E/ occupy nonintersecting space in the formant chart. In the same figure, we can see that the three extreme vowels (/i/, /a/, and /u/) occupy three corners in formant space. Other vowels also do not have many intersections in formant space. F1-F2 plots Figure 4: Formant chart for oral and nasal vowels for male speakers for oral and nasal vowels of female speakers are presented in Figure 5, which have similar features as seen in the male formant chart. A comparative chart for additional details can be accessed here 1 . Apart from the formant charts, we have Figure 5: Formant chart for oral and nasal vowels for female speakers also listed the average values of F1, F2, and F3 formants for male and female speakers. These are presented in Table 5. The average values for the F1 and F2 for oral vowels are similar to those reported by (Fadte et al., 2022). Since no previous work was reported for the nasal values, formant values provided in Table 5 may be considered as the first reporting of such work." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [ "b1" ], "table_ref": [], "text": "This work provides a comparative study of the Konkani Vowel phonemes (i.e. oral and nasal vowels). The results have shown that all oral vowel sounds in Konkani can be nasalised. It is observed that the different vowels in the formant chart are in their expected position as per (Fadte et al., 2022) The average F1, F2, and F3 values for nasal vowels are reported for the first time through experimentation. This work can be helpful for the linguistic study of vowels and speech synthesis systems specific to Konkani language. Although oral and nasal studies have been presented, other phones and combinations of phones, like consonants, diphthongs have not been explored in this work or rather there have not been acoustic studies done related to the properties of such phones in Konkani language. We wish to explore these in our future work." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "Atul Kr. Ojha would like to acknowledge the support of the Science Foundation Ireland (SFI) as part of Grant Number SFI/12/RC/2289_P2, Insight SFI Centre for Data Analytics." } ]
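The formant-extraction and plotting workflow described in Sections 2.5 and 3 could be scripted end-to-end in Python as a rough alternative to the Praat script and the phonR package used above. The sketch below relies on the praat-parselmouth bindings and assumes a hypothetical annotations CSV (columns: wav, phone, start, end, sex) exported from the Praat TextGrids; it is an illustration, not the authors' actual pipeline.

```python
import pandas as pd
import parselmouth
import matplotlib.pyplot as plt

# Hypothetical annotation table exported from the Praat TextGrids.
annotations = pd.read_csv("vowel_annotations.csv")   # columns: wav, phone, start, end, sex

rows = []
for wav, group in annotations.groupby("wav"):
    sound = parselmouth.Sound(wav)
    for _, seg in group.iterrows():
        # Standard analysis ceilings used in the paper: 5 kHz (male), 5.5 kHz (female).
        ceiling = 5000.0 if seg["sex"] == "male" else 5500.0
        formants = sound.to_formant_burg(maximum_formant=ceiling)
        midpoint = 0.5 * (seg["start"] + seg["end"])  # mid-temporal interval of the phone
        rows.append({
            "phone": seg["phone"],
            "sex": seg["sex"],
            "F1": formants.get_value_at_time(1, midpoint),
            "F2": formants.get_value_at_time(2, midpoint),
            "F3": formants.get_value_at_time(3, midpoint),
        })

df = pd.DataFrame(rows)
df.to_csv("formants.csv", index=False)

# Simple F1-F2 chart (axes reversed, as is conventional for vowel plots).
means = df.groupby("phone")[["F1", "F2"]].mean()
plt.scatter(means["F2"], means["F1"])
for phone, (f1, f2) in means[["F1", "F2"]].iterrows():
    plt.annotate(phone, (f2, f1))
plt.gca().invert_xaxis()
plt.gca().invert_yaxis()
plt.xlabel("F2 (Hz)")
plt.ylabel("F1 (Hz)")
plt.savefig("f1_f2_chart.png", dpi=150)
```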
Konkani is a highly nasalised language which makes it unique among Indo-Aryan languages. This work investigates the acoustic-phonetic properties of Konkani oral and nasal vowels. For this study, speech samples from six speakers (3 male and 3 female) were collected. A total of 74 unique sentences were used as a part of the recording script, 37 each for oral and nasal vowels, respectively. The final data set consisted of 1135 vowel phonemes. A comparative F1-F2 plot of Konkani oral and nasal vowels is presented with an experimental result and formant analysis. The average F1, F2 and F3 values are also reported for the first time through experimentation for all nasal and oral vowels. This study can be helpful for the linguistic research on vowels and speech synthesis systems specific to the Konkani language.
Empirical Analysis of Oral and Nasal Vowels of Konkani
[ { "figure_caption": "[na:k] 'nose' and [na:g] 'king cobra' differ only in the sounds [k] and [g]. Thus, the phones [k] and [g] in these words produce a difference in meaning. Using the [na:k] and [na:g] (minimal) pair we can now establish that the consonants [k] and [g] are phonemes of the language. These phonemes will therefore be written as /k/ and /g/. The recording script was created with Konkani sentences consisting of minimal pairs that aimed at establishing the vowel phonemes. A few examples of minimal pairs targeting vowel phonemes used in the recording script are provided in Table", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Frequency distribution of phones in datasetAnnotation was performed using the Praat software(Paul and Weenink, 1992). The start and end of a phone boundary was marked as perceived by the ear of the annotator and with the help of a spectrogram in the Praat tool. A sample of an audio signal, spectrogram and phoneme level annotation done in the Praat tool is shown in Figure2.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Annotations and spectrogram of phones", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparison of Konkani Vowel classifications", "figure_data": "1.3. Related Work on Other Languages(Shosted et al., 2012) have presented work on Hindi nasal vowels, where F1-F2 values are used for calculating the", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Example of oral and nasal vowels in Konkani", "figure_data": "", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Substitution Analysis", "figure_data": "Phoneme Substitution Frequency % of total substitutioniNone00.01i2812.4e ẽ E Ẽ 1 1a E e E Ẽ e E @ @ @ @ T1 3 6 12 4 6 20 6 3 7 10 10.4 1.3 2.7 5.3 1.8 2.7 8.9 2.7 1.3 3.1 4.4 0.4@ @ a ã u ũ o o O ÕT 1 1 @ ã o ã a ũ u O o Õ Õ O12 4 8 6 1 1 8 12 3 19 12 8 2 1 325.3 1.8 3.6 2.7 0.4 0.4 3.6 5.3 1.3 8.4 0.9 3.1 0.9 0.4 14.2", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "vowel classification. It is also seen that nasalisation changes the F1-F2 values for the vowel phonemes.", "figure_data": "femalemalePhoneme F1F2F3F1F2F3i353 2518 3055 331 2229 2730e535 2240 2773 428 1943 2470E641 2518 3055 331 1749 24501 @ a636 1339 2978 453 1245 2548 690 1224 3081 537 1193 2508 982 2518 3055 685 1319 2446u417 972 2904 362 930 2464o574 990 3072 450 914 2585O ĩ670 1089 2964 544 966 2452 393 2204 3049 328 2272 2747ẽ Ẽ 1673 1861 2699 477 1800 2512 708 2057 2724 556 1818 2434 630 1150 2890 514 1082 2515@ ã688 1326 2914 496 1341 2526 974 1461 2737 699 1271 2413ũ õ Õ402 999 2749 368 969 2442 691 1103 2941 480 838 2644 726 1138 2985 576 977 2497", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Average F1, F2, and F3 values for Vowel phonemes.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" } ]
Swapnil Fadte; Edna Vaz; Atul Kr. Ojha; Ramdas Karmali; Jyoti D. Pawar
[ { "authors": "Matthew Almeida; Christopher Sj ; Carignan", "journal": "Journal of phonetics", "ref_id": "b0", "title": "An acoustic and articulatory examination of the oral in nasal: The oral articulations of french nasal vowels are not arbitrary", "year": "1989" }, { "authors": "Swapnil Fadte; Edna Vaz Fernandes; Ramdas Karmali; Jyoti D Pawar", "journal": "ACM Transactions on Asian and Low-Resource Language Information Processing", "ref_id": "b1", "title": "Acoustic Analysis of Vowels in Konkani", "year": "2022" }, { "authors": "Gang Feng; Eric Castelli", "journal": "The Journal of the Acoustical Society of America", "ref_id": "b2", "title": "Some acoustic features of nasal and nasalized vowels: A target for vowel nasalization", "year": "1996" }, { "authors": "D R Mccloy", "journal": "", "ref_id": "b3", "title": "Normalizing and plotting vowels with phonR 1.0.7", "year": "2016" }, { "authors": "Rocky V Miranda", "journal": "Orient Black-Swan", "ref_id": "b4", "title": "People's Linguistic Survey of India, The Languages of Goa", "year": "2018" }, { "authors": "Boersma Paul; David Weenink", "journal": "", "ref_id": "b5", "title": "Praat: doing phonetics by computer", "year": "1992" }, { "authors": "Madhavi Sardesai", "journal": "", "ref_id": "b6", "title": "Some aspects of konkani grammar", "year": "1986" }, { "authors": "Madhavi Sardesai", "journal": "Goa Konkani Academy", "ref_id": "b7", "title": "Bhasabhas Article on Linguistics", "year": "1993" }, { "authors": "Ryan Shosted; Christopher Carignan; Panying Rong", "journal": "The Journal of the Acoustical Society of America", "ref_id": "b8", "title": "Managing the distinctiveness of phonemic nasal vowels: Articulatory evidence from hindi", "year": "2012" } ]
[]
2024-01-20
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b0", "b1", "b3", "b0", "b1", "b4", "b5", "b6", "b7", "b5", "b8", "b9", "b10", "b0", "b1", "b9", "b10", "b11", "b12", "b13", "b14", "b0" ], "table_ref": [], "text": "Restoration tasks in imaging are widely encountered in various disciplines, including cellular cameras, surveillance, experimental physics, and medical imaging. These inverse problems are broadly defined as the need to recover an unknown image given corrupted measurements of it. Such problems, e.g., colorization, super-resolution, and inpainting, are typically ill-posed, implying that multiple solutions can explain the unknown target image. In this context, uncertainty quantification aims to characterize the range of possible solutions, their spread, and variability. This has an especially important role in applications such as astronomy and medical diagnosis, where it is necessary to establish statistical boundaries for possible gray-value deviations. The ability to characterize the range of permissible solutions with accompanying statistical guarantees has thus become an important and useful challenge, addressed in this paper.
O. Belhasin ([email protected]), D. Freedman ([email protected]), Ehud Rivlin ([email protected]) and M. Elad ([email protected]) are with Verily Life Sciences, Israel. O. Belhasin ([email protected]) and Y. Romano ([email protected]) are with the Department of Computer Science, Technion - Israel Institute of Technology, Haifa, Israel.
Fig. 1. Comparison of PUQ's performance on the CelebA-HQ dataset in image colorization, super-resolution, and inpainting tasks using the E-PUQ procedure (Section IV-B1) applied on RGB image patches of varying size. As seen, our method provides tighter uncertainty regions with significantly smaller uncertainty volumes (×10 in super-res. and inpainting, and ×100 in colorization). The compared methods are im2im-uq [1] and Conffusion [2].
Prior work on this topic [1], [2] has addressed the uncertainty assessment by constructing intervals of possible values for each pixel via quantile regression [3], or other heuristics such as estimations of per-pixel residuals. While this line of thinking is appealing due to its simplicity, it disregards spatial correlations within the image, and thus provides an exaggerated uncertainty range. The study in [4] has improved the above by quantifying the uncertainty in a latent space, thus taking spatial dependencies into account. However, by relying on a non-linear, non-invertible and uncertainty-oblivious transformation, this method suffers from interpretability limitations - see Section II for further discussion.
In this paper, we propose Principal Uncertainty Quantification (PUQ) - a novel approach that accounts for spatial relationships while operating in the image domain, thus enabling a full and clear interpretation of the quantified uncertainty region. PUQ uses the principal components of the empirical posterior probability density function, which describe the spread of possible solutions. PCA essentially approximates this posterior by a Gaussian distribution that tightly encapsulates it. Thus, this approach reduces the uncertainty volume 1 , as demonstrated in Figure 1. 
This figure presents a comparison between our proposed Exact PUQ procedure (see Section IV-B1) and previous work [1], [2], showing a much desired trend of reduced uncertainty volume that further decreases as the size of the patch under consideration grows.\nOur work aims to improve the quantification of the uncertainty volume by leveraging recent advancements in generative models serving as stochastic solvers for inverse problems. While our proposed approach is applicable using any such solver (e.g., conditional GAN [5]), we focus in this work on diffusion-based techniques, which have recently emerged as the leading image synthesis approach, surpassing GANs and other alternative generators [6]. Diffusion models offer a systematic and well-motivated algorithmic path towards the task of sampling from a prior probability density function (PDF), P y , through the repeated application of a trained imagedenoiser [7], [8]. An important extension of these models allows the sampler to become conditional, drawing samples from the posterior PDF, P y|x , where x represents the observed measurements. This approach has recently gained significant attention [6], [9], [10], [11], yielding a fascinating viewpoint to inverse problems, in which a variety of candidate high perceptual quality solutions to such problems are obtained.\nIn this work, we generalize the pixelwise uncertainty assessment, as developed in [1], [2], so as to incorporate spatial correlations between pixels. This generalization is obtained by considering an image-adaptive basis for a linear space that replaces the standard basis in the pixelwise approach. To optimize the volume of the output uncertainty region, we propose a statistical analysis of the posterior obtained from a diffusion-based sampler (e.g., [10], [11]), considering a series of candidate restorations. Our method may be applied both globally (on the entire image) or locally (on selected portions or patches), yielding a tighter and more accurate encapsulation of statistically valid uncertainty regions. For the purpose of adapting the basis, we compute and leverage the principal components of the candidate restorations. As illustrated in Figure 2 for a simple 2-dimensional PDFs, the pixelwise regions are less efficient and may contain vast empty areas, and especially so in cases where pixels exhibit strong correlation. Clearly, as the dimension increases, the gap between the standard and the adapted uncertainty quantifications is further amplified.\nOur proposed method offers two conformal prediction [12], [13], [14] based calibration options (specifically, using the Learn then Test [15] scheme) for users to choose from, with a trade-off between precision and complexity. These include (i) using the entire set of principal components, (ii) using a predetermined subset of them 2 . The proposed calibration procedures ensure the validity of the uncertainty region to contain the unknown true values with a user-specified confidence probability, while also ensuring the recovery of the unknown true values using the selected principal components when only a subset is used. Applying these approaches allows for efficient navigation within the uncertainty region of highly probable solutions.\nWe conduct various local and global experiments to verify our method, considering three challenging tasks: image colorization, super-resolution, and inpainting, all described in Section V, and all demonstrating the advantages of the proposed approach. 
For example, when applied locally on 8×8×3 patches, our experiments show a reduction in the guaranteed uncertainty volume by a factor of ∼10-100 compared to previous approaches, as demonstrated in Figure 1. Moreover, this local approach can have a substantially reduced computational complexity while retaining the statistical guarantees, by drawing far fewer posterior samples and using a small subset of the principal components. As another example, the global tests on the colorization task provide an unprecedented tightness in uncertainty volumes. This is accessible via a reduced set of drawn samples, while also allowing for efficient navigation within the solution set.
Fig. 2. An illustration of uncertainty regions (in red) of 2d posterior distributions, considering three different PDF behaviors, shown in blue, orange, and green. The uncertainty regions are formed from intervals, as defined in Equation (1), where l̂(x) and û(x) represent the 0.05 and 0.95 quantiles over the dashed black axes. The top row presents the uncertainty region in the pixel domain using standard basis vectors that ignore the spatial correlations, while the lower row presents the regions using the principal components as the basis. The uncertainty volume, defined in Equation (3), is indicated in the top left corner of each plot. The 90% coverage guarantee, outlined in Equation (2) with w_i := 1/2, is satisfied by all. As can be seen, the lower-row regions take spatial dependencies into account and are significantly smaller than the corresponding pixelwise regions in the upper row.
In summary, our contributions are the following:
1) We introduce a novel generalized definition of the uncertainty region that leverages an adapted linear-space basis for better posterior coverage.
2) We propose a new method for quantifying the uncertainty of inverse problems that considers spatial correlation, thus providing tight uncertainty regions.
3) We present two novel calibration procedures for the uncertainty quantification that provide statistical guarantees for unknown data to be included in the uncertainty region with a desired coverage ratio while being recovered with a small error by the selected linear axes.
4) We provide a comprehensive empirical study of three challenging image-to-image translation tasks: colorization, super-resolution, and inpainting, demonstrating the effectiveness of the proposed approach in all modes." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [ "b15", "b16", "b17", "b18", "b4", "b6", "b7", "b19", "b5", "b5", "b8", "b9", "b10", "b10", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b30", "b31", "b11", "b12", "b13", "b32", "b14", "b33", "b0", "b1", "b34", "b1", "b32", "b3", "b3", "b14" ], "table_ref": [], "text": "Inverse problems in imaging have been extensively studied over the years; this domain has been deeply influenced by the AI revolution [16], [17], [18], [19], [5]. A promising recent approach towards image-to-image translation problems relies on the massive progress made on learned generative techniques. These new tools enable modeling the conditional distribution of the output images given the input, offering a fair sampling from this PDF. Generative-based solvers of this sort create a new and exciting opportunity for getting high perceptual quality solutions for the problem at hand, while also accessing a diverse set of such candidate solutions.
Recently, Denoising Diffusion Probabilistic Models (DDPM) [7], [8] have emerged as a new paradigm for image generation, surpassing the state-of-the-art results achieved by GANs [20], [6]. Consequently, several conditional diffusion methods have been explored [6], [9], [10], [11], including SR3 [10] -a diffusion-based method for image super-resolution, Palette [11] -a diffusion-based unified framework for image-to-image translation tasks, and more (e.g. [21], [22], [23], [24], [25], [26], [27]). Note that current conditional algorithms for inverse problems do not offer statistical guarantees against model deviations and hallucinations.\nMoving to uncertainty quantification, the field of machine learning has been seeing rich work on the derivation of statistically rigorous confidence intervals for predictions [28], [29], [30], [31], [32]. One key paradigm in this context is conformal prediction (CP) [12], [13], [14] and risk-controlling methods [33], [15], [34], which allow to rigorously quantify the prediction uncertainty of a machine learning model with a user-specified probability guarantee. Despite many proposed methods, only a few have focused on mitigating uncertainty assessment in image restoration problems, including im2imuq [1] and Conffusion [2]. The work reported in [35] is closely related as it introduced a generalized, and thus improved, calibration scheme for Conffusion [2]. All these works have employed a risk-controlling paradigm [33] to provide statistically valid prediction intervals over the pixel domain, ensuring the inclusion of ground-truth solutions in the output intervals. However, these approaches share the same limitation of operating in the pixel domain while disregarding spatial correlations within the image or the color layers. This leads to an unnecessarily exaggerated volume of uncertainty.\nAn exception to the above is [4], which quantifies uncertainty in the latent space of GANs. Their migration from the image domain to the latent space is a rigid, global, nonlinear, non-invertible and uncertainty-oblivious transformation. Therefore, quantification of the uncertainty in this domain is quite limited. More specifically, rigidity implies that this approach cannot adapt to the complexity of the problem by adjusting the latent space dimension; Globality suggests that it cannot be operated locally on patches in order to better localize the uncertainty assessments; Being non-linear implies that an evaluation of the uncertainty volume (see Section III) in the image domain is hard and next to impossible; Noninvertability of means that some energy is lost from the image in the analysis and not accounted for, thus hampering the validity of the statistical guarantees; Finally, note that the latent space is associated with the image content, but does not represent the prime axes of the uncertainty behavior. Note that due to the above, and especially the inability to provide certified volumes of uncertainty, an experimental comparison of our method to [4] is impossible.\nInspired by the above contributions, we propose a novel alternative uncertainty quantification approach that takes spatial relationships into account. Our work provides tight uncertainty regions, compared to prior work, with user-defined statistical guarantees through the use of a CP-based paradigm. Specifically, we adopted the Learn then Test [15] that provides statistical guarantees for controlling multiple risks." }, { "figure_ref": [], "heading": "III. 
PROBLEM FORMULATION", "publication_ref": [ "b0", "b1", "b0", "b1", "b35" ], "table_ref": [], "text": "Let P_{x,y} be a probability distribution over X × Y, where X and Y represent the input and the output space, respectively, for the inverse problem at hand. E.g., for the task of image colorization, Y could represent full-color high-quality images, while X represents their colorless versions to operate on. We assume that X, Y ⊂ [0, 1]^d ⊂ R^d, where, without loss of generality, d is assumed to be the dimension of both spaces. In the context of examining patches within output images, we define Y_patch as the patch space of the output images. For simplicity, we use the same notation, d, for Y and Y_patch, while it is clear that the dimension of Y_patch is smaller and controlled by the user through the patch size to work on.
Given an input measurement x ∈ R^d, we aim to quantify the uncertainty of the possible solutions to the inverse problem, as manifested by the estimated d-dimensional posterior distribution, P̂_{y|x}. The idea is to enhance the definition of pixelwise uncertainty intervals by integrating the spatial correlations between pixels to yield a better-structured uncertainty region. To achieve this, we propose to construct uncertainty intervals using a designated collection of orthonormal basis vectors for R^d instead of intervals over individual pixels. We denote this collection by B̂(x) = {v̂_1(x), v̂_2(x), . . . , v̂_d(x)}, where v̂_i(x) ∈ R^d. These vectors are instance-dependent, thus best adapted to their task. An intuitive example of such a basis is the standard one, B(x) = {e_1, e_2, . . . , e_d}, where e_i ∈ R^d is the one-hot vector with value 1 in the i-th entry. In our work, we use a set of principal components of P̂_{y|x}, which will be discussed in detail in Section IV.
Similar to [1], [2], we use an interval-based method centered around the conditional mean image, i.e., an estimate of E[y|x] ∈ R^d, denoted by μ̂(x). Formally, we utilize the following interval-valued function that constructs prediction intervals along each basis vector around the estimated conditional mean:
$$\mathcal{T}(x; \hat{B}(x))_i := \left[\, \hat{v}_i(x)^T \hat{\mu}(x) - \hat{l}(x)_i \,,\; \hat{v}_i(x)^T \hat{\mu}(x) + \hat{u}(x)_i \,\right]. \tag{1}$$
In the above, i ∈ {1, 2, . . . , d} is a basis vector index, and l̂(x)_i ∈ R_+ and û(x)_i ∈ R_+ are the lower and upper interval boundaries for the projected values of candidate solutions emerging from P̂_{y|x}. That is, if ŷ ∼ P̂_{y|x} is such a solution, v̂_i(x)^T ŷ is its i-th projection, and this value should fall within T(x; B̂(x))_i with high probability. Returning to the example of the standard basis, the above equation is nothing but pixelwise prediction intervals, which is precisely the approach taken in [1], [2]. By leveraging this generalization, the uncertainty intervals using these basis vectors form a d-dimensional hyperrectangle, referred to as the uncertainty region.
Importantly, we propose that the interval-valued function, T, should produce valid intervals that contain a user-specified fraction of the projected ground-truth values within a risk level of α ∈ (0, 1). In other words, more than 1-α of the projected ground-truth values should be contained within the intervals, similar to the approach taken in previous work in the pixel domain. To achieve this, we propose a holistic expression that aggregates the effect of all the intervals, T(x; B̂(x)). This expression leads to the following condition:
$$\mathbb{E}\left[\sum_{i=1}^{d} \hat{w}_i(x)\cdot \mathbb{1}\!\left\{\hat{v}_i(x)^T y \in \mathcal{T}(x; \hat{B}(x))_i\right\}\right] > 1 - \alpha, \tag{2}$$
where y ∈ R^d is the unknown ground-truth and ŵ_i(x) ∈ [0, 1], with Σ_{i=1}^{d} ŵ_i(x) = 1, are the weight factors that set the importance of covering the projected ground-truth values along each interval. In Section IV we discuss the proposed holistic expression and a specific choice of these weights. As an example, we could set α = 0.1 and ŵ_i(x) := 1/d, indicating that more than 90% of the projected ground-truth values onto the basis vectors are contained in the intervals, as illustrated in a 2d example in Figure 2 for different kinds of P̂_{y|x}.
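To make Equations (1) and (2) concrete, the following minimal NumPy sketch shows how per-axis interval bounds can be estimated from posterior samples and how the weighted coverage of Equation (2) can be evaluated empirically for a given ground truth. The function and variable names are ours, not the paper's, and the quantile-based choice of the bounds is only one possible heuristic.

```python
import numpy as np

def projected_intervals(samples, mean, basis, alpha=0.1):
    """Per-axis interval bounds, in the spirit of Equation (1).

    samples: (m, d) array of posterior samples y_hat ~ P_hat(y|x)
    mean:    (d,) estimate of E[y|x]
    basis:   (d, K) orthonormal basis vectors stored as columns
    """
    proj = (samples - mean) @ basis                    # projections around the mean
    lower = -np.quantile(proj, alpha / 2, axis=0)      # distance below the mean, l_hat(x)_i
    upper = np.quantile(proj, 1 - alpha / 2, axis=0)   # distance above the mean, u_hat(x)_i
    return lower, upper

def weighted_coverage(y, mean, basis, lower, upper, weights):
    """Empirical version of the summation inside Equation (2) for one ground truth y."""
    z = (y - mean) @ basis                             # projected ground-truth values
    inside = (z >= -lower) & (z <= upper)
    return float(np.sum(weights * inside))
```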
As discussed above and demonstrated in Figure 2, if the orthonormal basis in Equation (1) is chosen to be the standard one, we get the pixel-based intervals that disregard spatial correlations within the image, thus leading to an exaggerated uncertainty region. In this work, we address this limitation by transitioning to an instance-adapted orthonormal basis of R^d that allows the description of uncertainty using axes that are not necessarily pixel-independent, thereby providing tighter uncertainty regions. While such a basis could have been defined analytically using, for example, orthonormal wavelets [36], we suggest a learned and thus better-tuned one. The choice to use a linear and orthonormal representation for the uncertainty quantification comes as a natural extension of the pixelwise approach, retaining much of the simplicity and efficiency of treating each axis separately. Note that the orthogonality enables the decomposition of y around μ̂(x) via its projected values, y = μ̂(x) + Σ_{i=1}^{d} ( v̂_i(x)^T (y - μ̂(x)) ) v̂_i(x), which we refer to as the exact reconstruction property.
To evaluate the uncertainty across different uncertainty regions, we introduce a new metric called the uncertainty volume, V(x; T(x; B̂(x))), which represents the d-th root of the volume of the hyperrectangle formed by the intervals T(x; B̂(x)), defined in the following equation:
$$\mathcal{V}(x; \mathcal{T}(x; \hat{B}(x))) := \left[\prod_{i=1}^{d}\left(\hat{u}(x)_i + \hat{l}(x)_i\right)\right]^{1/d} \approx \exp\left\{\frac{1}{d}\sum_{i=1}^{d}\log\left(\hat{u}(x)_i + \hat{l}(x)_i + \epsilon\right)\right\} - \epsilon, \tag{3}$$
where ϵ > 0 is a small hyperparameter used for numerical stability. In Section V we demonstrate that our approach results in a significant reduction in these uncertainty volumes when compared to previous methods.
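The uncertainty volume of Equation (3) is straightforward to compute in log-space. The sketch below (ours, with assumed variable names) reuses projected_intervals from the previous sketch to reproduce, on a toy correlated 2D Gaussian, the qualitative behavior of Figure 2: the PCA-aligned region attains a much smaller volume than the pixelwise one at the same coverage level.

```python
import numpy as np

def uncertainty_volume(lower, upper, eps=1e-6):
    """d-th root of the hyperrectangle volume, computed in log-space as in Equation (3)."""
    widths = np.asarray(upper) + np.asarray(lower)
    return float(np.exp(np.mean(np.log(widths + eps))) - eps)

# Toy illustration in the spirit of Figure 2: a strongly correlated 2d "posterior".
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.95],
                [0.95, 1.0]])
samples = rng.multivariate_normal(np.zeros(2), cov, size=10_000)
mean = samples.mean(axis=0)

pixel_basis = np.eye(2)                            # standard basis (top row of Fig. 2)
pca_basis = np.linalg.eigh(np.cov(samples.T))[1]   # principal components (bottom row)

for name, basis in [("pixel basis", pixel_basis), ("PCA basis", pca_basis)]:
    lo, up = projected_intervals(samples, mean, basis, alpha=0.1)
    print(f"{name}: volume = {uncertainty_volume(lo, up):.3f}")
```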
Specifically, the user sets a ratio of pixels, q ∈ R, and a maximum acceptable reconstruction error over this ratio, β ∈ (0, 1). This approximation allows us to reduce the number of basis vectors used to formulate B(x), such that the reconstruction will be valid according to the following condition:\nE Qq    K j=1 vj(x) T ycvj(x) -yc i    d i=1 ≤ β ,(4)\nwhere y c := y -μ(x) is the ground-truth image centered around μ(x), and Qq (•) is the empirical quantile function defined by the smallest z satisfying 1 d d i=1 1{z i ≤ Qq (z)} ≥ q. In Section IV, we discuss this expression for assessing the validity of the basis vectors. As an example, setting q = 0.9 and β = 0.05 would mean that the maximal reconstruction error of 90% of the ground-truth pixels is no more than 5% of the [0, 1] dynamic range." }, { "figure_ref": [], "heading": "IV. PUQ: PRINCIPAL UNCERTAINTY QUANTIFICATION", "publication_ref": [ "b0" ], "table_ref": [], "text": "In this section, we present Principal Uncertainty Quantification (PUQ), our method for quantifying the uncertainty in inverse problems while taking into account spatial dependencies between pixels. PUQ uses the principal components (PCs) of the solutions to the inverse problem for achieving its goal. In Appendix A, we provide an intuition behind the choice of the PCs as the basis. Our approach can be used either globally across the entire image, referred to as the global mode; or locally within predefined patches or segments of interest, referred to as the local mode. Local uncertainty quantification can be applied to any task, where the dimensionality of the target space is fully controlled by the user. In contrast, global quantification is particularly advantageous for tasks that exhibit strong spatial correlations between pixels.\nOur proposed method consists of two phases. In the first, referred to as the approximation phase, a machine learning system is trained to predict the PCs of possible solutions, denoted by B(x) = {v 1 (x), v2 (x), . . . , vK (x)} (where K ≤ d), as well as a set of importance weights, ŵ(x) ∈ R K , referring to the vectors in B(x). In addition, the system estimates the necessary terms in Equation ( 1), which include the conditional mean, μ(x) ∈ R d , and the lower and upper bounds, l(x) ∈ R K and ũ(x) ∈ R K3 , for the spread of projected solutions over B(x). All these ingredients are obtained by a diffusion-based conditional sampler as described in Figure 3. More details on this computational process are brought in Section IV-A.\nThe above-described approximation phase is merely an estimation, as the corresponding heuristic intervals of Equation (1) may not contain the projected ground-truth values with a desired ratio. Additionally, the basis vectors may not be able to recover the ground-truth pixel values within an acceptable threshold when K < d, or the basis set may contain insignificant axes in terms of variability. Therefore, in the second, calibration phase, we offer two calibration procedures on an held-out set of calibration data, denoted by S cal := {(x i , y i )} n i=1 . These assess the validity of our proposed uncertainty region over unseen data, which is composed by the intervals defined in Equation (1). The choice between the two calibration procedures depends on the user, taking into account the trade-off between precision and complexity. 
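The recovery check just mentioned, i.e., the inner part of Equation (4), translates directly into code. The sketch below is an illustrative NumPy version with assumed variable names, not an official API: it projects the centered ground truth onto the K retained basis vectors, reconstructs it, and reports the q-quantile of the per-pixel absolute errors; averaging this quantity over held-out pairs and comparing it to β approximates the expectation in Equation (4).

```python
import numpy as np

def reconstruction_quantile_error(y, mu, basis, q=0.9):
    """q-quantile over pixels of |sum_j (v_j^T y_c) v_j - y_c|, cf. Equation (4)."""
    y_c = y - mu                     # center the ground truth around mu(x)
    coeffs = basis @ y_c             # projections v_j(x)^T y_c, shape (K,)
    recon = basis.T @ coeffs         # low-rank reconstruction of y_c, shape (d,)
    # np.quantile (linear interpolation) stands in for the empirical quantile Q_q.
    return float(np.quantile(np.abs(recon - y_c), q))

def reconstruction_risk(pairs, q=0.9, beta=0.05):
    """Empirical estimate of the left-hand side of Equation (4) and its validity.

    pairs: iterable of (y, mu, basis) triplets from held-out data.
    """
    errs = [reconstruction_quantile_error(y, mu, basis, q) for y, mu, basis in pairs]
    risk = float(np.mean(errs))
    return risk, risk <= beta
```

Setting q = 0.9 and β = 0.05 matches the example given above: at most a 5% error on 90% of the pixels, on average over unseen instances.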
The steps of our proposed method are summarized in Algorithm 1, and the two calibration strategies are as follows:\n(1) Exact PUQ (E-PUQ -Section IV-B1): In the setting of an exact uncertainty assessment, while assuming that d PCs can be constructed and maintained in full, the exact reconstruction property is satisfied. Consequently, the calibration procedure is straightforward, involving only scaling of the intervals until they contain the user-specified miscoverage preference, denoted by α ∈ (0, 1), of the projected ground-truth values falling outside the uncertainty region. This is similar to the approach taken in previous work over the pixel domain.\n(2) Dimension-Adaptive PUQ (DA-PUQ -Section IV-B2, RDA-PUQ -Appendix D): In the setting of an approximate uncertainty assessment, while allowing for a small recovery error of projected ground-truth instances to full-dimensional instances, either due to complexity or interpretability reasons (see Section III), the exact reconstruction property is no longer satisfied. Hence, in addition to the scaling procedure outlined above, we must verify that the K PCs can decompose the ground-truth pixel values with a small error. In this calibration process, we also control the minimum number of the first k(x) PCs out of the K PCs, such that a small reconstruction error can be guaranteed for unseen data. This number is dynamically determined per input image, so that instances with greater pixel correlations are assigned more PCs than those with weaker correlations. As manually determining K might be challenging, we introduce the Reduced Dimension-Adaptive PUQ (RDA-PUQ) procedure that also controls that value as part of the calibration -see Appendix D. Apply DA-PUQ / RDA-PUQ using the calibration data 6: end if ▷ Inference 7: Provide statistically valid uncertainty axes and intervals in terms of Equation ( 2) and Equation ( 4), applied to an unseen input instance x" }, { "figure_ref": [ "fig_4" ], "heading": "Algorithm 1 Generating PUQ Axes and Intervals", "publication_ref": [], "table_ref": [], "text": "In Section V we demonstrate a significant decrease in the uncertainty volume, as defined in Equation (3) for each procedure, whether applied globally or locally, compared to prior work. On the one hand, the E-PUQ procedure is the simplest and can be applied locally to any task, and globally to certain tasks where the computation of d PCs is feasible. On the other hand, the DA-PUQ and RDA-PUQ procedures are more involved and can be applied both globally or locally to any task, while these are particularly effective in cases in which pixels exhibit strong correlations, such as in the image colorization task. Our method is visually illustrated in Figure 4, showing a sampling methodology and a calibration scheme using the full PCs or only a subset of them." }, { "figure_ref": [], "heading": "A. Diffusion Models for the Approximation Phase", "publication_ref": [ "b0" ], "table_ref": [], "text": "The approximation phase, summarized in Algorithm 1 in RED, can be achieved in various ways. In this section, we describe the implementation we used to obtain the results in Section V. 
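As a preview of that implementation, spelled out in the following paragraphs and in Algorithm 2, here is a compact sketch of the sampling-based approximation phase. The `sampler` callable is an assumption standing in for the diffusion-based stochastic solver f_θ(x, z); the PCs, importance weights, conditional mean, and heuristic bounds follow the quantities defined in the text.

```python
import numpy as np

def approximation_phase(x, sampler, K, alpha=0.1):
    """Sampling-based estimate of the PCs, weights, mean and heuristic bounds.

    sampler(x) -> one posterior sample, flattened to shape (d,).
    Returns basis (K, d), weights (K,), mu (d,), lower (K,), upper (K,).
    """
    samples = np.stack([sampler(x) for _ in range(K)])          # (K, d)
    mu = samples.mean(axis=0)
    centered = samples - mu
    # Right singular vectors of the centered sample matrix are the PCs.
    _, sing_vals, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt                                                  # (K, d), orthonormal rows
    weights = sing_vals**2 / np.sum(sing_vals**2)               # importance weights
    proj = centered @ basis.T                                   # projected samples, (K, K)
    # Heuristic interval offsets from empirical quantiles of the projections.
    lower = -np.quantile(proj, alpha / 2, axis=0)
    upper = np.quantile(proj, 1 - alpha / 2, axis=0)
    return basis, weights, mu, lower, upper

# Toy usage: a Gaussian perturbation stands in for the conditional diffusion sampler.
rng = np.random.default_rng(0)
toy_sampler = lambda x: x + 0.1 * rng.standard_normal(x.shape)
basis, w, mu, lo, up = approximation_phase(np.zeros(16), toy_sampler, K=8)
```

In local mode the same routine is simply run on flattened patches instead of full images.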
While we aim to construct the uncertainty axes and intervals in the most straightforward way, further exploration of more advanced methods to achieve the PCs is left for future work.\nIn our implementation, we leverage the recent advances in stochastic regression solvers for inverse problems based on diffusion models, which enable to train a machine learning model to generate high-quality samples from Py|x . Formally, we define f θ : X × Z → Y as a stochastic regression solver for an inverse problem in global mode, where Z is the noise seed space. Similarly, in local mode, we consider\nf θ : X × Z → Y patch . Given an input instance x ∈ R d , we propose to generate K samples, denoted by {f θ (x, z i )} K i=1\n, where, f θ (x, z i ) ∼ Py|x . These samples are used to estimate the PCs of possible solutions and their importance weights using the SVD decomposition of the generated samples. The importance weights assign high values to axes with large variance among projected samples, and low ones to those with small variance. In Section IV-B, we elaborate on how these weights are used in the calibration phase. Additionally, the samples are utilized to estimate the conditional mean, μ(x), and the lower and upper bounds, l(x) and ũ(x), necessary for Equation (1). l(x) and ũ(x) are obtained by calculating quantiles of the projected samples onto each PC, with a userspecified miss-coverage ratio α ∈ (0, 1).\nTo capture the full spread and variability of Py|x , it is necessary to generate at least K = d samples to feed to the SVD procedure, which is computationally challenging for high-dimensional data. As a way out, we suggest working locally on patches, where d is small and fully controlled by the user by specifying the patch size to work on. However, for tasks with strong pixel correlation, such as image colorization, a few PCs can describe the variability of Py|x with a very small error. Therefore, only a few samples (i.e., K ≪ d) are required for the SVD procedure to construct meaningful PCs for the entire image, while capturing most of the richness in Py|x . We formally summarize our sampling-based methodology, in either global or local modes, in Algorithm 2." }, { "figure_ref": [], "heading": "B. Calibration Phase", "publication_ref": [ "b0", "b1", "b0", "b1", "b14" ], "table_ref": [], "text": "In order to refine the approximation phase and obtain valid uncertainty axes and intervals that satisfy the guarantees of Equation ( 2) and Equation ( 4), it is necessary to apply a calibration phase, as summarized in Algorithm 1 in BLUE. This phase includes two different options based on particular conditions on the number of PCs to be constructed and maintained during the calibration procedure or during inference, when applied either globally or locally. Below we outline each of these options in more details.\n1) Exact PUQ: The Exact PUQ (E-PUQ) procedure provides the complete uncertainty of the d-dimensional posterior distribution, P y|x . In this case, the exact reconstruction property discussed in Section III is satisfied, and Equation ( 4) is fulfilled with 0% error (β = 0) across 100% (q = 1.0) of the pixels. 
Therefore, the calibration is simple, involving only a scaling of intervals to ensure Equation ( 2) is satisfied with high probability, similar to previous work [1], [2].\nFormally, for each input instance x and its corresponding ground-truth value y ∈ R d in the calibration data, we use the estimators obtained in the approximation phase to get d PCs Algorithm 2 Approximation Phase via Sampling\nInput: Instance x ∈ X . Conditional stochastic generative model f θ : X → Y or f θ : X → Ypatch. Maximal PCs / samples number K ≤ d. Misscoverage ratio α ∈ (0, 1). ▷ Generate samples drawn from Py|x 1: for i = 1 to K do 2: ŷi(x) ← f θ (x, zi) 3: end for ▷ Compute conditional mean 4: μ(x) ← 1 K K i=1 ŷi(x)\n▷ Apply SVD decomposition and extract the PCs and weights\n5: Ŷ (x) ← [ŷ1(x), ŷ2(x) . . . ŷK (x)] ∈ R d×K 6: Ŷ (x) -μ(x) • 1 T K = V (x) Σ(x) Û (x) T 7: B(x) ← {v1(x), v2(x) . . . vK (x)}, where vi(x) = [ V (x)]i 8: ŵ(x) ← σ1(x) 2 , . . . , σK (x) 2 /c ∈ R K , where σi(x) = [ Σ(x)]i and c = K j=1 σj(x)\n2 . ▷ Compute α/2 and 1 -α/2 empirical quantiles of projected samples onto each PC 9: for i = 1 to K do 10:\nl(x)i ← Qα/2 ({vi(x) T (ŷj(x) -μ(x))} K j=1 ) 11: ũ(x)i ← Q1-α/2 ({vi(x) T (ŷj(x) -μ(x))} K j=1 ) 12:\nend for Output: K PCs B(x), importance weights ŵ(x), conditional mean μ(x), lower and upper bounds l(x) and ũ(x).\nof possible solutions B(x), their corresponding importance weights ŵ(x), the conditional mean μ(x), and the lower and upper bounds, denoted by l(x) and ũ(x). We then define the scaled intervals to be those specified in Equation ( 1), with the upper and lower bounds defined as û(x) := λũ(x) and l(x) := λ l(x), where λ ∈ R + is a tunable parameter that controls the scaling. Notably, the size of the uncertainty intervals decreases as λ decreases. We denote the scaled uncertainty intervals by T λ (x; B(x)). The following weighted coverage loss function is used to guide our design of λ:\nL(x, y; λ) := d i=1 ŵi(x) • 1 vi(x) T y ̸ ∈ T λ (x; B(x))i . (5\n)\nThis loss is closely related to the expression in Equation ( 2), and while it may seem arbitrary at first, this choice is a direct extension to the one practiced in [1], [2]. In Appendix B we provide an additional justification for it, more tuned to the realm discussed in this paper.\nOur goal is to ensure that the expectation of L(x, y; λ) is below a pre-specified threshold, α, with high probability over the calibration data. This is accomplished by a conformal prediction based calibration scheme, and in our paper we use the LTT [15] procedure, which guarantees the following:\nP E[L(x, y; λ)] ≤ α ≥ 1 -δ ,(6)\nfor a set of candidate values of λ, given as the set Λ. δ ∈ (0, 1) is an error level on the calibration set and λ is the smallest value within Λ satisfying the above condition, so as to provide the smallest uncertainty volume over the scaled intervals, as defined in Equation ( 3), which we denote by V λ.\nPut simply, the above guarantees that more than 1 -α of the ground-truth values projected onto the full d PCs of Py|x are contained in the uncertainty intervals with probability at least 1 -δ, where the latter probability is over the randomness of the calibration set. The scaling factor takes into account the weights to ensure that uncertainty intervals with high variability contain a higher proportion of projected ground-truth values than those with low variability. This is particularly important for tasks with strong pixel correlations, where the first few PCs capture most of the variability in possible solutions. 
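To illustrate the mechanics of this calibration, the following sketch computes the coverage loss of Equation (5) on calibration pairs and scans candidate interval scales λ. For brevity it replaces the LTT machinery with a per-λ Hoeffding-style bound on the empirical risk, so it conveys the idea but is not the statistically exact procedure detailed next in Algorithm 3; all variable names are ours.

```python
import numpy as np

def coverage_loss(y, mu, basis, weights, lower, upper, lam):
    """Equation (5): weighted fraction of projections outside the lam-scaled intervals."""
    proj = basis @ (y - mu)                              # v_i(x)^T y - v_i(x)^T mu(x)
    outside = (proj < -lam * lower) | (proj > lam * upper)
    return float(np.sum(weights * outside))

def calibrate_scale(cal_items, lambdas, alpha=0.1, delta=0.1):
    """Smallest scale whose bounded risk stays below alpha (simplified LTT stand-in).

    cal_items: list of (y, mu, basis, weights, lower, upper) tuples produced by the
    approximation phase for each calibration pair.
    """
    n = len(cal_items)
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n))     # Hoeffding term, losses in [0, 1]
    valid = [lam for lam in lambdas
             if np.mean([coverage_loss(*item, lam) for item in cal_items]) + slack <= alpha]
    return min(valid) if valid else max(lambdas)
```

Because the loss is weighted by ŵ(x), a miss along a high-variance PC costs more than a miss along a negligible one, which is exactly the behaviour described above.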
We describe in detail the E-PUQ procedure in Algorithm 3. L(x, y; λ) ← 7:" }, { "figure_ref": [], "heading": "Algorithm 3 Exact PUQ Procedure", "publication_ref": [ "b14", "b1", "b7", "b14", "b6", "b14" ], "table_ref": [], "text": "d i=1 ŵi(x) • 1 vi(x) T y ̸ ∈ T λ (x; B(x))i 8:\nend for 9: end for 10: Λ ← Extract valid λs from LTT [15] \nλ ← arg min λ∈ Λ 1 n n i=1 V λ (xi; B(x)) Output: Given a new instance x ∈ X , obtain valid uncertainty intervals for it, T λ(x; B(x)).\n2) Dimension-Adaptive PUQ: The E-PUQ procedure assumes the ability to construct and maintain d PCs, which can be computationally challenging both locally and globally. Furthermore, an uncertainty quantification over these axes may be less intuitive, due to the many axes involved, thus harming the method's interpretability (see discussion in Section III).\nTo address these, we propose the Dimension-Adaptive PUQ (DA-PUQ) procedure, which describes the uncertainty region with fewer axes, K ≤ d. The use of only a few leading dimensions, e.g., K = 3, can lead to a more interpretable uncertainty region, enabling an effective visual navigation within the obtained uncertainty range.\nWhile this approach does not satisfy the exact reconstruction property (see Section III), the decomposed ground-truth values can still be recovered through the K PCs with a small userdefined error in addition to the coverage guarantee. By doing so, we can achieve both the guarantees outlined in Equation (2) and Equation ( 4) with high probability.\nTo satisfy both the coverage and reconstruction guarantees while enhancing interpretability, we use a dynamic function, k(x) : X → N, and a scaling factor to control the reconstruction and coverage risks. The function k(x) determines the number of top PCs (out of K) to include in the uncertainty region, focusing on the smallest number that can satisfy both Equation ( 2) and ( 4), so as to increase interpretability.\nFormally, for each input instance x and its corresponding ground-truth value y ∈ R d in the calibration data, we use the estimators obtained in the approximation phase to estimate K ≤ d PCs of possible solutions, denoted by B(x), their corresponding importance weights, denoted by ŵ(x), the conditional mean denoted by μ(x), and the lower and upper bounds denoted by l(x) and ũ(x), respectively. We then introduce a threshold λ 1 ∈ (0, 1) for the decay of the importance weights over the PCs of solutions to x. The adaptive number of PCs to be used is defined as follows:\nk(x; λ1) := min 1≤k≤K k s.t. k i=1 ŵi(x) ≥ λ1 .(7)\nObviously, the importance weights are arranged in a descending order, starting from the most significant axis and ending with the least significant one. Furthermore, let q ∈ (0, 1) be a specified ratio of pixels, and β ∈ (0, 1) be a maximum allowable reconstruction error over this ratio. The reconstruction loss function to be controlled is defined as:\nL1(x, y; λ1) := Qq       k(x;λ 1 ) j=1 vj(x) T ycvj(x) -yc i    d i=1    ,(8)\nwhere Qq (•) selects the empirical q-quantile of the reconstruction errors, and y c = y -μ(x) is the ground-truth image centered around μ(x). In Appendix C, we discuss further this specific loss function for controlling the capability of the linear subspace to capture the richness of the complete d-dimensional posterior distribution.\nAt the same time, we also control the coverage risk over the k(x) PCs, with α ∈ (0, 1) representing a user-specified acceptable misscoverage rate and λ 2 ∈ R + representing the calibration factor parameter. 
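The two ingredients introduced so far, the adaptive dimension of Equation (7) and the reconstruction loss of Equation (8), can be sketched as follows. This is illustrative code with assumed names; the coverage counterpart, restricted to the same k(x) PCs, is Equation (9) defined next, and the joint certification of both risks is still carried out with LTT rather than by this snippet.

```python
import numpy as np

def adaptive_dim(weights, lam1):
    """Equation (7): smallest k whose leading importance weights sum to at least lam1."""
    k = np.searchsorted(np.cumsum(weights), lam1) + 1
    return int(min(k, len(weights)))

def reconstruction_loss(y, mu, basis, weights, lam1, q=0.9):
    """Equation (8): q-quantile pixel error when reconstructing with k(x; lam1) PCs."""
    k = adaptive_dim(weights, lam1)
    y_c = y - mu
    recon = basis[:k].T @ (basis[:k] @ y_c)       # projection onto the first k PCs
    return float(np.quantile(np.abs(recon - y_c), q))
```

Larger λ1 keeps more PCs, trading some interpretability for a smaller reconstruction loss.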
To control this coverage risk, we define the coverage loss function to be the same as in Equation ( 5), but limited to the k(x) PCs, that is:\nL2(x, y; λ1, λ2) :=(9)\nk(x;λ 1 )\ni=1 ŵi(x) • 1 vi(x) T y ̸ ∈ T λ 2 (x; B(x))i .\nFinally, using the reconstruction loss function of Equation (8) and the coverage loss function of Equation ( 9), we seek to minimize the uncertainty volume, defined in Equation (3), for the scaled intervals where any unused axes (out of d) are fixed to zero. We denote this uncertainty volume as V λ1,λ2 . The minimization of V λ1,λ2 is achieved by minimizing λ 1 and λ 2 , while ensuring that the guarantees of Equation (2) and Equation (4) hold with high probability over the calibration data. This can be provided, for example, through the LTT [15] calibration scheme, which guarantees the following:\nP E[L1(x, y; λ1)] ≤ β E[L2(x, y; λ1, λ2)] ≤ α ≥ 1 -δ ,(10)\nwhere λ1 and λ2 are the minimizers for the uncertainty volume among valid calibration parameter results, Λ, obtained through the LTT procedure. In other words, we can reconstruct a fraction q of the ground-truth pixel values with an error no greater than β, and a fraction of more than 1 -α of the projected ground-truth values onto the first k(x; λ1 ) PCs of P y|x are contained in the uncertainty intervals, with a probability of at least 1 -δ. A detailed description of the DA-PUQ procedure is given in Algorithm 4.\nThe above-described DA-PUQ procedure reduces the number of PCs to be constructed to K ≤ d while using k(x; λ1 ) ≤ K PCs, leading to increased efficiency in both time and space during inference. However, determining manually the smallest K value that can guarantee both Equation (2) and Equation ( 4) can be challenging. To address this, we propose an expansion of the DA-PUQ procedure; the Reduced Dimension-Adaptive PUQ (RDA-PUQ) procedure that also controls the maximum number of PCs, K, required for the uncertainty assessment. This approach is advantageous for inference as it reduces the number of samples required to construct the PCs using Algorithm 2, while ensuring both the coverage and reconstruction guarantees of Equation (2) and Equation ( 4) with high probability. The RDA-PUQ procedure is fully described in Appendix D. Maximal PCs number K ≤ d. Approximation phase estimators B, ŵ, μ, ũ, l. Recovered pixels ratio q ∈ (0, 1). Reconstruction error β ∈ (0, 1). Misscoverage ratio α ∈ (0, 1). Calibration error level δ ∈ (0, 1). For an effective calibration, α, β, δ should be close to 0 while q should be close to 1.\n1: for (x, y) ∈ Scal do 2: B(x), ŵ(x), μ(x), ũ(x), l(x) ← Apply Algorithm 2 to\nx, with the choice of K samples 3:\nfor λ1 ∈ Λ1 do ▷ Compute adaptive dimensionality, Equation (7) 4:\nk(x; λ1) ← min k k : K i=1 ŵi(x) ≥ λ1 ▷ Compute reconstruction loss, Equation (8) 5: yc ← y -μ(x) 6: L1(x, y; λ1) ← Qq k(x;λ 1 ) j=1 vj(x) T ycvj(x) -yc i d i=1 7:\nfor λ2 ∈ Λ2 do ▷ Scale uncertainty intervals end for 13: end for 14: Λ ← Extract valid λs from LTT [15] applied on {(L1(x, y; λ1), L2(x, y; λ1, λ2)) : (x, y) ∈ Scal, λ1 ∈ Λ 1 , λ2 ∈ Λ 2 } at risk levels (β, α) and error level δ, referring to Equation (10) ▷ Compute the minimizers for the uncer. volume, Equation (3\n) 15: λ1, λ2 ← arg min λ 1 ,λ 2 ∈ Λ 1 n n i=1 V λ 1 ,λ 2 (xi; B(xi))\nOutput: Given a new instance x ∈ X , obtain valid uncertainty intervals for it, T λ2 (x; B(x)) over k(x; λ1) ≤ K PCs." }, { "figure_ref": [ "fig_6" ], "heading": "V. 
EMPIRICAL STUDY", "publication_ref": [ "b36", "b9", "b0", "b1" ], "table_ref": [], "text": "This section presents a comprehensive empirical study of our proposed method PUQ, applied to three challenging tasks: image colorization, super-resolution, and inpainting, over the CelebA-HQ dataset [37]. Our approximation phase starts with a sampling from the posterior, applied in our work by the SR3 conditional diffusion model [10]. Figure 5 presents typical sampling results for these three tasks, showing the expected diversity in the images obtained.\nThe experiments we present herein verify that our method satisfies both the reconstruction and coverage guarantees and demonstrate that PUQ provides more confined uncertainty regions compared to prior work, including im2im-uq [1] and Conffusion [2]. Through the experiments, we present superiority in uncertainty volume, as defined in Equation (3), and in interpretability through the use of only a few PCs to assess the uncertainty of either a patch or a complete image. All the experiments were conducted over 100 calibration-test splits. For in-depth additional details of our experiments and the settings used, we refer the reader to Appendix E. Additionally, an ablation study has been conducted, as elaborated in Appendix H. This study presents an analysis of user-defined parameters: α, β, q, and δ, aiming to provide a comprehensive insight into their selection. Furthermore, we have investigated the tradeoff between precision and complexity in Appendix I to offer a complete understanding of our method's performance." }, { "figure_ref": [], "heading": "A. Evaluation Metrics", "publication_ref": [ "b9", "b14", "b8", "b7", "b0" ], "table_ref": [], "text": "Before presenting the results, we discuss the metrics used to evaluate the performance of the different methods. Although our approach is proved to guarantee Equation ( 6) for E-PUQ and Equation (10) for DA-PUQ, (through LTT [15]), we assess the validity and tightness of these guarantees as well.\nEmpirical coverage risk. We measure the risk associated with the inclusion of projected unseen ground-truth values in the uncertainty intervals. In E-PUQ, we report the average coverage loss, defined in Equation ( 5). In the case of DA-PUQ and RDA-PUQ, we report the value defined by Equation (9).\nEmpirical reconstruction risk. We measure the risk in recovering unseen ground-truth pixel values using the selected PCs. In the case of E-PUQ, this risk is zero by definition. However, for DA-PUQ and RDA-PUQ, we report the average reconstruction loss, defined by Equation (8).\nInterval-Size. We report the calibrated uncertainty intervals' sizes of Equation (1), and compare them with baseline methods. For E-PUQ, we compare intervals over the full basis set of PCs with the intervals in the pixel domain used in previous work. In the DA-PUQ and RDA-PUQ procedures, we apply dimensionality reduction to K ≪ d dimensions. To validly compare the intervals' sizes of these methods to those methods over the full d dimensions, we pad the remaining d -K dimensions with zeros as we assume that the error in reconstructing the ground-truth from the dimensionally reduced samples is negligible.\nUncertainty Volume. We report these volumes, defined in Equation ( 3), for the calibrated uncertainty regions and compare them with previous work. A smaller volume implies a higher level of certainty in probable solutions to P y|x . 
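When only K of the d axes are kept, the interval-size and volume statistics are reported with the zero-padding convention described above; a minimal sketch of that bookkeeping (our own code, with assumed names) is given below.

```python
import numpy as np

def padded_region_metrics(lower, upper, d, eps=1e-10):
    """Interval sizes and uncertainty volume for a K-dimensional region inside R^d.

    lower, upper: (K,) calibrated offsets; the remaining d - K axes are padded with
    zero-width intervals so that volumes are comparable to full-dimensional baselines.
    """
    widths = np.concatenate([upper + lower, np.zeros(d - len(lower))])
    return {
        "mean_interval_size": float(widths.mean()),
        "std_interval_size": float(widths.std()),
        "uncertainty_volume": float(np.exp(np.mean(np.log(widths + eps))) - eps),
    }
```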
In E-PUQ, we compare volumes over the full basis set of PCs, whereas for the DA-PUQ and RDA-PUQ procedures, we pad the remaining dimensions with zeros." }, { "figure_ref": [ "fig_7", "fig_7", "fig_8", "fig_7", "fig_7", "fig_7", "fig_7", "fig_7", "fig_8" ], "heading": "B. Local Experiments on Patches", "publication_ref": [ "b0", "b1", "b0", "b1" ], "table_ref": [ "tab_1" ], "text": "We apply our proposed methods on RGB patches of increasing size -1x1, 2x2, 4x4, and 8x8 -for image colorization, super-resolution, and inpainting tasks. The obtained results are illustrated in Figure 6 and Figure 7, where Figure 6 compares our exact procedure, E-PUQ, to baseline methods, and Figure 7 examines our approximation procedures, DA-PUQ and RDA-PUQ. In Table I we present a numerical comparison of uncertainty volumes across tasks at 8x8 patch resolution. We also provide visual representations of the uncertainty volume maps for patches at varying resolutions in Figure 8. The results shown in Figure 6 and Figure 7 demonstrate that our method provides smaller uncertainty volumes, and thus more confined uncertainty regions, when compared to previous work in all tasks and patch resolutions, and while satisfying the same statistical guarantees in all cases. More specifically, Figure 6 compares our exact procedure, E-PUQ, to baseline methods. Following this figure, one can see that using the E-PUQ procedure we obtained an improvement of ∼ ×100 in the uncertainty volumes in colorization and an improvement of ∼ ×10 in super-resolution and inpainting, when applied to the highest resolution of 8x8. Additionally, as the patch resolution increases, we observe a desired trend of uncertainty volume reduction, indicating that our method takes into account spatial correlation to reduce uncertainty. Note that even a patch size of 1 × 1 brings a benefit in the evaluated volume, due to the exploited correlation within the three color channels. E-PUQ reduces trivially to im2im-uq [1] and Conffusion [2] when applied to scalars (1×1×1 patches).\nIn Figure 7, we examine our approximation methods, DA-PUQ and RDA-PUQ, in which we set a relatively small reconstruction risk of β = 0.05. Observe the significantly smaller uncertainty volumes obtained; this effect is summarized in Fig. 6. Local Experiments: A comparison of E-PUQ (see Section IV-B1) with previous work -im2im-uq [1] and Conffusion [2]. These methods are applied locally on patches with α = δ = 0.1. Each column corresponds to a relevant metric (see Section V-A), and each row corresponds to a specific task. The uncertainty volume was computed with ϵ = 1e -10. Results indicate that our approach achieves superior uncertainty volume.\nTable I as well. Figure 7 also portrays the dimensionality of the uncertainty region used with our method using two overlapping bars. The outer bar in yellow refers to the number of PCs that need to be constructed, denoted as K in DA-PUQ and K in RDA-PUQ. The smaller this number is, the lower the test time computational complexity. The inner bar in green refers to the average number of the adaptively selected PCs, denoted as k(x). A lower value of k(x) indicates better interpretability, as fewer PCs are used at inference than those that were constructed. 
For example, in the colorization task, it can be seen that the RDA-PUQ procedure is the most computationally efficient methodology, requiring only K ≈ 12 PCs to be constructed at inference, while the DA-PUQ procedure is the most interpretable results, with uncertainty regions consisting of only k(x) ∈ {1, 2, 3} axes.\nIn all experiments demonstrated in Figure 6 and Figure 7, it is noticeable that the standard deviation of the interval-size of our approach is higher than that of the baseline methods. This effect happens because a few intervals along the first few PCs are wider than those along the remaining PCs. However, the majority of the interval sizes are significantly smaller, resulting in a much smaller uncertainty volume. Interestingly, the uncertainty intervals of the DA-PUQ and RDA-PUQ procedures in Figure 7 exhibit larger standard deviation compared to the E-PUQ procedure in Figure 6. We hypothesize that this is caused when only a few intervals (e.g., 2 intervals) are used for the calibration process while small miscoverage ratio is set by the user (α = 0.1). As an example in the case of using 2 intervals with all samples of the calibration set, it is necessary to enlarge all the intervals to ensure the coverage guarantee, resulting in wider intervals over the first few PCs.\nThe heat maps presented in Figure 8 compare the uncertainty volumes of our patch-based E-PUQ procedure to baseline methods. Each pixel in the presented heat maps corresponds to the value of Equation ( 3) evaluated on its corresponding patch. The results show that as the patch resolution increases, pixels with strong correlation structure, such as pixels of the background area, also exhibit lower uncertainty volume in their corresponding patches. This indicates that the proposed method indeed takes into account spatial correlation, leading to reduced uncertainty volume." }, { "figure_ref": [ "fig_7" ], "heading": "C. Global Experiments on Images", "publication_ref": [ "b0", "b1", "b0", "b1", "b0", "b1", "b0", "b1" ], "table_ref": [ "tab_2", "tab_2" ], "text": "We turn to examine the effectiveness and validity of DA-PUQ and RDA-PUQ when applied to complete images at a resolution of 128 × 128. In this case, the E-PUQ procedure does not apply, as it requires computing and maintaining d = 128 × 128 × 3 PCs. We present results for the colorization task hereafter, and refer the reader to Appendix G for a similar analysis related to super-resolution and inpainting.\nWhile all PUQ procedures can be applied locally for any task, working globally is more realistic in tasks that exhibit strong pixel correlation. Under this setting, most of the image variability could be represented via DA-PUQ or RDA-PUQ while (i) maintaining a small reconstruction risk, and (ii) using only a few PCs to assess the uncertainty of the entire images. We should note that the tasks of super-resolution and inpainting are less-matched to a global mode since they require a larger number of PCs for an effective uncertainty representation -more on this is discussed in Appendix G. Figure 9 visually demonstrates the performance of our approximation methods, also summarized in Table II. These results demonstrate that our method provides significantly smaller uncertainty volumes compared to our local results in Figure 7 and previous works, but this comes at the cost of introducing a small reconstruction risk of up to β = 0.1. 
Observe how our approximation methods improve interpretability: the uncertainty regions consist of only 2-5 PCs in the full dimensional space of the images. The DA-PUQ procedure produces the tightest uncertainty regions; see the uncertainty volumes in Table II. In addition, the mean interval-size with our procedures is very small and almost equal to zero, indicating that the constructed uncertainty regions are tight and narrow due to strong correlation structure of pixels. However, similar to the previous results, the standard deviation of interval-size is spread across a wide range. This is because few of the first PCs have wide intervals. The RDA-PUQ procedure is the most computationally efficient as it required to construct only ∼30 PCs during inference to ensure statistical validity.\nFigure 10 presents selected uncertainty regions that were provided by our proposed RDA-PUQ procedure when applied globally. As can be seen, the projected ground-truth images using only k(x) PCs results in images that are very close to the originals. This indicates that the uncertainty region can describe the spread and variability among solutions with small reconstruction errors. The first two axes of our uncertainty regions exhibit semantic content, which is consistent with a method that accounts for spatial pixel correlation. The fact that these PCs capture foreground/background or full-object content highlights a unique strength of our approach. We provide the importance weights of the first two PCs, indicating impressive proportions of variability among projected samples onto these components (see Section IV-A). For example, in the third row, we observe that 77% of the variability in Py|x is captured by v1 (x), which mostly controls a linear color range of the pixels associated with the hat in the image. In Figure 11 we visually compare samples that were generated from the corresponding estimated uncertainty regions, by sampling uniformly a high dimensional point (i.e., an image) with E-PUQ, im2im-uq [1] and Conffusion [2]. Each pixel in the maps corresponds to the uncertainty volume, defined in Equation ( 3), of its corresponding patch. These results expose the effectiveness of our method that incorporates spatial correlations, resulting in a reduction of the uncertainty volume. from the corresponding hyper-rectangle. For further details regarding this study, please refer to Appendix F. As can be seen, the samples extracted from our uncertainty region are of high perceptual quality, whereas im2im-uq [1] and Conffusion [2] produce highly improbable images. This testifies to the fact that our method provides much tighter uncertainty regions, |------Our -------| im2im-uq Conffusion Fig. 11. Global Experiments: Images sampled uniformly from the estimated global uncertainty regions, referring to the colorization task. Using RDA-PUQ results with high-perceptual images, while im2im-uq [1] and Conffusion [2] produce unlikely images. These results indicate that our uncertainty regions are significantly more confined than those of previous works.\nwhereas previous work results in exaggerated regions that contain unlikely images. In addition to the above, we present in Appendix J a visualization of the lower and upper corners of the uncertainty regions produced by our method, comparing them to those produced by previous work [1], [2].\nVI. CONCLUDING REMARKS This paper presents \"Principal Uncertainty Quantification\" (PUQ), a novel and effective approach for quantifying uncertainty in any image-to-image task. 
PUQ takes into account the spatial dependencies between pixels in order to achieve significantly tighter uncertainty regions. The experimental results demonstrate that PUQ outperforms existing methods in image colorization, super-resolution and inpainting, by improving the uncertainty volume. Additionally, by allowing for a small reconstruction error when recovering groundtruth images, PUQ produces tight uncertainty regions with a few axes and thus improves computational complexity and interpretability at inference. As a result, PUQ achieves stateof-the-art performance in uncertainty quantification for imageto-image problems.\nReferring to future research, more sophisticated choices that rely on recent advancements in stochastic image regression models could be explored, so as to improve the complexity of our proposed approximation phase. Further investigation into alternative geometries for uncertainty regions could be interesting in order to reduce the gap between the provided region of uncertainty and the high-density areas of the true posterior distribution. This includes an option to divide the spatial domain into meaningful segments, while minimizing the uncertainty volume, or consider a mixture of Gaussians modeling of the samples of the estimated posterior distribution. Additionally, exploring alternative diffusion models and various conditional stochastic samplers presents an interesting path for future investigation. This could involve comparing different conditional samplers, potentially offering an alternative approach to the utilization of FID scores. Fig. 12. A visual representation demonstrating the intuition behind utilizing principal components (PCs) as the basis, B(x), in Equation ( 1) for the colorization task. The left part illustrates that the PCs incorporate spatial correlation, with v1 (x) primarily controlling the hat color, v2 (x) governing the background color, and v3 (x) influencing the clothing color. On the right side, an illustration of the uncertainty region is presented, composed of these axes, where the origin is μ(x), and each image is defined by μ(x) + vi (x) T yc + a, where yc := y -μ(x), and a ∈ R is a controllable parameter that moves along the axis. This figure provides an intuition behind employing these vectors for the uncertainty quantification. We show the estimation of the first three PCs using our globally applied PUQ and visualize the uncertainty region formed by these axes. Our approach facilitates efficient exploration within the uncertainty region, thanks to the linear axes that incorporate spatial correlation, as illustrated by the visualization of v1 (x), v2 (x), and v3 (x)." }, { "figure_ref": [], "heading": "B. Coverage Loss Justification", "publication_ref": [ "b0", "b1", "b4" ], "table_ref": [], "text": "This section aims to justify our choice for the loss-function for tuning λ in Equation ( 5), and the weights used in it, ŵi (x).\nRecall, this expression is given as:\nL(x, y; λ) := d i=1 ŵi(x) • 1 vi(x) T y ̸ ∈ T λ (x; B(x))i .\nOur starting point is the given d-dimensional hyperrectangle obtained from the approximation phase, oriented along the d PC directions. This shape serves as our initially estimated uncertainty region. 
Given the calibration data, S cal := {(x i , y i )} n i=1 , our goal is to inflate (or deflate, if this body proves to be exaggerated) this shape uniformly across all axes so that it contains the majority of the ground truth examples.\nFocusing on a single pair from this dataset, (x, y), the degraded image x is used to ignite the whole approximation phase, while the ground truth y serves for assessing the obtained hyper-rectangle, by considering the projected coordinates {v i (x) T y c } d i=1 , where y c := y -μ(x). The following function measures a potential deviation in the i-th axis, hi(x, y) := max vi(x)\nT yc -û(x)i, 0\n+ max -vi(x) T yc + l(x)i, 0 .\nWritten differently, this expression is also given by hi(x, y) :=\n     vi(x) T yc -û(x)i if vi(x) T yc > û(x)i > 0 l(x)i -vi(x) T yc if vi(x) T yc < -l(x)i < 0 0 otherwise.\nIf positive, this implies that in this axis the example spills outside the range of the rectangle, and the value itself is the distance from it's border.\nThe following expression quantifies the weighted amount of energy that should be invested in projecting back the {v i (x) T y} d i=1 coordinates to the closest border point:\nEnergy(x, y) = d i=1 σi(x) 2 hi(x, y) 2 . (12\n)\nNote that in our weighting we prioritize high-variance axes, in which deviation from the boundaries is of greater impact. Naturally, we should tune λ, which scales û(x) i and l(x) i , so as to reduce this energy below a pre-chosen threshold, thus guaranteeing that the majority of ground truth images fall within the hyper-rectangle. While this expression is workable, it suffers from two shortcomings: (1) It is somewhat involved to compute; and (2) The threshold to use with it is hard to interpret and thus to choose. Therefore, similar to previous approaches [1], [2], we opted in this work for a binary version of Equation ( 11) of the form bi(x, y) := 1 if hi(x, y) > 0 0 otherwise.\nIn addition, we divide the energy expression, defined in Equation ( 12), by the sum of squares of all the singular values, and this way obtain exactly L(x, y; λ) as in Equation (5).\nObserve that, by definition, we get that 0 ≤ L(x, y; λ) ≤ 1, where the bottom bound corresponds to a point fully within the rectangle, and the upper bound for the case where the point is fully outside in all axes. Therefore, thresholding the expectation of this value with α ≪ 1 is intuitive and meaningful." }, { "figure_ref": [], "heading": "C. Reconstruction Loss Justification", "publication_ref": [], "table_ref": [], "text": "This section aims to discuss our choice for the loss function for tuning λ 1 in Equation ( 8), given by\nL1(x, y; λ1) := Qq       k(x;λ 1 ) j=1 vj(x) T ycvj(x) -yc i    d i=1    .\nRecall the process: We begin with K ≤ d PCs obtained from the approximation phase, and then choose k(x; λ 1 ) ≤ K of them as instance-specific number of PCs for the evaluation of the uncertainty. Given a calibration pair (x, y), x is used to derive k(x; λ 1 ), defining a low-dimensional subspace V (x) := [v 1 (x), . . . , vk (x;λ1) (x)] ∈ R k(x;λ1)×d . This, along with the conditional-mean, μ(x), represent P y|x as an affine subspace. The ground-truth image y is then projected onto this slab via:\nProjection(y) := μ(x) + V (x) V (x) T yc(14) = μ(x) + k(x;λ 1 ) j=1 vj(x) T ycvj(x) ,\nwhere y c := y -μ(x).\nThe parameter λ 1 should be tuned so as to guarantee that this projection entails a bounded error, dist(y, Projection(y)) in expectation. 
A natural distance measure to use here is the L 2 -norm of the difference, which aligns well with our choice to use SVD in the approximation phase. However, L 2 accumulates the error over the whole support, thus losing local interpretability. An alternative is using L ∞ which quantifies the worst possible pixelwise error induced by the low-dimensional projection,\ndist(y, Projection(y)) := k(x;λ 1 ) j=1 vj(x) T ycvj(x) -yc ∞ .\nWhile this measure is applicable in many tasks, there are cases (e.g., inpainting) in which controlling a small maximum error requires the use of a large number of PCs, k(x; λ 1 ). To address this, we propose a modification by considering the maximum error over a user-defined ratio of pixels, q ∈ (0, 1), a value close to 1. This is equivalent to determining the q-th empirical quantile, Qq , of the error values among the pixels, providing a more flexible and adaptive approach, which also aligns well with the rationale of uncertainty quantification, in which the statistical guarantees are given with probabilistic restrictions." }, { "figure_ref": [], "heading": "D. Reduced Dimension-Adaptive PUQ", "publication_ref": [ "b14", "b7", "b14" ], "table_ref": [], "text": "The DA-PUQ procedure (see Section IV-B2) reduces the number of PCs to be constructed to K ≤ d while using k(x; λ1 ) ≤ K PCs, leading to increased efficiency in both time and space during inference. However, determining manually the smallest K value that can guarantee both Equation (2) and Equation ( 4) with high probability can be challenging. To address this, we propose an expansion of the DA-PUQ procedure; the Reduced Dimension-Adaptive PUQ (RDA-PUQ) procedure that also controls the maximum number of PCs required for the uncertainty assessment. While this approach is computationally intensive during calibration, it is advantageous for inference as it reduces the number of samples required to construct the PCs using Algorithm 2.\nSpecifically, for each input instance x and its corresponding ground-truth value y in the calibration data, we use the estimators obtained in the approximation phase, to estimate Kλ3 PCs of possible solutions, denoted by B(x), their corresponding importance weights, denoted by ŵ(x), the conditional mean denoted by μ(x), and the lower and upper bounds denoted by l(x) and ũ(x), respectively. Note that these estimates are now depend on λ 3 , we omit the additional notation for simplicity. Then, for each choice of λ 3 , we use these Kλ3 -dimensional estimates exactly as in the DA-PUQ procedure to achieve both the coverage and reconstruction guarantees of Equation ( 2) and Equation ( 4) with high probability.\nSimilar to previous approaches, we aim to minimize the uncertainty volume, defined in Equation ( 3), for the scaled Kλ3 -dimensional intervals where any additional axis (d -Kλ3 axes) is fixed to zero. We denote the uncertainty volume in this setting as V λ1,λ2,λ3 . The minimization of V λ1,λ2,λ3 is achieved by minimizing λ 1 , λ 2 and λ 3 , while ensuring that the guarantees of Equation ( 2) and Equation ( 4) are satisfied with high probability. This can be provided using a conformal prediction scheme, for example, through the LTT [15] calibration scheme, which ensures that the following holds:\nP E[L1(x, y; λ1, λ3)] ≤ β E[L2(x, y; λ1, λ2, λ3)] ≤ α ≥ 1 -δ ,(15)\nwhere λ1 , λ2 and λ3 are the minimizers for the uncertainty volume among valid calibration parameter results, Λ, obtained through the LTT procedure. 
Note that the loss functions, L 1 and L 2 , in the above are exactly those of the DA-PUQ procedure, defined in Equation (8) and Equation ( 9), while replacing K with Kλ3 .\nIntuitively, Equation (15) guarantees that a fraction q of the ground-truth pixel values is recovered with an error no greater than β using no more than Kλ 3 principal components, and a fraction of more than 1-α of the projected ground-truth values onto the first k(x; λ1 ) principal components (out of Kλ 3 ) are contained in the uncertainty intervals, with a probability of at least 1 -δ. The RDA-PUQ procedure is formally described in Algorithm 5." }, { "figure_ref": [], "heading": "E. Experimental Details", "publication_ref": [ "b36", "b4", "b9", "b9", "b9", "b14", "b9", "b14" ], "table_ref": [], "text": "This section provides details of the experimental methodology employed in this study, including the datasets used, architectures implemented, and the procedural details and hyperparameters of our method.\n1) Datasets and Preprocessing: Our machine learning system was trained using the Flickr-Faces-HQ (FFHQ) dataset [38], which includes 70,000 face images at a resolution of 128x128. We conducted calibration and testing on the CelebA-HQ (CelebA) dataset [37], which also consists of face images Algorithm 5 Reduced Dimension-Adaptive PUQ Proc.\nInput: Calibration set Scal := {xi, yi} n i=1 . Scanned calibration parameter values Λ 1 ← [1 . . . λ1 max ], Λ 2 ← [1 . . . λ2 max ] and Λ 3 ← [1 . . . λ3 max ]. Maximal PCs number K ≤ d. Approximation phase estimators B, ŵ, μ, ũ, l. Recovered pixels ratio q ∈ (0, 1). Reconstruction error β ∈ (0, 1). Misscoverage ratio α ∈ (0, 1). Calibration error level δ ∈ (0, 1). 1: for (x, y) ∈ Scal do T λ 2 (x; B(x)) ← Equation (1) using μ(x), û(x), l(x) ▷ Compute weighted coverage loss, Equation (\nL2(x, y; λ1, λ2, λ3)\n← k(x;λ 1 ,λ 3 ) i=1 ŵi(x)• 1 vi(x) T y ̸ ∈ T λ 2 (x; B(x))i 13:\nend for and was resized to match the resolution of our training data. To this end, we randomly selected 2,000 instances from CelebA, of which 1,000 were used for calibration and 1,000 for testing.\nFor the colorization experiments, a grey-scale transformation was applied to the input images. For the super-resolution experiments, patches at a resolution of 32x32 were averaged to reduce the input image resolution by a factor of 4 in each dimension. For the inpainting experiments, we randomly cropped pixels from the input images during the training phase, either in squares or irregular shapes; while for the calibration and testing data, we cropped patches at a resolution of 64x64 at the center of the image.\n2) Architecture and Training: In all our experiments, we applied the approximation phase using recent advancements in conditional image generation through diffusion-based models, while our proposed general scheme in Algorithm 1 can accommodate any stochastic regression solvers for inverse problems, such as conditional GANs [5]. In all tasks, we utilized the framework for conditional diffusion-based models proposed in the SR3 work [10], using a U-Net architecture. For each of the three tasks, we trained a diffusion model separately and followed the training regimen outlined in the code of [10]. To ensure a valid comparison with the baseline methods, we implemented them using the same architecture and applied the same training regimen. 
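For reference, the degradations described in the preprocessing above can be approximated by the following operators. This is a sketch under our own assumptions — channel-mean grayscale, 4×4 block averaging for the 4× downsampling, and a zeroed-out 64×64 center window — rather than the exact preprocessing code used in the experiments.

```python
import numpy as np

def to_grayscale(img):
    """Colorization input: average the RGB channels of an (H, W, 3) image in [0, 1]."""
    return img.mean(axis=-1, keepdims=True).repeat(3, axis=-1)

def downsample_x4(img):
    """Super-resolution input: average non-overlapping 4x4 blocks (128x128 -> 32x32)."""
    h, w, c = img.shape
    return img.reshape(h // 4, 4, w // 4, 4, c).mean(axis=(1, 3))

def mask_center(img, size=64):
    """Inpainting input: remove a size x size window at the image center."""
    out = img.copy()
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    out[top:top + size, left:left + size] = 0.0
    return out
```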
All experiments, including the baseline methods, were trained for 10,000 epochs with a batch size of 1,024 input images.\n3) PUQ Procedures and Hyperparameters: Our experimental approach follows the general scheme presented in Algorithm 1 and consists of 2 sets of experiments: local experiments on patches and global experiments on entire images. For the local experiments, we conducted 4 experiments of the E-PUQ procedure (detailed in Section IV-B1) on RGB patch resolutions of 1x1, 2x2, 4x4, and 8x8. We used K = 3, K = 12, K = 48, and K = 192 PCs for each resolution, respectively. We set α = δ = 0.1 to be the userspecified parameters of the guarantee, defined in Equation ( 6). In addition, we conducted another 2 experiments of the DA-PUQ (detailed in Section IV-B2), and RDA-PUQ (detailed in Appendix D) procedures on RGB patch resolution of 8x8. We set q = 0.9, β = 0.05 and α = δ = 0.1, to be the userspecified parameters of the guarantees of both Equation (10) and Equation (15). In total, we conducted 18 local experiments across three tasks. For the global experiments, we used entire images at a resolution of 128x128, in which we applied the DA-PUQ and the RDA-PUQ procedures. As global working is suitable for tasks that exhibit strong pixel correlation, we applied these experiment only on the task of image colorization. We set q = 0.95, β = α = δ = 0.1, to be the userspecified parameters of the guarantees of both Equation (10) and Equation (15). Both locally and globally, for the DA-PUQ and RDA-PUQ experiments, we used K = 100 PCs in the colorization task and K = 200 PCs in super-resolution and inpainting. We note that in the RDA-PUQ experiments, we used K PCs during inference, as discussed in Appendix D. In all experiments we used ϵ = 1e -10 for the computation of the uncertainty volume, defined in Equation (3)." }, { "figure_ref": [], "heading": "F. Comparative Samples from Uncertainty Regions", "publication_ref": [ "b0", "b1" ], "table_ref": [], "text": "We provide here more details referring to the experiment involving a visualization of samples drawn from the uncertainty regions of baseline methods [1], [2] and our proposed approach. We note that the baseline methods lack such an experiment.\nThis experiment was conducted across entire images, showing that our uncertainty region is much tighter, containing highly probable image candidates, compared to the pixelwise baseline methods. These methods tend to generate exaggerated uncertainty regions that encompass a range of noisy images, diverging from the posterior distribution of images given a measurement. Our success in producing more confined regions, encompassing the ground truth within them, is a direct consequence of the incorporation of spatial correlations.\nTo justify this claim, we trained the identical architecture for each baseline method and applied the same training regime that was utilized in our approach, leveraging the official code of both methods. Each baseline method generates uncertainty intervals via pixel-based uncertainty maps, which is equivalent to our general definition of uncertainty intervals defined by Equation (1), while employing standard basis vectors. Therefore, we uniformly sampled values within the uncertainty intervals of each approach, including our own, and showcased the resulting images." }, { "figure_ref": [ "fig_7", "fig_4", "fig_4", "fig_6", "fig_7" ], "heading": "G. 
Additional Global Experiments", "publication_ref": [ "b0", "b1", "b0", "b1" ], "table_ref": [], "text": "In Section V we presented global studies of our DA-PUQ and RDA-PUQ, focusing on their deployment in the colorization task. Here, we extend this analysis by presenting additional global studies for super-resolution and inpainting, ensuring a more comprehensive assessment of our methods.\nIt is worth mentioning that the tasks of super-resolution and inpainting differ in nature from colorization. In superresolution and inpainting, the decay in the associated singular values of each posterior distribution occurs relatively slowly, indicating a more localized impact. This contrasts with the colorization task, where the decay in singular values is more rapid and pronounced, implying stronger pixel correlations. Consequently, constructing global representations of uncertainty regions in the colorization task is effective, with strong guarantees involving small reconstruction errors over a large number of pixels using far fewer axes.\nNevertheless, we have applied our DA-PUQ and RDA-PUQ globally to the tasks of super-resolution and inpainting, and the quantitative results are depicted in Figure 13. In both |------Our -------| im2im-uq Conffusion Fig. 17. Global Experiments: Images sampled uniformly from the estimated global uncertainty regions, referring to the inpainting task. Using DA-PUQ results with high-perceptual images, while im2im-uq [1] and Conffusion [2] produce unlikely images. These results indicate that our uncertainty regions are significantly more confined than those of previous works.\nstudies, we utilized 1500 samples for the calibration data and 500 samples for the test data. This is different from the global colorization study, where we used 1000 samples for both calibration and test data. This adjustment aims to narrow the gap between the true risks of unseen data and the concentration bounds employed in the calibration scheme, ultimately allowing us to provide more robust guarantees, including small coverage and reconstruction risks with high probability.\nAdditionally, we set α = β = δ = 0.1 in both studies. However, in the super-resolution study, we maintained q = 0.95, which is consistent with the setting used in the global colorization. In contrast, for inpainting, we chose q = 0.8, indicating a softer reconstruction guarantee applicable to 80% of the pixels within the missing window.\nThe results depicted in Figure 13 reveal that our method consistently yields significantly smaller uncertainty volumes compared to our local results presented in Section V and previous research. However, this reduction in uncertainty volume comes at the cost of introducing a reconstruction risk, reaching a maximum of β = 0.1, which applies to 95% of the pixels in super-resolution and 80% of the pixels in inpainting.\nObserve the improvement in interpretability that our DA-PUQ method brings to the table. Notably, the uncertainty regions generated by DA-PUQ consist of only ∼10 PCs within the full-dimensional space of the images. In contrast, the uncertainty regions produced by our RDA-PUQ experiments comprise ∼100 PCs, indicating a slower decay in the singular values of the posterior distribution associated with each uncertainty region.\nFigures 14 and16 showcase selected uncertainty regions provided by our proposed DA-PUQ when applied globally to the super-resolution and inpainting tasks, respectively. Notably, the projected ground-truth images using only k(x) PCs resemble the originals. 
This observation indicates that the uncertainty region effectively captures the spread and variability among solutions while maintaining satisfying reconstruction errors.\nIn the inpainting task presented in Figure 16, the first two axes of our uncertainty regions exhibit semantic content, an indicator to our method's ability to consider spatial pixel correlation. The PCs effectively capture features such as sunglasses, eyebrows, and forehead, highlighting the unique strength of our approach in terms of interpretability. However, in the super-resolution task depicted in Figure 14, localized PCs emerge, implying that only a few pixel values are affected in each axis of uncertainty.\nIn Figures 15 and17, we visually compare samples generated from the corresponding estimated uncertainty regions. These samples are obtained by uniformly sampling from the respective hyper-rectangle, then transforming to the image domain. For further details regarding this study, please refer to Appendix F. This visual comparison 4 shows that samples extracted from our uncertainty regions exhibit higher perceptual quality compared to those generated by im2imuq [1] and Conffusion [2]. This observation implies that our method provides tighter uncertainty regions, whereas previous work results in exaggerated uncertainty regions that contain improbable images." }, { "figure_ref": [ "fig_8", "fig_8", "fig_8" ], "heading": "H. Ablation Study", "publication_ref": [], "table_ref": [], "text": "We turn to introduce an ablation study on the user-specified parameters: α, β, q, and δ. These parameters are used in the context of the statistical guarantees provided by our proposed method, and our objective is to offer a comprehensive understanding of how to select these parameters and their resulting impact on performance. To elaborate, α ∈ (0, 1) is employed to ensure coverage, as indicated in Equation (2), while both β ∈ (0, 1) and q ∈ (0, 1) play a role in establishing the reconstruction guarantee, as defined in Equation (4). Additionally, the parameter δ ∈ (0, 1) is used for controlling the error rate associated with both guarantees over the calibration data.\nAn effective calibration process relies on these userspecified parameters, α, β, and δ approaching values close to zero, while q should ideally approach 1. The choice of these parameters is guided by the amount of available calibration data. In cases where a substantial calibration dataset is accessible, it becomes feasible to establish robust statistical assessments. This is manifested by the ability to employ smaller values for α, β, and δ, while favoring a higher value for q. For instance, achieving a 90% coverage rate (α = 0.1), with a reconstruction error threshold of 5% (β = 0.05) across 95% of the image pixels (q = 0.95) serves as an illustrative example of such robust assessments.\nIt is worth noting that our primary aim in this work is to enhance the interpretability of the uncertainty assessment within the context of the inverse problems. This is achieved through the methods we propose, DA-PUQ and RDA-PUQ. Consequently, we strive to provide the user with a more Fig. 18. An ablation study of DA-PUQ in a locally applied colorization task on 8x8 RGB patch resolution. We examine the user-defined parameters α, β, q, and δ, showcasing their impact on the mean uncertainty volume, mean dimensionality, coverage risk, and reconstruction risk. The default values are α = 0.05, β = 0.05, q = 0.95, and δ = 0.05 using K = 200 PCs. 
Our results are depicted in green, with threshold values for guarantees highlighted in dashed black. concise set of uncertainty axes, referred to as the selected axes denoted as B(x) = {v 1 (x), v2 (x), . . . , vk (x) (x)}. Our approach for selecting the reconstruction guarantee is geared towards a balance between precision and interpretability. On one hand, we aim to establish a robust and stringent reconstruction guarantee to accurately capture the uncertainty of the posterior distribution across the d dimensions. On the other hand, we aim to incorporate a softer reconstruction guarantee that results in providing fewer axes of uncertainty thus enhancing interpretability.\nFigure 18 illustrates the quantitative results of the ablation study conducted on DA-PUQ, where we investigate the influence of the user-defined parameters, α, β, q, and δ, on DA-PUQ's performance. It is important to note that the default settings in each study are the following: α = 0.05, β = 0.05, q = 0.95, and δ = 0.05, representing a spectrum of strengthening and softening parameter choices.\nAnalyzing the results, we observe that α primarily controls the coverage aspect. As α increases, the uncertainty intervals become narrower, leading to more tightly constrained uncertainty regions. This trend is evident in the reduction of the uncertainty volume metric. However, it is noteworthy that α has no impact on the reconstruction error, as the dimension and reconstruction risk remain relatively consistent across different choices of α.\nThe parameter β influences the reconstruction error, with even slight alterations affecting the number of selected axes, denoted as k(x). On the other hand, the parameter q has a relatively minor effect on performance. Adjusting q does impact the uncertainty volume, with smaller dimensions resulting in a reduction in uncertainty volume. Higher values for q lead to the selection of more PCs for the uncertainty assessment, involving more pixels in the reconstruction guarantee.\nReferring to the parameter δ, we observe minor changes in the coverage risk, while the reconstruction risk undergoes more significant changes. This suggests that errors in the uncertainty assessments tend to be more focused on the reconstruction guarantee rather than the coverage guarantee.\nIn terms of the precision and interpretability trade-off, the ideal scenario would involve selecting the smallest possible value for β, as demonstrated in Figure 18 with β = 0.02. However, such a stringent guarantee would require the use of approximately 188.3 PCs, which can harm interpretability. In this case, a softer guarantee, such as β = 0.04, results in the use of only around 5.13 PCs, striking a more balanced trade-off between precision and interpretability." }, { "figure_ref": [], "heading": "I. Precision and Complexity Trade-off", "publication_ref": [], "table_ref": [], "text": "We now discuss the trade-off between precision and complexity in our work. Precision here stands for the ability to accurately capture uncertainty within the posterior distribution across the d dimensions, as reflected by the reconstruction risk. Conversely, complexity involves two key aspects: the complexity associated with our diffusion model for generating posterior samples and the computational demands of PCA. Both of these aspects are influenced by the chosen value of K ≤ d, which serves both as the number of drawn samples and the overall number of initial PC's to work with. 
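The two steps whose costs are analyzed next, drawing K posterior samples and running PCA on them, can be summarized by the following minimal sketch. This is only an illustration of the approximation phase under stated assumptions: sample_fn stands in for one full run of a conditional stochastic sampler, and the names are ours rather than from a released implementation.

```python
import numpy as np

def approximation_phase(sample_fn, x, K, alpha=0.1):
    """Sampling + PCA step whose cost is discussed below (illustrative sketch).

    sample_fn : callable returning one posterior sample of shape (d,),
                e.g. one full run of a conditional diffusion sampler
    x         : conditioning input (the degraded measurement)
    K         : number of drawn samples and of retained PCs (K <= d)
    """
    samples = np.stack([sample_fn(x) for _ in range(K)])   # K sampler runs
    mu = samples.mean(axis=0)                              # conditional-mean estimate
    centered = samples - mu
    # PCA of the K centered samples via SVD
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    V = Vt.T                                               # (d, K) principal directions
    w = S**2 / np.sum(S**2)                                # importance weights per PC
    # empirical per-direction bounds from the projected samples;
    # these feed the intervals of Equation (1) after calibration scaling
    proj = centered @ V                                    # (K, K) PC coordinates
    l_tilde = np.quantile(proj, alpha / 2, axis=0)
    u_tilde = np.quantile(proj, 1 - alpha / 2, axis=0)
    return mu, V, w, l_tilde, u_tilde
```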
As for the complexity: (i) Assuming that a single diffusion iteration can be achieved in constant time, the complexity of generating K samples is given by O(IK), where I ∈ N denotes the number of iterations in the diffusion algorithm; and (ii) For the PCA, the complexity is provided by O(d 2 K + d 3 ), where K refers to the number of PCs.\nTherefore, the value of K ≤ d plays a pivotal role in governing the precision-complexity trade-off across all our proposed methods: E-PUQ, DA-PUQ, and RDA-PUQ, all of which involve sampling and PCA. The greater the number of PCs employed, the more precise our uncertainty assessment, at the expense of computational complexity, as discussed above. sidered: image colorization, super-resolution and inpainting." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b6" ], "table_ref": [], "text": "On the right side, a 2D example illustrates an uncertainty region constructed by our approach in contrast to one produced by the pixelwise approach, demonstrating the distinction between the lower and upper corners in each approach.\nIn the case of E-PUQ, we achieve a complete uncertainty assessment at the computation cost of K = d. This leads to the effective reduction of the reconstruction risk to zero for all image pixels. However, it's essential to recognize that in practical scenarios, such as the global applications illustrated in our empirical study in Section V and in Appendix G, conducting sampling and PCA with K = d on high-dimensional data, such as d = 3 × 128 × 128, becomes unfeasible.\nHence, we introduced DA-PUQ to enhance the method's computational efficiency by allowing K ≪ d, thereby mitigating complexity. To further enhance interpretability, we introduced k(x) in Equation (7), which aims to reduce the number of PCs to be used (out of the already constructed K PCs), while ensuring that the reconstruction guarantee is maintained with as few PCs as possible. This balance is demonstrated in Figure 19, where various values of K showcase that the reconstruction risk remains unaffected, yet more uncertainty axes, k(x), are needed to uphold this equilibrium.\nGiven the challenge of determining an appropriate value for K that ensures robust statistical guarantees, we introduced RDA-PUQ. This variant tunes K to the lower value that fulfills the necessary statistical guarantees.\nIn Figure 19, we visually depict the precision-complexity trade-off through experiments involving different values of K in the context of DA-PUQ's global application in colorization. Here, we illustrate the complexity of our method through the selection of varying K values, where higher values imply higher complexity, as they require the generation of more samples and the construction of more PCs. Meanwhile, precision is assessed by examining the resulting k(x) values, where higher k(x) values correspond to situations where the uncertainty assessment is less accurate, signifying a higher reconstruction risk when employing all the K PCs. Consequently, more axes are needed to maintain a balanced risk." }, { "figure_ref": [], "heading": "J. 
Lower and Upper Corners", "publication_ref": [ "b0", "b1", "b0", "b1" ], "table_ref": [], "text": "We conclude this paper by providing a visual comparative analysis of lower and upper corners within uncertainty regions applied globally across the three tasks: image colorization, super-resolution, and inpainting.\nFormally, the lower and upper corners within the image domain of an uncertainty region, are constructed using the intervals outlined in Equation ( 1). These are defined as the following expressions: the lower corner is defined as μ(x) -V (x) l(x), and the upper corner is defined as μ(x)+ V (x)û(x). Here, V (x) is a matrix comprising of the K selected PCs from B(x) as it's columns, and l(x), û(x) are column vectors of length K.\nFor example, when choosing to work within the pixel domain by selecting the standard basis, B(x) = e 1 , e 2 . . . e d , where e i ∈ R d represents the one-hot vector with a value of 1 in the i th entry, the lower and upper corners align with the lower and upper bounds presented in prior work [1], [2] that operates in the pixel domain.\nIt is essential to note that in our work, we use the term corners to emphasize that the lower and upper corners in the image domain do not establish intervals. This is in contrast to the pixelwise approach, which constructs intervals around each pixel, making the terminology \"lower and upper bound images\" more conceptually suitable.\nIn Figure 20 (right), we illustrate the difference between the lower and upper corners of our uncertainty region (depicted as green dots) and the lower and upper bounds of the pixelwise approach (depicted as red dots). This comparison is presented through a 2D example, demonstrating the process of constructing an uncertainty region for a posterior distribution using our method, in contrast to the pixelwise approach.\nIn Figure 20 (left), we provide a visual comparison between the lower and upper corners generated by DA-PUQ and the lower and upper bounds produced by [1], [2]. It is evident that our lower and upper corners exhibit a higher perceptual quality compared to the lower and upper bounds from earlier pixel domain approaches. This suggests that the lower and upper corners represent more probable samples than those generated by the pixelwise approach. Therefore, the uncertainty regions constructed by our approach are more confined compared to those constructed using the pixelwise approach.\nInterestingly, by traversing between the two corners of DA-PUQ by their convex combination, we essentially walk in the main \"boulevard\" of the uncertainty region. Figure 21 shows the resulting images in this path for the three applications con-" } ]
Uncertainty quantification for inverse problems in imaging has drawn much attention lately. Existing approaches towards this task define uncertainty regions based on probable values per pixel, while ignoring spatial correlations within the image, resulting in an exaggerated volume of uncertainty. In this paper, we propose PUQ (Principal Uncertainty Quantification) -a novel definition and corresponding analysis of uncertainty regions that takes into account spatial relationships within the image, thus providing reduced volume regions. Using recent advancements in generative models, we derive uncertainty intervals around principal components of the empirical posterior distribution, forming an ambiguity region that guarantees the inclusion of true unseen values with a user-defined confidence probability. To improve computational efficiency and interpretability, we also guarantee the recovery of true unseen values using only a few principal directions, resulting in more informative uncertainty regions. Our approach is verified through experiments on image colorization, super-resolution, and inpainting; its effectiveness is shown through comparison to baseline methods, demonstrating significantly tighter uncertainty regions.
Principal Uncertainty Quantification with Spatial Correlation for Image Restoration Problems
[ { "figure_caption": "V(x; T (x; B(x))) := d d i=1 û(x)i + l(x)i", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .Fig. 4 .34Fig. 3. The sampling procedure for two image restoration problems using a conditional stochastic generator. The top row corresponds to super-resolution in local mode with patches, while the bottom row shows colorization in global mode. The implementation details are described in Section IV-A.", "figure_data": "", "figure_id": "fig_1", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Input:Training set. Calibration set. Number of PCs K ∈ N. An unseen input instance x ∈ R d . Output: Statistically valid uncertainty axes and intervals for x. ▷ Approximation phase 1: Train a machine learning system (e.g., Section IV-A) to estimate the following: K PCs of Py|x Importance weights of PCs The conditional mean Lower and upper bounds on the PCs ▷ Calibration phase 2: if Exact uncertainty (accurate) then 3: Apply E-PUQ using the calibration data 4: else if Approximate uncertainty (reduced complexity) then 5:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Input:Calibration set Scal := {xi, yi} n i=1 . Scanned calibration parameter values Λ = [1 . . . λmax]. Approximation phase estimations B, ŵ, μ, ũ, l. Misscoverage ratio α ∈ (0, 1). Calibration error level δ ∈ (0, 1). 1: for (x, y) ∈ Scal do 2: B(x), ŵ(x), μ(x), ũ(x), l(x) ← Apply Algorithm 2 to x, with the choice of K = d samples 3: for λ ∈ Λ do ▷ Scale uncertainty intervals 4: û(x) ← λũ(x) and l(x) ← λ l(x) 5:T λ (x; B(x)) ← Equation (1) using μ(x), û(x), l(x) ▷ Compute weighted coverage loss, Equation(5) 6:", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Algorithm 44Dimension-Adaptive PUQ Procedure Input: Calibration set Scal := {xi, yi} n i=1 . Scanned calibration parameter values Λ 1 ← [1 . . . λ1 max ] and Λ 2 ← [1 . . . λ2 max ].", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": ") ← λ2 ũ(x) and l(x) ← λ2 l(x)9:T λ 2 (x; B(x)) ← Eq. (1) using μ(x), û(x), l(x) ▷ Compute weighted coverage loss, Equation (5) 10:L2(x, y; λ1, λ2) ← k(x;λ 1 ) i=1 ŵi(x)• 1 vi(x)T y ̸ ∈ T λ 2 (x; B(x))i", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig.5. The three image recovery tasks, colorization (top), super-resolution (middle) and inpainting (bottom). For each we present a given measurement x, the ground-truth y, and 10 candidate samples from the (approximated) posterior distribution. These samples fuel the approximation phase in our work.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Local Experiments: A comparison of DA-PUQ (see Section IV-B2) and RDA-PUQ (see Appendix D), when applied locally on 8x8 patches with α = δ = 0.1, β = 0.05 and q = 0.9. Each column corresponds to a relevant metric (see Section V-A), and each row corresponds to a specific task. The uncertainty volume was computed with ϵ = 1e -10. 
Here, the dimensionality is presented by two overlapping bars, where the yellow bars represent the distribution of K in DA-PUQ and K in RDA-PUQ, and the inner bars represent the distribution of k(x) in both cases.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Local Experiments: Uncertainty volume maps for patches applied in image colorization (top), super-resolution (middle), and inpainting (bottom)with E-PUQ, im2im-uq[1] and Conffusion[2]. Each pixel in the maps corresponds to the uncertainty volume, defined in Equation (3), of its corresponding patch. These results expose the effectiveness of our method that incorporates spatial correlations, resulting in a reduction of the uncertainty volume.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .Fig. 10 .910Fig. 9. Global Experiments: A comparison of DA-PUQ (see Section IV-B2) and RDA-PUQ (see Appendix D), when applied globally on the colorization task with α = β = δ = 0.1 and q = 0.95. The uncertainty volume was computed with ϵ = 1e -10.", "figure_data": "", "figure_id": "fig_9", "figure_label": "910", "figure_type": "figure" }, { "figure_caption": "[ 38 ]38Figure 12 depicts the role of the Principal Component (PCs) vectors in the context of the image colorization task. This figure provides an intuition behind employing these vectors for the uncertainty quantification. We show the estimation of the first three PCs using our globally applied PUQ and visualize the uncertainty region formed by these axes. Our approach facilitates efficient exploration within the uncertainty region, thanks to the linear axes that incorporate spatial correlation, as illustrated by the visualization of v1 (x), v2 (x), and v3 (x).", "figure_data": "", "figure_id": "fig_10", "figure_label": "38", "figure_type": "figure" }, { "figure_caption": "5 : 8 : 9 :589), ŵ(x), μ(x), ũ(x), l(x) ← Apply Algorithm 2 to x, with the choice of Kλ 3 samples for λ1 ∈ Λ1 do ▷ Compute adaptive dimensionality, Equation(7) 6:k(x; λ1, λ3) ← min k k : Kλ 3 i=1 ŵi(x) ≥ λ1 ▷ Compute reconstruction loss, Equation (8) 7: yc ← y -μ(x) L1(x, y; λ1, λ3) ← Qq k(x;λ 1 ,λ 3 ) j=1 vj(x) T ycvj(x) -yc i d i=1for λ2 ∈ Λ2 do ▷ Scale uncertainty intervals 10: û(x) ← λ2 ũ(x) and l(x) ← λ2 l(x) 11:", "figure_data": "", "figure_id": "fig_11", "figure_label": "589", "figure_type": "figure" }, { "figure_caption": "←Extract valid λs from LTT[15] applied on {(L1(x, y; λ1, λ3), L2(x, y; λ1, λ2, λ3)) : (x, y) ∈ Scal, λ1 ∈ Λ 1 , λ2 ∈ Λ 2 , λ3 ∈ Λ 3 } at risk levels (β, α). ▷ Compute the minimizers for the uncer. volume, Equation (3) 18: λ1, λ2, λ3← arg min λ 1 ,λ 2 ,λ 3 ∈ Λ 1 n n i=1 V λ 1 ,λ 2 (xi; B(xi))Output: Given a new instance x ∈ X , obtain valid uncertainty intervals for it, T λ2 (x; B(x)) over k(x; λ1) ≤ Kλ 3 PCs.", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 13 .Fig. 14 .1314Fig. 13. Global Experiments: A comparison of DA-PUQ (see Section IV-B2) and RDA-PUQ (see Appendix D), when applied globally on super-resolution and inpainting tasks with α = β = δ = 0.1, where in super-resolution we set q = 0.95 and in inpainting we set q = 0.8. The uncertainty volume was computed with ϵ = 1e -10. x y Recons. v1(x) v2(x)", "figure_data": "", "figure_id": "fig_13", "figure_label": "1314", "figure_type": "figure" }, { "figure_caption": "Fig. 15 .Fig. 16 .1516Fig.15. 
Global Experiments: Images sampled uniformly from the estimated global uncertainty regions, referring to the super-resolution task. Using DA-PUQ results high-perceptual images, while im2im-uq[1] and Conffusion[2] produce unlikely images. These results indicate that our uncertainty regions are significantly more confined than those of previous works.", "figure_data": "", "figure_id": "fig_14", "figure_label": "1516", "figure_type": "figure" }, { "figure_caption": "Fig. 19 .19Fig. 19. An analysis illustrating the precision-complexity trade-off of global DA-PUQ in the colorization task. The complexity aspect is presented by varying values of K, while precision is represented by the mean number of PCs provided for the user, denoted as k(x). The parameters setting is: α = 0.1, β = 0.1, q = 0.95 and δ = 0.1. Smaller k(x) values correspond to more accurate PCs, while lower values of K indicate improved method complexity.", "figure_data": "", "figure_id": "fig_15", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Fig. 21 .21Fig. 21. Visualization of the main \"boulevard\" within the uncertainty regions of DA-PUQ applied globally across three tasks: image colorization, superresolution, and inpainting. The traversal along this path is obtained by a convex combination of the lower and upper corners, given by: (1-t)•lo(x)+t•up(x), where t ∈ [0, 1].", "figure_data": "", "figure_id": "fig_16", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "LOCAL EXPERIMENTS: QUANTITATIVE COMPARISON OF THE MEANS AND STANDARD DEVIATIONS OF OUR LOCALLY APPLIED PUQ METHOD ON RGB PATCH RESOLUTION OF 8X8, UTILIZING THE TWO PROPOSED PROCEDURES. NOTE THAT IN THIS EXPERIMENT d = 8 × 8 × 3 = 192", "figure_data": "Uncert. Volumeim2im-uq [1]0192 / 1921.6e-1 ± 7.2e-2Conffusion [2]0192 / 1921.7e-1 ± 1.3e-1ColorizationE-PUQ0192 / 1922.3e-3 ± 9.8e-4DA-PUQ2.5e-2 ± 5.3e-41.6 ± 0.77 / 1002.4e-11 ± 1.3e-11RDA-PUQ1.7e-2 ± 9.7e-43.", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "EXPERIMENTS: QUANTITATIVE COMPARISON OF THE MEANS AND STANDARD DEVIATIONS OF OUR GLOBALLY APPLIED PUQ METHOD IN THE COLORIZATION TASK, UTILIZING THE PROPOSED DA-PUQ (SEE SECTION IV-B2) AND RDA-PUQ (SEE APPENDIX D) PROCEDURES.", "figure_data": "Recons. RiskDim. k(x) / KUncert. Volumeim2im-uq [1]049152 / 491521.4e-1 ± 3.2e-2Conffusion [2]049152 / 491521.4e-1 ± 5.5e-2DA-PUQ5.0e-2 ± 1.1e-32.2 ± 0.93 / 1001.2e-13 ± 5.0e-14RDA-PUQ4.3e-2 ± 2.8e-35.5 ± 4.5 / 22.3 ± 10.9 3.1e-13 ± 2.4e-13", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" } ]
Omer Belhasin; Yaniv Romano; Daniel Freedman; Ehud Rivlin; Michael Elad
[ { "authors": "Amit Pal Anastasios N Angelopoulos; Stephen Kohli; Michael Bates; Jitendra Jordan; Thayer Malik; Srigokul Alshaabi; Yaniv Upadhyayula; Romano", "journal": "PMLR", "ref_id": "b0", "title": "Image-to-image regression with distribution-free uncertainty quantification and applications in imaging", "year": "2022" }, { "authors": "Eliahu Horwitz; Yedid Hoshen", "journal": "", "ref_id": "b1", "title": "Conffusion: Confidence intervals for diffusion models", "year": "2022" }, { "authors": "Roger Koenker; Gilbert Bassett", "journal": "journal of the Econometric Society", "ref_id": "b2", "title": "Regression quantiles. Econometrica", "year": "1978" }, { "authors": " Swami Sankaranarayanan; Stephen Anastasios N Angelopoulos; Yaniv Bates; Phillip Romano; Isola", "journal": "", "ref_id": "b3", "title": "Semantic uncertainty intervals for disentangled latent spaces", "year": "2022" }, { "authors": "Mehdi Mirza; Simon Osindero", "journal": "", "ref_id": "b4", "title": "Conditional generative adversarial nets", "year": "2014" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b5", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "PMLR", "ref_id": "b6", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; Chitwan Saharia; William Chan; David J Fleet; Mohammad Norouzi; Tim Salimans", "journal": "J. Mach. Learn. 
Res", "ref_id": "b8", "title": "Cascaded diffusion models for high fidelity image generation", "year": "2022" }, { "authors": "Chitwan Saharia; Jonathan Ho; William Chan; Tim Salimans; David J Fleet; Mohammad Norouzi", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b9", "title": "Image super-resolution via iterative refinement", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Huiwen Chang; Chris Lee; Jonathan Ho; Tim Salimans; David Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b10", "title": "Palette: Image-to-image diffusion models", "year": "2022" }, { "authors": "Vladimir Vovk; Alexander Gammerman; Glenn Shafer", "journal": "Springer", "ref_id": "b11", "title": "Algorithmic learning in a random world", "year": "2005" }, { "authors": "Jing Lei; G' Max; Alessandro Sell; Ryan J Rinaldo; Larry Tibshirani; Wasserman", "journal": "Journal of the American Statistical Association", "ref_id": "b12", "title": "Distribution-free predictive inference for regression", "year": "2018" }, { "authors": "N Anastasios; Stephen Angelopoulos; Bates", "journal": "", "ref_id": "b13", "title": "A gentle introduction to conformal prediction and distribution-free uncertainty quantification", "year": "2021" }, { "authors": "Stephen Anastasios N Angelopoulos; Emmanuel J Bates; Michael I Candès; Lihua Jordan; Lei", "journal": "", "ref_id": "b14", "title": "Learn then test: Calibrating predictive algorithms to achieve risk control", "year": "2021" }, { "authors": "Stéphane Lathuilière; Pablo Mesejo; Xavier Alameda-Pineda; Radu Horaud", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b15", "title": "A comprehensive analysis of deep regression", "year": "2019" }, { "authors": "Venkataraman Santhanam; Larry S Vlad I Morariu; Davis", "journal": "", "ref_id": "b16", "title": "Generalized deep image to image regression", "year": "2017" }, { "authors": "Xinchen Yan; Jimei Yang; Kihyuk Sohn; Honglak Lee", "journal": "Springer", "ref_id": "b17", "title": "Attribute2image: Conditional image generation from visual attributes", "year": "2016" }, { "authors": "Karol Gregor; Ivo Danihelka; Alex Graves; Danilo Rezende; Daan Wierstra", "journal": "PMLR", "ref_id": "b18", "title": "Draw: A recurrent neural network for image generation", "year": "2015" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "Communications of the ACM", "ref_id": "b19", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "Yang Song; Stefano Ermon", "journal": "Advances in neural information processing systems", "ref_id": "b20", "title": "Generative modeling by estimating gradients of the data distribution", "year": "2019" }, { "authors": "Zahra Kadkhodaie; P Eero; Simoncelli", "journal": "", "ref_id": "b21", "title": "Solving linear inverse problems using the prior implicit in a denoiser", "year": "2020" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; P Diederik; Abhishek Kingma; Stefano Kumar; Ben Ermon; Poole", "journal": "", "ref_id": "b22", "title": "Score-based generative modeling through stochastic differential equations", "year": "2020" }, { "authors": "Bahjat Kawar; Gregory Vaksman; Michael Elad", "journal": "", "ref_id": "b23", "title": "Stochastic image denoising by sampling from the posterior distribution", "year": "2021" }, { "authors": "Bahjat Kawar; Gregory Vaksman; Michael Elad", "journal": "Advances in Neural 
Information Processing Systems", "ref_id": "b24", "title": "Snips: Solving noisy inverse problems stochastically", "year": "2021" }, { "authors": "Bahjat Kawar; Michael Elad; Stefano Ermon; Jiaming Song", "journal": "", "ref_id": "b25", "title": "Denoising diffusion restoration models", "year": "2022" }, { "authors": "Zahra Kadkhodaie; Florentin Guth; Stéphane Mallat; Eero P Simoncelli", "journal": "", "ref_id": "b26", "title": "Learning multi-scale local conditional probability models of images", "year": "2023" }, { "authors": "Yaniv Romano; Evan Patterson; Emmanuel Candes", "journal": "Advances in neural information processing systems", "ref_id": "b27", "title": "Conformalized quantile regression", "year": "2019" }, { "authors": "Victor Chernozhukov; Kaspar Wüthrich; Yinchu Zhu", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b28", "title": "Distributional conformal prediction", "year": "2021" }, { "authors": "Matteo Sesia; Yaniv Romano", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Conformal prediction using conditional histograms", "year": "2021" }, { "authors": "Chirag Gupta; Arun K Kuchibhotla; Aaditya Ramdas", "journal": "Pattern Recognition", "ref_id": "b30", "title": "Nested conformal prediction and quantile out-of-bag ensemble methods", "year": "2022" }, { "authors": "Danijel Kivaranovic; Kory D Johnson; Hannes Leeb", "journal": "PMLR", "ref_id": "b31", "title": "Adaptive, distribution-free prediction intervals for deep networks", "year": "2020" }, { "authors": "Stephen Bates; Anastasios Angelopoulos; Lihua Lei; Jitendra Malik; Michael Jordan", "journal": "Journal of the ACM (JACM)", "ref_id": "b32", "title": "Distribution-free, risk-controlling prediction sets", "year": "2021" }, { "authors": "Stephen Anastasios N Angelopoulos; Adam Bates; Lihua Fisch; Tal Lei; Schuster", "journal": "", "ref_id": "b33", "title": "Conformal risk control", "year": "2022" }, { "authors": "Jacopo Teneggi; Matthew Tivnan; Web Stayman; Jeremias Sulam", "journal": "PMLR", "ref_id": "b34", "title": "How to trust your diffusion model: A convex optimization approach to conformal risk control", "year": "2023" }, { "authors": "Yves Meyer", "journal": "Springer", "ref_id": "b35", "title": "Orthonormal wavelets", "year": "1987" }, { "authors": "Tero Karras; Timo Aila; Samuli Laine; Jaakko Lehtinen", "journal": "", "ref_id": "b36", "title": "Progressive growing of gans for improved quality, stability, and variation", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 319.05, 469.59, 243.99, 10.43 ], "formula_id": "formula_0", "formula_text": "T (x; B(x))i := vi(x) T μ(x) -l(x)i, vi(x) T μ(x) + û(x)i .(1)" }, { "formula_coordinates": [ 3, 334.27, 725.19, 225.28, 26.84 ], "formula_id": "formula_1", "formula_text": "E d i=1 ŵi(x) • 1 vi(x) T y ∈ T (x; B(x))i > 1 -α, (2" }, { "formula_coordinates": [ 3, 559.55, 734.64, 3.48, 7.77 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 4, 107.69, 480.78, 174.19, 26.84 ], "formula_id": "formula_3", "formula_text": "≈ exp 1 d d i=1 log û(x)i + l(x)i + ϵ -ϵ ," }, { "formula_coordinates": [ 4, 335.76, 56.17, 174.66, 10.18 ], "formula_id": "formula_4", "formula_text": "y x ŷi ∼ Py|x" }, { "formula_coordinates": [ 4, 340.03, 605.48, 223.01, 35.56 ], "formula_id": "formula_5", "formula_text": "E Qq    K j=1 vj(x) T ycvj(x) -yc i    d i=1 ≤ β ,(4)" }, { "formula_coordinates": [ 6, 48.96, 163.58, 251.06, 24.28 ], "formula_id": "formula_6", "formula_text": "f θ : X × Z → Y patch . Given an input instance x ∈ R d , we propose to generate K samples, denoted by {f θ (x, z i )} K i=1" }, { "formula_coordinates": [ 6, 311.98, 68.72, 251.06, 91.15 ], "formula_id": "formula_7", "formula_text": "Input: Instance x ∈ X . Conditional stochastic generative model f θ : X → Y or f θ : X → Ypatch. Maximal PCs / samples number K ≤ d. Misscoverage ratio α ∈ (0, 1). ▷ Generate samples drawn from Py|x 1: for i = 1 to K do 2: ŷi(x) ← f θ (x, zi) 3: end for ▷ Compute conditional mean 4: μ(x) ← 1 K K i=1 ŷi(x)" }, { "formula_coordinates": [ 6, 316.54, 167.58, 246.49, 55.53 ], "formula_id": "formula_8", "formula_text": "5: Ŷ (x) ← [ŷ1(x), ŷ2(x) . . . ŷK (x)] ∈ R d×K 6: Ŷ (x) -μ(x) • 1 T K = V (x) Σ(x) Û (x) T 7: B(x) ← {v1(x), v2(x) . . . vK (x)}, where vi(x) = [ V (x)]i 8: ŵ(x) ← σ1(x) 2 , . . . , σK (x) 2 /c ∈ R K , where σi(x) = [ Σ(x)]i and c = K j=1 σj(x)" }, { "formula_coordinates": [ 6, 312.55, 251.23, 205.62, 31.56 ], "formula_id": "formula_9", "formula_text": "l(x)i ← Qα/2 ({vi(x) T (ŷj(x) -μ(x))} K j=1 ) 11: ũ(x)i ← Q1-α/2 ({vi(x) T (ŷj(x) -μ(x))} K j=1 ) 12:" }, { "formula_coordinates": [ 6, 327.13, 438.69, 232.42, 26.84 ], "formula_id": "formula_10", "formula_text": "L(x, y; λ) := d i=1 ŵi(x) • 1 vi(x) T y ̸ ∈ T λ (x; B(x))i . (5" }, { "formula_coordinates": [ 6, 559.55, 448.14, 3.48, 7.77 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 6, 376.12, 585.02, 186.91, 10.73 ], "formula_id": "formula_12", "formula_text": "P E[L(x, y; λ)] ≤ α ≥ 1 -δ ,(6)" }, { "formula_coordinates": [ 7, 53.52, 258, 202.91, 20.08 ], "formula_id": "formula_13", "formula_text": "d i=1 ŵi(x) • 1 vi(x) T y ̸ ∈ T λ (x; B(x))i 8:" }, { "formula_coordinates": [ 7, 48.96, 329.84, 251.06, 33.15 ], "formula_id": "formula_14", "formula_text": "λ ← arg min λ∈ Λ 1 n n i=1 V λ (xi; B(x)) Output: Given a new instance x ∈ X , obtain valid uncertainty intervals for it, T λ(x; B(x))." }, { "formula_coordinates": [ 7, 344.29, 93.84, 218.75, 26.84 ], "formula_id": "formula_15", "formula_text": "k(x; λ1) := min 1≤k≤K k s.t. 
k i=1 ŵi(x) ≥ λ1 .(7)" }, { "formula_coordinates": [ 7, 319.37, 189.74, 243.67, 46.57 ], "formula_id": "formula_16", "formula_text": "L1(x, y; λ1) := Qq       k(x;λ 1 ) j=1 vj(x) T ycvj(x) -yc i    d i=1    ,(8)" }, { "formula_coordinates": [ 7, 340.46, 386.53, 222.58, 8.06 ], "formula_id": "formula_17", "formula_text": "L2(x, y; λ1, λ2) :=(9)" }, { "formula_coordinates": [ 7, 367.55, 408.87, 167.01, 19.94 ], "formula_id": "formula_18", "formula_text": "i=1 ŵi(x) • 1 vi(x) T y ̸ ∈ T λ 2 (x; B(x))i ." }, { "formula_coordinates": [ 7, 358.9, 539.76, 204.14, 22.31 ], "formula_id": "formula_19", "formula_text": "P E[L1(x, y; λ1)] ≤ β E[L2(x, y; λ1, λ2)] ≤ α ≥ 1 -δ ,(10)" }, { "formula_coordinates": [ 8, 53.52, 258.98, 226.06, 18.86 ], "formula_id": "formula_20", "formula_text": "1: for (x, y) ∈ Scal do 2: B(x), ŵ(x), μ(x), ũ(x), l(x) ← Apply Algorithm 2 to" }, { "formula_coordinates": [ 8, 53.52, 309.46, 190.48, 74.28 ], "formula_id": "formula_21", "formula_text": "k(x; λ1) ← min k k : K i=1 ŵi(x) ≥ λ1 ▷ Compute reconstruction loss, Equation (8) 5: yc ← y -μ(x) 6: L1(x, y; λ1) ← Qq k(x;λ 1 ) j=1 vj(x) T ycvj(x) -yc i d i=1 7:" }, { "formula_coordinates": [ 8, 49.54, 518.89, 244.58, 22.69 ], "formula_id": "formula_22", "formula_text": ") 15: λ1, λ2 ← arg min λ 1 ,λ 2 ∈ Λ 1 n n i=1 V λ 1 ,λ 2 (xi; B(xi))" }, { "formula_coordinates": [ 13, 69.35, 591.37, 210.29, 26.84 ], "formula_id": "formula_23", "formula_text": "L(x, y; λ) := d i=1 ŵi(x) • 1 vi(x) T y ̸ ∈ T λ (x; B(x))i ." }, { "formula_coordinates": [ 13, 368.65, 416.21, 181.18, 35.6 ], "formula_id": "formula_25", "formula_text": "     vi(x) T yc -û(x)i if vi(x) T yc > û(x)i > 0 l(x)i -vi(x) T yc if vi(x) T yc < -l(x)i < 0 0 otherwise." }, { "formula_coordinates": [ 13, 369.65, 531.97, 189.65, 26.84 ], "formula_id": "formula_26", "formula_text": "Energy(x, y) = d i=1 σi(x) 2 hi(x, y) 2 . (12" }, { "formula_coordinates": [ 13, 559.3, 541.42, 3.73, 7.77 ], "formula_id": "formula_27", "formula_text": ")" }, { "formula_coordinates": [ 14, 56.35, 185.48, 236.28, 37.12 ], "formula_id": "formula_29", "formula_text": "L1(x, y; λ1) := Qq       k(x;λ 1 ) j=1 vj(x) T ycvj(x) -yc i    d i=1    ." }, { "formula_coordinates": [ 14, 85.61, 340.22, 214.41, 44.54 ], "formula_id": "formula_30", "formula_text": "Projection(y) := μ(x) + V (x) V (x) T yc(14) = μ(x) + k(x;λ 1 ) j=1 vj(x) T ycvj(x) ," }, { "formula_coordinates": [ 14, 63.49, 500.58, 222.01, 33.12 ], "formula_id": "formula_31", "formula_text": "dist(y, Projection(y)) := k(x;λ 1 ) j=1 vj(x) T ycvj(x) -yc ∞ ." }, { "formula_coordinates": [ 14, 352.09, 419.89, 210.95, 22.31 ], "formula_id": "formula_32", "formula_text": "P E[L1(x, y; λ1, λ3)] ≤ β E[L2(x, y; λ1, λ2, λ3)] ≤ α ≥ 1 -δ ,(15)" }, { "formula_coordinates": [ 15, 49.54, 350.78, 226.95, 31.57 ], "formula_id": "formula_34", "formula_text": "← k(x;λ 1 ,λ 3 ) i=1 ŵi(x)• 1 vi(x) T y ̸ ∈ T λ 2 (x; B(x))i 13:" } ]
2023-07-27
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b25", "b5", "b24", "b5", "b8", "b3", "b9", "b19", "b23", "b24", "b6", "b7", "b18", "b20", "b10", "b12", "b11", "b0" ], "table_ref": [], "text": "With recent and rapid advances in dental imaging technologies, such as cone-beam computerized tomography (CBCT), intraoral scanners (IOS), and 3D facial scanners [26], there is a great need for the development of a smart digital platform of 3D jaw-teeth-face models to create an integrated patient treatment plan as a single digital anatomic model, including the bone, teeth, gingiva, and face [6], [25]. 3D jaw-teethface models can be used to provide a prediction of surgical outcomes in patients with facial deformities and create simulations of various osteotomies with an idea of the expected esthetic changes [6], [9]. It also enables dentists to explain Manuscript received XXX; revised XXX. Corresponding author: Kiwan Jeon ([email protected]).\ntreatment plans more effectively and facilitates communication between dentists and patients.\nWith the rapid enhancement of machine learning (ML) techniques, great success has been achieved in developing a fully automated method for integrating dental CBCT and IOS data into a jaw-tooth model that reaches the level of clinical application [4], [10].\nHowever, automatic integration between CBCT and face scans has not yet been achieved because of the dissimilarities between the data acquired from both devices [20], [24], [25]. Dental CBCT and face scan data do not always contain complete point-to-point correspondences owing to the different acquisition environments. In dental CBCT scanning, unlike in the face scanning process, patients are asked to close their eyes to remove motion artifacts or bite plastic sticks to separate the upper and lower jaw bones in the CT image. In addition, dental CBCT has a limited field of view (FOV) that often does not fully cover the patient's head in the longitudinal direction. Another hurdle is the difficulty in using ML techniques, which have recently shown remarkable performance in registration [7], [8], [19], [21]. This is because it is difficult to collect paired training data of CBCT and the corresponding facial images, given the legal and ethical restrictions related to medical data. This paper proposes a fully automatic CBCT-face scan registration method designed to address the aforementioned problems. The proposed method adopts a commonly used registration scheme that matches selected landmarks on both face surfaces, where the landmarks are selected from the portion of the face surface with the least geometric variation in the CBCT and face scan environments.\nThe main contribution of this study is the development of a new mathematical algorithm capable of converting a 3D landmark detection problem into a 2D problem of detecting the corresponding landmarks in two 2D projection images generated from different projection angles. This method allows the reuse of the existing ML method for 2D facial landmark detection provided in the open-source library Dlib [11]. It is crucial to note that the proposed method does not require annotated training data of facial landmarks because it uses a pre-trained facial landmark detection algorithm using various public datasets. The Dlib facial landmark detection (DFLD) algorithm first detects a face in a 2D face image using a maximum-margin object detector with a convolutional neural Fig. 1. 
Schematic diagram of the proposed registration method of CBCT and face scan data. In the first step, the 3D facial landmarks X ldmk , Y ldmk on CBCT and face scan surfaces are detected using two different 2D projection images. In the second step, the rigid transformation (T p0q ) is obtained using the paired X ldmk and Y ldmk . Finally, the ICP method is applied to accurately estimate T using the sub-surfaces X sub and Y sub .\nnetwork [13] and then provides 68 different feature points on the face based on an Ensemble of Regression Trees [12]. It is known to be robust and generalizable to various 2D face image models.\nThe proposed registration method is summarized as follows: the first step was to generate surfaces from the measured CT and facial scans. From each surface, two 2D projection images at different view angles were generated. We then detected the facial points corresponding to the 3D landmarks on the 2D projection images using the DFLD method. From the detected 2D facial points, the 3D landmark positions were estimated using a mathematical formula (See Theorem II.1). Using the multiple pairs of landmarks, the initial registration was performed. Finally, to improve the accuracy and efficiency of the registration of CT and face surfaces, the Iterative Closest Point (ICP) [1] method was applied using sub-surfaces, including 3D landmarks. Detailed procedures are described in the Methods section." }, { "figure_ref": [], "heading": "II. METHOD", "publication_ref": [], "table_ref": [], "text": "The goal of this study is to integrate 3D images from dental CBCT and facial scanners, which are different imaging modalities. Let X represent a 3D point cloud of a face surface obtained from a facial scanner. Let Y represent a 3D point cloud of the tissue surface obtained from a CBCT image. To achieve this goal, we align X and Y into a single coordinate system. The 3D X ´Y registration requires choosing subsurfaces X sub Ď X and Y sub Ď Y such that there exists a rigid transformation T : X Ñ Y satisfying\nT X sub « Y sub .(1)\nHere, the sub-surfaces X sub and Y sub should be chosen as the areas with the least geometric change in response to changes in the facial expression and movement of the mandible. A rigid transformation T with six degrees of freedom can be determined if three or more independent landmarks are detected in X sub and Y sub . Let X ldmk \" tx j \" px j , y j , z j q : j \" 1, ¨¨¨, Ju Ď X sub be landmark points and let Y ldmk \" ty j \" px j , ŷj , ẑj q : j \" 1, ¨¨¨, Ju Ď Y sub be the corresponding landmark points. The number J must be J ě 3. Given the pair of X ldmk and Y ldmk , the rigid transformation T can be obtained by the following least-squares minimization:\nT \" argmin\nT J ÿ j\"1 }T x j ´yj } 2 ,(2)\nwhere } ¨} denotes the Euclidean norm. Now, we describe a stable method for automatically detecting the 3D landmarks of X ldmk and Y ldmk ." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "A. Automatic 3D landmark detection using 2D projection images", "publication_ref": [ "b16", "b15", "b4", "b11", "b26", "b27", "b6", "b4", "b5" ], "table_ref": [], "text": "In this section, the automatic detection method of X ldmk is explained because the detection of Y ldmk is performed in the same manner. The direct detection of 3D landmarks X ldmk on a 3D surface X with high confidence can be challenging. 
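As an aside, once the paired landmarks X_ldmk and Y_ldmk are available, the downstream minimization in (2) admits the classical closed-form SVD (Kabsch/Umeyama) solution. A minimal sketch, assuming plain NumPy and (J, 3) landmark arrays (the function name is ours), is given below; the rotation R and translation t it returns form the initial transformation T^(0) that is later refined by ICP. We now return to the harder problem of obtaining the landmarks themselves.

```python
import numpy as np

def rigid_fit(X_ldmk, Y_ldmk):
    """Least-squares rigid transform (R, t) such that R @ x + t ~ y.

    X_ldmk, Y_ldmk : (J, 3) arrays of paired landmarks, J >= 3.
    Closed-form Kabsch/Umeyama solution of the minimization in Eq. (2).
    """
    cx, cy = X_ldmk.mean(axis=0), Y_ldmk.mean(axis=0)
    H = (X_ldmk - cx).T @ (Y_ldmk - cy)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cy - R @ cx
    return R, t
```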
To address the difficulty of accurately detecting X ldmk on X, we transform the 3D detection problem into a relatively much easier 2D problem of detecting the corresponding landmark points on two 2D projection images generated from different projection angles, φ 1 and φ 2 .\nWe now explain how to generate the projection image I φ of the angle φ mentioned above. We rotate the 3D face surface X around the z-axis by an angle of φ. Let X φ represent the rotated surface. We select a plane of Π \" tps, d, zq : 1 ď s, z ď 1024u, where d is sufficiently large such that Π lies outside of X. The projection image I φ is generated by the rotated surface X φ onto the plane Π. See Figure 1 for Π and I φ ; to be precise, let x φ ps, zq be a point in X φ given by x φ ps, zq \" argmin\nxPℓ ps,zq XX φ }x ´ps, d, zq},(3)\nwhere ℓ ps,zq is a line passing through ps, d, zq and parallel to the y-axis. The image I φ with a light source position at q is given by [17], [16] I φ ps, zq \" max ˆnpx φ ps, zqq ¨pq ´xφ ps, zqq\n}q ´xφ ps, zq} , 0 ˙, (4\n)\nwhere npxq is the unit normal vector at x P X φ . Let tpx φ j , z j q : j \" 1, ¨¨¨, Ju denote 2D landmarks corresponding to 3D landmarks X ldmk . This is illustrated in Fig. 1. Owing to the light source at q, the projection image I φ contains 3D geometric features that allow the detection of 2D landmarks using conventional techniques [5], [12], [27], [28].\nTo detect 3D landmarks X ldmk , we generate the projection images I φ1 and I φ2 at two different angles, φ 1 and φ 2 . From I φ1 and I φ2 , we can easily obtain two sets of landmarks: tpx φ1 j , z j q : j \" 1, ¨¨¨, Ju and tpx φ2 j , z j q : j \" 1, ¨¨¨, Ju. We now explain our method for detecting X ldmk . The proposed method uses the 2D landmarks px φ1 j , z j q and px φ2 j , z j q to identify the corresponding 3D landmark x j \" px j , y j , z j q through the following theorem.\nTheorem II.1 Let two different rotation angles φ 1 and φ 2 be given. Suppose that the 2D landmarks px φ1 j , z j q and px φ2 j , z j q, corresponding to the 3D landmark x j , are obtained from the two projection images I φ1 and I φ2 , respectively. Let x j be expressed by\nx j \" px j , y j , z j q \" p´L sin θ, L cos θ, z j q,\nwhere L \" b x 2 j `y2 j and θ \" tan ´1p y j x j q ´π 2 . Then, L and θ are functions of φ 1 , φ 2 , x φ1 j , and x φ2 j that are respectively given by\nL \" Lpφ 1 , φ 2 , x φ1 j , x φ2 j q \" d px φ2 j ´xφ1 j cospφ 1 ´φ2 qq 2 sin 2 pφ 1 ´φ2 q `px φ1 j q 2 (6)\nand\nθ \" θpφ 1 , φ 2 , x φ1 j , x φ2 j q \" sin ´1 ˜´x φ1 j L ¸´φ 1 . (7)\nProof. Denoting θ i P r´π{2, π{2s, i \" 1, 2 by an angle from x φi j to the yz-plane, then x φi j \" ´L sin θ i , i \" 1, 2, x j \" ´L sinpθ 1 ´φ1 q, and y j \" L cospθ 1 ´φ1 q, where L \" b\nx 2 j `y2 j . See Fig. 2. In addition, θ i and φ i have the following relations:\nφ 1 ´φ2 \" θ 1 ´θ2 . (8\n)\nIt follows from the (8) that the x φ2 j is represented by: x φ2 j \" ´L sinpθ 1 ´pφ 1 ´φ2 qq \" x φ1 j cospφ 1 ´φ2 q `L cos θ 1 sinpφ 1 ´φ2 q. (9) It can be rewritten as\nL cos θ 1 \" x φ2 j ´xφ1 j cospφ 1 ´φ2 q sinpφ 1 ´φ2 q . (10\n)\nNow, it follows from L sin θ 1 \" ´xφ1 j and (10) that\nL \" d px φ2 j ´xφ1 j cospφ 1 ´φ2 qq 2 sin 2 pφ 1 ´φ2 q `px φ1 j q 2 ,(11)\nand\nθ 1 \" sin ´1 ˜´x φ1 j L ¸.(12)\nDenoting θ \" θ 1 ´φ1 , this completes the proof. Note that the rotation angles φ 1 and φ 2 should be selected carefully. The difference |φ 1 ´φ2 | should not be too small or too large. 
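In code, the reconstruction of Theorem II.1 amounts to evaluating (5)-(7) directly; a minimal sketch (angles in radians, names ours) follows. Note the division by sin(φ1 − φ2), which is exactly where a small angular separation amplifies 2D detection errors, as quantified next.

```python
import numpy as np

def landmark_3d_from_two_projections(x_phi1, x_phi2, z, phi1, phi2):
    """Recover a 3D landmark (x, y, z) from its horizontal coordinates in two
    projection images taken at rotation angles phi1 and phi2 (Theorem II.1).

    x_phi1, x_phi2 : horizontal landmark coordinates in I_phi1 and I_phi2
    z              : shared vertical coordinate (unchanged by rotation about z)
    phi1, phi2     : projection angles in radians, phi1 != phi2
    """
    dphi = phi1 - phi2
    L = np.sqrt(((x_phi2 - x_phi1 * np.cos(dphi)) / np.sin(dphi)) ** 2
                + x_phi1 ** 2)                               # Eq. (6)
    theta = np.arcsin(-x_phi1 / L) - phi1                    # Eq. (7)
    return np.array([-L * np.sin(theta), L * np.cos(theta), z])  # Eq. (5)
```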
Our experiments showed that either |φ 1 ´φ2 | ď π 9 or |φ 1 ´φ2 | ě π 3 tends to result in large errors in 3D landmark detection. This is illustrated in Fig. 3. The following corollary explains these errors theoretically.\nCorollary II.1 Let x φ1 j and x φ2 j be the detected 2D landmarks and let x j be the estimated 3D landmark by the formulas (6) and (7). Assume that xj , xφ1 j , and xφ2 j are the corresponding true landmark positions to x j , x φ1 j , and x φ2 j . Let L be the true value of L in (5). If the difference of projection angles, |φ 1 ´φ2 | \" ϵ, is small, the error |L ´L| has the following asymptotic behavior as ϵ Ñ 0: where the symbol Op¨q is the standard big-O asymptotic notation.\n|L ´L| \" ϵ ´1 ˇˇ|x φ2 j ´x φ1 j | ´|x φ2 j ´xφ1 j | ˇˇ`Op1q,(13)\nNote that the difference |L ´L| in ( 13) is closely related to the 3D detection error }x j ´x j }. The above corollary shows that }x j ´x j } is magnified by a factor of ϵ ´1.\nProof. Without loss of generality, let |φ 1 ´φ2 | \" φ 1 φ2 . Since ϵ Ñ 0, the small angle approximation (sin ϵ « ϵ and cos ϵ « 1) can be applied to (6), yielding the following relations:\nL \" d px φ2 j ´xφ1 j cos ϵq 2 sin 2 ϵ `px φ1 j q 2 \" b ϵ ´2px φ2 j ´xφ1 j q 2 `px φ1 j q 2 `Op1q \" ϵ ´1|x φ2 j ´xφ1 j | `Op1q.(14)\nHence, we obtain\n|L ´L| \" ϵ ´1 ˇˇ|x φ2 j ´xφ1 j | ´|x φ2 j ´x φ1 j | ˇˇ`Op1q. (15\n)\nThis completes the proof.\nIn the case when |φ 1 ´φ2 | ě π 3 , there is a high possibility that either xnpx φ1 j ps, zqq, p0, 1, 0qy or xnpx φ2 j ps, zqq, p0, 1, 0qy is very small, resulting in inaccurate detection of either x φ1 or x φ2 ." }, { "figure_ref": [], "heading": "B. Fine registration using ICP method", "publication_ref": [ "b29", "b15" ], "table_ref": [], "text": "We adopt an ICP method to accurately estimate the optimal T in (1) by using the information of X sub and Y sub . Denoting T p0q as the solution of (2), T is estimated by iteratively solving the following minimization problems [30]. For k \" 1, 2, . . . , K,\nT pkq \" argmin T 1 N N ÿ i\"1\n}T pk´1q px i q ´yj ˚,T pk´1q } 2 , x i P X sub (16) where y j ˚,T pk´1q is the closest point to the transformed point T pk´1q px i q. More precisely, for given T pk´1q , y j ˚,T pk´1q is given by\ny j ˚,T pk´1q \" argmin yj PYsub }T pk´1q px i q ´yj } 2 . (17\n)\nThe detailed sub-surfaces X sub and Y sub are shown in Fig 5 . These sub-surfaces can be estimated using the landmarks X ldmk and Y ldmk ." }, { "figure_ref": [], "heading": "C. Datasets", "publication_ref": [ "b14" ], "table_ref": [], "text": "Three dental CBCT scans were acquired using a commercial CBCT scanner (RAYSCAN Studio, Ray Co., Ltd.) with a tube voltage of 90 kVp and tube current of 8 mA. The 3D CBCT images of size 666 ˆ666 ˆ666 were reconstructed with a pixel size of 0.3 ˆ0.3 mm 2 and slice thickness of 0.3 mm. The FOV of CBCT was 20 cm ˆ20 cm. From the reconstructed CT images, a surface (i.e., soft tissue) was extracted with a threshold value of ´500 HU. Then, the surface mesh was generated using the standard matching-cube algorithm [15], where a point cloud of the surface model was obtained. Corresponding surface data from the facial scans were acquired using a commercial facial scanner (RayFace 200, Ray Co., Ltd.).\nWe collected a total of 20 multi-detector CT (MDCT) scans and their corresponding face scans for further evaluation. The MDCT scans were obtained using a commercial MDCT scanner (SOMATOM Force, Siemens) with a tube voltage of 100 kVp and a current of 120 mA. 
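As a point of reference, the soft-tissue surface extraction described above (thresholding the CT volume at -500 HU and running marching cubes) can be sketched with scikit-image as follows; this is an illustrative snippet under our own naming, not the authors' pipeline, and the default voxel spacing simply mirrors the 0.3 mm CBCT resolution quoted above.

```python
import numpy as np
from skimage import measure

def ct_to_surface(volume_hu, voxel_size_mm=(0.3, 0.3, 0.3), level_hu=-500.0):
    """Extract a soft-tissue surface from a CT volume given in Hounsfield units.

    volume_hu     : (nz, ny, nx) array of HU values
    voxel_size_mm : voxel spacing in millimetres
    level_hu      : iso-value separating air from soft tissue (-500 HU here)
    """
    verts, faces, normals, _ = measure.marching_cubes(
        volume_hu, level=level_hu, spacing=voxel_size_mm)
    return verts, faces, normals   # verts (in mm) serve as the surface point cloud
```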
The MDCT slice images of size 512 ˆ512 were reconstructed with a pixel size of 0.46 ˆ0.46 mm 2 . The number and thickness of slices varied depending on acquisition conditions. The average number and thickness of slices were 530 mm and 0.55 mm, respectively. To visually match the MDCT images to CBCT images, the MDCT images of the upper cranium were cropped. The surface data from MDCT scans was generated in the same manner as for the CBCT scans, using a commercial 3D scanner (EinScan Pro, Shining 3D)." }, { "figure_ref": [], "heading": "D. Evaluation metrics for 3D landmark detection and surface registration", "publication_ref": [ "b22" ], "table_ref": [], "text": "To quantitatively measure the 3D landmark detection accuracy, we calculated the point-wise error between the landmark K ldmk and the corresponding true landmark Kldmk , K \" X, Y, by\nE poi pK ldmk , Kldmk q \" g f f e 1 J J ÿ j\"1 }k j ´k j } 2 , (18\n)\nwhere k j and kj are elements of K ldmk and Kldmk , respectively. In our study, Kldmk was obtained manually. To quantitatively measure the registration accuracy, we also computed the surface error between the two sub-surfaces T pX sub q and Y sub using the following two metrics [23]:\nE sup surf pT pX sub q, Y sub q \" sup xPT pXsubq inf yPYsub }x ´y}(19)\nand\nE mean surf pT pX sub q, Y sub q \" 1 |X sub | ÿ xPT pXsubq inf yPYsub }x ´y},(20)\nwhere sup, inf, and |X sub | denote the supremum, infimum, and the number of points in the point cloud set X sub , respectively.\nHere, E ave surf and E sup represent the mean and maximum surface errors over T pX sub q, respectively." }, { "figure_ref": [ "fig_1" ], "heading": "E. Implementation details", "publication_ref": [], "table_ref": [], "text": "We adopted a pre-trained DFLD model to detect the 2D landmarks tpx φ j , z j qu J j\"1 and tpy φ j , z j qu J j\"1 corresponding to the 3D landmarks X ldmk and Y ldmk , respectively. The DFLD model is known to be robust and computationally efficient in the computer vision field because it is learned using various public datasets, including ImageNet, VOC, and VGG. The DFLD model can detect 68 2D face landmarks related to the eyebrows, eyes, nose, lips, and face contours in 2D images. From the 68 positions, we empirically selected 10 positions with the least positional change in the surfaces of the CBCT and face scans and regarded them as our facial landmarks X ldmk and Y ldmk . These positions are shown in detail in Fig. 1.\nWe evaluated the 3D landmark detection error with respect to the rotation angle φ. Throughout this study, we set φ 2 \" ´φ1 , φ 1 ą 0. The averaged point-wise errors E poi pX ldmk , Xldmk q and E poi pY ldmk , Ŷldmk q for three subjects are shown in Fig. 3. The error increased when |φ 1 ´φ2 | either decreased or increased. The overall error for face scan was lower than that for CBCT. However, the minimum errors for the CBCT and face scans were comparable. The DFLD method failed to detect x φ1 in the 2D projection image of CBCT for |φ 1 ´φ2 | ą 7π{18. In our study, we selected |φ 1 ´φ2 | \" 2π{9 with the minimum error for 3D landmark detection." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "III. RESULTS", "publication_ref": [ "b17", "b21" ], "table_ref": [ "tab_1", "tab_2" ], "text": "Fig. 4 presents the 3D landmark detection results of the proposed method regarding the surfaces of the CBCT and face scans. The proposed method stably detected 3D landmarks on both surfaces for all three subjects. 
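For concreteness, the evaluation metrics of Section II-D reduce to a few lines of NumPy/SciPy before being reported below; the sketch assumes the landmark sets and surfaces are given as (N, 3) arrays, and the function names are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def landmark_rmse(K_det, K_true):
    """Point-wise landmark error E_poi of Eq. (18): RMSE over paired landmarks."""
    return np.sqrt(np.mean(np.sum((K_det - K_true) ** 2, axis=1)))

def surface_errors(X_reg, Y_sub):
    """Surface errors of Eqs. (19)-(20) between the registered sub-surface
    points T(X_sub) (given as X_reg) and Y_sub, via nearest-neighbour distances."""
    dists, _ = cKDTree(Y_sub).query(X_reg)    # inf_y ||x - y|| for each x
    return dists.max(), dists.mean()          # E_sup_surf, E_mean_surf
```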
The point-wise errors E poi pX ldmk , Xldmk q and E poi pY ldmk , Ŷldmk q for the three subjects were computed. The computed values are shown below each panel in Fig. 4. The average errors of the CBCT and face scans for the three subjects were 6.0733 mm and 5.3534 mm, respectively.\nWe compared the registration performance of the proposed method with those of existing global registration methods, such as Coherent Point Drift (CPD) [18] and Fast Point Feature Histograms (FPFHs) [22], followed by those of ICP methods. Fig. 5 shows the registration results for the subsurfaces Y sub of the three subjects. In Fig. 5, the color at each point represents the signed distance between the two surfaces obtained by CBCT and face scanning. More precisely, for each y P Y sub , the signed distance dpyq is calculated as\ndpyq \" sgnppx ˚´yq ¨npyqq}x ˚´y}, (21\n)\nwhere sgnp¨q is a sign function and x ˚P T pX sub q is a point satisfies\n}x ˚´y} \" inf xPT pXsubq }x ´y}.(22)\nOverall, the proposed method aligned the two surfaces more accurately than existing registration methods. The results also imply that the proposed method, using only 3D landmarks (third row in Fig. 5), provides a good initial transformation for the ICP compared with these existing global registration methods. The corresponding registration results were compared visually in the CBCT images, as shown in Fig. 6. For a quantitative comparison, we computed the surface errors E sup surf pT pX sub q, Y sub q and E mean surf pT pX sub q, Y sub q of the registration methods for the three subjects, where the detailed sub-surfaces of X sub and Y sub are shown in Fig. 6. The computed errors are presented in Table I. Among the registration methods, the proposed method achieved the minimum mean surface errors of 4.1487 mm and 0.7381 mm for the metrics E sup surf and E mean surf , respectively. Table II presents the quantitative results of the registration methods for MDCT and face scan datasets. In the case of the MDCT dataset, the performance of the proposed registration method was additionally compared to that of manual registration by an expert in oral and maxillofacial surgery, who had more than 20 years of experience, followed by ICP. The proposed method outperformed both the global registration methods and manual registration, achieving the lowest mean surface errors for the metrics E sup surf and E mean surf, with values of 2.2818 mm and 0.3749 mm, respectively." }, { "figure_ref": [], "heading": "IV. DISCUSSION AND CONCLUSION", "publication_ref": [ "b1", "b13", "b2" ], "table_ref": [], "text": "This paper proposes a fully automatic registration method between dental CBCT and facial scan data. Noting that the facial surface obtained from the facial scanner corresponded only partially to that obtained from dental CBCT, the proposed method was designed to match the portion of the facial surface with the smallest geometrical change using a 3D geometric landmark registration approach. The novel mathematical formulation described in Theorem II.1 can reduce a 3D landmark detection problem to a 2D problem of detecting the corresponding landmarks on two 2D projection images generated from two different projection angles. This reduction allows robust detection of 3D landmarks by leveraging a pre-trained 2D facial landmark detection algorithm. A major advantage of reusing a pre-trained 2D landmark detection algorithm is that the cumbersome and costly problem of collecting the annotations of facial landmarks for training is eliminated. 
Experiments demonstrated that the proposed method outperformed other existing global registration methods, such as CPD and FPFH, followed by ICP. The proposed method achieved mean surface errors of 0.7381 mm and 0.3749 mm for the CBCT and MDCT cases, respectively. In particular, in the case of MDCT, it attained similar or lower mean errors than the expert's manual registration, which suggests its potential for use in clinical applications. The proposed landmark-based registration method can be applied to dental CBCT with a large FOV covering the patient's nose to eyebrows. Recently, numerous commercial products with large FOVs (e.g., 16 cm × 16 cm, 16 cm × 23 cm, 20 cm × 20 cm) have been released for visualizing the entire craniofacial area, and they can be beneficial for orthodontics, airway studies, and oral surgery [2], [14]. A future research aim is to develop a fully automatic method of non-rigid registration [3] between CBCT and facial scans to match the surface areas near the mouth, where large geometric deformations occur. Machine learning could be used to effectively represent such non-rigid transformations." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work was supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (HI20C0127). H.S.P. and K.J. were partially supported by the National Institute for Mathematical Sciences (NIMS) grant funded by the Korean government (No. NIMS-B23910000). C.M.H and J.K.S were partially supported by Samsung Science & Technology Foundation (No. SRFC-IT1902-09)." } ]
This paper presents a fully automatic registration method for dental cone-beam computed tomography (CBCT) and face scan data. It can serve as a digital platform for 3D jaw-teeth-face models in a variety of applications, including 3D digital treatment planning and orthognathic surgery. Accurately merging facial scans and CBCT images is difficult because of the different image acquisition methods and the limited area of correspondence between the two facial surfaces. In addition, it is hard to apply machine learning techniques directly, because training them requires face-related 3D medical data acquired with radiation exposure, which are difficult to collect. The proposed method addresses these problems by reusing an existing machine-learning-based 2D landmark detection algorithm from an open-source library and developing a novel mathematical algorithm that identifies paired 3D landmarks from knowledge of the corresponding 2D landmarks. A main contribution of this study is that the proposed method does not require annotated training data of facial landmarks, because it uses a pre-trained facial landmark detection algorithm that is known to be robust and to generalize well to various 2D face images. Note that this reduces a 3D landmark detection problem to a 2D problem of identifying the corresponding landmarks on two 2D projection images generated from two different projection angles. Here, the 3D landmarks for registration were selected from the sub-surfaces with the least geometric change under the CBCT and face scan environments. For the final fine-tuning of the registration, the Iterative Closest Point method was applied, which utilizes the geometrical information around the 3D landmarks. The experimental results show that the proposed method achieved an average surface distance error of 0.74 mm on three pairs of CBCT and face scan datasets.
Automatic 3D Registration of Dental CBCT and Face Scan Data using 2D Projection Images
[ { "figure_caption": "Fig. 2 .2Fig. 2. Geometries of 2D-3D landmarks", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. 3D landmark detection error of CBCT and face scan with respect to the rotation angle.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. 3D landmark detection results of CBCT and face scan surfaces. Blue balls represent the manually annotated landmark points. Red balls represent the points detected by the proposed algorithm. The point-wise error E poi between annotated and detected points are shown below each figure.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .Fig. 6 .56Fig. 5. Comparison results of the registration methods for three subjects. The colors represent distances between the two surfaces obtained by CBCT and face scan. The unit is mm.", "figure_data": "", "figure_id": "fig_3", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "EVALUATION OF THE REGISTRATION METHODS FOR CBCT AND FACE SCAN DATASETS. THE SURFACE ERRORS E SUP SURF AND E MEAN SURF WERE COMPUTED FOR THREE SUBJECTS. THE UNIT IS mm.", "figure_data": "MetricMethodMean ˘StdManual2.2863 ˘0.6965E sup surf pT pX sub q, Y sub qCDP+ICP FPFH+ICP30.3500 ˘17.3744 3.1801 ˘1.2598Proposed (landmark only)7.6889 ˘3.5975Proposed (landmark+ICP)2.2818 ˘0.7098Manual0.4227 ˘0.1174E mean surf pT pX sub q, Y sub qCDP+ICP FPFH+ICP9.0067 ˘5.8236 0.6997 ˘0.4568Proposed (landmark only)3.1006 ˘2.1206Proposed (landmark+ICP)0.3749 ˘0.1289", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "EVALUATION OF THE REGISTRATION METHODS FOR MDCT AND FACE SCAN DATASETS. THE SURFACE ERRORS E SUP SURF AND E MEAN SURF WERE COMPUTED FOR TEN SUBJECTS. THE UNIT IS mm.", "figure_data": "", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" } ]
Hyoung Suk; Chang Min; Sang-Hwy Lee; Jin Keun; Kiwan Jeon
[ { "authors": "K S Arun; T S Huang; S D Blostein", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b0", "title": "Least-Squares Fitting of Two 3-D Point Sets", "year": "1987" }, { "authors": "J M Agrawal; M S Agrawal; L G Nanjannawar; A D Parushetti", "journal": "The Journal of Contemporary Dental Practice", "ref_id": "b1", "title": "CBCT in orthodontics: the wave of future", "year": "2013" }, { "authors": "M Bahri; E O'sullivan; S Gong; F Liu; X Liu; M M Bronstein; S Zafeiriou", "journal": "International Journal of Computer Vision", "ref_id": "b2", "title": "Shape my face: registering 3D face scans by surface-to-surface translation", "year": "2021" }, { "authors": "M Chung; J Lee; W Song; Y Song; I H Yang; J Lee; Y G Shin", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b3", "title": "Automatic registration between dental cone-beam CT and scanned surface via deep pose regression neural networks and clustered similarities", "year": "2020" }, { "authors": "X Dong; Y Yan; W Ouyang; Y Yang", "journal": "", "ref_id": "b4", "title": "Style aggregated network for facial landmark detection", "year": "2018" }, { "authors": "M Elnagar; S Aronovich; B Kusnoto", "journal": "Oral Maxillofac Surg Clin North Am", "ref_id": "b5", "title": "Digital workflow for combined orthodontics and orthognathic surgery", "year": "2020" }, { "authors": "S Huang; Z Gojcic; M Usvyatsov; A Wieser; K Schindler", "journal": "", "ref_id": "b6", "title": "Predator: Registration of 3d point clouds with low overlap", "year": "2021" }, { "authors": "K T Islam; S Wijewickrema; S O'leary", "journal": "Scientific Reports", "ref_id": "b7", "title": "A deep learning based framework for the registration of three dimensional multi-modal medical images of the head", "year": "2021" }, { "authors": "T Joda; G O Gallucci", "journal": "Clinical oral implants research", "ref_id": "b8", "title": "The virtual patient in dental medicine", "year": "2015" }, { "authors": "T J Jang; H S Yun; C M Hyun; J.-E Kim; S.-H Lee; J K Seo", "journal": "", "ref_id": "b9", "title": "Fully automatic integration of dental CBCT images and full-arch intraoral impressions with stitching error correction via individual tooth segmentation and identification", "year": "2021" }, { "authors": "D E King", "journal": "The Journal of Machine Learning Research", "ref_id": "b10", "title": "Dlib-ml: A machine learning toolkit", "year": "2009" }, { "authors": "V Kazemi; J Sullivan", "journal": "", "ref_id": "b11", "title": "One millisecond face alignment with an ensemble of regression trees", "year": "2014" }, { "authors": "D E King", "journal": "", "ref_id": "b12", "title": "Max-margin object detection", "year": "2015" }, { "authors": "S D Kapila; J M Nervina", "journal": "Dentomaxillofacial radiology", "ref_id": "b13", "title": "CBCT in orthodontics: assessment of treatment outcomes and indications for its use", "year": "2015" }, { "authors": "W E Lorensen; H E Cline", "journal": "ACM siggraph computer graphics", "ref_id": "b14", "title": "Marching cubes: A high resolution 3D surface construction algorithm", "year": "1987" }, { "authors": "E Lengyel", "journal": "Course Technology Press", "ref_id": "b15", "title": "Mathematics for 3D game programming and computer graphics", "year": "2011" }, { "authors": "S M Lee; H P Kim; K Jeon; S H Lee; J K Seo", "journal": "Physics in Medicine & Biology", "ref_id": "b16", "title": "Automatic 3D cephalometric annotation system using shadowed 2D image-based machine learning", 
"year": "2019" }, { "authors": "A Myronenko; X Song", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b17", "title": "Point set registration: Coherent point drift", "year": "2010" }, { "authors": "J Ma; X Jiang; A Fan; J Jiang; J Yan", "journal": "International Journal of Computer Vision", "ref_id": "b18", "title": "Image matching from handcrafted to deep features: A survey", "year": "2021" }, { "authors": "K Y Nahm; Y Kim; Y S Choi; J Kim; S H Lee; G Nelson", "journal": "American Journal of Orthodontics and Dentofacial Orthopedics", "ref_id": "b19", "title": "Accurate registration of cone-beam computed tomography scans to 3-dimensional facial photographs", "year": "2014" }, { "authors": "G D Pais; S Ramalingam; V M Govindu; J C Nascimento; R Chellappa; P Miraldo", "journal": "", "ref_id": "b20", "title": "3DRegNET: A deep neural network for 3d point registration", "year": "2020" }, { "authors": "R B Rusu; N Blodow; M Beetz", "journal": "", "ref_id": "b21", "title": "Fast point feature histograms (FPFH) for 3d registration", "year": "2009" }, { "authors": "R T Rockafellar; R J B Wets", "journal": "Springer Science & Business Media", "ref_id": "b22", "title": "Variational analysis", "year": "2009" }, { "authors": "S A Schendel; C Lane", "journal": "Seminars in Orthodontics", "ref_id": "b23", "title": "3D orthognathic surgery simulation using image fusion", "year": "2009" }, { "authors": "S Shujaat; M M Bornstein; J B Price; R Jacobs", "journal": "Dentomaxillofacial Radiology", "ref_id": "b24", "title": "Integration of imaging modalities in digital dental workflows-possibilities, limitations, and potential future developments", "year": "2021" }, { "authors": "Bart Vandenberghe", "journal": "Dental Materials", "ref_id": "b25", "title": "The crucial role of imaging in digital dentistry", "year": "2020" }, { "authors": "N Wang; X Gao; D Tao; H Yang; X Li", "journal": "Neurocomputing", "ref_id": "b26", "title": "Facial feature point detection: A comprehensive survey", "year": "2018" }, { "authors": "Y Wu; Q Ji", "journal": "International Journal of Computer Vision", "ref_id": "b27", "title": "Facial landmark detection: A literature survey", "year": "2019" }, { "authors": "Y Yan; S Duffner; P Phutane; A Berthelier; C Blanc; C Garcia; T Chateau", "journal": "Pattern Recognition", "ref_id": "b28", "title": "2D Wasserstein loss for robust facial landmark detection", "year": "2021" }, { "authors": "J Yang; H Li; D Campbell; Y Jia", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b29", "title": "Go-ICP: A globally optimal solution to 3D ICP point-set registration", "year": "2015" } ]
[ { "formula_coordinates": [ 2, 143.64, 695.3, 156.38, 9.84 ], "formula_id": "formula_0", "formula_text": "T X sub « Y sub .(1)" }, { "formula_coordinates": [ 2, 408.32, 437.1, 154.72, 29.33 ], "formula_id": "formula_1", "formula_text": "T J ÿ j\"1 }T x j ´yj } 2 ,(2)" }, { "formula_coordinates": [ 3, 144.07, 87.92, 155.95, 17.78 ], "formula_id": "formula_2", "formula_text": "xPℓ ps,zq XX φ }x ´ps, d, zq},(3)" }, { "formula_coordinates": [ 3, 164.4, 152.98, 131.75, 23.77 ], "formula_id": "formula_3", "formula_text": "}q ´xφ ps, zq} , 0 ˙, (4" }, { "formula_coordinates": [ 3, 296.15, 160.94, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 3, 82.47, 507.38, 217.55, 46.1 ], "formula_id": "formula_6", "formula_text": "L \" Lpφ 1 , φ 2 , x φ1 j , x φ2 j q \" d px φ2 j ´xφ1 j cospφ 1 ´φ2 qq 2 sin 2 pφ 1 ´φ2 q `px φ1 j q 2 (6)" }, { "formula_coordinates": [ 3, 73.62, 573.82, 226.4, 26.34 ], "formula_id": "formula_7", "formula_text": "θ \" θpφ 1 , φ 2 , x φ1 j , x φ2 j q \" sin ´1 ˜´x φ1 j L ¸´φ 1 . (7)" }, { "formula_coordinates": [ 3, 134.16, 682.25, 161.99, 9.65 ], "formula_id": "formula_8", "formula_text": "φ 1 ´φ2 \" θ 1 ´θ2 . (8" }, { "formula_coordinates": [ 3, 296.15, 682.57, 3.87, 8.64 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 3, 362.14, 232.82, 196.75, 27.25 ], "formula_id": "formula_10", "formula_text": "L cos θ 1 \" x φ2 j ´xφ1 j cospφ 1 ´φ2 q sinpφ 1 ´φ2 q . (10" }, { "formula_coordinates": [ 3, 558.89, 243.91, 4.15, 8.64 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 3, 335.8, 286.08, 227.24, 29.13 ], "formula_id": "formula_12", "formula_text": "L \" d px φ2 j ´xφ1 j cospφ 1 ´φ2 qq 2 sin 2 pφ 1 ´φ2 q `px φ1 j q 2 ,(11)" }, { "formula_coordinates": [ 3, 391.23, 336.87, 171.81, 26.34 ], "formula_id": "formula_13", "formula_text": "θ 1 \" sin ´1 ˜´x φ1 j L ¸.(12)" }, { "formula_coordinates": [ 3, 320.33, 732.2, 242.71, 13.68 ], "formula_id": "formula_14", "formula_text": "|L ´L| \" ϵ ´1 ˇˇ|x φ2 j ´x φ1 j | ´|x φ2 j ´xφ1 j | ˇˇ`Op1q,(13)" }, { "formula_coordinates": [ 4, 88.41, 311.16, 211.61, 66.83 ], "formula_id": "formula_15", "formula_text": "L \" d px φ2 j ´xφ1 j cos ϵq 2 sin 2 ϵ `px φ1 j q 2 \" b ϵ ´2px φ2 j ´xφ1 j q 2 `px φ1 j q 2 `Op1q \" ϵ ´1|x φ2 j ´xφ1 j | `Op1q.(14)" }, { "formula_coordinates": [ 4, 60.8, 406.01, 235.07, 13.68 ], "formula_id": "formula_16", "formula_text": "|L ´L| \" ϵ ´1 ˇˇ|x φ2 j ´xφ1 j | ´|x φ2 j ´x φ1 j | ˇˇ`Op1q. (15" }, { "formula_coordinates": [ 4, 295.87, 409.07, 4.15, 8.64 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 4, 48.96, 592.96, 90.54, 29.33 ], "formula_id": "formula_18", "formula_text": "T pkq \" argmin T 1 N N ÿ i\"1" }, { "formula_coordinates": [ 4, 88.39, 686.02, 207.49, 19 ], "formula_id": "formula_19", "formula_text": "y j ˚,T pk´1q \" argmin yj PYsub }T pk´1q px i q ´yj } 2 . 
(17" }, { "formula_coordinates": [ 4, 295.87, 688.41, 4.15, 8.64 ], "formula_id": "formula_20", "formula_text": ")" }, { "formula_coordinates": [ 4, 350.23, 616.92, 208.65, 33.48 ], "formula_id": "formula_21", "formula_text": "E poi pK ldmk , Kldmk q \" g f f e 1 J J ÿ j\"1 }k j ´k j } 2 , (18" }, { "formula_coordinates": [ 4, 558.89, 631.31, 4.15, 8.64 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 4, 332.95, 724.77, 230.09, 20.11 ], "formula_id": "formula_23", "formula_text": "E sup surf pT pX sub q, Y sub q \" sup xPT pXsubq inf yPYsub }x ´y}(19)" }, { "formula_coordinates": [ 5, 60.21, 71, 239.81, 38.74 ], "formula_id": "formula_24", "formula_text": "E mean surf pT pX sub q, Y sub q \" 1 |X sub | ÿ xPT pXsubq inf yPYsub }x ´y},(20)" }, { "formula_coordinates": [ 5, 94.67, 733.93, 201.2, 10.86 ], "formula_id": "formula_25", "formula_text": "dpyq \" sgnppx ˚´yq ¨npyqq}x ˚´y}, (21" }, { "formula_coordinates": [ 5, 295.87, 735.81, 4.15, 8.64 ], "formula_id": "formula_26", "formula_text": ")" }, { "formula_coordinates": [ 5, 375.41, 89.03, 187.63, 17.06 ], "formula_id": "formula_27", "formula_text": "}x ˚´y} \" inf xPT pXsubq }x ´y}.(22)" } ]
2023-12-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b27" ], "table_ref": [], "text": "Understanding the cognitive processes that occur in the human brain when observing visual stimuli (e.g., natural images) has long been a primary focus for neuroscientists. Both objective visual stimuli and subjective cognitive activities can elicit the transmission of intricate neural signals in the visual cortex of the brain, thus laying the foundation for higher-order cognitive and decision-making processes. With the advancement of techniques such as functional magnetic resonance imaging (fMRI), it has become possible to capture real-time brain activity signals with greater accuracy and finer granularity, thereby accelerating the progress of neuroscientific research. Deciphering and reconstructing from these intricate signals remain a great challenge to both cognitive neuroscience and downstream applications like Brain-Computer Interfaces (BCI) (Nicolas-Alonso and Gomez-Gil 2012; Milekovic et al. 2018)." }, { "figure_ref": [], "heading": "Ground Truth (GT)", "publication_ref": [ "b6", "b7", "b0", "b29", "b40", "b3", "b12", "b34", "b6", "b43", "b3", "b12", "b34", "b6", "b43" ], "table_ref": [], "text": "Ours MinD-Vis\nFigure 1: Illustration of synthesis results. A recent method MinD-Vis (Chen et al. 2023) can generate photo-realistic results, but they cannot well match the visual stimuli in terms of semantics and silhouette. Our method can generate better results more consistent with the GT visual stimuli.\nEarly attempts (Van Gerven et al. 2010;Damarla and Just 2013;Horikawa and Kamitani 2017;Akamatsu et al. 2020) at analyzing brain activity on visual tasks mainly focus on matching human subjects' brain activity with observed natural images, or reconstructing visual patterns of simple geometric shapes (Miyawaki et al. 2008;Schoenmakers et al. 2013;Van Gerven, De Lange, and Heskes 2010). These explorations demonstrate the feasibility of deriving semantic information for perceived images from brain signals, yet they have poor generalization to unseen semantic categories or complicated reconstruction tasks.\nRecent studies (Beliy et al. 2019;Gaziv et al. 2022;Ozcelik et al. 2022;Chen et al. 2023;Takagi and Nishimoto 2023) have made significant progress in reconstructing visual stimuli from brain signals. (Beliy et al. 2019;Gaziv et al. 2022) can generate images that are similar in shape to the original visual stimuli, but the images suffer from severe distortion and blur issues. (Ozcelik et al. 2022;Chen et al. 2023;Takagi and Nishimoto 2023) have employed commonly used generative models, such as Generative Adversarial Networks (GAN) or diffusion models, to generate high-quality RGB images that maintain semantic con-sistency with the original visual stimuli conditioned on corresponding fMRI signals. However, such methods struggle with positional inconsistency, as shown in Fig. 1. In general, existing methods have not effectively utilized the semantic and spatial features inherent in fMRI signals.\nIn this paper, we present a Controllable Mind Visual Diffusion Model (CMVDM) that enables the mind diffusion model with a control network to leverage the extracted faithful semantic and silhouette information for high-fidelity human vision reconstruction. Specifically, we first finetune a pretrained latent diffusion model (LDM) with a semantic alignment loss and pretrain a silhouette extractor to estimate accurate semantic and silhouette information of the fMRI data. 
Taking inspiration from ControlNet, we then introduce a control network, which takes the silhouette information as a condition, into the pretrained LDM to guide the diffusion process to generate desired images that match the original visual stimuli in terms of both semantic and silhouette information. Fig. 1 shows two examples where CMVDM outperforms the previous state-of-the-art approach, MinD-Vis.\nIn summary, the main contributions of this paper are as follows:\n• We propose a novel Controllable Mind Visual Diffusion Model (CMVDM) that leverages both semantic and spatial visual patterns in brain activity to reconstruct photorealistic images. A control network is utilized to enable effective manipulation over the positions of generated objects or scenes in the reconstructed images, providing a much better structural similarity to the original visual stimuli. • We design two extractors to extract semantic and silhouette attributes to provide accurate information for generating images that closely resemble the visual stimuli. Besides, we build a residual module to provide information beyond semantics and silhouette. • We conduct comprehensive experiments on two datasets to evaluate the performance of our method. It achieves state-of-the-art qualitative and quantitative results compared to existing methods, demonstrating the efficacy of CMVDM for decoding high-quality and controllable images from fMRI signals." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b41", "b14", "b42", "b9", "b45", "b38", "b35", "b52", "b31", "b18", "b24", "b39", "b11", "b36", "b22", "b1", "b26", "b2", "b44", "b54", "b5", "b20", "b49", "b30", "b16", "b33", "b19", "b53", "b0", "b16", "b0", "b3", "b12", "b34", "b43", "b6" ], "table_ref": [], "text": "Diffusion Probabilistic Models. Diffusion models (DMs) were initially introduced by (Sohl-Dickstein et al. 2015) as a novel generative model that gradually denoises images corrupted by Gaussian noise to produce samples. Recent advances in DMs have demonstrated their superior performance in image synthesis, with notable models including (Ho, Jain, and Abbeel 2020;Song, Meng, and Ermon 2020;Dhariwal and Nichol 2021;Vahdat, Kreis, and Kautz 2021;Rombach et al. 2022;Peebles and Xie 2022). DDGAN (Xiao, Kreis, and Vahdat 2022) is a model that reduces the number of sampling steps by directly predicting the ground truth in each timestep. DMs have also achieved state-of-theart performance in other synthesis tasks, such as text-toimage generation with GLIDE (Nichol et al. 2021), speech synthesis with (Kong et al. 2020;Liu et al. 2021), and superresolution with (Li et al. 2022a;Saharia et al. 2022;Gao et al. 2023). In addition, DMs have been applied to text-to-3D synthesis in (Poole et al. 2022;Lin et al. 2022), and other 3D object syntheses in (Anciukevičius et al. 2022;Li et al. 2022b;Luo and Hu 2021). Furthermore, DMs have found applications in video synthesis (Ho et al. 2022b,a), semantic segmentation (Baranchuk et al. 2021), text-to-motion generation (Tevet et al. 2022), face animation (Zeng et al. 2023), and object detection (Chen et al. 2022). (Kulikov et al. 2022;Wang et al. 2022) are models that generate diverse results by learning the internal patch distribution from a single image.\nControlNet employs a control network on a pretrained textconditioned LDM for controllable image synthesis. Overall, DMs have shown promising results and have been widely adopted in various synthesis tasks.\nNeural Decoding of Visual Stimuli. 
Neural decoding of visual stimuli has been a topic of growing interest in recent years. Numerous studies have explored the possibility of using machine learning algorithms to decode visual information from patterns of neural activity in the human brain. For instance, (Naselaris et al. 2009) demonstrates that it is possible to reconstruct natural images from fMRI data using a linear decoder. Similarly, (Kay et al. 2008) shows that the orientation of gratings from patterns of activity in the early visual cortex can be decoded using a support vector machine. More recent studies have built on these findings by exploring more complex visual stimuli, such as natural scenes (Nishimoto et al. 2011) and faces (Kriegeskorte et al. 2007), and by developing more sophisticated machine learning algorithms, such as deep neural networks (Yamins et al. 2014). To enable decoding of novel scenarios, some works use an identification-based approach (Horikawa and Kamitani 2017; Akamatsu et al. 2020;Kay et al. 2008), where they model the relationship between brain activity and visual semantic knowledge such as image features extracted by a CNN (Horikawa and Kamitani 2017; Akamatsu et al. 2020). These studies provide valuable insights into the interpretation of human brain signals in the visual cortex, which can help the development of more effective decoding algorithms for a wide range of neuroimaging applications, such as Brain-Computer Interfaces. However, these methods require a large amount of paired stimuli-responses data that is hard to obtain. Therefore, decoding novel image categories accurately remains a challenge.\nfMRI-to-Image Reconstruction With the remarkable advancements in generative models, recent studies have focused on the reconstruction of images from human brain activity. These studies employ various approaches, such as building an encoder-decoder structure to align image features with corresponding fMRI data, as demonstrated by (Beliy et al. 2019) and (Gaziv et al. 2022). To further enhance the quality of image reconstruction, researchers have turned to more sophisticated techniques, including generative adversarial networks (GAN) (Ozcelik et al. 2022) and diffusion models (Takagi and Nishimoto 2023;Chen et al. 2023). These methods have shown promise in achieving more plausible image reconstruction. Nonetheless, the ap- Figure 2: Overview of our proposed method. Initially, we train E f mri and D slh in the \"Finetuning LDM\" and \"Silhouette Extraction\" parts, respectively. Subsequently, we utilize E f mri , D slh , and F res to extract semantic, silhouette, and supplement information from fMRI signals as conditions. Finally, we integrate the control network with the LDM to generate high-fidelity and controllable results tailored to the aforementioned conditions.\nproaches described above have limitations in terms of image reconstruction quality and localization accuracy, resulting in unreliable reconstruction outcomes and inadequate utilization of the deep semantic and shallow positional information inherent in fMRI signals." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the CMVDM model, which combines attribute extractors and a control model to produce precise and controllable outcomes from fMRI signals. Fig. 2 illustrates the architecture of CMVDM." 
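Before the formal problem statement, the data flow of Fig. 2 can be summarized with the following schematic sketch. This is our own simplified illustration rather than the released implementation: the modules E_fmri, D_slh, F_res, F_ctrl, the zero-convolutions, and the frozen denoiser eps_theta are passed in as placeholder callables, and the control features are handed to the denoiser as an extra argument instead of being added inside its U-Net decoder.

```python
def cmvdm_noise_prediction(z_t, t, c_fmri, E_fmri, D_slh, F_res, F_ctrl,
                           eps_theta, zero_convs):
    """One conditioned noise-prediction step mirroring Fig. 2 (schematic only)."""
    c_ctx = E_fmri(c_fmri)                    # semantic embedding of the fMRI signal
    c_slh = D_slh(c_fmri)                     # estimated silhouette image
    res = F_res(c_fmri)                       # residual information beyond semantics/silhouette
    hint = c_slh + zero_convs["res"](res)     # silhouette condition augmented by the residual path
    x_c = zero_convs["out"](F_ctrl(z_t + zero_convs["hint"](hint), c_ctx))
    return eps_theta(z_t, t, c_ctx, x_c)      # frozen LDM denoiser consumes both conditions
```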
}, { "figure_ref": [], "heading": "Problem Statement and Overview of CMVDM", "publication_ref": [ "b38", "b14", "b42", "b38", "b38" ], "table_ref": [], "text": "Let the paired {fMRI, image} dataset Ω = {(c f mri,i , I i )} n i=1 , where c f mri,i ∈ R 1×N and I i ∈ R H×W ×3 . The fMRI data is extracted as a 1D signal from the region of interest (ROI) on the visual cortex averaged across the time during which the visual stimuli are presented. N denotes the number of voxels of the extracted signal. We adopt the pretrained image encoder of the LDM (Rombach et al. 2022) to encode the observed image I into the latent code z. Our CMVDM aims to learn an estimation of the data distribution p(z|c f mri ) through a Markov chain with T timesteps. Following (Ho, Jain, and Abbeel 2020;Song, Meng, and Ermon 2020;Rombach et al. 2022), we define the fixed forward Markov diffusion process q as:\nq (z1:T | z0) = T t=1 q (zt | zt-1) , q (zt | zt-1) = N zt | 1 -βtzt-1, βtI ,(1)\nwhere z 0 denotes the latent code of an image. This Markov diffusion process propagates by adding Gaussian noise, with variances β t ∈ (0, 1) in T iterations. Given z 0 , the distribution of z t can be represented by:\nq (zt | z0) = N (zt | √ γtz0, (1 -γt)I) ,(2)\nwhere\nγ t = t i=1 (1 -β i ).\nIn the inference process, CMVDM learns the conditional distributions p θ (z t-1 |z t , c f mri ) and conducts a reverse Markov process from Gaussian noise z T ∼ N (0, I) to a target latent code z 0 as:\np θ (z0:T | c f mri ) = p (zT ) T t=1 p θ (zt-1 | zt, c f mri ) , p (zT ) = N (zT | 0, I) , p θ (zt-1 | zt, c f mri ) = N zt-1 | µ θ (c f mri , zt, t) , σ 2 t I ,(3)\nwhere\nσ t = 1-γt-1 1-γt β t .\nThe pretrained image decoder of the LDM (Rombach et al. 2022) turns the final latent code to an image.\nFurthermore, we extract the attributes and control the generated results. Firstly, we extract the semantic and silhouette information by utilizing the fMRI encoder E f mri and the silhouette estimating network D slh , respectively. This step enables us to accurately decouple the fMRI information c f mri . Subsequently, we utilize the control model F ctrl to generate high-quality images that match the visual stimuli in terms of both semantic and silhouette information. F ctrl is able to leverage the extracted information to produce better results. Besides, the residual module F res is designed to provide information beyond semantics and silhouette." }, { "figure_ref": [], "heading": "Finetuning of the Pretrained LDM", "publication_ref": [ "b38", "b46", "b37" ], "table_ref": [], "text": "Before extracting the silhouette information and controlling the generated results, we need to finetune the pretrained LDM (Rombach et al. 2022) to enable it to generate consistent images and extract the semantic information based on the input fMRI signals. Following MinD-Vis, we employ the fMRI encoder E f mri pretrained on the HCP dataset (Van Essen et al. 2013) to encode the brain activity signals to the fMRI embeddings. Besides, we use the pretrained LDM to generate output images. By optimizing the fMRI encoder E f mri and the cross-attention layers in the LDM, while freezing the other blocks during the finetuning process, we can obtain reliable consistent generated results. The finetuning loss is defined as follows:\nL f = E z0,t,c f mri ,ϵ∼N (0,1) [||ϵ -ϵ θ (z t , t, E f mri (c f mri ))|| 2 2 ],(4\n) where ϵ θ is the denoising network of the LDM. In this way, the LDM can ensure the consistency of the generated results. 
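As a concrete reading of Eqs. (2) and (4), a single finetuning step can be sketched as below. This is a simplified sketch under our own naming, not the authors' code: gamma_bar is assumed to hold the cumulative products γ_t for all timesteps, e_fmri stands for the fMRI encoder E_fmri, and eps_theta for the LDM denoising network.

```python
import torch

def finetuning_loss(z0, c_fmri, e_fmri, eps_theta, gamma_bar):
    """Sample t, diffuse z0 via Eq. (2), and evaluate the loss of Eq. (4)."""
    b = z0.shape[0]
    t = torch.randint(0, gamma_bar.shape[0], (b,), device=z0.device)
    g = gamma_bar[t].view(b, *([1] * (z0.dim() - 1)))      # broadcast gamma_t over z0's shape
    eps = torch.randn_like(z0)
    z_t = g.sqrt() * z0 + (1.0 - g).sqrt() * eps           # q(z_t | z_0) from Eq. (2)
    eps_hat = eps_theta(z_t, t, e_fmri(c_fmri))            # conditioned noise prediction
    return ((eps - eps_hat) ** 2).flatten(1).sum(1).mean() # squared L2 of Eq. (4), batch-averaged
```

Here gamma_bar can be precomputed as `torch.cumprod(1 - beta, dim=0)` for whatever noise schedule beta is chosen.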
Let c ctx = E f mri (c f mri ) be the semantic information extracted from the fMRI signals. Due to the lack of direct semantic supervision, E f mri may be insufficient for providing enough semantic information. Therefore, we design a noval alignment loss L align to further enhance the semantic information c ctx :\nL align = e -cosine(fimg,MLP(cctx)) ,(5)\nwhere cosine(•, •) denotes the cosine similarity, f img is the image feature extracted by the CLIP image encoder (Radford et al. 2021), and MLP represents a trainable multilayer perceptron. After this training stage, the LDM can make the generated images consistent with the fMRI signals. Nonetheless, due to the absence of explicit positional condition guidance, it is still a challenge for the LDM to generate silhouette-matched results. In the next two sections, we will describe how to extract silhouette information from the fMRI signals and control the final results." }, { "figure_ref": [ "fig_2" ], "heading": "Silhouette Extraction", "publication_ref": [ "b12", "b12", "b8" ], "table_ref": [], "text": "In this section, we aim to extract silhouette information from fMRI signals. (Gaziv et al. 2022) uses a combination of selfsupervised and supervised learning to reconstruct images similar to visual stimuli.\nDespite the low fidelity of the image generation quality, their generated results demonstrate a notable ability to accurately replicate the silhouette of the visual stimuli (see Fig. 6). Based on this, we devise a silhouette estimation network that is capable of providing rough positional guidance for CMVDM.\nOur silhouette estimation network consists of two components: an encoder E slh and a decoder D slh . The encoder E slh projects the input images to the fMRI signal space, while the decoder D slh performs the inverse transformation.\nLet c f mri,i be the ground truth (GT) fMRI signal, I i be the corresponding GT image, and ĉfmri,i = E slh (I i ) be the estimated fMRI signal. We define the encoder training loss L e by a combination of the Mean Square Error (MSE) loss and cosine similarity:\nLe = 1 |Ω| |Ω| i=1 [α1 • ∥c f mri,i -ĉfmri,i ∥ 2 + α2 • (1 -cosine(c f mri,i , ĉfmri,i ))],(6)\nwhere α i∈{1,2} are the hyperparameters set empirically to α 1 = 1 and α 2 = 0.3.\nAfter completing the training of E slh , we fix its parameters and train the reverse process for the decoder D slh . Due to the limited availability of paired {fMRI, image} data, mapping fMRI signals to images is challenging. Inspired by (Gaziv et al. 2022), we utilize semi-supervised training to extract intricate silhouette information. The self-supervised process can be simply represented as: φi = D slh (E slh (ϕ i )), where ϕ i ∈ Φ denotes the image from ImageNet (without corresponding fMRI data) (Deng et al. 2009), and φi denotes the reconstructed image. By minimizing the disparity between ϕ i and φi , the self-supervised process helps E slh and D slh to learn more generalized image representation. We employ the Structural Similarity (SSIM) loss besides the Mean Absolute Error (MAE) loss to penalize the spatial distances between the reconstructed images and the GT images. 
The two losses are:\nLmae = 1 |Ω| |Ω| i=1 | Îi -Ii| supervised + 1 |Φ| |Φ| i=1 | φi -ϕi| self -supervised ,(7)\nLssim = 1 - (2µIµ Î + C1)(2σ I Î + C2) (µ 2 I + µ 2 Î + C1)(σ 2 I + σ 2 Î + C2) ,(8)\nwhere µ Î , µ I , σ Î , and σ I represent the mean and std values of the reconstructed images Î and GT images I, C 1 and C 2 are constants to stabilize the calculation.\nThe decoder loss L d is defined as the combination of the two losses:\nL d = L mae + L ssim .(9)\nAfter training, D slh is able to generate images Î from c f mri that provide positional guidance for CMVDM. To avoid confusion, we'll refer to Î as c slh in the following section." }, { "figure_ref": [], "heading": "Training of Control Model", "publication_ref": [ "b55" ], "table_ref": [], "text": "After obtaining the enhanced semantic information c ctx = E f mri (c f mri ) and the reliable silhouette information c slh = D slh (c f mri ) from c f mri , we use them to control the generated results as shown in Fig. 2. Inspired by ControlNet, we design a control model to control the overall composition of the generated images. Specifically, we freeze all the parameters in the denoising network ϵ θ and clone the U-Net encoder of ϵ θ into the trainable F ctrl (•; Θ c ) with a set of parameters Θ c (the red blocks of control model in Fig. 2). The inputs of F ctrl include z t , c ctx , and the silhouette feature c slh . The combined condition code x ′ c,t can be formulated as: where Z(•) denotes the zero convolution operation (Zhang and Agrawala 2023). Furthermore, in order to compensate for the fMRI data loss during attribute extraction, we utilize a trainable residual block denoted as F res . This block is trained in conjunction with F ctrl . The final combined condition code x c,t is represented as:\nx ′ c,t = Z(F ctrl (z t + Z(c slh ), c ctx ; Θ c )),(10)\nx c,t =Z(F ctrl (z t + Z(c slh + Z(F res (c f mri ))), c ctx ; Θ c )).(11)\nThen the output features x c,t of the control model are added to the U-Net decoder features of the frozen ϵ θ , as shown in Fig. 2.\nFinally, we use the following loss L ctrl to supervise the training of the control model and F res in our CMVDM:\nL ctrl = E z0,t,c f mri ,ϵ∼N (0,1) [||ϵ -ϵ θ (z t , t, c ctx , x c,t )|| 2 2 ].(12)\nNote that with their losses, the control model training, the pretrained LDM finetuning, and the D slh training are independent. In our framework, we separately pretrained E f mri and D slh and froze their weights to jointly train F res and F ctrl (as depicted in Fig 2 )." }, { "figure_ref": [], "heading": "Experiments Datasets and Implementation", "publication_ref": [ "b4", "b51", "b23", "b8", "b46", "b17" ], "table_ref": [], "text": "Datasets. In this study, we employ two public datasets with paired fMRI signals and images: Generic Object Decoding (GOD) dataset (Horikawa and Kamitani 2017), and Brain, Object, Landscape Dataset (BOLD5000) (Chang et al. 2019). The GOD dataset is a well-known and extensively researched collection of fMRI-based brain signal decoding data. It comprises 1250 distinct images belonging to 200 different categories, with 50 images designated for testing. The BOLD5000 dataset is a rich resource for studying the neural representation of visual stimuli, as it contains diverse images from natural and artificial domains. The images are drawn from three existing datasets: SUN (Xiao et al. 2010), COCO (Lin et al. 2014), and ImageNet (Deng et al. 2009), which contain images of various categories of objects and animals. 
BOLD5000 was acquired from four subjects who underwent fMRI scanning while viewing 5,254 images in 15 sessions. The fMRI data were preprocessed and aligned to a common anatomical space, resulting in 4803 fMRI-image pairs for training and 113 for testing. The dataset provides a unique opportunity to investigate how the human brain encodes visual information across different levels of abstraction and complexity. Additionally, we use the large-scale fMRI data from Human Connectome Project (HCP) (Van Essen et al. 2013) in an unsupervised manner to pretrain the fMRI encoder E f mri in our method, which aims to fully extract the features of fMRI signals.\nTraining Details. We adopt 1 A100-SXM4-40GB GPU for the training of E f mri and the control model, and 1 V100-SXM2-32GB GPU for D slh training. Both E f mri and the control model are trained by the AdamW (Loshchilov and Hutter 2017) with β = (0.9, 0.999) and eps = 1e -8 for 500 epochs. D slh is optimized using Adam (Kingma and Ba 2015) with a learning rate of 5e -3 and β = (0.5, 0.99) for 150 epochs." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b12", "b6", "b50" ], "table_ref": [], "text": "N-way Classification Accuracy (Acc). Following (Gaziv et al. 2022;Chen et al. 2023), we employ the n-way top-1 classification task to evaluate the semantic correctness of the generated results, where multiple trials for top-1 classification accuracies are calculated in n -1 randomly selected classes with the correct class. Specifically, we follow MinD-Vis and use a pretrained ImageNet-1K classifier (Dosovitskiy et al. 2020) to estimate the accuracy. Firstly, we input the generated results and the ground-truth images into the classifier, and then check whether the top-1 classification matches the correct class. More details about this metric can be found in our supplementary material.\nPearson Correlation Coefficient (PCC). The Pearson correlation coefficient (PCC) measures the degree of linear association between two variables. PCC is used to measure the correlation between the pixel values of the generated results and those of the ground truth, with +1 indicating a perfect positive linear relationship and -1 indicating a perfect negative linear relationship. The larger the PCC value, the stronger the relevance between visual stimuli and generated images.\nStructure Similarity Index Measure (SSIM). We adopt SSIM to evaluate the reconstruction faithfulness of the generated results. As analyzed in (Wang et al. 2004), the structural similarity of two images is measured by three different factors, brightness, contrast, and structure, where the mean is used as the estimate of brightness, the standard deviation as the estimate of contrast, and the covariance as the measurement of structural similarity." }, { "figure_ref": [], "heading": "Ground Truth", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_2", "fig_1" ], "heading": "Ours", "publication_ref": [], "table_ref": [ "tab_1", "tab_1", "tab_1" ], "text": "MinD-Vis IC-GAN Gaziv Beliy Results on the GOD Dataset. We conduct a quantitative comparison between CMVDM and the four SOTA models using the testing dataset of GOD. Table 1 summarizes the results, revealing that CMVDM overall outperforms the other methods significantly. Compared to MinD-Vis and IC-GAN, both of which yield good results, CMVDM outperforms them significantly in terms of SSIM. 
This indicates that the images generated by CMVDM exhibit a higher degree of resemblance to the visual stimuli in terms of object silhouette and image structure. Additionally, Fig. 6 demonstrates that CMVDM generates visually impressive images with semantic and structural information closest to the visual stimuli. Gaziv achieves remarkable results in terms of SSIM, but their accuracy reported in Table 1 and visual results presented in Fig. 6 demonstrate that their method is not capable of generating high-fidelity images. Results on the BOLD5000 Dataset. We conduct a comparative analysis between our CMVDM and the most recent method MinD-Vis using the testing dataset of BOLD5000.\nAs depicted in Table 1, it is evident that CMVDM consistently outperforms MinD-Vis across all evaluation metrics. Additionally, Fig. 4 provides visualizations of some results from both methods, clearly demonstrating that CMVDM generates more realistic outcomes that are more similar to the GT visual stimuli. Notably, the BOLD5000 dataset, being more complex than the GOD dataset, further validates the effectiveness of our proposed method." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "We further conduct experiments on the GOD dataset to analyze the effectiveness of each module of CMVDM. Specifically, we employ MinD-Vis as the baseline and design two comparison models: (1) adding the semantic align loss L align to MinD-Vis, ( 2) adding the control model to MinD-Vis. The results, presented in " }, { "figure_ref": [], "heading": "Consistency Analysis", "publication_ref": [], "table_ref": [], "text": "To further verify the generative stability of CMVDM, we conduct an analysis to compare the consistency of two diffusion-based methods. As shown in Fig. 5, we sample three images reconstructed by CMVDM and MinD-Vis from the same fMRI signal. The images generated by CMVDM demonstrate a high degree of consistency to GT images both semantically and structurally. However, the results generated by MinD-Vis are capable of reproducing GT images semantically but are not consistent in structure." }, { "figure_ref": [], "heading": "Ground Truth", "publication_ref": [], "table_ref": [], "text": "Sample-1 Sample-2" }, { "figure_ref": [], "heading": "Ours", "publication_ref": [], "table_ref": [], "text": "MinD-Vis Sample-3\nFigure 5: Consistency analysis of the generated results." }, { "figure_ref": [], "heading": "Further Analysis", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "The impact of using the residual module F res in our CMVDM is significant on the BOLD5000 dataset, as demonstrated in Table 3. However, the effect of F res on the GOD dataset is not as pronounced. We believe that there are two reasons for this discrepancy. Firstly, the voxels of a single fMRI signal provided by the BOLD5000 dataset are much less than that provided by the GOD dataset, making it more challenging to extract valid semantic and silhouette information from BOLD5000. Therefore, F res is necessary to compensate for the information gap. Secondly, compared to GOD, BOLD5000 has more diverse images, including scenes that are not present in GOD. The semantic judgment and position alignment of the images in BOLD5000 are more complex than those in GOD. Therefore, we utilize F res to provide more information and improve the reconstruction performance. We provide further investigation on the impact of fMRI signals from different visual cortical regions in the supplementary material." 
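For completeness, the two non-structural metrics used above can be reproduced with a short sketch like the following. It is our own simplified code: the classifier probability vectors and images are assumed to be supplied by the caller, and the number of random trials is an arbitrary choice.

```python
import numpy as np

def n_way_top1(probs, gt_class, n=50, trials=1000, seed=0):
    """n-way top-1 check for one generated image: probs is the classifier's
    probability vector over all classes, gt_class the class assigned to the
    ground-truth stimulus. Returns the fraction of successful random trials."""
    rng = np.random.default_rng(seed)
    others = np.delete(np.arange(len(probs)), gt_class)
    hits = 0
    for _ in range(trials):
        distractors = rng.choice(others, size=n - 1, replace=False)
        candidates = np.concatenate(([gt_class], distractors))
        hits += int(candidates[np.argmax(probs[candidates])] == gt_class)
    return hits / trials

def pixel_pcc(img_a, img_b):
    """Pearson correlation coefficient between the flattened pixel values."""
    a = np.asarray(img_a, dtype=np.float64).ravel()
    b = np.asarray(img_b, dtype=np.float64).ravel()
    return float(np.corrcoef(a, b)[0, 1])
```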
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In " }, { "figure_ref": [ "fig_2" ], "heading": "More Visualization Results", "publication_ref": [ "b6" ], "table_ref": [], "text": "In this section, we present comprehensive visualizations of all the samples from the test sets of the BOLD5000 and GOD datasets. Each group consists of three columns, with each representing the original visual stimuli (Ground Truth) from the test dataset, the output generated by CMVDM, and the results generated by MinD-Vis (Chen et al. 2023).\nThe visualization of the 50 images from the GOD test set is shown in Fig. 6, and the visualization of the 113 images from the BOLD5000 test set is illustrated in Figs. 7, 8, 9" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b8", "b6", "b34", "b12", "b3", "b51", "b23", "b8", "b46" ], "table_ref": [], "text": "Generic Object Decoding Dataset. The Generic Object Decoding (GOD) (Horikawa and Kamitani 2017) dataset is a collection of 1250 paired {fMRI, image} data. The images from (Deng et al. 2009) were presented to five healthy subjects, while an fMRI scanner recorded their brain activity. The dataset was collected in two sessions. In the training session, 1, 200 images from 150 categories were presented to each subject only once. In the test session, 50 images from 50 categories were presented 35 times each. Following previous works (Chen et al. 2023;Ozcelik et al. 2022;Gaziv et al. 2022;Beliy et al. 2019), we chose the data from Subject 3 in the GOD dataset to make a fair comparison. We computed the average of the fMRI signals across all trials to arrive at the final results for evaluation. Notably, there was no overlap between the categories of images shown to the subjects in the training session and the test session. After the scanning sessions, the acquired fMRI data underwent motion correction and then was co-registered to the whole-head high-resolution anatomical images. By conducting standard retinotopy and localizer experiments, the authors determined 7 brain regions (V1, V2, V3, V4, LOC, FFA, PPA) and combined their voxels to define the entire visual cortex (VC). We selected the voxels from VC (4643 voxels in total) as the fMRI data to conduct our experiments.\nBrain, Object, Landscape Dataset. The Brain, Object, Landscape Dataset (BOLD5000) has 5, 254 {fMRI,image} pairs from 4, 916 unique images from Scene UNderstanding (SUN) (Xiao et al. 2010), Common Objects in Context (COCO) (Lin et al. 2014) and ImageNet (Deng et al. 2009). In this dataset, 4, 803 training images and 113 test images were presented to 4 subjects aged between 24 and 27. BOLD5000 alleviates the issue of limited data in the field and encompasses a broad variety of image categories, while also enriching the visual stimuli with real-world indoor and outdoor scenes. This enables a detailed investigation into the neural representation of visual inputs across a wide range of visual semantics. It is noteworthy that the BOLD5000 dataset contains a smaller number of voxels per subject (1685) than that in the GOD dataset (4643), due to the difference in the fMRI scanners used to collect the data.\nHuman Connectome Project fMRI Dataset. The Human Connectome Project (HCP) (Van Essen et al. 2013) is a large-scale research project that aims to map the structural and functional connectivity of the human brain using advanced neuroimaging techniques. 
One of the main datasets produced by HCP is the fMRI dataset, which measures the blood oxygen level-dependent (BOLD) signal changes in response to various tasks and resting states. The fMRI dataset consists of high-resolution (2 mm isotropic) data from 1200 healthy young adults (ages 22 -35). The dataset includes four sessions of resting-state fMRI (rs-fMRI), and seven sessions of task-based fMRI (T-fMRI), covering motor, working memory, language, social, relational, emotion, and gambling domains. The fMRI data have been extensively preprocessed and analyzed using state-of-the-art methods, such as independent component analysis (ICA), seed-based correlation analysis (SCA), and multivariate pattern analysis (MVPA). Following MinD-Vis, we pretrain the fMRI encoder E f mri on this dataset to learn effective fMRI representations." }, { "figure_ref": [], "heading": "ImageNet Validation Dataset. The ImageNet Validation", "publication_ref": [ "b8", "b28" ], "table_ref": [], "text": "Dataset is a subset of ImageNet (Deng et al. 2009), which is a large-scale visual database containing millions of images annotated with object labels. The ImageNet Validation Dataset consists of 50, 000 images, each labeled with one of 1000 different object categories. These categories are organized hierarchically according to the WordNet (Miller 1998) hierarchy, and each category may correspond to multiple words or phrases. The ImageNet Validation Dataset is commonly used to evaluate the performance of machine learning models on the image classification task. In our method, we utilized the ImageNet Validation Dataset as the dataset for self-supervised training of D slh , which aims to reconstruct images that are structurally similar to the original visual stimuli. The dataset covers a wide range of object categories, such as animals, plants, vehicles, furniture, clothing, and more. This diversity makes it suitable for learning generalizable and diverse features." }, { "figure_ref": [], "heading": "More Implementation Details", "publication_ref": [ "b12" ], "table_ref": [], "text": "Architecture of the Silhouette Estimation Network\nThe silhouette estimation network consists of two components, an encoder E slh and a decoder D slh . The encoder E slh is used for encoding natural images and projecting the image features to the fMRI signal space. Following (Gaziv et al. 2022), E slh first extracts image features using a pretrained VGG19 network. The features are then fed into downsampling blocks with batch normalization, ×2 maximum pooling, 3 × 3 convolution with 32 channels, ReLU activation, and batch normalization to obtain the hierarchy of semantic and spatial representations. Finally, the four representations are concatenated and mapped into the fMRI signal space by a full-connected layer.\nThe decoder D slh architecture uses a full-connected layer to transform and reshape the fMRI input into a 64 × 14 × 14 feature map. This feature map is then fed into three blocks, and each consists of ×2 up-sampling, 5×5 convolution with 64 channels, ReLU activation, and group normalization. A " }, { "figure_ref": [], "heading": "Architecture of the LDM", "publication_ref": [], "table_ref": [], "text": "The Latent Diffusion Model (LDM) in this work consists of an autoencoder and a denoising network with a UNet structure. The autoencoder's encoder and decoder both have a depth of 3 blocks. 
The blocks of the encoder have feature channel sizes of 128, 256, and 512, respectively, while the decoder follows the same structure in reverse, and the latent space of the autoencoder corresponds to a feature resolution of 64 × 64. In addition, the denoising network of the LDM employs a UNet architecture, featuring an encoder and a decoder, each comprising four blocks in depth. The encoder blocks have channel sizes of 192, 384, 576, and 960, respectively. Besides, the LDM takes a 512-dimensional condition input.\nAt the same time, the control model primarily consists of a hint model which is employed to process the input condition, an encoder, and several zero-convolution layers. The hint model is composed of an 8-layer convolutional neural network, and the structure of the encoder of the control model is identical to that of the encoder in the LDM's denoising network." }, { "figure_ref": [], "heading": "Architechture of the Residual Block", "publication_ref": [], "table_ref": [], "text": "The residual model, denoted as F res , contains a Multilayer Perceptron (MLP) configured with three fully connected (FC) layers and a convolutional neural network (CNN) with four distinct layers. output provided by this model is added to the output generated by D slh ." }, { "figure_ref": [], "heading": "Evaluation Metric", "publication_ref": [], "table_ref": [], "text": "The specific steps for N-way Classification Accuracy (Acc) are as follows: firstly, we use a pre-trained classifier to obtain the ground truth class ŷ. Then, we calculate the classification results for the generated image using the classifier p = p 0 , ..., p 999 . Next, we randomly select n-1 classes and combine them with the ground truth class to obtain a new set of probabilities p ′ = p ŷ , p y1 , ..., p yn-1 . If the max value of p ′ y is p ŷ , we consider it as a correct classification; otherwise, it is an incorrect classification. We repeat this process multiple times on the entire testing dataset to calculate the classification accuracy." }, { "figure_ref": [ "fig_6" ], "heading": "Additional Analysis", "publication_ref": [], "table_ref": [ "tab_6", "tab_6" ], "text": "We have further investigated the impact of fMRI signals from different visual cortical regions on our experimental outcomes. This investigation has two main components: probing the quality of silhouette reconstruction based on voxels from distinct visual cortical areas and analyzing the effects of different hierarchical visual cortices (higher visual cortex (HVC) and lower visual cortex (LVC)) on the semantic and positional aspects of the reconstruction results.\nInitially, leveraging the GOD dataset, we explore the influence of fMRI signals from various visual cortical areas In our paper, we use the VC signal that encompasses signals from all visual cortical areas. The results of its reconstruction are also shown in Fig. 10. Compared to the images reconstructed solely using signals from individual regions, the VC images capture both the structural and RGB information that closely aligns with the GT image distribution. This richness of information facilitates more accurate control over high-fidelity image generation.\nWe then train our CMVDM model using fMRI signals from the LVC, HVC, and VC regions. For reference, we also train MinD-Vis with the L align under the same setting. We use the same evaluation metrics as in the paper, and the results are presented in Table 4. 
Notable observations include:\n• Both methods exhibit lower Acc on LVC, which suggests that the semantic information on LVC is unclear. However, they both demonstrate high structural similarity (PCC and SSIM), affirming the functional interpretation of fMRI signals within the LVC region. • Both methods perform better on HVC with higher Acc, signifying increased semantic similarity. However, PCC and SSIM, measuring structural similarity, are lower, suggesting that the HVC region contributes vital semantic information to the visual comprehension process. • VC, encompassing voxels from both LVC and HVC, achieves the highest values in both semantic and structural similarity metrics. This implies the effective and substantial fusion of information from high and low visual cortical regions in our CMVDM approach.\nFurthermore, we visually compare the reconstruction results of CMVDM trained on LVC, HVC, and VC regions against the test set in Fig. 11. This visualization corroborates the results presented in Table 4 to some extent." }, { "figure_ref": [], "heading": "Discussion and Limitations.", "publication_ref": [], "table_ref": [], "text": "Our visualization results demonstrate that CMVDM generates better images on the GOD dataset than those on the BOLD5000 dataset in terms of image structure and object silhouette. This difference can be attributed to the silhouette information c slh extracted by D slh being less satisfactory on the BOLD5000 data. Two possible reasons for this are as follows: Firstly, the BOLD5000 dataset is more complex due to the greater diversity of indoor/outdoor scenes, as well as interactions between objects, including both single and multiple objects. On the other hand, the GOD dataset only focuses on single objects. Secondly, due to the difference in the fMRI scanners and experimentation settings, the acquisition and preprocessing procedures for fMRI signals may vary. Specifically, a single fMRI signal in the BOLD5000 dataset contains fewer voxels (1685) compared to the GOD dataset (4643), which increases the difficulty of extracting meaningful semantic and positional information using the fMRI encoder E f mri . These factors may pose challenges to generating satisfactory c slh on the BOLD5000 dataset.\nWhile CMVDM outperforms prior approaches in generating more plausible results, it exhibits a discrepancy between the two datasets. This may be due to the small sizes of the two datasets we used, which prevent our CMVDM from being verified sufficiently. Therefore, a potential limitation of this study is the lack of validation on a larger paired fMRI-image dataset. Additionally, as mentioned above, the fMRI signals obtained under different experimental conditions vary a lot, and the cross-domain generation ability and robustness of the model still need to be further explored. We plan to address these limitations and further improve our approach in future studies." }, { "figure_ref": [], "heading": "Social Impact", "publication_ref": [], "table_ref": [], "text": "This work does not have a direct negative social impact. However, we should pay attention to the ethical and privacy issues in the process of collecting or using our model to visualize fMRI signals and prevent them from being abused for malicious purposes." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This research was supported by Zhejiang Provincial Natural Science Foundation of China under Grant No. 
D24F020011, Beijing Natural Science Foundation L223024, National Natural Science Foundation of China under Grant 62076016. The work was also supported by the National Key Research and Development Program of China (Grant No. 2023YFC3300029) and \"One Thousand Plan\" projects in Jiangxi Province Jxsg2023102268 and ATR key laboratory grant 220402." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "In this supplementary material, we first give more visualization results, then detail the datasets and the implementation, and finally state the limitations and social impact." } ]
Brain signal visualization has emerged as an active research area, serving as a critical interface between the human visual system and computer vision models. Diffusion-based methods have recently shown promise in analyzing functional magnetic resonance imaging (fMRI) data, including the reconstruction of high-quality images consistent with original visual stimuli. Nonetheless, it remains a critical challenge to effectively harness the semantic and silhouette information extracted from brain signals. In this paper, we propose a novel approach, termed as Controllable Mind Visual Diffusion Model (CMVDM). Specifically, CMVDM first extracts semantic and silhouette information from fMRI data using attribute alignment and assistant networks. Then, a control model is introduced in conjunction with a residual block to fully exploit the extracted information for image synthesis, generating high-quality images that closely resemble the original visual stimuli in both semantic content and silhouette characteristics. Through extensive experimentation, we demonstrate that CMVDM outperforms existing state-of-theart methods both qualitatively and quantitatively. Our code is available 1 .
Controllable Mind Visual Diffusion Model
[ { "figure_caption": "Figure 3 :3Figure 3: Comparison with four SOTA methods on the GOD dataset.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison with MinD-Vis on the BOLD5000 dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Comparison on the GOD Dataset.", "figure_data": "", "figure_id": "fig_2", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Comparison on the BOLD5000 Dataset (Part 1).", "figure_data": "", "figure_id": "fig_3", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Comparison on the BOLD5000 Dataset (Part 2).", "figure_data": "", "figure_id": "fig_4", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :Figure 10 :910Figure 9: Comparison on the BOLD5000 Dataset (Part 3).", "figure_data": "", "figure_id": "fig_5", "figure_label": "910", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Reconstructed images from different regions of the visual cortex by our method.", "figure_data": "", "figure_id": "fig_6", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Quantitative comparison with four state-of-the-art (SOTA) methods. Bold results denote the best results and underlined results denote the second-best results.", "figure_data": "MethodGODBOLD5000Acc (%)PCCSSIMAcc(%)PCCSSIMBeliy (2019)4.2880.48285 0.51795///Gaziv (2022)9.1280.68326 0.64857///IC-GAN (2022)29.3860.44857 0.54489///MinD-Vis (2023)26.6440.53159 0.52669 25.918 0.54486 0.52379CMVDM (Ours)30.1120.76751 0.63167 27.791 0.55691 0.53459", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation study of CMVDM's components.", "figure_data": "MethodAcc (%) PCCSSIMMinD-Vis26.644 0.53159 0.54489MinD-Vis+L align27.362 0.56686 0.52628MinD-Vis+Control Model 28.438 0.75730 0.63404CMVDM30.112 0.76751 0.63167", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": ", demonstrate the efficacy of both L align and the control model within our", "figure_data": "Ground TruthOursMinD-Vis", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Quantitative analysis of the residual block in CMVDM. Vis with L align yields improved results in terms of ACC and PCC, which illustrate that L align can improve the capability of CMVDM to obtain semantic information. Furthermore, MinD-Vis+Control Model outperforms MinD-Vis+L align in each metric, particularly in SSIM, indicating that the silhouette contains valuable semantic information that is used in the control model.", "figure_data": "DatasetMethod Acc(%)PCCSSIMBOLD5000w/o F res 25.393 0.54184 0.52951 w F res 27.791 0.55691 0.53459GODw/o F res 29.436 0.75837 0.63894 w F res 30.112 0.76751 0.63167CMVDM. MinD-", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison of the performance between LVC, HVC, and VC regions using MinD-Vis and CMVDM, against the test dataset of GOD.", "figure_data": "Acc (%) PCC SSIMMinD-Vis L align (LVC)6.800.570.47MinD-Vis L align (HVC)17.200.500.45MinD-Vis L align (VC)27.360.570.53CMVDM (LVC)8.800.680.63CMVDM (HVC)21.620.550.53CMVDM (VC)30.110.770.63(V1, V2, V3, FFA, PPA, LOC, LVC, HVC, and VC). 
These input signals are individually utilized for pretraining silhouette decoder D slh and applied to reconstruct images from the test set fMRI signals using D slh . The comparative visualization results are presented in Fig. 10. Notable observations include: • V1, V2, and V3, as components of the LVC, yield reconstructed images with spatial structures that closely resemble ground truth (GT) images. This strongly suggests the role of the lower visual cortex in processing spatial information in visual signals. • The visualizations of FFA, PPA, and LOC (from HVC) lack interpretable spatial structures. However, these regions are hypothesized to have meaningful semantic comprehension of visual signals, which is further validated by subsequent experiments.", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" } ]
Bohan Zeng; Shanglin Li; Xuhui Liu; Sicheng Gao; Xiaolong Jiang; Xu Tang; Yao Hu; Jianzhuang Liu; Baochang Zhang
[ { "authors": "Y Akamatsu; R Harakawa; T Ogawa; M Haseyama", "journal": "IEEE Transactions on Signal Processing", "ref_id": "b0", "title": "Brain decoding of viewed image categories via semisupervised multi-view Bayesian generative model", "year": "2020" }, { "authors": "T Anciukevičius; Z Xu; M Fisher; P Henderson; H Bilen; N J Mitra; P Guerrero", "journal": "", "ref_id": "b1", "title": "RenderDiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation", "year": "2022" }, { "authors": "D Baranchuk; A Voynov; I Rubachev; V Khrulkov; A Babenko", "journal": "", "ref_id": "b2", "title": "Label-efficient semantic segmentation with diffusion models", "year": "2021" }, { "authors": "R Beliy; G Gaziv; A Hoogi; F Strappini; T Golan; M Irani", "journal": "", "ref_id": "b3", "title": "From voxels to pixels and back: Selfsupervision in natural-image reconstruction from fMRI", "year": "2019" }, { "authors": "N Chang; J A Pyles; A Marcus; A Gupta; M J Tarr; E M Aminoff", "journal": "Scientific data", "ref_id": "b4", "title": "BOLD5000, a public fMRI dataset while viewing 5000 visual images", "year": "2019" }, { "authors": "S Chen; P Sun; Y Song; P Luo", "journal": "", "ref_id": "b5", "title": "Diffusiondet: Diffusion model for object detection", "year": "2022" }, { "authors": "Z Chen; J Qing; T Xiang; W L Yue; J H Zhou", "journal": "", "ref_id": "b6", "title": "Seeing beyond the brain: Conditional diffusion model with sparse masked modeling for vision decoding", "year": "2023" }, { "authors": "S R Damarla; M A Just", "journal": "Human Brain Mapping", "ref_id": "b7", "title": "Decoding the representation of numerical values from brain activation patterns", "year": "2013" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "", "ref_id": "b8", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "P Dhariwal; A Nichol", "journal": "", "ref_id": "b9", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly", "journal": "", "ref_id": "b10", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "S Gao; X Liu; B Zeng; S Xu; Y Li; X Luo; J Liu; X Zhen; B Zhang", "journal": "", "ref_id": "b11", "title": "Implicit Diffusion Models for Continuous Super-Resolution", "year": "2023" }, { "authors": "G Gaziv; R Beliy; N Granot; A Hoogi; F Strappini; T Golan; M Irani", "journal": "NeuroImage", "ref_id": "b12", "title": "Self-supervised natural image reconstruction and large-scale semantic classification from brain activity", "year": "2022" }, { "authors": "J Ho; W Chan; C Saharia; J Whang; R Gao; A Gritsenko; D P Kingma; B Poole; M Norouzi; D J Fleet", "journal": "", "ref_id": "b13", "title": "Imagen video: High definition video generation with diffusion models", "year": "2022" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "", "ref_id": "b14", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "J Ho; T Salimans; A Gritsenko; W Chan; M Norouzi; D J Fleet; Neurips; T Horikawa", "journal": "Nature Communications", "ref_id": "b15", "title": "Generic decoding of seen and imagined objects using hierarchical visual features", "year": "2017" }, { "authors": "K N Kay; T Naselaris; R J Prenger; J L Gallant", "journal": "Nature", "ref_id": "b16", "title": "Identifying natural images from 
human brain activity", "year": "2008" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b17", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Z Kong; W Ping; J Huang; K Zhao; B Catanzaro", "journal": "", "ref_id": "b18", "title": "Diffwave: A versatile diffusion model for audio synthesis", "year": "2020" }, { "authors": "N Kriegeskorte; E Formisano; B Sorger; R Goebel", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b19", "title": "Individual faces elicit distinct response patterns in human anterior temporal cortex", "year": "2007" }, { "authors": "V Kulikov; S Yadin; M Kleiner; T Michaeli", "journal": "", "ref_id": "b20", "title": "SinDDM: A Single Image Denoising Diffusion Model", "year": "2022" }, { "authors": "H Li; Y Yang; M Chang; S Chen; H Feng; Z Xu; Q Li; Y Chen", "journal": "", "ref_id": "b21", "title": "Srdiff: Single image superresolution with diffusion probabilistic models. Neurocomputing", "year": "2022" }, { "authors": "C.-H Lin; J Gao; L Tang; T Takikawa; X Zeng; X Huang; K Kreis; S Fidler; M.-Y Liu; T.-Y Lin", "journal": "", "ref_id": "b22", "title": "Magic3D: High-Resolution Text-to-3D Content Creation", "year": "2022" }, { "authors": "T.-Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "", "ref_id": "b23", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "J Liu; C Li; Y Ren; F Chen; P Liu; Z Zhao", "journal": "", "ref_id": "b24", "title": "Diffsinger: Diffusion acoustic model for singing voice synthesis", "year": "2021" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b25", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "S Luo; W Hu", "journal": "", "ref_id": "b26", "title": "Diffusion probabilistic models for 3d point cloud generation", "year": "2021" }, { "authors": "T Milekovic; A A Sarma; D Bacher; J D Simeral; J Saab; C Pandarinath; B L Sorice; C Blabe; E M Oakley; K R Tringale", "journal": "Journal of Neurophysiology", "ref_id": "b27", "title": "Stable long-term BCIenabled communication in ALS and locked-in syndrome using LFP signals", "year": "2018" }, { "authors": "G A Miller", "journal": "MIT press", "ref_id": "b28", "title": "WordNet: An electronic lexical database", "year": "1998" }, { "authors": "Y Miyawaki; H Uchida; O Yamashita; M.-A Sato; Y Morito; H C Tanabe; N Sadato; Y Kamitani", "journal": "Neuron", "ref_id": "b29", "title": "Visual image reconstruction from human brain activity using a combination of multiscale local image decoders", "year": "2008" }, { "authors": "T Naselaris; R J Prenger; K N Kay; M Oliver; J L Gallant", "journal": "Neuron", "ref_id": "b30", "title": "Bayesian reconstruction of natural images from human brain activity", "year": "2009" }, { "authors": "A Nichol; P Dhariwal; A Ramesh; P Shyam; P Mishkin; B Mcgrew; I Sutskever; M Chen", "journal": "", "ref_id": "b31", "title": "Glide: Towards photorealistic image generation and editing with textguided diffusion models", "year": "2021" }, { "authors": "L F Nicolas-Alonso; J Gomez-Gil", "journal": "Sensors", "ref_id": "b32", "title": "Brain computer interfaces, a review", "year": "2012" }, { "authors": "S Nishimoto; A T Vu; T Naselaris; Y Benjamini; B Yu; J L Gallant", "journal": "Current biology", "ref_id": "b33", "title": "Reconstructing visual experiences from brain activity evoked by natural movies", "year": "2011" }, { "authors": "F Ozcelik; B Choksi; M Mozafari; L Reddy; R 
Vanrullen", "journal": "", "ref_id": "b34", "title": "Reconstruction of perceived images from fMRI patterns and semantic brain exploration using instance-conditioned GANs", "year": "2022" }, { "authors": "W Peebles; S Xie", "journal": "", "ref_id": "b35", "title": "Scalable Diffusion Models with Transformers", "year": "2022" }, { "authors": "B Poole; A Jain; J T Barron; B Mildenhall", "journal": "", "ref_id": "b36", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "", "ref_id": "b37", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b38", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "C Saharia; J Ho; W Chan; T Salimans; D J Fleet; M Norouzi", "journal": "TPAMI", "ref_id": "b39", "title": "Image super-resolution via iterative refinement", "year": "2022" }, { "authors": "S Schoenmakers; M Barth; T Heskes; M Van Gerven", "journal": "NeuroImage", "ref_id": "b40", "title": "Linear reconstruction of perceived images from human brain activity", "year": "2013" }, { "authors": "J Sohl-Dickstein; E Weiss; N Maheswaranathan; S Ganguli", "journal": "", "ref_id": "b41", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "J Song; C Meng; S Ermon", "journal": "", "ref_id": "b42", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "Y Takagi; S Nishimoto", "journal": "", "ref_id": "b43", "title": "High-resolution image reconstruction with latent diffusion models from human brain activity", "year": "2023" }, { "authors": "G Tevet; S Raab; B Gordon; Y Shafir; D Cohen-Or; A H Bermano", "journal": "", "ref_id": "b44", "title": "Human motion diffusion model", "year": "2022" }, { "authors": "A Vahdat; K Kreis; J Kautz", "journal": "", "ref_id": "b45", "title": "Score-based generative modeling in latent space", "year": "2021" }, { "authors": "D C Van Essen; S M Smith; D M Barch; T E Behrens; E Yacoub; K Ugurbil; W.-M H Consortium", "journal": "NeuroImage", "ref_id": "b46", "title": "The WU-Minn human connectome project: an overview", "year": "2013" }, { "authors": "M A Van Gerven; B Cseke; F P De Lange; T Heskes", "journal": "NeuroImage", "ref_id": "b47", "title": "Efficient Bayesian multivariate fMRI analysis using a sparsifying spatio-temporal prior", "year": "2010" }, { "authors": "M A Van Gerven; F P De Lange; T Heskes", "journal": "Neural Computation", "ref_id": "b48", "title": "Neural decoding with hierarchical generative models", "year": "2010" }, { "authors": "W Wang; J Bao; W Zhou; D Chen; D Chen; L Yuan; H Li", "journal": "", "ref_id": "b49", "title": "SinDiffusion: Learning a Diffusion Model from a Single Natural Image", "year": "2022" }, { "authors": "Z Wang; A C Bovik; H R Sheikh; E P Simoncelli", "journal": "", "ref_id": "b50", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": "J Xiao; J Hays; K A Ehinger; A Oliva; A Torralba", "journal": "", "ref_id": "b51", "title": "Sun database: Large-scale scene recognition from abbey to zoo", "year": "2010" }, { "authors": "Z Xiao; K Kreis; A Vahdat", "journal": "", "ref_id": "b52", "title": "Tackling the Generative Learning Trilemma with Denoising Diffusion GANs", 
"year": "2022" }, { "authors": "D L Yamins; H Hong; C F Cadieu; E A Solomon; D Seibert; J J Dicarlo", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b53", "title": "Performance-optimized hierarchical models predict neural responses in higher visual cortex", "year": "2014" }, { "authors": "B Zeng; X Liu; S Gao; B Liu; H Li; J Liu; B Zhang", "journal": "", "ref_id": "b54", "title": "Face Animation with an Attribute-Guided Diffusion Model", "year": "2023" }, { "authors": "L Zhang; M Agrawala", "journal": "", "ref_id": "b55", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 90.29, 660.21, 202.21, 42.68 ], "formula_id": "formula_0", "formula_text": "q (z1:T | z0) = T t=1 q (zt | zt-1) , q (zt | zt-1) = N zt | 1 -βtzt-1, βtI ,(1)" }, { "formula_coordinates": [ 3, 358.95, 403.28, 199.05, 14.33 ], "formula_id": "formula_1", "formula_text": "q (zt | z0) = N (zt | √ γtz0, (1 -γt)I) ,(2)" }, { "formula_coordinates": [ 3, 360.3, 424.14, 97.6, 14.11 ], "formula_id": "formula_2", "formula_text": "γ t = t i=1 (1 -β i )." }, { "formula_coordinates": [ 3, 326.41, 486.43, 231.59, 55.47 ], "formula_id": "formula_3", "formula_text": "p θ (z0:T | c f mri ) = p (zT ) T t=1 p θ (zt-1 | zt, c f mri ) , p (zT ) = N (zT | 0, I) , p θ (zt-1 | zt, c f mri ) = N zt-1 | µ θ (c f mri , zt, t) , σ 2 t I ,(3)" }, { "formula_coordinates": [ 3, 356.93, 548.23, 65.35, 14.47 ], "formula_id": "formula_4", "formula_text": "σ t = 1-γt-1 1-γt β t ." }, { "formula_coordinates": [ 4, 54, 223.07, 236.47, 22.09 ], "formula_id": "formula_5", "formula_text": "L f = E z0,t,c f mri ,ϵ∼N (0,1) [||ϵ -ϵ θ (z t , t, E f mri (c f mri ))|| 2 2 ],(4" }, { "formula_coordinates": [ 4, 99.49, 343.09, 193.01, 11.72 ], "formula_id": "formula_6", "formula_text": "L align = e -cosine(fimg,MLP(cctx)) ,(5)" }, { "formula_coordinates": [ 4, 361.25, 76.14, 196.75, 40.55 ], "formula_id": "formula_7", "formula_text": "Le = 1 |Ω| |Ω| i=1 [α1 • ∥c f mri,i -ĉfmri,i ∥ 2 + α2 • (1 -cosine(c f mri,i , ĉfmri,i ))],(6)" }, { "formula_coordinates": [ 4, 352.46, 335.21, 205.54, 41.29 ], "formula_id": "formula_8", "formula_text": "Lmae = 1 |Ω| |Ω| i=1 | Îi -Ii| supervised + 1 |Φ| |Φ| i=1 | φi -ϕi| self -supervised ,(7)" }, { "formula_coordinates": [ 4, 349.25, 387.71, 208.75, 22.08 ], "formula_id": "formula_9", "formula_text": "Lssim = 1 - (2µIµ Î + C1)(2σ I Î + C2) (µ 2 I + µ 2 Î + C1)(σ 2 I + σ 2 Î + C2) ,(8)" }, { "formula_coordinates": [ 4, 395.22, 475.17, 162.78, 9.65 ], "formula_id": "formula_10", "formula_text": "L d = L mae + L ssim .(9)" }, { "formula_coordinates": [ 4, 343.56, 692.72, 214.44, 12.69 ], "formula_id": "formula_11", "formula_text": "x ′ c,t = Z(F ctrl (z t + Z(c slh ), c ctx ; Θ c )),(10)" }, { "formula_coordinates": [ 5, 73.78, 264.89, 218.72, 23.6 ], "formula_id": "formula_12", "formula_text": "x c,t =Z(F ctrl (z t + Z(c slh + Z(F res (c f mri ))), c ctx ; Θ c )).(11)" }, { "formula_coordinates": [ 5, 59.07, 357.6, 233.43, 26.51 ], "formula_id": "formula_13", "formula_text": "L ctrl = E z0,t,c f mri ,ϵ∼N (0,1) [||ϵ -ϵ θ (z t , t, c ctx , x c,t )|| 2 2 ].(12)" } ]
2023-05-17
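For the control branch, Eq. (11) above composes the silhouette condition c_slh with a residual term F_res(c_fmri) before it enters the control network F_ctrl. The following is only a schematic of that composition: Z is assumed to be a zero-initialized 1x1 convolution (ControlNet-style), and F_res, F_ctrl, and all channel shapes are placeholders rather than the paper's actual modules.

```python
# Schematic of Eq. (11): x_c,t = Z(F_ctrl(z_t + Z(c_slh + Z(F_res(c_fmri))), c_ctx)).
# Z is assumed to be a zero-initialized 1x1 convolution; F_res is assumed to map
# the fMRI vector to a feature map matching c_slh; all shapes are placeholders.
import torch.nn as nn

def zero_conv(ch):
    conv = nn.Conv2d(ch, ch, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

class ControlBranch(nn.Module):
    def __init__(self, ch, f_res: nn.Module, f_ctrl: nn.Module):
        super().__init__()
        self.f_res, self.f_ctrl = f_res, f_ctrl          # assumed sub-networks
        self.z_in = zero_conv(ch)
        self.z_mid = zero_conv(ch)
        self.z_out = zero_conv(ch)

    def forward(self, z_t, c_slh, c_fmri, c_ctx):
        cond = self.z_mid(c_slh + self.z_in(self.f_res(c_fmri)))
        return self.z_out(self.f_ctrl(z_t + cond, c_ctx))
```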
[ { "figure_ref": [ "fig_1", "fig_2", "fig_3", "fig_1", "fig_1", "fig_3", "fig_3", "fig_6", "fig_5", "fig_9" ], "heading": "Introduction", "publication_ref": [ "b30", "b7", "b16", "b3", "b4", "b24", "b14", "b23", "b10", "b0", "b31", "b24", "b14", "b0", "b9", "b17" ], "table_ref": [], "text": "We study whether multiple Large Language Models (LLMs) can improve each other in a negotiation game with minimal human intervention, in the fashion of AlphaGo Zero [31] where AI agents improve themselves by continuously playing competitive games under well-defined rules. The answers to this research question have profound implications. On the positive side, if the agents were able to improve autonomously, strong agents might be created with very few human annotations, which greatly saves the cost compared to today's data-hungry LLM training [8,17]. On the risky side, it also implies strong agents with limited human oversight [4]. In this work, we ask two language models (a seller and a buyer) to bargain about a product. The seller is asked to sell the product at a higher price, while the buyer aims to purchase it at a lower price (Fig. 1A). After reaching a deal, we ask a third language GPT-3.5-Turbo Buyer Critic: Employ the \"flinch\" technique: when the seller offers a counteroffer, the buyer should display a degree of surprise or disappointment Buyer's Improvement: Oh! That's higher than I expected. I saw a similar balloon at another store for $14. Can you match that price?\nBuyer Critic: Use the power of silence: The buyer can employ the power of silence in the negotiation process by pausing longer before responding to the seller's offer.\nBuyer's Improvement: *pause* ... Alright, I'll take the balloon for $13.\nSeller Critic:\nUtilize split-the-difference: In situations where a small price difference remains, propose to split the difference with the buyer.\nSeller's Improvement: I understand, how about we split the difference and make it $16.75 to accommodate your budget? A. We ask two LLM agents to play a bargaining game as the seller and the buyer. Their goals are to sell/ buy the product at a higher/ lower price. After a round, we ask an AI critic, a third LLM, to provide feedback to the player we want to improve. Then we ask the player to improve their negotiation strategies based on the feedback. We repeat this process for multiple rounds and study if models can continuously improve. See Fig. 2 for an example run. B. Bargaining techniques that we observed from the AI Critic and how the player incorporates these techniques into the negotiation strategy. C. Abilities that are required in our game (C2 -negotiation, C3 -AI feedback, and C4 -continuous improvements) classify models into different tiers. We find out that only strong and well-aligned models (like gpt-4 and claude-v1.3) can continuously improve from iterative AI feedback (see Fig. 3 for example models that do not exhibit these abilities).\nSeller\nmodel to play as the critic and give feedback to a player. Then we play the game again, asking the player to improve their strategy using AI feedback provided by the critic LLM.\nWe choose the bargaining game because it comes with well-defined rules described in text, and a clear and measurable objective (a lower/ higher deal price) for strategic negotiation. 
Although the game seems easy at first glance, it requires non-trivial capabilities of the language models, as the model needs to: (1) clearly understand and strictly follow the textual rules of the negotiation game (2) correspond to the textual feedback provided by the critic LM and improve based on it iteratively (see example feedback in Fig 1B ); (3) reflect upon the strategy and feedback over the long term and improve over multiple rounds. We will see that not all models we considered show all these abilities (Fig. 1C), and only models that can (1) understand negotiation rules and strategies (capable) and (2) respond to AI instructions (well-aligned) can continuously improve from AI feedback over multiple rounds (in our experiments, only gpt-3.5-turbo, gpt-4, and claude-v1.3) meet these requirements). We have also tried more complicated textual games including board games and textual RPG games in the preliminary experiments, but they are more challenging for current agents to understand and follow the rules.\nWe call our approach In-Context Learning from AI Feedback (ICL-AIF). Specifically, we use the feedback from the AI critic as well as the previous rounds of dialog history as in-context demonstrations [5]. By doing this, the critic's suggestions for improvements and the player's actual improvement in the previous rounds effectively become the few-shot prompts for the next round of negotiation. We use in-context learning for two reasons: (1) tuning large language models with reinforcement And I will try to sell it at a higher price (higher than $16.0) than the previous round. B: then we use a claude-instant-v1.0 critic to provide feedback. C: upon receiving the feedback, the seller improves its strategy based on the suggestions. Note that colored phrases like \"high quality latex and handcrafted by expert artisans\" correspond to previous AI feedback \"how rare and special it is\". We measure the final price as the proxy of the effectiveness of the strategy because the overall goal is to get a better price. In this case, it improves from $16 to $17. learning is prohibitively expensive [25,15] and the base model [24] may not be accessible to a wide range of the community; (2) in-context learning is recently shown to be closely related to gradient descent [11,1,32], such that the conclusions we draw is fairly likely to generalize when one actually finetunes the model (if resources permit). One notable difference between our ICL-AIF and the mainstream Reinforcement Learning from Human Feedback (RLHF) is that in RL the reward is a scalar [25,15] while in ICL the feedback is in natural language. We study AI feedback (rather than rely on human intervention after each round) because it is more scalable and can allow models to self-improve automatically.\nOur experiments lead to several intriguing findings: (1) The requirements of our bargaining game effectively serve as a testbed for assessing the abilities of LLMs (Fig. 1C): although most models can do chitchat in a casual scenario, as of our experiment date (May 2023), cohere-command [10] model does not understand the rule of bargaining (Fig. 3A), ai21-jurassic [18] model does not respond to AI feedback (Fig. 3B), claude-instant-v1.0 can at most improve one round (Fig. 5), and only gpt-3.5-turbo, gpt-4, and claude-v1.3 can continuously improve over multiple rounds. (2) Models behave differently upon receiving feedback when playing different roles. Models playing the buyer role may be harder to improve than when in the seller role (Fig. 
4). (3) It is indeed possible for strong agents like gpt-4 to continuously improve meaningfully using previous experiences and online iterative AI feedback, yet the attempt to sell at a higher price (or buy at a lower price) comes with the risk of failing to reach a deal at all (Fig. 6). We further show evidence of the model being able to negotiation in a less verbose but more strategic (thus more effective) way (Fig. 7). Overall, we hope our work serves as a meaningful initiative for improving language models' negotiation in a game setting using AI feedback." }, { "figure_ref": [ "fig_1" ], "heading": "Problem Setting", "publication_ref": [ "b17", "b23", "b13", "b13" ], "table_ref": [], "text": "Our goal is to study whether LLMs can improve each other by playing a negotiation game and incorporating AI feedback, as shown in Fig. 1A. We set the product being bargained as a balloon (and our results hold when changing the balloon to other items). We use different combinations of backend LLM engines: cohere-command [10], AI21's jurassic-2 [18], OpenAI's gpt-3.5-turbo and gpt-4 [24], Anthropic's claude-instant-v1.0 (which supposedly matches gpt-3.5-turbo [14]) and claude-v1.3 (which is supposed to be slightly worse but close to gpt-4 [14]). throughout our experiments, we provide feedback to improve only one of the two players, while its rival receives no feedback, clears the negotiation history of previous rounds, and restarts. We vary the engines for the model being improved while fixing its rival's engine to be gpt-3.5-turbo. Essentially, our game is gpt-3.5-turbo vs. all other engines. We keep the LM engine behind the critic is always the same as the player it provides feedback to. One example setting is a gpt-4 seller playing against a gpt-3.5-turbo buyer, with a gpt-4 critic. After one round, the gpt-4 critic provides feedback to the gpt-4 seller such that the seller can improve in the next round while its rival gpt-3.5-turbo buyer clears its dialog history and restarts." }, { "figure_ref": [ "fig_2" ], "heading": "Process of the Game", "publication_ref": [], "table_ref": [], "text": "Before the game begins, the rules of the negotiation game are explained to the models through textual instructions with the objective of selling/ buying at a higher/ lower price. We set the deal price to [$10, $20] for easier evaluation, since other the deal price may vary in a wide range according to the observations from our preliminary experiments. To achieve this, we hard code the seller to kick off the negotiation with \"This is a good balloon and its price is $20.\" Similarly, the buyer always opens with \"Would you consider selling it for $10?\" When both players strictly follow the game rules, the deal price would be between $10 and $20. We let the models play multiple runs and measure the average deal price before and after AI feedback. During the game, the seller's output is used to prompt the buyer and vice versa, conditioning on the entire conversation history. This process is repeated till a terminal state is reached. Fig. 2A shows an example round. We define three game states: (1) ON-GOING: the negotiation between the two players is still ongoing;\n(2) DEAL: the negotiation has concluded and the two players have reached a deal; (3) NO DEAL: the players cannot agree on a price and have failed to reach a deal. 
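The game loop just described (fixed opening offers at $20 and $10, alternating turns, and a moderator that labels the dialog as ON-GOING, DEAL, or NO DEAL) can be sketched as follows. The `chat` argument stands in for whichever chat-completion backend powers each role (it is not a real SDK call), and the moderator prompt is only a schematic of the few-shot classifier described below, not the exact prompt used in the paper.

```python
# Minimal sketch of one negotiation round. `chat(system=..., messages=...)` is a
# placeholder for a chat-LLM API call returning a string, not a real SDK function.
SELLER_OPEN = "This is a good balloon and its price is $20."
BUYER_OPEN = "Would you consider selling it for $10?"

def classify_state(dialog, chat):
    """Moderator: label the recent dialog as ONGOING / DEAL / NODEAL (few-shot examples omitted)."""
    prompt = ("Decide whether the negotiation is ONGOING, DEAL, or NODEAL.\n"
              + "\n".join(dialog[-4:]))  # the moderator reads the last four turns
    return chat(system="You are the moderator.", messages=[prompt]).strip()

def play_round(seller_chat, buyer_chat, moderator_chat, max_turns=20):
    dialog = [f"SELLER: {SELLER_OPEN}", f"BUYER: {BUYER_OPEN}"]
    for turn in range(max_turns):
        speaker, backend = (("SELLER", seller_chat) if turn % 2 == 0
                            else ("BUYER", buyer_chat))
        reply = backend(system=f"You are the {speaker.lower()} in a bargaining game.",
                        messages=dialog)
        dialog.append(f"{speaker}: {reply}")
        state = classify_state(dialog, moderator_chat)
        if state in ("DEAL", "NODEAL"):
            return dialog, state
    return dialog, "NODEAL"
```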
To track the game states, we set an additional moderator (powered by a fourth LLM, in our case, gpt-3.5-turbo) to read the current dialog and classify the states (we will discuss more details about the moderator later). We measure the performance of the players based on the final deal price." }, { "figure_ref": [ "fig_2" ], "heading": "Critic", "publication_ref": [], "table_ref": [], "text": "A round is finished when the negotiation reaches a terminating state, either a DEAL or NO DEAL. After each round, the critic LM is asked to provide constructive feedback to the player we aim to improve. This player's dialog history from all past rounds and all feedback it has received are used to prompt the critic LM (Fig. 2B). The critic model is instructed to provide three suggestions to the player, in order to improve its negotiation strategies to achieve a more favorable price in the next game. Before the next round, the player being improved receives the critic's feedback as a textual prompt, while its rival clears its negotiation history and restarts." }, { "figure_ref": [], "heading": "The Moderator", "publication_ref": [], "table_ref": [], "text": "The game state is classified by prompting a gpt-3.5-turbo moderator using few-shot demonstrations. The moderator reads the most recent four rounds (as well as in-context examples of different dialog states) and determines the state of the negotiation. Empirically, we found that four rounds of conversations are sufficient for the moderator to determine the negotiation state.\nOne key challenge here is detecting no-deals as the model seems to be better at recognizing DEAL than NO DEAL. We mitigate this issue by playing multiple runs, inspect failure cases manually, and add them to the prompt with corrected labels. We find this method an effective side product recommend it as a technique for prompt optimization for generic classification tasks." }, { "figure_ref": [], "heading": "Playing for Multiple Rounds", "publication_ref": [], "table_ref": [], "text": "Finally, we would like to explore whether the players can continuously improve from AI feedback in a game over multiple rounds. Intuitively, the more rounds the players play, the more challenging to keep improving because the (already improved) price from the previous round becomes the baseline for the next round. In the experiments, we will show that only gpt-4 can improve over 5 rounds while other models' improvements may saturate at about 3 rounds." }, { "figure_ref": [], "heading": "Related Work Game Playing and AlphaGo Zero", "publication_ref": [ "b30", "b18", "b8", "b5", "b5", "b11", "b29" ], "table_ref": [], "text": "Our setting is strongly inspired by AlphaGo Zero [31] where two agents play the game of Go and improve each other with minimal human intervention. Here we would like to explore its counterpart in natural language. Our work is similar to AlphaGo Zero in the sense that we also have AI agents (large language models) playing competitive games (bargaining) and try to improve with little human supervision. Yet there is an important difference between our work and AlphaGo Zero: we have a third agent, the critic, to give feedback helping its player to improve. This is a cooperative relationship that does not exist in AlphaGo Zero. On the NLP side, the closest related work is Lewis et al. [19] where they have (small) RNN [9] language models to bargain, and our work can be viewed as a more developed version of them since we change the engine to be large language models. 
In general, our work is broadly under the area of AI negotiation [6,6], strategic reasoning [12], and general game playing [30]. " }, { "figure_ref": [], "heading": "Large Language Models as Generative Agents", "publication_ref": [ "b32", "b23", "b33", "b27", "b12", "b14", "b1", "b2", "b25", "b19" ], "table_ref": [], "text": "Large language models have demonstrated incredible multi-dimensional capabilities [33,24], especially in complex reasoning [34,28,13] and multi-round dialog [15,2,3], which serve as the foundation of this work. Our work is related to concurrent works like Generative Agents [26] and CAMEL [20] as they also study the behavior of LLMs in a multi-agent game setting. The core difference between our work and theirs is that we have a clear objective (the deal price) for the model to improve through competition and cooperation, while their work studies the generic social behavior of LLMs." }, { "figure_ref": [], "heading": "Learning from AI Feedback", "publication_ref": [ "b2", "b28", "b26", "b21", "b6", "b22" ], "table_ref": [], "text": "Our method is also strongly inspired by constitutional AI [3] as we both use AI feedback, while the difference is that our feedback is directly in natural language (not a scalar from a reward model). There are also related/ concurrent works demonstrating the effectiveness of natural language feedback [29,27,22] and self-refinement [7,23]. Our work further confirms the effectiveness of AI feedback in the strategic negotiation game setting." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In our experiments, we consider three stages that gradually deepen our exploration of learning from AI feedback: (1) We first set up the basics of the game (Sec. 4.2), showing that only a few models can improve from AI critics, in which case AI feedback can be comparable (but more scalable) as human feedback. Other models either do not understand/ follow the rule of bargaining, or cannot incorporate AI feedback for improvements. (2) Then we study the models' behaviors when playing different roles (Sec. 4.3). We discover the intriguing result that buyers are in general harder to improve than sellers.\n(3) Finally, we study whether models can continuously improve over multiple rounds (Sec. 4.4), and show a tradeoff of deal price versus success rate: although some models can continuously improve the deal price, it comes with a higher risk of breaking a deal. We further show evidence of negotiation in a more strategic way: both gpt-4 and claude-v1.3's responses become longer after multiple rounds of AI feedback (note that verbosity is a straightforward negotiation strategy), yet gpt-4 is less verbose than claude-v1.3 but achieves higher deal price and deal rate, meaning that its responses, although using fewer words, are more strategic and effective." }, { "figure_ref": [], "heading": "Experiment Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model Engines", "publication_ref": [ "b13", "b20", "b15", "b13", "b20" ], "table_ref": [], "text": "The minimum requirement for models to enter our game is that they should be a chatbot. All models we consider (cohere-command, AI21's jurassic-2, OpenAI's gpt and Anthropic's claude) can be accessed by API calls. Amoung them, gpt-4 is the most expensive one and running 500 rounds of negotiation costs about $120 and gpt-3.5-turbo costs about $10. Other models are beta testing (as of May 2023) and do not charge money. 
For reference, the approximate rank of these models, from benchmarks like chain-of-thought hub [14] and HeLM [21], is that gpt-4 and claude-v1.3 are approximately similar, better than gpt-3.5-turbo and claude-instant-v1.0, Table 1: Seller performance using AI feedback vs. randomly selected human feedback from a pre-defined pool. Recall that the buyer is fixed to be gpt-3.5-turbo and has no access to previous rounds. AI's feedback is comparable to human's, but is more scalable, as the two both induce similar price increases.\nGPT-3.5-Turbo Claude-instant-v1.0 Claude-v1. and better than cohere-command and j2-jumbo-instruct. We will consider more models in the future, such as Google's PaLM-2 [16].\nWe let all models compete with gpt-3.5-turbo, effectively making it a baseline for all other models. We will show that, aligning with other concurrent model rankings [14,21], gpt-3.5-turbo is a middle-level powerful engine (worse than gpt-4, better than claude-instant-v1.0). For a given model engine (say claude-v1.3), we run it as the seller (with gpt-3.5-turbo as the buyer) and as a buyer (with gpt-3.5-turbo now as the seller) We first let the models to play one round and manually inspect if they understand the rules of bargaining. If they do, we let them play two rounds to see if they could respond to AI feedback. For the critic model, we set its engine the same as its player. We repeat the game 500 times to compute the average deal price before and after AI feedback.\nIf they do improve one round, we let them play multiple rounds and see if they could continuously improve their strategy. We repeat the game 200 times with 5 max rounds to compute the average deal price for each round. When decoding from the model engines, we use sampling with default temperature (1.0 for gpt and claude, 0.75 for cohere and 0.7 for j2)." }, { "figure_ref": [], "heading": "Prompt Engineering", "publication_ref": [], "table_ref": [], "text": "In this work, we only had to manually optimize the prompts for the moderator because the player may reach/ break a deal with very diverse expressions, and we would like to make sure the moderator correctly recognizes all of them. As mentioned above, we identify the errors made by the moderator in identifying deals and keep adding them as in-context demonstrations until the model reaches a sufficiently high accuracy (about 90+ by manual inspection). For the players and the critic, we do not do prompt engineering and keep the instructions the same for all engines (but the format may be different, e.g., claude requires two linebreaks before \"HUMAN:\" and j2 requires two \"##\" after each dialog round). Code and Prompts will be released publicly on publication." }, { "figure_ref": [], "heading": "Basic Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we first study the minimal requirements for models to participle in our game, namely (1) understanding the rule of bargaining and (2) responding to AI feedback. Then we consider basic comparison between AI and human feedback, showing that AI feedback can be comparable to human feedback, but more scalable." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Conversational ability does not guarantee ability to negotiate or learning from feedback", "publication_ref": [], "table_ref": [], "text": "We study whether conversational models can understand the rule of bargaining by manually checking traces of the dialog, and found that cohere-command fails to understand the rules, as is shown in Fig 3A . 
We observe that it does not realize what price is a better deal. For example, when playing seller, it rejects a proposal of $10 but accepts $8. We also observe that AI21's j2-jumbo-instruct model, although understanding the rule of bargaining, cannot incorporate AI feedback, as is shown in Fig. 3B. Generally, when instructed with AI feedback, the model keeps the same strategy as before, without any improvements.\nAfter ruling out the cohere-command and j2-jumbo-instruct models, we consider the three remaining models: gpt-3.5-turbo, claude-instant-v1.0 and claude-v1.3. For these three engines, we do not observe the problems in Fig. 3. This means that these models can be used for our multi-round games." }, { "figure_ref": [], "heading": "AI Feedback can be comparable to human feedback", "publication_ref": [], "table_ref": [], "text": "Now we consider some initial comparison between AI and human feedback. We emphasize that our goal is not to show which one is bettera similar level of effectiveness would suffice our study (to see if LLMs can continuously improve through self-play and AI feedback). For the human feedback, we manually write done a pool of 10 for weaker agents like claude-instant-v1.0 and gpt-3.5-turbo, improving from AI feedback as the seller is easier than as buyer. For sellers, AI feedback moves the deal distribution to a higher range (rightward), but does not move buyers' deal distribution much. Consequently, the change in average deal price when playing as buyers (-0.2 and -0.5) is clearly smaller than those as sellers (+1.0 and +1.7) C. Stronger agents (claude-v1.3/ gpt-4), can still improve from AI feedback even as buyers, with larger changes in average deal price (-1.2 and -3.0).\nsuggestions. Then we play 500 runs of the game, computing the deal price before and after feedback.\nAfter 500 runs, we compare the improvements after: (1) randomly sampling 3 suggestions from the predefined pool and (2) asking the AI critic to write down 3 suggestions. We note that this may underestimate the performance of human feedback, yet it would be unpractical to ask human to write done 3 suggestions for all 1500 runs (while AI feedback does not have this problem). The results are shown in Table 1 where we see that all three models (gpt-3.5-turbo, claude-instant-v1.0 and claude-v1.3) exhibit comparable improvements over human and AI feedback." }, { "figure_ref": [ "fig_5", "fig_8", "fig_8" ], "heading": "Behaviors of Different LLM Backend", "publication_ref": [], "table_ref": [], "text": "So far we have established that our game setting is valid for stronger LLM engines. Now we consider the detailed behavior comparisons using different engines for different roles. Specifically, we use claude-instant-v1.0, claude-v1.3, gpt-3.5-turbo, and gpt-4 to play the seller/ buyer (against a gpt-3.5-turbo buyer/ seller respectively), then study the deal price distribution before/ after AI feedback (also recall that the AI critic is powered by the same engine as its player).\nThe results are visualized in Fig. 4. When claude-instant-v1.0 and gpt-3.5-turbo play the seller, they are able to improve their average deal price after AI feedback (Fig. 4A). But when they play the buyer role, their average deal price does not improve, which indicates that buyers tend to be a harder role than sellers (Fig. 4B). Yet this observation does not hold for engines like gpt-4 and claude-v1.3, as they can still improve from AI feedback even playing buyers. 
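The comparison above reduces 500 runs to average deal prices and binned price frequencies before and after feedback (as in Fig. 4 and Table 1). A small sketch of that aggregation, assuming each run is logged as a (price_before, price_after) pair with None marking a broken deal:

```python
# Sketch of the before/after-feedback aggregation over repeated runs.
# Assumes each run is logged as (price_before, price_after), None if no deal.
import numpy as np

def summarize(runs, bins=np.arange(10, 21, 1)):
    before = np.array([b for b, _ in runs if b is not None], dtype=float)
    after = np.array([a for _, a in runs if a is not None], dtype=float)
    hist_before, _ = np.histogram(before, bins=bins)
    hist_after, _ = np.histogram(after, bins=bins)
    return {
        "avg_before": before.mean(),
        "avg_after": after.mean(),
        "delta": after.mean() - before.mean(),  # e.g. positive for sellers, negative for buyers
        "deal_rate_after": sum(a is not None for _, a in runs) / len(runs),
        "hist_before": hist_before,
        "hist_after": hist_after,
    }
```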
Overall, this set of experiments reveal the nuanced capability differences between the four engines we consider." }, { "figure_ref": [ "fig_6", "fig_9", "fig_1" ], "heading": "Towards Continuous Improvements from Iterative AI Feedback", "publication_ref": [], "table_ref": [], "text": "Now we unroll the game to multiple rounds and see if models can continuously improve from previous dialog history and iterative AI feedback. Specifically, we let gpt-3.5-turbo, gpt-4, claude-instant-v1.0, and claude-v1.3 play as the seller/ buyer respectively against a rival powered by gpt-3.5-turbo. As mentioned before, the critic shares the same engine as the player it helps with. We play 200 runs of the game, and unroll each game to be 5 rounds. We compute the final deal price and the deal success rate and see if the price can be continuously improved.\nFig. 5 shows gpt-3.5-turbo and claude-instant-v1.0 playing different roles. For a given engine, improvements over one round may not necessarily extrapolate to multiple rounds, as we observe that gpt-3.5-turbo can improve over multiple rounds, but claude-instant-v1.0 only improves at most one round. Now we consider the tradeoff between the tendency of achieving a higher deal price versus the rick of breaking a deal, as is shown in Fig 6 . We see that when playing sellers, all four model engines can improve over at least one round, but this comes at the cost of decreasing deal success ratio. When playing buyers, there are models that cannot improve (claude-instant-v1.0), or saturate over 3 rounds (claude-v1.3), while gpt-4 and gpt-3.5-turbo can continuously improve, and gpt-4 achieves better (lower) deal price and higher deal rate than gpt-3.5-turbo.\nFinally, we study how iterative AI feedback influences the language complexity used by the agents by plotting the average response length (measured in number of characters) after each round, as is shown in Fig. 7. We see that both claude-v1.3 and gpt-4 become more verbose after iterative AI feedback with a continuously increasing response length. This is intuitive because being verbosity is a straightforward strategy in negotiation. Yet for claude-v1.3, the verbosity does not translate to better negotiation strategy, as its improvement saturates after three rounds (Fig. 6B1). In comparison, gpt-4's increase verbosity is more strategic, as it use less words than claude-v1.3, but achieves A2. Yet the deal success ratios continue to decrease over 5 rounds (mostly < 50% in the last round).\nB1. When playing buyer, GPT models are better at improving from AI feedback than Claude models." }, { "figure_ref": [], "heading": "B2.", "publication_ref": [], "table_ref": [], "text": "The deal success ratio continues to decrease, but overall higher (mostly >50%) than criticizing sellers.\nFigure 6: Performance of GPT and Claude models in multi-round games and their success rate of getting a deal. A1 and A2: when playing the seller, most models can improve over multiple rounds. Yet higher prices also mean that it is more likely the seller may break the deal, as shown in the continuously decreasing curve of A2. B1 and B2: when playing buyer, claude-instant-v1.0 cannot improve over multiple rounds while others can. Again, a better buying price also comes with a higher chance of running away from a deal. We see that GPT-4 achieves the best trade-off here: it gets the best price over multiple rounds with a higher success rate of reaching a deal.\nRound 1. How about we meet in the middle at $15?\nRound 2. 
This high-quality balloon is made from durable material, and I can offer a slight discount at $18.\nRound 3. Hi there! I hope you're having a fantastic day. This one-of-a-kind balloon, made from durable material, is priced at $20.\nRound 4: This special balloon can bring an extra touch of joy to any event and create lasting memories, making it a worthwhile purchase at $20.\nRound 5.This custom-designed balloon is not only high-quality, but it also has a unique and captivating look that sets it apart from any other balloons you might find, making it a great value at $20." }, { "figure_ref": [], "heading": "Response length in number of characters", "publication_ref": [], "table_ref": [], "text": "Claude-v1.3 seller GPT-4 seller we show examples of the seller's response when being asked the buyer's initial query \"Would you consider selling it for $10?\" After multiple rounds of negotiation, the seller's responses become more verbose and word-tuned. Yet verbosity does not mean better strategy: claude-v1.3 is more verbose (higher curve) than gpt-4, but it has a worse success rate and deal price (recall Fig. 6). This indicates that gpt-4's verbosity is more strategic.\nbetter deal price and deal success rate (Fig. 6B). This observation serve as strong evidence that AI feedback improves players' response towards a word-tuned, strategic direction." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this work, we study whether multiple large language models can autonomously improve each other in a negotiation game by role-playing and learning from AI feedback. Our experiments show that certain models can indeed improve by continuously playing competition games with iterative AI feedback, under well-defined rules in an AlphaGo Zero fashion. We also show the tradeoff between next-round price improvement and success rate, as a better deal price also comes with a higher risk of deal breaking. This suggests future research may consider global optimization for improving the overall gain over multiple rounds. We further show evidence of improved language from iterative AI feedback: in a multi-round game, one model may be less verbose than another, but be better word-tuned, thus more effective in getting a better deal.\nWe believe our results have profound implications for AI research: on the positive side, it indicates the possibility of continuously improving language models with minimal human intervention. On the risky side, it might be more challenging to oversight the model behavior in our framework because models are acting autonomously, which calls for future alignment and safety research in the multi-agent game setting. Overall, we believe our work provides a initial exploration for large language models' learning from game-playing and iterative AI feedback." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "GPT-3.5-Turbo | Buyer Great, it's a deal then. Thank you!" } ]
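The multi-round procedure in the sections above is purely prompt-based: the player being improved keeps its previous dialog and the critic's three suggestions as in-context demonstrations, while its rival restarts from scratch. A sketch of how such a next-round prompt might be assembled; the instruction wording here is illustrative and does not reproduce the paper's released prompts.

```python
# Sketch of ICL-AIF prompt assembly for the player being improved.
# The instruction wording is illustrative; the paper's actual prompts are not reproduced.
def critic_prompt(role, dialog_transcript):
    return (f"You are a negotiation coach. Read the {role}'s dialog below and give "
            f"three concrete suggestions to reach a better price next round.\n\n"
            + dialog_transcript)

def next_round_context(role, past_rounds, past_feedback):
    """past_rounds: list of dialog transcripts; past_feedback: list of 3-suggestion strings."""
    parts = [f"You are the {role}. Learn from your previous rounds and the coach's feedback."]
    for i, (dialog, feedback) in enumerate(zip(past_rounds, past_feedback), 1):
        parts.append(f"--- Round {i} dialog ---\n{dialog}")
        parts.append(f"--- Feedback after round {i} ---\n{feedback}")
    parts.append("Now start a new round and aim for a better deal price.")
    return "\n\n".join(parts)
```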
We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing. We are interested in this question because if LLMs were able to improve each other, it would imply the possibility of creating strong AI agents with minimal human intervention. We ask two LLMs to negotiate with each other, playing the roles of a buyer and a seller, respectively. They aim to reach a deal with the buyer targeting a lower price and the seller a higher one. A third language model, playing the critic, provides feedback to a player to improve the player's negotiation strategies. We let the two agents play multiple rounds, using previous negotiation history and AI feedback as in-context demonstrations to improve the model's negotiation strategy iteratively. We use different LLMs (GPT and Claude) for different roles and use the deal price as the evaluation metric. Our experiments reveal multiple intriguing findings: (1) Only a subset of the language models we consider can self-play and improve the deal price from AI feedback, weaker models either do not understand the game's rules or cannot incorporate AI feedback for further improvement. (2) Models' abilities to learn from the feedback differ when playing different roles. For example, it is harder for Claude-instant to improve as the buyer than as the seller. (3) When unrolling the game to multiple rounds, stronger agents can consistently improve their performance by meaningfully using previous experiences and iterative AI feedback, yet have a higher risk of breaking the deal. We hope our work provides insightful initial explorations of having models autonomously improve each other with game playing and AI feedback.
Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback
[ { "figure_caption": "Critic: Use anchoring technique: Begin by emphasizing the high starting price and then offer a slightly lower price Seller's Improvement: This high-quality, long-lasting balloon is really worth $25, but I'm offering it for $20. Buyer proposes $15, seller calls $18 Context: B1. The \"flinch\" technique B2. The power of silence B3. Split-the-difference B4. The anchoring technique B. Example feedback from AI critic and how a GPT-4 player improves from it.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure1: Settings of our negotiation game. A. We ask two LLM agents to play a bargaining game as the seller and the buyer. Their goals are to sell/ buy the product at a higher/ lower price. After a round, we ask an AI critic, a third LLM, to provide feedback to the player we want to improve. Then we ask the player to improve their negotiation strategies based on the feedback. We repeat this process for multiple rounds and study if models can continuously improve. See Fig.2for an example run. B. Bargaining techniques that we observed from the AI Critic and how the player incorporates these techniques into the negotiation strategy. C. Abilities that are required in our game (C2 -negotiation, C3 -AI feedback, and C4 -continuous improvements) classify models into different tiers. We find out that only strong and well-aligned models (like gpt-4 and claude-v1.3) can continuously improve from iterative AI feedback (see Fig.3for example models that do not exhibit these abilities).", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An example of playing the negotiation game and then improving from AI feedback. A: claude-instant-v1.0 plays the seller and gpt-3.5-turbo the buyer, bargaining about a balloon.B: then we use a claude-instant-v1.0 critic to provide feedback. C: upon receiving the feedback, the seller improves its strategy based on the suggestions. Note that colored phrases like \"high quality latex and handcrafted by expert artisans\" correspond to previous AI feedback \"how rare and special it is\". We measure the final price as the proxy of the effectiveness of the strategy because the overall goal is to get a better price. In this case, it improves from $16 to $17.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Not all models can play bargaining. A. As of May 2023, the cohere model does not understand the rule of bargaining and agrees on irrational prices. B. The AI21 Jurrasic-2 model, although understanding the rule of bargaining, does not incorporate the feedback from the critic. Since these models are consistently being updated, we include the timestamp and note future versions may have improved performance.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "A1.Claude-instant-v1.0 seller A2. GPT-3.5-Turbo seller A. AI feedback moves the distribution of sellers' deal price towards a higher range B1. Claude-instant-v1.0 buyer B. Buyers are harder to improve than sellers: AI feedback does not quite move buyers' deal price distribution C1. Claude-v1.3 buyer C2. GPT-4 buyer C. Stronger agents (Claude-v1.3 and GPT-4), when playing buyers, can still improve from AI feedback B2. 
GPT-3.5-Turbo buyer", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Binned deal price frequencies of 500 games, before v.s. after feedback. Effective feedback should move the distribution towards a lower/ higher price range. X-axis: intervals of deals from $10 (buyers' initial price) to $20 (sellers' asking price). Y-axis: the frequency of the price. A and B: for weaker agents like claude-instant-v1.0 and gpt-3.5-turbo, improving from AI feedback as the seller is easier than as buyer. For sellers, AI feedback moves the deal distribution to a higher range (rightward), but does not move buyers' deal distribution much. Consequently, the change in average deal price when playing as buyers (-0.2 and -0.5) is clearly smaller than those as sellers (+1.0 and +1.7) C. Stronger agents (claude-v1.3/ gpt-4), can still improve from AI feedback even as buyers, with larger changes in average deal price (-1.2 and -3.0).", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "5 -5Turbo seller continuously improves from AI feedback A2. GPT-3.5-Turbo buyer continuously improves from AI feedback B1. Claude-instant-v1.0 seller only improves 1 round from AI feedback B2. Claude-instant-v1.0 buyer does not improve from AI feedback", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: In the multi-round setting, different engines have different behavior when playing seller/ buyer. Line plots are the average price over 200 runs and bar plots represent the price distribution. A1 v.s. B1. When playing sellers, gpt-3.5-turbo can improve from AI feedback in multiple rounds, while claude-instant-v1.0 only improves the first round. A2 v.s. B2. When playing buyers, gpt-3.5-turbo can improve in multiple rounds, whild claude-instant-v1.0 cannot.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "4 A1.4When playing seller, most models can improve at least one round.", "figure_data": "", "figure_id": "fig_8", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The average response length increases as the model learns from multiple rounds. Herewe show examples of the seller's response when being asked the buyer's initial query \"Would you consider selling it for $10?\" After multiple rounds of negotiation, the seller's responses become more verbose and word-tuned. Yet verbosity does not mean better strategy: claude-v1.3 is more verbose (higher curve) than gpt-4, but it has a worse success rate and deal price (recall Fig.6). This indicates that gpt-4's verbosity is more strategic.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "3 ", "figure_data": "Before feedback16.2614.7415.40Random sampled human feedback16.83 (+0.57)16.33 (+1.59)16.89 (+1.49)AI feedback17.03 (+0.77)15.98 (+1.24)16.98 (+1.58)", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
Yao Fu; Hao Peng; Tushar Khot; Mirella Lapata
[ { "authors": "Ekin Akyürek; Dale Schuurmans; Jacob Andreas; Tengyu Ma; Denny Zhou", "journal": "", "ref_id": "b0", "title": "What learning algorithm is in-context learning? investigations with linear models", "year": "2022" }, { "authors": "Amanda Askell; Yuntao Bai; Anna Chen; Dawn Drain; Deep Ganguli; Tom Henighan; Andy Jones; Nicholas Joseph; Ben Mann; Nova Dassarma", "journal": "", "ref_id": "b1", "title": "A general language assistant as a laboratory for alignment", "year": "2021" }, { "authors": "Yuntao Bai; Saurav Kadavath; Sandipan Kundu; Amanda Askell; Jackson Kernion; Andy Jones; Anna Chen; Anna Goldie; Azalia Mirhoseini; Cameron Mckinnon", "journal": "", "ref_id": "b2", "title": "Constitutional ai: Harmlessness from ai feedback", "year": "2022" }, { "authors": "Jeeyoon Samuel R Bowman; Ethan Hyun; Edwin Perez; Craig Chen; Scott Pettit; Kamile Heiner; Amanda Lukosuite; Andy Askell; Anna Jones; Chen", "journal": "", "ref_id": "b3", "title": "Measuring progress on scalable oversight for large language models", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b4", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Kushal Chawla; Jaysa Ramirez; Rene Clever; Gale Lucas; Jonathan May; Jonathan Gratch", "journal": "", "ref_id": "b5", "title": "Casino: A corpus of campsite negotiation dialogues for automatic negotiation systems", "year": "2021" }, { "authors": "Xinyun Chen; Maxwell Lin; Nathanael Schärli; Denny Zhou", "journal": "", "ref_id": "b6", "title": "Teaching large language models to self-debug", "year": "2023" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b7", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Junyoung Chung; Caglar Gulcehre; Kyunghyun Cho; Yoshua Bengio", "journal": "", "ref_id": "b8", "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "year": "2014" }, { "authors": "", "journal": "Cohere website", "ref_id": "b9", "title": "Cohere. Cohere command models", "year": "2023" }, { "authors": "Damai Dai; Yutao Sun; Li Dong; Yaru Hao; Zhifang Sui; Furu Wei", "journal": "", "ref_id": "b10", "title": "Why can gpt learn in-context? 
language models secretly perform gradient descent as meta optimizers", "year": "2022" }, { "authors": "Anton Fair) †; Noam Bakhtin; Emily Brown; Gabriele Dinan; Colin Farina; Daniel Flaherty; Andrew Fried; Jonathan Goff; Hengyuan Gray; Hu", "journal": "Science", "ref_id": "b11", "title": "Human-level play in the game of diplomacy by combining language models with strategic reasoning", "year": "2022" }, { "authors": "Yao Fu; Hao Peng; Ashish Sabharwal; Peter Clark; Tushar Khot", "journal": "", "ref_id": "b12", "title": "Complexity-based prompting for multi-step reasoning", "year": "2022" }, { "authors": "Yao Fu; Litu Ou; Mingyu Chen; Yuhao Wan", "journal": "Github", "ref_id": "b13", "title": "Measuring llms' reasoning performance", "year": "2023" }, { "authors": "Amelia Glaese; Nat Mcaleese; Maja Trębacz; John Aslanides; Vlad Firoiu; Timo Ewalds; Maribeth Rauh; Laura Weidinger; Martin Chadwick; Phoebe Thacker", "journal": "", "ref_id": "b14", "title": "Improving alignment of dialogue agents via targeted human judgements", "year": "2022" }, { "authors": " ", "journal": "", "ref_id": "b15", "title": "Palm 2 technical report", "year": "2023" }, { "authors": "Jordan Hoffmann; Sebastian Borgeaud; Arthur Mensch; Elena Buchatskaya; Trevor Cai; Eliza Rutherford; Diego De Las; Lisa Anne Casas; Johannes Hendricks; Aidan Welbl; Clark", "journal": "", "ref_id": "b16", "title": "Training compute-optimal large language models", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b17", "title": "Announcing jurassic-2 and task-specific apis", "year": "2023" }, { "authors": "Mike Lewis; Denis Yarats; Devi Yann N Dauphin; Dhruv Parikh; Batra", "journal": "", "ref_id": "b18", "title": "Deal or no deal? end-to-end learning for negotiation dialogues", "year": "2017" }, { "authors": "Guohao Li; Hasan Abed; Al Kader Hammoud; Hani Itani; Dmitrii Khizbullin; Bernard Ghanem", "journal": "", "ref_id": "b19", "title": "Camel: Communicative agents for\" mind\" exploration of large scale language model society", "year": "2023" }, { "authors": "Percy Liang; Rishi Bommasani; Tony Lee; Dimitris Tsipras; Dilara Soylu; Michihiro Yasunaga; Yian Zhang; Deepak Narayanan; Yuhuai Wu; Ananya Kumar", "journal": "", "ref_id": "b20", "title": "Holistic evaluation of language models", "year": "2022" }, { "authors": "Hao Liu; Carmelo Sferrazza; Pieter Abbeel", "journal": "", "ref_id": "b21", "title": "Languages are rewards: Hindsight finetuning using human feedback", "year": "2023" }, { "authors": "Aman Madaan; Niket Tandon; Prakhar Gupta; Skyler Hallinan; Luyu Gao; Sarah Wiegreffe; Uri Alon; Nouha Dziri; Shrimai Prabhumoye; Yiming Yang", "journal": "", "ref_id": "b22", "title": "Self-refine: Iterative refinement with self-feedback", "year": "2023" }, { "authors": " Openai", "journal": "", "ref_id": "b23", "title": "", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b24", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Sung Joon; Park; C O' Joseph; Carrie J Brien; Meredith Ringel Cai; Percy Morris; Michael S Liang; Bernstein", "journal": "", "ref_id": "b25", "title": "Generative agents: Interactive simulacra of human behavior", "year": "2023" }, { "authors": "Ethan Perez; Sam Ringer; Kamilė Lukošiūtė; Karina Nguyen; Edwin Chen; Scott Heiner; Craig Pettit; Catherine 
Olsson; Sandipan Kundu; Saurav Kadavath", "journal": "", "ref_id": "b26", "title": "Discovering language model behaviors with model-written evaluations", "year": "2022" }, { "authors": "Shuofei Qiao; Yixin Ou; Ningyu Zhang; Xiang Chen; Yunzhi Yao; Shumin Deng; Chuanqi Tan; Fei Huang; Huajun Chen", "journal": "", "ref_id": "b27", "title": "Reasoning with language model prompting: A survey", "year": "2022" }, { "authors": "Jérémy Scheurer; Jon Ander Campos; Jun Shern Chan; Angelica Chen; Kyunghyun Cho; Ethan Perez", "journal": "", "ref_id": "b28", "title": "Training language models with natural language feedback", "year": "2022" }, { "authors": "David Silver; Aja Huang; Chris J Maddison; Arthur Guez; Laurent Sifre; George Van Den; Julian Driessche; Ioannis Schrittwieser; Veda Antonoglou; Marc Panneershelvam; Lanctot", "journal": "nature", "ref_id": "b29", "title": "Mastering the game of go with deep neural networks and tree search", "year": "2016" }, { "authors": "David Silver; Julian Schrittwieser; Karen Simonyan; Ioannis Antonoglou; Aja Huang; Arthur Guez; Thomas Hubert; Lucas Baker; Matthew Lai; Adrian Bolton", "journal": "nature", "ref_id": "b30", "title": "Mastering the game of go without human knowledge", "year": "2017" }, { "authors": "Eyvind Johannes Von Oswald; Ettore Niklasson; João Randazzo; Alexander Sacramento; Andrey Mordvintsev; Max Zhmoginov; Vladymyrov", "journal": "", "ref_id": "b31", "title": "Transformers learn in-context by gradient descent", "year": "2022" }, { "authors": "Jason Wei; Yi Tay; Rishi Bommasani; Colin Raffel; Barret Zoph; Sebastian Borgeaud; Dani Yogatama; Maarten Bosma; Denny Zhou; Donald Metzler", "journal": "", "ref_id": "b32", "title": "Emergent abilities of large language models", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b33", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" } ]
[]
10.18653/v1/N19-1423
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "When task-oriented dialog (TOD) systems try to accomplish a task such as restaurant reservations and weather reporting for human users, they generally resort to an external knowledge base (KB) to retrieve relevant entity information for generating an informative system response. Conventional pipeline systems comprise several modules such as dialogue state tracking and dialogue policy learning that require annotations for training, where intermediate predictions such as belief state can be used for the retrieval. By contrast, end-to-end task-oriented dialog (E2E-TOD) systems aim to eliminate the dependence on intermediate annotations and generate" }, { "figure_ref": [], "heading": "&RQGHQVHG", "publication_ref": [ "b26", "b14", "b17", "b19", "b28", "b27", "b23", "b13", "b7", "b2", "b25", "b1" ], "table_ref": [], "text": ",QGRPDLQ &URVVGRPDLQ (QWLW\\)\n&'1(7 )*6HT ((5 ')1HW\nFigure 1: Performance of four end-to-end task-oriented dialog systems on MultiWOZ 2.1 when knowledge bases of different sizes are used. The evaluation metric is Entity F1 scores of entities in generated responses. \"Condensed\" means that each dialog is associated with a small-sized knowledge base, which is the default setting of many current systems. \"In-domain\" means that each dialog corresponds to a knowledge base of the same domain, while \"Cross-domain\" means that all dialogs share the same large-scale cross-domain knowledge base provided in the dataset.\nthe response end-to-end (Wu et al., 2019). Apparently, knowledge retrieval is at the core of this task, which is non-trivial as no gold labels are available for training a retriever. Arguably, this problem has limited the performance of existing E2E-TOD systems considering that substantial progress has been made in natural language generation.\nRoughly, existing approaches for knowledge retrieval in E2E-TOD systems can be divided into three categories. First, the knowledge base can be embedded into a memory network and queried with the representations of dialogue context (Madotto et al., 2018;Qin et al., 2020;Raghu et al., 2021). Second, the serialized knowledge base records can be encoded together with dialog context by pretrained language models (Xie et al., 2022;Wu et al., 2022;Tian et al., 2022). Third, the knowledge base can be embedded into model parameters through data augmentation to support im-plicit knowledge retrieval (Madotto et al., 2020;Huang et al., 2022). These approaches generally blend knowledge retrieval and response generation and train them by the supervision of reference responses, which has two limitations. First, the system response usually consists of pure language tokens and KB-related tokens (e.g., hotel names and phone numbers), and it is challenging to train a good retriever from the weak supervision of reference responses. Second, the systems may become inefficient when the scale of the knowledge base grows large. Our preliminary study 2 in Figure 1 confirms that when a large-scale cross-domain knowledge base is given, existing dialog systems suffer significant performance degradation.\nIn this paper, we propose a novel Multi-grAined KnowlEdge Retriever (MAKER) for E2E TOD systems to improve the acquisition of knowledge for response generation. 
The retriever decouples knowledge retrieval from response generation and introduces an entity selector and an attribute selector to select relevant entities and attributes from the knowledge base. Then, the response generator generates a system response based on the dialogue context and the multi-grained retrieval results. The retriever is trained by distilling knowledge from the response generator using the cross-attention scores of KB-related tokens in the response. We train the entity selector, attribute selector, and response generator jointly in an end-to-end manner.\nWe compare our system with other E2E TOD systems on three benchmark datasets (Eric et al., 2017;Wen et al., 2017;Eric et al., 2020). Empirical results show that our system achieves state-of-the-art performance when either a small or a large-scale knowledge base is used. Through in-depth analysis, we have several findings to report. First, our retriever shows great advantages over baselines when the size of knowledge bases grows large. Second, of the two selectors, the entity selector plays a more important role in the retriever. Third, our system consistently outperforms baselines as different numbers of records are retrieved, and works well even with a small number of retrieval results." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b14", "b16", "b26", "b19", "b28", "b20", "b23", "b13", "b7" ], "table_ref": [], "text": "2.1 End-to-End Task-Oriented Dialog\nExisting approaches for knowledge retrieval in end-to-end task-oriented dialog systems can be divided into three categories. First, the knowledge base (KB) is encoded with memory networks, and KB records are selected using attention weights between dialogue context and memory cells. Mem2seq (Madotto et al., 2018) uses multi-hop attention over memory cells to select KB tokens during response generation. KB-Retriever (Qin et al., 2019) retrieves the most relevant entity from the KB by means of attention scores to improve entity consistency in the system response. GLMP (Wu et al., 2019) introduces a global-to-local memory pointer network to retrieve relevant triplets to fill in the sketch response. CD-NET (Raghu et al., 2021) retrieves relevant KB records by computing a distillation distribution based on dialog context.\nSecond, the concatenation of knowledge base and dialogue context is taken as input for pretrained language models. UnifiedSKG (Xie et al., 2022) uses a unified text-to-text framework to generate system responses. DialoKG (Rony et al., 2022) models the structural information of the knowledge base through knowledge graph embedding and performs knowledge attention masking to select relevant triples. Q-TOD (Tian et al., 2022) proposes to rewrite the dialogue context into a natural language query for knowledge retrieval.\nThird, the knowledge base is stored in model parameters for implicit retrieval during response generation. GPT-KE (Madotto et al., 2020) proposes to embed the knowledge base into pretrained model parameters through data augmentation. ECO (Huang et al., 2022) first generates the most relevant entity with a trie constraint to ensure entity consistency in the response. However, these methods generally blend entity retrieval and response generation, which leads to sub-optimal retrieval performance when large-scale knowledge bases are provided."
}, { "figure_ref": [], "heading": "Neural Retriever", "publication_ref": [ "b30", "b10", "b29", "b8", "b4", "b11", "b22", "b21" ], "table_ref": [], "text": "With the success of deep neural networks in various NLP tasks, they have also been applied to information retrieval. One of the mainstream approaches is to employ a dual-encoder architecture (Yih et al., 2011) to build a retriever. Our work is mostly inspired by the retrieval methods in question answering. To train a retriever with labeled questiondocument pairs, DPR (Karpukhin et al., 2020) uses in-batch documents corresponding to other questions together with BM25-retrieved documents as Figure 2: The overview of our end-to-end task-oriented dialog system, which consists of a knowledge retriever and a response generator. The retriever is further divided into an entity selector and an attribute selector to retrieve multi-grained knowledge, and optimized by distilling knowledge from the response generator.\nnegative samples for contrastive learning. To train a retriever with only question-answer pairs instead of question-document pairs, which is a weakly supervised learning problem, researchers propose to distill knowledge from the answer generator to train the retriever iteratively (Yang and Seo, 2020;Izacard and Grave, 2020). Other researchers try to train the retriever and generator in an end-to-end manner. REALM (Guu et al., 2020), RAG (Lewis et al., 2020), and EMDR 2 (Singh et al., 2021) propose to train the retriever end-to-end through maximum marginal likelihood. Sachan et al. (2021) propose to combine unsupervised pre-training and supervised fine-tuning to train the retriever. Motivated by these works, we propose a multi-grained knowledge retriever trained by distilling knowledge from the response generator in E2E-TOD systems." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "In this section, we first describe the notations and outline our method, and then introduce the knowledge retriever and response generator in detail." }, { "figure_ref": [], "heading": "Notations", "publication_ref": [], "table_ref": [], "text": "Given a dialog D = {U 1 , R 1 , ..., U T , R T } of T turns, where U t and R t are the t-th turn user utterance and system response, respectively. We use C t to represent the dialog context of the t-th turn, where C t = {U 1 , R 1 , ..., U t-1 , R t-1 , U t }. An external knowledge base (KB) is provided in the form of a set of entities, i.e., K = {E 1 , E 2 , ..., E B }, where each entity E i is composed of N attributevalue pairs, i.e., E i = {a 1 , v 1 i , ..., a N , v N i }. Endto-end task-oriented dialog systems take dialogue context C t and knowledge base K as input and generate an informative response R t ." }, { "figure_ref": [], "heading": "System Overview", "publication_ref": [], "table_ref": [], "text": "The architecture of our end-to-end task-oriented dialog system is shown in Figure 2. At each turn of conversation, our system resorts to a Multi-grAined KnowlEdge Retriever (MAKER) to retrieve a set of entities from the external knowledge base. Then, the response generator takes as input the retrieved entities together with the dialog context and generates a natural language response. 
The overall system is optimized in an end-to-end manner without the need for intermediate annotations.\nThe novelty of MAKER lies in that it decouples knowledge retrieval from response generation and provides multi-grained knowledge retrieval by means of an entity selector and an attribute selector. Specifically, the knowledge base is first encoded with an entity encoder Enc_e at the entity level. Then, the dialogue context is encoded with a context encoder Enc_c and used to retrieve a set of relevant entities from the knowledge base, which is referred to as entity selection. Next, irrelevant attributes are filtered out with an attribute selector based on the interaction of dialog context and retrieved entities, where another encoder Enc_a is used. Finally, each retrieved entity is concatenated with the dialog context and passed to a generator encoder Enc_g to obtain their representations, based on which the generator decoder Dec_g produces a system response. To train the retriever, the cross-attention scores from KB-related tokens in the reference response to each retrieved entity are used as supervision signals to update the entity selector, while the attribute selector is trained by using the occurrences of attribute values in the dialogue as pseudo-labels.\nTo better measure the relationship between entities and response, the whole training process involves two stages. First, the warming-up stage only trains the attribute selector and the response generator, with the entity selector not updated. As the above training converges, the second stage starts to update the entity selector together with the other modules using cross-attention scores from the response generator." }, { "figure_ref": [], "heading": "Knowledge Retriever", "publication_ref": [ "b9", "b16", "b22" ], "table_ref": [], "text": "In this section, we introduce the entity selector, attribute selector, and the training of the retriever.\nEntity Selector To support large-scale knowledge retrieval, we model the entity selector as a dual-encoder architecture, where one encoder Enc_c is used to encode the dialogue context and another encoder Enc_e is used to encode each entity (row) of the knowledge base, both into a dense vector. To encode an entity, we concatenate the attribute-value pairs of this entity into a sequence and pass it to Enc_e. The selection score s_{t,i} for entity E_i is defined as the dot product between the context vector and the entity vector:\ns_{t,i} = Enc_c(C_t)^T Enc_e(E_i).  (1)\nThen, the top-K entities are obtained by:\nE_t = TopK(s_{t,i}) = {E_1, ..., E_K}.  (2)\nRetrieving the top-K entities can be formulated as maximum inner product search (MIPS), which can be accelerated to sub-linear time using efficient similarity search libraries such as FAISS (Johnson et al., 2019). We implement Enc_c and Enc_e with a pre-trained language model and allow them to share weights, where the final \"[CLS]\" token representation is used as the encoder output. Existing studies suggest that initializing Enc_c and Enc_e with BERT weights may lead to collapsed representations and harm the retrieval performance. Therefore, following KB-retriever (Qin et al., 2019), we initialize them by pre-training with distant supervision (more pre-training details are given in Appendix C). Since the entity selector is updated by knowledge distillation, recalculating the embeddings of all entities after each update introduces considerable computational cost. 
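To make the entity selection step concrete, the following minimal Python sketch illustrates Eqs. (1)-(2): a shared encoder maps the dialogue context and each serialized entity to dense vectors, entities are scored by dot product, and the top-K entities are kept. The toy hash-based encoder, the example knowledge base, and all values are illustrative stand-ins for the BERT-based Enc_c/Enc_e and a real KB rather than the actual implementation; for large knowledge bases the same search would typically be served by a MIPS index such as FAISS.

import numpy as np

EMB_DIM = 64

def encode(text: str) -> np.ndarray:
    # Toy stand-in for the shared BERT encoder ([CLS] vector): a deterministic
    # pseudo-embedding derived from the string, for illustration only.
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.standard_normal(EMB_DIM)

def serialize_entity(entity: dict) -> str:
    # An entity (one KB row) is flattened into an "attribute value ..." sequence.
    return " ".join(f"{a} {v}" for a, v in entity.items())

def top_k_entities(context: str, kb: list, k: int = 2):
    q = encode(context)                                                # Enc_c(C_t)
    entity_vecs = np.stack([encode(serialize_entity(e)) for e in kb])  # Enc_e(E_i)
    scores = entity_vecs @ q                                           # s_{t,i}, Eq. (1)
    top = np.argsort(-scores)[:k]                                      # TopK, Eq. (2)
    return [(kb[i], float(scores[i])) for i in top]

toy_kb = [
    {"name": "da vinci pizzeria", "area": "north", "food": "italian"},
    {"name": "golden wok", "area": "north", "food": "chinese"},
    {"name": "pizza hut", "area": "south", "food": "italian"},
]
for entity, score in top_k_entities("i want an italian restaurant in the north", toy_kb):
    print(round(score, 3), entity["name"])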
To limit the cost of recomputing entity embeddings, we follow EMDR^2 (Singh et al., 2021) and update the embeddings of all entities only after every 100 training steps.\nAttribute Selector To remove irrelevant attributes and values from the retrieved entities for finer-grained knowledge, we design an attribute selector as follows. We first concatenate dialog context C_t with each entity E_i ∈ E_t and encode them with an attribute encoder Enc_a, which is also a pre-trained language model. Then, the final \"[CLS]\" token representation of Enc_a is extracted and mapped into an N-dimensional vector by a feed-forward network (FFN) for attribute scoring:\na_{t,i} = FFN(Enc_a([C_t; E_i])),  (3)\nwhere each element in a_{t,i} ∈ R^N represents the importance of the corresponding attribute. Note that a_{t,i} only measures the importance of attributes in E_i. To obtain the accumulated importance, we calculate the sum of a_{t,i} over all retrieved entities weighted by the entity selection score s_{t,i}:\na_t = σ(∑_{i=1}^{K} s_{t,i} a_{t,i}),  (4)\nwhere σ represents the sigmoid function.\nFinally, the attributes whose importance scores in a_t are greater than a pre-defined threshold τ are selected to construct an attribute subset. The retrieved entities clipped with these attributes are treated as multi-grained retrieval results denoted by Ê_t. Specifically, we obtain Ê_t by masking irrelevant attribute-value pairs in each retrieved entity of E_t:\nÊ_t = Clip(E_t, a_t, τ) = {Ê_1, ..., Ê_K}.  (5)\nTo train the attribute selector, we design an auxiliary multi-label classification task. The pseudo-label is an N-dimensional 0-1 vector b_t constructed by checking whether any value of an attribute in Ê_t appears in dialogue context C_t or system response R_t. Then, we define a binary cross-entropy loss L_att for this classification task as:\nL_att = BCELoss(a_t, b_t).  (6)\nUpdating The entity selector is updated by distilling knowledge from the response generator as supervision signals. Specifically, since only KB-related tokens in the response are directly connected to the knowledge base, we regard the cross-attention scores from these tokens to each retrieved entity as the knowledge to distill. The rationale behind this is that the cross-attention scores can usually measure the relevance between each entity and the response. Supposing response R_t contains M KB-related tokens, we denote the cross-attention scores from each KB-related token to entity Ê_i by C_{t,i} ∈ R^{|Ê_i| × M × L}, where |Ê_i| represents the number of tokens in Ê_i and L is the number of decoder layers. Then, we calculate an accumulated score for entity Ê_i as:\nĉ_{t,i} = ∑_{j=1}^{|Ê_i|} ∑_{m=1}^{M} ∑_{l=1}^{L} C_{t,i,j,m,l}.  (7)\nThen, ĉ_{t,i} is softmax-normalized to obtain a cross-attention distribution c_t over the K retrieved entities to reflect their importance for the response. 
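The accumulation in Eq. (7) and the subsequent normalisation can be pictured with the small numerical sketch below; the tensor shapes follow the notation above, while the random attention values and the choices of K, M, and L are purely illustrative. The resulting distribution c_t serves as the teacher signal for the selection scores s_t in the KL objective defined next.

import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

rng = np.random.default_rng(0)
K, M, L = 4, 3, 6                    # retrieved entities, KB-related response tokens, decoder layers
entity_lens = [5, 7, 4, 6]           # |E_i|: number of tokens in each serialized entity

# One cross-attention block per retrieved entity, shape (|E_i|, M, L); random values here.
cross_attn = [rng.random((n, M, L)) for n in entity_lens]

c_hat = np.array([block.sum() for block in cross_attn])   # Eq. (7): accumulate per entity
c_t = softmax(c_hat)                                      # teacher distribution over the K entities

s_t = softmax(rng.standard_normal(K))                     # current entity-selection distribution
kl = float(np.sum(s_t * np.log(s_t / c_t)))               # D_KL(s_t || c_t), cf. Eq. (8) below
print(c_t.round(3), "KL:", round(kl, 4))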
The entity selector is then trained with the KL-divergence between its selection scores s_t over the retrieved entities and the cross-attention distribution c_t:\nL_ent = D_KL(s_t || c_t).  (8)" }, { "figure_ref": [], "heading": "Response Generator", "publication_ref": [ "b8" ], "table_ref": [], "text": "Inspired by Fusion-in-Decoder (Izacard and Grave, 2020) in open-domain question answering, we employ a modified sequence-to-sequence structure for the response generator to facilitate direct interaction between dialog context and retrieved entities.\nGenerator Encoder Each entity Ê_i in Ê_t is first concatenated with dialog context C_t and encoded into a sequence of vector representations H_{t,i}:\nH_{t,i} = Enc_g([C_t; Ê_i]),  (9)\nwhere Enc_g represents the encoder of the response generator. Then, the representations of all retrieved entities are concatenated into H_t:\nH_t = [H_{t,1}; ...; H_{t,K}].  (10)\nGenerator Decoder Taking H_t as input, the generator decoder Dec_g produces the system response token by token. During this process, the decoder not only attends to the previously generated tokens through self-attention but also attends to the dialogue context and retrieved entities by cross-attention, which facilitates the generation of an informative response. The probability distribution for each response token in R_t is defined as:\nP(R_{t,i}) = Dec_g(R_{t,i} | R_{t,<i}, H_t).  (11)\nWe train the response generator with the standard cross-entropy loss:\nL_gen = ∑_{i=1}^{|R_t|} -log P(R_{t,i}),  (12)\nwhere |R_t| denotes the length of R_t.\nLastly, the overall loss of the system is the sum of the entity selection loss L_ent, the attribute selection loss L_att, and the response generation loss L_gen:\nL = L_ent + L_att + L_gen.  (13)" }, { "figure_ref": [], "heading": "Discussions", "publication_ref": [ "b8" ], "table_ref": [], "text": "Although deriving much inspiration from open-domain question answering (QA) (Izacard and Grave, 2020), where the labels for retrieval are also not available, the scenario of this work is quite different. One major difference is that the answer in open-domain QA comes entirely from the external source of knowledge, while some responses and tokens in dialog systems may not be relevant to the external knowledge base. That means dialog systems need to accommodate both dialog context and external knowledge and generate a fluent and informative natural language response, making this task thornier than open-domain QA. The main differences between our MAKER and existing knowledge retrieval methods in task-oriented dialog systems are twofold. First, MAKER decouples knowledge retrieval from response generation and provides multi-grained knowledge retrieval of both entities and attributes. The retrieval results are explicitly passed to the generator to produce a system response. Second, MAKER is trained by distilling knowledge from the response generator for supervision, which differs from existing attention-based approaches." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b1", "b2", "b25", "b19", "b15", "b2" ], "table_ref": [], "text": "We evaluate our system on three multi-turn task-oriented dialogue datasets: MultiWOZ 2.1 (MWOZ) (Eric et al., 2020), Stanford Multi-Domain (SMD) (Eric et al., 2017), and CamRest (Wen et al., 2017). 
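Before turning to the details of these datasets, the shape-level sketch below illustrates the Fusion-in-Decoder-style generator of Section 3.4: each retrieved entity is paired with the dialogue context and encoded separately (Eq. 9), the encodings are concatenated along the sequence axis (Eq. 10), and the decoder cross-attends over the fused memory while generating (Eq. 11). Random tensors and dimensions stand in for the actual T5 encoder/decoder states; this is an illustration of the data flow, not the implementation.

import numpy as np

rng = np.random.default_rng(1)
K, ctx_len, ent_len, hidden = 3, 40, 12, 8    # illustrative sizes

# H_{t,i} = Enc_g([C_t; E_i]): one encoding per (context, entity) pair, Eq. (9).
H_per_entity = [rng.standard_normal((ctx_len + ent_len, hidden)) for _ in range(K)]
H_t = np.concatenate(H_per_entity, axis=0)    # Eq. (10): fused memory of length K*(ctx_len+ent_len)

# One decoder step: scaled dot-product cross-attention of a query over the fused memory.
query = rng.standard_normal(hidden)
weights = np.exp(H_t @ query / np.sqrt(hidden))
weights /= weights.sum()
context_vector = weights @ H_t                # what Dec_g conditions on at this step, cf. Eq. (11)
print("fused memory:", H_t.shape, "context vector:", context_vector.shape)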
Each dialog in these datasets is associated with a condensed knowledge base, which contains all the entities that meet the user goal of this dialog. For MWOZ, each condensed knowledge base contains 7 entities. For SMD and CamRest, the size of condensed knowledge bases is not fixed: it ranges from 0 to 8 with a mean of 5.95 for SMD and from 0 to 57 with a mean of 1.93 for CamRest. We follow the same partitions as previous work (Raghu et al., 2021). The statistics of these datasets are shown in Appendix A.\nBLEU (Papineni et al., 2002) and Entity F1 (Eric et al., 2017) are used as the evaluation metrics. BLEU measures the fluency of a generated response based on its n-gram overlaps with the gold response. Entity F1 measures whether the generated response contains correct knowledge by micro-averaging the precision and recall scores of attribute values in the generated response." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b0", "b18", "b12" ], "table_ref": [], "text": "We employ BERT (Devlin et al., 2019) as the encoder of our entity selector and attribute selector, and employ T5 (Raffel et al., 2020) to implement the response generator. All these models are finetuned using AdamW optimizer (Loshchilov and Hutter, 2018) with a batch size of 64. We train these models for 15k gradient steps with a linear decay learning rate of 10 -4 . We conduct all experiments on a single 24G NVIDIA RTX 3090 GPU and select the best checkpoint based on model performance on the validation set. More detailed settings can be found in Appendix E." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b24", "b16", "b26", "b17", "b19", "b27", "b28", "b23", "b13", "b7" ], "table_ref": [], "text": "We compare our system with the following baselines, which are organized into three categories according to how they model knowledge retrieval.\nMemory network: These approaches embed the knowledge base into a memory network and query it with the representation of dialog context, including DSR (Wen et al., 2018), KB-Retriever (Qin et al., 2019), GLMP (Wu et al., 2019), DF-Net (Qin et al., 2020), EER (He et al., 2020b), FG2Seq (He et al., 2020a), CD-NET (Raghu et al., 2021), and GraphMemDialog (Wu et al., 2022).\nDirect fusion: These approaches encode serialized knowledge base records together with dialog context by pre-trained language models, including DialoKG (Rony et al., 2022), UnifiedSKG (Xie et al., 2022), and Q-TOD (Tian et al., 2022).\nImplicit retrieval: These approaches embed the knowledge base into model parameters by data augmentation to provide implicit retrieval during response generation, including GPT-2+KE (Madotto et al., 2020) and ECO (Huang et al., 2022)." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "In this section, we first show the overall performance of the evaluated systems given a condensed knowledge base for each dialog. Then, we compare them with a more practical setting in which a largescale knowledge base is provided. We also conduct an in-depth analysis of the proposed retriever. More experiments are presented in the appendix." }, { "figure_ref": [], "heading": "Overall Results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "The overall results are shown in Table 1. We observe that our system with T5-Large as the backbone model achieves the state-of-the-art (SOTA) performance on MWOZ and SMD. 
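As a reference point for the numbers that follow, the snippet below gives one plausible reading of the micro-averaged Entity F1 metric described in the Datasets section: true positives, false positives, and false negatives of attribute values are accumulated over all turns before computing precision, recall, and F1. Actual implementations differ in string normalisation and in which values count as entities, and the example values below are invented.

def micro_entity_f1(examples):
    # examples: list of (predicted_values, gold_values) pairs, one per dialogue turn.
    tp = fp = fn = 0
    for pred, gold in examples:
        pred, gold = set(pred), set(gold)
        tp += len(pred & gold)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

turns = [
    (["da vinci pizzeria", "north"], ["da vinci pizzeria", "north", "italian"]),
    (["01223 351707"], ["01223 351707"]),
]
print(round(micro_entity_f1(turns), 4))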
On MWOZ, specifically, our system surpasses the previous SOTA, namely Q-TOD, by 1.15 points in BLEU and 4.11 points in Entity F1. On SMD, the improvements over Q-TOD are 4.58 points in BLEU and 0.19 points in Entity F1. On CamRest, our system only achieves the best performance in BLEU and slightly underperforms the best-performing DialoKG in Entity F1. The reason behind this phenomenon is that many dialogues in CamRest contain extremely small knowledge bases, with only 1-2 entities, leaving little space for our retriever to show its advantage.\nNote that with the same backbone generator (T5-Base/T5-Large), our system surpasses Q-TOD even though the latter relies on human annotations to train a query generator for knowledge retrieval. The possible reason is that while the retriever of Q-TOD is independent of response generation, ours is trained and guided by knowledge distillation from response generation. Moreover, in addition to retrieving entities from the knowledge base, our retriever also conducts a fine-grained attribute selection. (In Table 1, †, ‡, §, and * indicate results cited from (Qin et al., 2019), (Qin et al., 2020), (Raghu et al., 2021), and (Tian et al., 2022), respectively.)" }, { "figure_ref": [], "heading": "Large-Scale Knowledge Base", "publication_ref": [ "b16", "b17", "b19", "b23" ], "table_ref": [ "tab_2", "tab_0", "tab_2" ], "text": "The experiments in Section 5.1 are conducted with each dialog corresponding to a condensed knowledge base. Although most previous systems are evaluated in this setting, it is not practical to have such knowledge bases in real scenes, where the systems may need to retrieve knowledge from a large-scale knowledge base. Therefore, we examine the performance of several well-recognized E2E TOD systems by implementing them on a large-scale cross-domain knowledge base (referred to as \"full knowledge base\") on MWOZ and CamRest, respectively, where the knowledge base is constructed by gathering the entities for all dialogs in the original dataset. The results are shown in Table 2. We observe that our system outperforms baselines by a large margin when the full knowledge base is used. In addition, there are two other observations. First, comparing the results in Table 1 and Table 2, we note that existing systems suffer a severe performance deterioration when the full knowledge base is used. For example, the Entity F1 score of DF-Net drops by 7.79 points on MWOZ, while our system only drops by 2.81/2.6 points. Second, our system with the full knowledge base still outperforms other systems even when they use condensed knowledge bases, which are much easier to retrieve from. These observations verify the superiority of our system when applied to a large-scale knowledge base as well as the feasibility of applying it to real scenes.\nTable 3: Results of the ablation study on MWOZ with T5-Base, where \"w/o\" means without, \"distillation\" denotes distillation from response generation, \"attr_selector\" denotes the attribute selector, and \"ent_selector\" denotes the entity selector." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "We conduct an ablation study of our retriever MAKER with both condensed and full knowledge bases on MWOZ, and show the results in the first and the second blocks of Table 3, respectively.\nWhen condensed knowledge bases are used, the system suffers obvious performance drops with the removal of distillation (w/o distillation) or entity selection (w/o ent_selector). 
This indicates that despite the quality of condensed knowledge bases, our retriever can further learn to distinguish between the entities by distilling knowledge from the response generator. Besides, the performance of the system drops when the attribute selector is abandoned (w/o attr_selector), showing that attribute selection is also indispensable in the retriever.\nWhen the full knowledge base is used, entity selection is more necessary for the system. Therefore, we only ablate the distillation component and the attribute selector. The results show that the system suffers significant performance degradation when distillation is removed (w/o distillation). Attribute selection is also shown important as the performance drops upon it is removed (w/o attr_selector)." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Comparison of Retrieval Methods", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "To further demonstrate the effectiveness of our multi-grained knowledge retriever, we compare different retrieval methods on the full knowledge base of MWOZ. Specifically, we first retrieve the top-K entities with different retrieval methods and employ the same response generator to generate the system response. Moreover, we propose a new metric, i.e., Recall@7, to measure whether the suggested entities in the system response appear in the 7 retrieved entities. As shown in Table 4, the proposed retriever achieves the best performance compared with other methods except Oracle, which uses condensed knowledge bases without retrieval, in both generation metrics (BLEU, Entity F1) and the retrieval metric (Recall@7).\nTo investigate the effect of different numbers of retrieved entities on system performance, we report the Entity F1 and Recall@x scores of the above retrieval methods as the number of entities changes, while Oracle is not included because we cannot rank its entities. We observe in Figure 3(a) that the Recall@x scores for all methods improve as the number of entities grows, while our retriever consistently achieves the best performance. In Figure 3(b), we observe no positive correlation between the Entity F1 score and the number of entities, suggesting that noisy entities may be introduced as the number of entities increases. We can also observe that the number of entities corresponding to the peak of the Entity F1 scores varies for different methods, while our retriever only requires a small number of entities to reach the peak performance." }, { "figure_ref": [], "heading": "Attribute Selection Methods", "publication_ref": [], "table_ref": [], "text": "In Section 3.3, we calculate an accumulated importance score for each attribute weighted by entity selection scores to determine which attributes are preserved based on a given threshold. In Table 5, we compare different methods for accumulating the attribute scores as well as different approaches for filtering out irrelevant attributes. It can be observed that direct averaging rather than weighting by entity selection scores hurts the Entity F1 score. This indicates that the retriever can select attributes more appropriately based on the selection scores of retrieved entities. We also observe that using top-K instead of a threshold to select attributes leads to a lower Entity F1 score than preserving all attributes. We believe the reason is that the number of attributes to be selected varies for each dialogue context, and therefore simply selecting the top-K attributes results in sub-optimal attributes." 
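The sketch below contrasts the attribute-selection variants compared in Table 5 under invented scores: the weighted accumulation of Eq. (4) followed by the threshold-based clipping of Eq. (5), versus an unweighted average and a fixed top-K over attributes. The attribute names, scores, and threshold are illustrative, not values from the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

attributes = ["name", "phone", "area", "postcode"]      # N = 4, names invented
s_t = np.array([2.1, 0.7, -0.3])                        # entity-selection scores, K = 3
a_ti = np.array([[1.2, -0.5, 0.8, -1.0],                # per-entity attribute logits a_{t,i}, K x N
                 [0.9, -0.8, 0.2, -0.7],
                 [0.1,  0.4, -0.2, 0.3]])
tau = 0.5

weighted = sigmoid(s_t @ a_ti)                          # Eq. (4): weight by entity-selection scores
averaged = sigmoid(a_ti.mean(axis=0))                   # ablation: plain unweighted average

keep_by_threshold = [a for a, p in zip(attributes, weighted) if p > tau]   # Eq. (5), threshold tau
keep_top2 = [attributes[i] for i in np.argsort(-weighted)[:2]]             # fixed top-K alternative
print(keep_by_threshold, keep_top2, averaged.round(2))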
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose a novel multi-grained knowledge retriever (MAKER) for end-to-end task-oriented dialog systems. It decouples knowledge retrieval from response generation and introduces an entity selector and an attribute selector to acquire multigrained knowledge from the knowledge base. The retriever is trained by distilling knowledge from the response generator. Empirical results show that our system achieves state-of-the-art performance when either a small or a large-scale knowledge base is provided for each dialog. Through in-depth analysis, our retriever shows great advantages over baselines when the size of knowledge bases grows large. Of the two selectors, the entity selector is shown to be more prominent in the retriever." }, { "figure_ref": [], "heading": "D Domain-Wise Results", "publication_ref": [ "b19", "b20", "b23" ], "table_ref": [ "tab_7", "tab_8" ], "text": "We report the domain-wise results with condensed knowledge bases on MWOZ and SMD in Table 9 and Table 10, respectively. The results of baseline models are cited from (Raghu et al., 2021), (Rony et al., 2022), and (Tian et al., 2022)." }, { "figure_ref": [], "heading": "E More Implementation Details", "publication_ref": [], "table_ref": [ "tab_0", "tab_2", "tab_0", "tab_5" ], "text": "The hyperparameters of our system with condensed and full knowledge bases are shown in Table 11 and Table 12, respectively. Our method has three contributions: knowledge distillation, entity selection, and attribute selection. We list the application of these contributions with condensed and full knowledge base in Table 13 and Table 14, respectively." }, { "figure_ref": [ "fig_2" ], "heading": "F Case Study", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "In Figure 4, we provide a dialogue example from the MWOZ dataset. We can observe that, for a given user utterance, our system can retrieve entities that satisfy the user goal, while masking irrelevant attributes. Then, it generates appropriate system responses. Note that when the user goal changes, e.g., in the second turn of this case when the user wants a cheap restaurant, our retriever can retrieve the corresponding one, with the attribute of price range being preserved. Hyperparameters MWOZ SMD CamRest T5-Base T5-Large T5-Base T5-Large T5-Base T5-Large Table 14: Hyperparameter settings of whether to apply each contribution to our system when the full knowledge base is used on MWOZ and CamRest. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Natural Science Foundation of China (No.62176270), the Guangdong Basic and Applied Basic Research Foundation (No.2023A1515012832), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2017ZT07X355), and the Tencent AI Lab Rhino-Bird Focused Research Program. We thank Yingqi Gao and Canbin Huang for their efforts in the preliminary experiments." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Our system employs a modified sequence-tosequence architecture to implement the response generator. Since the length of dialogue context increases as the dialogue continues, the generator needs to input multiple long dialogue contexts to the encoder simultaneously, each for a retrieved entity. This may cause redundancy in the input and lowers the proportion of KB-related information. 
We will explore more efficient architectures for the response generator in future work." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "All the experiments are conducted on publicly available datasets, which don't include any private information. Our work doesn't involve identity characteristics or any gender and racial discrimination." }, { "figure_ref": [], "heading": "A Statistics of Datasets", "publication_ref": [], "table_ref": [], "text": "The statistics of the datasets are shown in Table 6." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b1", "b2", "b25" ], "table_ref": [], "text": "Table 6: Statistics of the datasets.\nDataset | Domains | # Dialogues (Train/Val/Test)\nMWOZ (Eric et al., 2020) | Restaurant, Attraction, Hotel | 1839/117/141\nSMD (Eric et al., 2017) | Navigate, Weather, Schedule | 2425/302/304\nCamRest (Wen et al., 2017) | Restaurant | 406/135/135" }, { "figure_ref": [], "heading": "B Preliminary Study", "publication_ref": [ "b19" ], "table_ref": [], "text": "The detailed results of our preliminary study for condensed, in-domain, and cross-domain knowledge bases are shown in Table 7. The results of baseline models on condensed knowledge bases are cited from (Raghu et al., 2021). We produce their results on in-domain and cross-domain knowledge bases by using the officially released code." }, { "figure_ref": [], "heading": "C Pre-training for Entity Selector", "publication_ref": [ "b3" ], "table_ref": [], "text": "Given a dialogue context and the system response, we use the entity with the most occurrences of its attribute values in the dialogue context and system response as the label. Then we apply supervised contrastive learning for optimization (Gao et al., 2021). Specifically, the positive example of a dialogue context is the corresponding labeled entity.\nTable 12: Hyperparameter settings of our system when the full knowledge base is used on MWOZ and CamRest.\nTable 13: Hyperparameter settings of whether to apply each contribution to our system when condensed knowledge bases are used on the MWOZ, SMD, and CamRest datasets." } ]
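A minimal sketch of the distant-supervision labelling described in Appendix C above: the entity whose attribute values occur most often in the concatenation of dialogue context and system response is taken as the positive example for contrastive pre-training of the entity selector. The contrastive objective itself (SimCSE-style training following Gao et al., 2021) is omitted, and the toy KB and utterances are invented.

def label_entity(context: str, response: str, kb: list) -> int:
    # Count how often each entity's attribute values appear in the dialogue text.
    text = f"{context} {response}".lower()
    counts = [sum(text.count(str(v).lower()) for v in entity.values()) for entity in kb]
    return max(range(len(kb)), key=counts.__getitem__)   # index of the pseudo-positive entity

toy_kb = [
    {"name": "da vinci pizzeria", "area": "north", "food": "italian"},
    {"name": "golden wok", "area": "north", "food": "chinese"},
]
print(label_entity("i want italian food in the north",
                   "da vinci pizzeria is in the north and serves italian food", toy_kb))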
Retrieving proper domain knowledge from an external database lies at the heart of endto-end task-oriented dialog systems to generate informative responses. Most existing systems blend knowledge retrieval with response generation and optimize them with direct supervision from reference responses, leading to suboptimal retrieval performance when the knowledge base becomes large-scale. To address this, we propose to decouple knowledge retrieval from response generation and introduce a multi-grained knowledge retriever (MAKER) that includes an entity selector to search for relevant entities and an attribute selector to filter out irrelevant attributes. To train the retriever, we propose a novel distillation objective that derives supervision signals from the response generator. Experiments conducted on three standard benchmarks with both small and large-scale knowledge bases demonstrate that our retriever performs knowledge retrieval more effectively than existing methods. Our code has been made publicly available.
Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog
[ { "figure_caption": "Figure 3 :3Figure 3: Performance of different retrieval methods as the number of retrieved entities changes on the full knowledge base in Recall (a) and Entity F1 (b) scores.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "for a restaurant. The restaurant should be in the north and should serve italian food.Da vinci pizzeria at 20 milton road chesterton. Da vinci pizzeria is located at 20 milton road chesterton.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: An example of dialogue to illustrate our system. Blue font refers to knowledge base-related information.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Overall results of E2E TOD systems with condensed knowledge bases on MWOZ, SMD, and CamRest. The best scores are highlighted in bold, and the second-best scores are underlined. †, ‡, §, * indicates that the results are cited from", "figure_data": "ModelMWOZ BLEU Entity F1 BLEU Entity F1 BLEU Entity F1 SMD CamRestDSR (Wen et al., 2018)9.10 ‡30.00 ‡12.70 †51.90 †18.30 †53.60 †KB-Retriever (Qin et al., 2019)--13.9053.7018.5058.60GLMP (Wu et al., 2019)6.90 ‡32.40 ‡13.90 ‡60.70 ‡15.10 §58.90 §DF-Net (Qin et al., 2020)9.4035.1014.4062.70--GPT-2+KE (Madotto et al., 2020)15.0539.5817.3559.7818.0054.85EER (He et al., 2020b)13.60 §35.60 §17.20 §59.00 §19.20 §65.70 §FG2Seq (He et al., 2020a)14.60 §36.50 §16.80 §61.10 §20.20 §66.40 §CDNET (Raghu et al., 2021)11.9038.7017.8062.9021.8068.60GraphMemDialog (Wu et al., 2022)14.9040.2018.8064.5022.3064.40ECO (Huang et al., 2022)12.6140.87--18.4271.56DialoKG (Rony et al., 2022)12.6043.5020.0065.9023.4075.60UnifiedSKG (T5-Base) (Xie et al., 2022)--17.4166.45--UnifiedSKG (T5-Large) (Xie et al., 2022) 13.69 *46.04 *17.2765.8520.31 *71.03 *Q-TOD (T5-Base) (Tian et al., 2022)--20.1468.22--Q-TOD (T5-Large) (Tian et al., 2022)17.6250.6121.3371.1123.7574.22Ours (T5-Base)17.2353.6824.7969.7925.0473.09Ours (T5-Large)18.7754.7225.9171.3025.5374.36", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Overall results of E2E TOD systems with a large-scale knowledge base on MWOZ and CamRest, respectively. The best scores are highlighted in bold, and the second-best scores are underlined.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of different retrieval methods on the full knowledge base. Oracle refers to using the condensed knowledge base for each dialog as the retrieval result. Frequency means measuring the relevance by the frequency of attribute values occurring in the dialogue context. 
BM25 measures the relevance using the BM25 score between dialogue context and each entity.", "figure_data": "Retrieval Method BLEU Entity F1 Recall@7Oracle16.1751.45100.00MAKER17.1849.0586.47Pre-training16.6748.7782.71Frequency16.6048.0075.94BM2516.2145.5626.320$.(55HFDOO3UHWUDLQLQJ )UHTXHQF\\ %01XPEHURIUHWULHYHGHQWLWLHV", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Domain-wise performance on MWOZ.", "figure_data": "ModelBLEU Entity F1 Hotel Attraction RestaurantDSR9.1030.0027.1028.0033.40GLMP6.9032.4028.1024.4038.40DF-Net9.4035.1030.6028.1040.90GPT-2+KE15.0039.6033.4043.3037.10EER13.6035.6035.7043.0034.20FG2Seq14.6036.5034.4037.2038.90CDNET11.9038.7036.3038.9041.70GraphMemDialog14.9040.2036.4048.8042.80DialoKG12.6043.5037.9039.8046.70Q-TOD (T5-Large) 17.6250.6145.2554.8155.78Ours (T5-Large)18.7754.7246.9765.0862.12ModelBLEU Entity F1 Schedule Weather NavigateDSR12.7051.9052.1050.4052.00GLMP13.9059.6070.2058.0054.30DF-Net14.4062.7073.1057.6057.90GPT-2+KE17.4059.8072.6057.7053.50EER17.2059.0071.8057.8052.50FG2Seq16.8061.1073.3057.4056.10CDNET17.8062.9075.4061.3056.70GraphMemDialog18.8064.5075.9062.3056.30DialoKG20.0065.9077.9072.7058.40Q-TOD (T5-Large) 21.3371.1181.4269.1862.91Ours (T5-Large)25.9171.3078.5672.6962.15", "figure_id": "tab_7", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Domain-wise performance on SMD.", "figure_data": "", "figure_id": "tab_8", "figure_label": "10", "figure_type": "table" } ]
Fanqi Wan; Weizhou Shen; Ke Yang; Xiaojun Quan; Wei Bi
[ { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Mihail Eric; Rahul Goel; Shachi Paul; Abhishek Sethi; Sanchit Agarwal; Shuyang Gao; Adarsh Kumar; Anuj Goyal; Peter Ku; Dilek Hakkani-Tur", "journal": "European Language Resources Association", "ref_id": "b1", "title": "MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines", "year": "2020" }, { "authors": "Mihail Eric; Lakshmi Krishnan; Francois Charette; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Key-value retrieval networks for task-oriented dialogue", "year": "2017" }, { "authors": "Tianyu Gao; Xingcheng Yao; Danqi Chen", "journal": "", "ref_id": "b3", "title": "Simcse: Simple contrastive learning of sentence embeddings", "year": "2021" }, { "authors": "Kelvin Guu; Kenton Lee; Zora Tung; Panupong Pasupat; Mingwei Chang", "journal": "PMLR", "ref_id": "b4", "title": "Retrieval augmented language model pre-training", "year": "2020" }, { "authors": "Zhenhao He; Yuhong He; Qingyao Wu; Jian Chen", "journal": "IEEE", "ref_id": "b5", "title": "a. Fg2seq: Effectively encoding knowledge for end-to-end task-oriented dialog", "year": "2020" }, { "authors": "Zhenhao He; Jiachun Wang; Jian Chen", "journal": "", "ref_id": "b6", "title": "Task-oriented dialog generation with enhanced entity representation", "year": "2020" }, { "authors": "Guanhuan Huang; Xiaojun Quan; Qifan Wang", "journal": "", "ref_id": "b7", "title": "Autoregressive entity generation for end-toend task-oriented dialog", "year": "2022" }, { "authors": "Gautier Izacard; Edouard Grave", "journal": "", "ref_id": "b8", "title": "Distilling knowledge from reader to retriever for question answering", "year": "2020" }, { "authors": "Jeff Johnson; Matthijs Douze; Hervé Jégou", "journal": "IEEE Transactions on Big Data", "ref_id": "b9", "title": "Billion-scale similarity search with gpus", "year": "2019" }, { "authors": "Vladimir Karpukhin; Barlas Oguz; Sewon Min; Patrick Lewis; Ledell Wu; Sergey Edunov; Danqi Chen; Wen-Tau Yih", "journal": "", "ref_id": "b10", "title": "Dense passage retrieval for open-domain question answering", "year": "2020" }, { "authors": "Patrick Lewis; Ethan Perez; Aleksandra Piktus; Fabio Petroni; Vladimir Karpukhin; Naman Goyal; Heinrich Küttler; Mike Lewis; Wen-Tau Yih; Tim Rocktäschel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b11", "title": "Retrieval-augmented generation for knowledge-intensive nlp tasks", "year": "2020" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b12", "title": "Decoupled weight decay regularization", "year": "2018" }, { "authors": "Andrea Madotto; Samuel Cahyawijaya; Genta Indra Winata; Yan Xu; Zihan Liu; Zhaojiang Lin; Pascale Fung", "journal": "", "ref_id": "b13", "title": "Learning knowledge bases with parameters for task-oriented dialogue systems", "year": "2020" }, { "authors": "Andrea Madotto; Chien-Sheng Wu; Pascale Fung", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems", "year": "2018" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b15", "title": 
"Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Libo Qin; Yijia Liu; Wanxiang Che; Haoyang Wen; Yangming Li; Ting Liu", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Entity-consistent end-to-end task-oriented dialogue system with KB retriever", "year": "2019" }, { "authors": "Libo Qin; Xiao Xu; Wanxiang Che; Yue Zhang; Ting Liu", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Dynamic fusion network for multidomain end-to-end task-oriented dialog", "year": "2020" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b18", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Dinesh Raghu; Atishya Jain; Sachindra Joshi", "journal": "", "ref_id": "b19", "title": "Constraint based knowledge base distillation in end-to-end task oriented dialogs", "year": "2021" }, { "authors": "Rashad Al Md; Ricardo Hasan Rony; Jens Usbeck; Lehmann", "journal": "", "ref_id": "b20", "title": "Dialokg: Knowledge-structure aware task-oriented dialogue generation", "year": "2022" }, { "authors": "Devendra Sachan; Mostofa Patwary; Mohammad Shoeybi; Neel Kant; Wei Ping; Bryan William L Hamilton; Catanzaro", "journal": "", "ref_id": "b21", "title": "End-to-end training of neural retrievers for open-domain question answering", "year": "2021" }, { "authors": "Devendra Singh; Siva Reddy; Will Hamilton; Chris Dyer; Dani Yogatama", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "End-to-end training of multi-document reader and retriever for opendomain question answering", "year": "2021" }, { "authors": "Xin Tian; Yingzhan Lin; Mengfei Song; Siqi Bao; Fan Wang; Huang He; Shuqi Sun; Hua Wu", "journal": "", "ref_id": "b23", "title": "Qtod: A query-driven task-oriented dialogue system", "year": "2022" }, { "authors": "Haoyang Wen; Yijia Liu; Wanxiang Che; Libo Qin; Ting Liu", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Sequence-to-sequence learning for task-oriented dialogue with dialogue state representation", "year": "2018" }, { "authors": "Tsung-Hsien Wen; David Vandyke; Nikola Mrkšić; Milica Gašić; Lina M Rojas-Barahona; Pei-Hao Su; Stefan Ultes; Steve Young", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "A networkbased end-to-end trainable task-oriented dialogue system", "year": "2017" }, { "authors": "Chien-Sheng Wu; Richard Socher; Caiming Xiong", "journal": "", "ref_id": "b26", "title": "Global-to-local memory pointer networks for task-oriented dialogue", "year": "2019" }, { "authors": "Jie Wu; Ian G Harris; Hongzhi Zhao", "journal": "", "ref_id": "b27", "title": "Graphmemdialog: Optimizing end-to-end task-oriented dialog systems using graph memory networks", "year": "2022" }, { "authors": "Tianbao Xie; Chen Henry Wu; Peng Shi; Ruiqi Zhong; Torsten Scholak; Michihiro Yasunaga; Chien-Sheng Wu; Ming Zhong; Pengcheng Yin; I Sida; Wang", "journal": "", "ref_id": "b28", "title": "Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models", "year": "2022" }, { "authors": "Sohee Yang; Minjoon Seo", "journal": "", "ref_id": "b29", "title": "Is retriever merely an approximator of reader?", "year": "2020" }, { "authors": "Wen-Tau 
Yih; Kristina Toutanova; John C Platt; Christopher Meek", "journal": "", "ref_id": "b30", "title": "Learning discriminative projections for text similarity measures", "year": "2011" } ]
[ { "formula_coordinates": [ 4, 119.82, 449.4, 120.36, 13.13 ], "formula_id": "formula_0", "formula_text": "s t,i = Enc c (C t ) T Enc e (E i )." }, { "formula_coordinates": [ 4, 105.35, 501.78, 183.79, 10.68 ], "formula_id": "formula_1", "formula_text": "E t = TopK(s t,i ) = {E 1 , ..., E K }.(2)" }, { "formula_coordinates": [ 4, 352.73, 247.78, 171.68, 10.67 ], "formula_id": "formula_2", "formula_text": "a t,i = FFN(Enc a ([C t ; E i ])),(3)" }, { "formula_coordinates": [ 4, 370.92, 363.52, 153.49, 33.71 ], "formula_id": "formula_3", "formula_text": "a t = σ( K i=1 s t,i a t,i ),(4)" }, { "formula_coordinates": [ 4, 336.07, 525.04, 188.34, 13.39 ], "formula_id": "formula_4", "formula_text": "Êt = Clip(E t , a t , τ ) = { Ê1 , ..., ÊK }. (5)" }, { "formula_coordinates": [ 4, 359.5, 657.85, 160.67, 10.67 ], "formula_id": "formula_5", "formula_text": "L att = BCELoss(a t , b t ). (6" }, { "formula_coordinates": [ 4, 520.17, 658.23, 4.24, 9.46 ], "formula_id": "formula_6", "formula_text": ")" }, { "formula_coordinates": [ 5, 117.84, 207.38, 171.3, 37.02 ], "formula_id": "formula_7", "formula_text": "ĉt,i = | Êi | j=1 M m=1 L l=1 C t,i,j,m,l .(7)" }, { "formula_coordinates": [ 5, 133.49, 353.42, 155.64, 10.69 ], "formula_id": "formula_8", "formula_text": "L ent = D KL (s t ||c t ).(8)" }, { "formula_coordinates": [ 5, 129.93, 517.52, 159.21, 13.39 ], "formula_id": "formula_9", "formula_text": "H t,i = Enc g ([C t ; Êi ]),(9)" }, { "formula_coordinates": [ 5, 130.41, 599.67, 154.18, 10.72 ], "formula_id": "formula_10", "formula_text": "H t = [H t,1 ; ...; H t,K ]. (10" }, { "formula_coordinates": [ 5, 284.59, 600.05, 4.54, 9.46 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 5, 107.75, 763.54, 181.39, 10.67 ], "formula_id": "formula_12", "formula_text": "P (R t,i ) = Dec g (R t,i |R t,<i , H t ).(11)" }, { "formula_coordinates": [ 5, 358.75, 111.61, 165.66, 34.74 ], "formula_id": "formula_13", "formula_text": "L gen = |Rt| i=1 -logP (R t,i ),(12)" }, { "formula_coordinates": [ 5, 359.3, 224.41, 111.96, 10.63 ], "formula_id": "formula_14", "formula_text": "L = L ent + L att + L gen ." } ]
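The first two formulas above (formula_0 and formula_1) describe a dual-encoder retrieval step: the context encoding is scored against each candidate encoding by a dot product, and the K highest-scoring candidates are retained. A minimal PyTorch sketch of just that step follows; the encoder architectures, the dimensions, and the reading of C_t as a dialogue context and E_i as knowledge-base entries are assumptions made for illustration and are not taken from the source.

import torch
import torch.nn as nn

class DualEncoderRetriever(nn.Module):
    # Illustrative stand-in for Enc_c / Enc_e in formulas (1)-(2):
    # s_{t,i} = Enc_c(C_t)^T Enc_e(E_i),  E_t = TopK(s_{t,i}).
    def __init__(self, ctx_dim=128, ent_dim=64, hidden=256):
        super().__init__()
        self.context_encoder = nn.Linear(ctx_dim, hidden)   # assumed form of Enc_c
        self.entity_encoder = nn.Linear(ent_dim, hidden)    # assumed form of Enc_e

    def forward(self, context, entities, k=4):
        # context: (ctx_dim,), entities: (num_entities, ent_dim)
        c = self.context_encoder(context)             # encoded context
        e = self.entity_encoder(entities)             # encoded candidates
        scores = e @ c                                # s_{t,i}: one score per candidate
        top = torch.topk(scores, k=min(k, entities.shape[0]))
        return scores, top.indices                    # retrieved set E_t as indices

retriever = DualEncoderRetriever()
scores, retrieved = retriever(torch.randn(128), torch.randn(10, 64), k=3)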
10.3390/e24111542
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b20", "b10", "b30", "b9", "b29", "b32", "b25", "b34", "b16", "b33", "b21", "b28", "b5", "b3", "b28", "b2", "b14", "b4", "b26", "b24", "b7", "b23", "b1", "b6", "b8", "b0", "b4", "b26", "b25", "b32", "b4" ], "table_ref": [], "text": "Semantic categories vary across languages, and it has been proposed that this variation can be explained by functional pressure for efficiency. On this view, systems of categories are under pressure to be both simple and informative (e.g. Rosch, 1978), and different languages arrive at different ways of solving this problem, yielding wide yet constrained crosslanguage variation. There is evidence for this view from semantic domains such as kinship (Kemp & Regier, 2012), container names (Y. Xu et al., 2016), names for seasons (Kemp et al., 2019), and numeral systems (Y. Xu et al., 2020). Zaslavsky et al. (2018) gave this proposal a firm theoretical foundation by grounding it in an independent informationtheoretic principle of efficiency, the Information Bottleneck (IB) principle (Tishby et al., 1999); they also showed that color naming systems across languages are efficient in the IB sense, that optimally IB-efficient systems resemble those found in human languages, and that the IB principle accounts for important aspects of the data that had eluded earlier explanations. Subsequent work has shown that container naming (Zaslavsky et al., 2019), grammatical categories of number, tense, and evidentiality (Mollica et al., 2021), and person systems (Zaslavsky et al., 2021) are also efficient in the IB sense.\nIn a commentary on this line of research, Levinson (2012) asked how semantic systems evolve to become efficient, and suggested that an important role may be played by iterated learning (IL; e.g. Scott-Phillips & Kirby, 2010). In IL, a cultural convention is learned by one generation of agents, who then provide training data from which the next generation learns, and so on. The convention changes as it passes through generations, yielding a cultural evolutionary process.\nThe idea that such a process could eventually lead to efficient semantic systems has since been explored and broadly supported. J. Xu et al. (2013) showed that chains of human learners who were originally given a randomly generated color category system eventually produced systems that were similar to those of the World Color Survey (WCS; Cook et al., 2005), a large dataset of color naming systems from 110 unwritten languages. Although this study did not explicitly address efficiency, Carstensen et al. (2015) drew that link explicitly: they reanalzyed the data of J. Xu et al. (2013) and showed that the color naming systems produced by IL not only became more similar to those of human languages -they also became more informative; the same paper also presented analogous findings for semantic systems of spatial relations. In response, Carr et al. (2020) argued, on the basis of a Bayesian model of IL and experiments with human participants, that learning actually contributes simplicity rather than informativeness. Overall, there is support for the idea that IL can lead to efficient semantic systems, with continuing debate over how and why. There are also recent proposals that non-iterated learning -e.g. in the context of a dyad of communicating agents (e.g. Kågebäck et al., 2020;Chaabouni et al., 2021;Tucker et al., 2022), or in a single agent without communication (e.g. 
Steinert-Threlkeld & Szymanik, 2020;Gyevnar et al., 2022) -can explain efficient color naming systems. These recent contributions build on an important line of earlier work using agent-based simulations cast as evolutionary models, without explicitly addressing efficiency (e.g. Steels & Belpaeme, 2005;Belpaeme & Bleys, 2005;Dowman, 2007;Jameson & Komarova, 2009;Baronchelli et al., 2010).\nSeveral of these prior studies have engaged efficiency in the IB sense, and two are of particular relevance to our own work. Chaabouni et al. (2021) showed that a dyad of neural network agents, trained to discriminate colors via communication, eventually arrived at color naming systems that were highly efficient in the IB sense. However, these systems did not always resemble those of human languages: their categories \"depart to some extent from those typically defined by human color naming\" (Chaabouni et al., 2021, p. 11 of SI). Tucker et al. (2022) explored a similar color communication game, and found that their neural agents gravitated to color naming systems that are both essentially optimally efficient in the IB sense, and similar to human color naming systems from the WCS. They achieved this by optimizing an Colored regions indicate category extensions, and the color code used for each category is the mean of that category in CIELAB color space. The named color categories are distributions, and for each category we highlight the level sets between 0.75 -1.0 (unfaded area) and 0.3 -0.75 (faded area). The middle and right columns contain randomly-generated systems of complexity comparable to that of the WCS system in the same row. The middle column shows random systems that are similar to the WCS system in the same row. The right column shows random systems that are dissimilar to the WCS system in the same row; at the same time, there is no other WCS system that is more similar to this random system. objective function that is based on the IB objective. To our knowledge, earlier work leaves open whether both high IB efficiency and similarity to human languages can be achieved by other means. We explore that question here.\nIn what follows, we first demonstrate that there exist many possible color naming systems that are highly efficient in the IB sense, but do not closely resemble human systems. The existence of such efficient-yet-not-human-like systems is not surprising given that IB is a non-convex optimization problem (Tishby et al., 1999;Zaslavsky et al., 2018), but it may be helpful in understanding how Chaabouni et al. (2021) achieved high IB efficiency with systems that deviate from human ones. We then show that IL, instantiated in communicating neural networks, gravitates toward efficiency and, within the class of efficient systems, gravitates more toward human color naming systems than toward others. Finally, we show that iterated learning alone, and communication alone, do not yield that outcome as clearly. We conclude that iterated learning and communication jointly provide a plausible explanation of how human color naming systems become efficient." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0", "fig_1" ], "heading": "Not all efficient systems are human-like", "publication_ref": [ "b31", "b32", "b32", "b26" ], "table_ref": [], "text": "We considered a class of artificial color naming systems related to one considered by Zaslavsky et al. (2022). 
In the class we consider, each named category w is modeled as a spherical Gaussian-shaped kernel with mean (prototype) x w in 3-dimensional CIELAB color space, such that the distribution over words w given a color chip c is:\nS(w|c) ∝ e -η||x c -x w || 2 2 (1)\nwhere η > 0 is a parameter controlling the precision of the Gaussian kernel. We then generated artificial color category systems with K = 3 . . . 10 categories each, by first sampling η randomly from a uniform distribution over the interval [0.001, 0.005] and then sampling the prototype x w of each category w randomly, without replacement, from a uniform distribution over the cells of the color naming grid shown at the top of Figure 1. In analyzing these systems, we drew on the following three quantities from the IB framework as presented by Zaslavsky et al. (2018): the complexity of a category system, gNID (a measure of dissimilarity between two category systems), and ε (a measure of inefficiency, or deviation from the theoretical limit of efficiency). We noted that the range of complexity (in the IB sense) for systems in the World Color Survey (WCS) was [0.84, 2.65], and also noted that our random model sometimes generated systems outside this range; we only considered artificial systems with complexity within this range, and generated 100 such systems for each K; we refer to these systems as RM, for random model.\nThe lower panels of Figure 1 show that some of these RM systems are similar to, and others quite dissimilar to, natural systems in the WCS. In each row, the rightmost system, which is dissimilar to the WCS system in that row, is nonetheless more similar to that WCS system than to any other WCS system, meaning that it is dissimilar to all WCS systems. Thus, there exist RM systems that are quite dissimilar to naturally occurring systems. To quantify this pattern, we separated the RM systems into two groups, based on whether their gNID to the closest WCS system exceeded a threshold. We set this threshold to the smallest gNID between systems in the left (WCS) and right (RM dissimilar) columns of Figure 1, which is 0.29. We then grouped all RM systems with gNID to the closest WCS system below this threshold into one group, RM s (for similar to WCS), and the other RM systems into another group, RM d (for dissimilar to WCS). 38% of the RM systems fell in RM d and they spanned the complexity range [0.86, 2.26]. Thus, a substantial proportion of the RM systems are at least as dissimilar to WCS systems as are those in the right column of Figure 1.\nFigure 2 shows the results of an IB efficiency analysis of the WCS systems (replicating Zaslavsky et al., 2018, and assuming their least-informative prior), and also of our RM systems. It can be seen that all RM systems are highly efficient in the IB sense -i.e. they are close to the IB curve that defines the theoretical limit of efficiency in this domain. Mann-Whitney U tests revealed (1) that the RM systems tend to exhibit greater efficiency (lower inefficiency ε) than do the WCS systems in the same complexity range (P .001), and (2) that the RM d systems, which are dissimilar to WCS systems, are also more efficient than WCS systems (P .001, one- sided), and slightly to marginally more efficient than RM s systems (P = .019 one-sided; Bonferroni corrections do not change the qualitative outcome). These findings suggest that there is a substantial number of color naming systems that are dissimilar to those of human languages, yet more efficient than them. 
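The random model just described is easy to reproduce; the NumPy sketch below generates one RM system over a grid of CIELAB coordinates following Eq. (1). The IB-complexity filter and the gNID-based split into RM_s and RM_d rely on the quantities of Zaslavsky et al. (2018) and are only indicated here through placeholder names (ib_complexity, gnid), which are assumptions for illustration rather than implementations; the stand-in grid is likewise not the actual stimulus grid.

import numpy as np

def sample_rm_system(grid_lab, num_terms, rng):
    # grid_lab: (num_chips, 3) CIELAB coordinates of the naming grid.
    # Returns S(w|c) as a (num_chips, num_terms) row-stochastic matrix, per Eq. (1).
    eta = rng.uniform(0.001, 0.005)
    prototypes = grid_lab[rng.choice(len(grid_lab), size=num_terms, replace=False)]
    sq_dists = ((grid_lab[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -eta * sq_dists
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability only
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
grid = rng.uniform(low=[0.0, -100.0, -100.0], high=[100.0, 100.0, 100.0], size=(330, 3))
system = sample_rm_system(grid, num_terms=5, rng=rng)
# Selection and grouping (schematic): keep a system if 0.84 <= ib_complexity(system) <= 2.65;
# assign it to RM_d if min_w gnid(system, wcs_w) > 0.29 over all WCS systems, else to RM_s.

As the analysis above shows, a substantial share of the systems generated in this way are efficient in the IB sense and yet dissimilar to every WCS system.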
This in turn may help to make sense of Chaabouni et al.'s (2021) finding that their evolutionary process yielded systems that were highly efficient but not particularly similar to human ones: our analysis illustrates that there are many such systems. Given this, we sought an evolutionary process that would yield both efficiency in the IB sense, and similarity to human systems, without specifying IB optimization as a part of that process (cf. Tucker et al., 2022)." }, { "figure_ref": [ "fig_0" ], "heading": "Iterated learning and communication", "publication_ref": [ "b12", "b22", "b15", "b3", "b2", "b14", "b4", "b26", "b19", "b14", "b32", "b27", "b11" ], "table_ref": [], "text": "As noted above, iterated learning (IL; e.g. Kirby, 2001;Smith et al., 2003) is a cultural evolutionary process in which a cultural convention is learned first by one generation of agents, who then pass that convention on to another generation, and so on -and the convention changes during inter-generational transmission. Some of the work we have reviewed above addresses IL (e.g. Levinson, 2012;Carstensen et al., 2015;Carr et al., 2020). However other work we have reviewed instead addresses cultural evolution through communication within a single generation (e.g. Kågebäck et al., 2020;Chaabouni et al., 2021;Tucker et al., 2022). We wished to explore the roles of both IL and communication, and so we adopted an approach that involves both, in a way that allows the role of each to be highlighted. Specifically, we adopted the recently proposed neural iterated learning (NIL) algorithm (Ren et al., 2020). In the NIL algorithm, artificial agents are implemented as neural networks that communicate with each other within a generation, and cultural convention (in our case, a color naming system) evolves both from within-generation communication and from inter-generational transmission, as the convention is iteratively passed down through generations of artificial agents, with each new generation learning from the previous one.1 \nIn the NIL algorithm, each generation t (for time step) consists of two artificial agents, a speaker S t and a listener L t . The NIL algorithm operates in three phases. (1) In the first phase, the learning phase, both agents are exposed to the naming convention of the previous generation. This is done by first training the speaker S t , using cross-entropy loss, on color-name pairs generated by the speaker of the previous generation. The listener L t is then trained via reinforcement learning in a few rounds of a signaling game while keeping S t fixed: that is, the speaker learns from the previous generation, and the listener then learns from the speaker. We had the agents play the signaling game used by Kågebäck et al., 2020, in which the speaker is given a color chip c, sampled from a prior distribution over color chips, and produces a category name describing that color. The listener then attempts to identify the speaker's intended color based on the name produced, by selecting a color chip ĉ from among those of the naming grid shown in Figure 1. A reward is given to the listener depending on how perceptually similar the selected chip is to the original color. (2) In the second phase, the interaction phase, the agents play the same signaling game but this time both agents receive a joint reward and update their parameters during communicative interactions. 
(3) In the third phase, the transmission phase, color-name pairs are generated by sampling colors from the prior distribution and obtaining names for them from the speaker S t . These colorname pairs are then passed on to the next generation of agents. In all three phases, color chips are sampled according to the least-informative prior of Zaslavsky et al. (2018). We represent both the speaker and listener as neural networks with one hidden layer consisting of 25 units with a sigmoidal activation function. Individual colors are represented in 3-dimensional CIELAB space when supplied as input to the speaker, and category names as one-hot encoded vectors. For the reinforcement learning parts of NIL we use the classical algorithm RE-INFORCE (Williams, 1992). For the transmission phase we sample 300 color-name pairs, out of the 330 chips in the entire stimulus set; this ensures that the new generation will have seen examples from most of color space but it is impossible for them to have seen all color-name pairs. To optimize the neural networks, we use the optimizer Adam (Kingma & Ba, 2015), both in the learning and interaction phase, with learning rate 0.005 and batch size 50. For each phase in the NIL algorithm we take 1000 gradient steps. We stop the NIL algorithm once the maximum difference in IB complexity and accuracy over the ten latest generations is smaller than 0.1 bit, Algorithm 1 Neural Iterated Learning 1: Initialize D 1 uniformly at random 2: for t = 1... do" }, { "figure_ref": [], "heading": "3:", "publication_ref": [], "table_ref": [], "text": "Learning Phase" }, { "figure_ref": [], "heading": "4:", "publication_ref": [], "table_ref": [], "text": "Randomly initialize S t and L t .\n5:\nTrain S t on D t using stochastic gradient descent and cross-entropy loss." }, { "figure_ref": [], "heading": "6:", "publication_ref": [], "table_ref": [], "text": "Play signaling game between S t and L t and update parameters of only L t using the rewards." }, { "figure_ref": [], "heading": "7:", "publication_ref": [], "table_ref": [], "text": "Interaction Phase" }, { "figure_ref": [], "heading": "8:", "publication_ref": [], "table_ref": [], "text": "Play signaling game between S t and L t and update parameters of both agents using the rewards." }, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "9:", "publication_ref": [ "b19", "b14", "b2", "b28", "b17", "b18" ], "table_ref": [], "text": "Transmission Phase 10:\nCreate transmission dataset D t+1 consisting of colorname pairs, (c, w) by sampling colors from the prior p(c) and providing them as input to S t . 11: end for i.e. when the last ten generations are all within a small region of the IB plane. Algorithm 1 presents a schematic overview of the NIL algorithm, and Ren et al. (2020) present a detailed description. The hyperparameters were tuned empirically by studying which parameters yielded the highest reward in a small set of experiments. We found very little difference between different sizes of the network.\nFor each vocabulary size K = 3 . . . 10 and K = 100 we ran 100 independent instances of the NIL algorithm. For each instance, we considered the color naming system of the last speaker to be the result of that instance -we call these systems IL+C, as they are the result of iterated learning plus communication, and we evaluated the IL+C systems in the IB framework. 
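A compact PyTorch sketch of a single NIL generation is given below. It follows the three phases just described, but several details are simplifying assumptions made for illustration: the reward is taken to be the negative squared CIELAB distance between the target and the guessed chip (the text above only requires reward to grow with perceptual similarity), the prior over chips is uniform, rewards are not baselined, and the IB-based stopping rule and evaluation are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_agent(in_dim, out_dim, hidden=25):
    # One hidden layer of 25 sigmoid units, as described above.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.Sigmoid(), nn.Linear(hidden, out_dim))

def nil_generation(prev_pairs, chips_lab, num_words, steps=1000, lr=0.005, batch=50):
    # chips_lab: (num_chips, 3) float tensor of CIELAB coordinates.
    # prev_pairs: (chip_indices, word_indices) produced by the previous speaker
    # (for generation 1 these would be random, matching line 1 of Algorithm 1).
    speaker = make_agent(3, num_words)                 # S_t: colour -> word logits
    listener = make_agent(num_words, len(chips_lab))   # L_t: one-hot word -> chip logits
    opt_s = torch.optim.Adam(speaker.parameters(), lr=lr)
    opt_l = torch.optim.Adam(listener.parameters(), lr=lr)

    chips, words = prev_pairs
    for _ in range(steps):                             # learning phase: fit S_t by cross-entropy
        idx = torch.randint(len(chips), (batch,))
        loss = F.cross_entropy(speaker(chips_lab[chips[idx]]), words[idx])
        opt_s.zero_grad(); loss.backward(); opt_s.step()

    def play(update_speaker):
        # One batch of the signaling game, trained with REINFORCE.
        c = torch.randint(len(chips_lab), (batch,))
        w_dist = torch.distributions.Categorical(logits=speaker(chips_lab[c]))
        w = w_dist.sample()
        g_dist = torch.distributions.Categorical(logits=listener(F.one_hot(w, num_words).float()))
        guess = g_dist.sample()
        reward = -((chips_lab[c] - chips_lab[guess]) ** 2).sum(-1)   # assumed reward form
        opt_s.zero_grad(); opt_l.zero_grad()
        loss_l = -(reward * g_dist.log_prob(guess)).mean()
        if update_speaker:
            loss_s = -(reward * w_dist.log_prob(w)).mean()
            (loss_l + loss_s).backward()
            opt_s.step()
        else:
            loss_l.backward()
        opt_l.step()

    for _ in range(steps):                             # learning phase: L_t learns, S_t fixed
        play(update_speaker=False)
    for _ in range(steps):                             # interaction phase: joint reward, both update
        play(update_speaker=True)

    new_chips = torch.randint(len(chips_lab), (300,))  # transmission phase: 300 colour-name pairs
    new_words = speaker(chips_lab[new_chips]).argmax(-1)
    return new_chips, new_words

Chaining calls to nil_generation, starting from a random initial dataset and stopping when successive generations change little in the IB plane, reproduces the outer loop of Algorithm 1.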
As can be seen in Figure 3 (top panel), the IL+C systems are highly efficient in the IB sense: they lie near the theoretical efficiency limit (median inefficiency ε = 0.07), and they are no less efficient than the random RM systems we considered above (median inefficiency ε = 0.09), which in turn are more efficient than the human systems of the WCS (see above). Thus, iterated learning plus communication as formalized in the NIL algorithm leads to semantic systems that are efficient in the IB sense. This is not entirely surprising: the reward during the signaling game favors informativeness (higher reward for similar colors, following Kågebäck et al., 2020), and it has been argued that learning favors simplicity (e.g. Carr et al., 2020). Interestingly, all the resulting systems lie within the complexity range of the WCS systems even though NIL could theoretically produce much more complex systems, especially when K = 100.\nJ. Xu et al. (2013) showed that chains of iterated human learners tended to gravitate toward color naming systems that were similar to those of the WCS, and we wished to know whether the same was true of computational agents in the NIL framework. For each IL+C system, we determined the dissimilarity (gNID) between that system and the most similar (lowest gNID) WCS system. We also determined the analo- gous quantity (dissimilarity to the most similar WCS system) for each random RM system. Figure 4 shows that IL+C systems tend to be similar to WCS systems to a greater extent than RM systems do, and this was confirmed by a one-sided Mann-Whitney U test (P .001). Thus, the NIL process tends to gravitate toward human (WCS) systems to a greater extent than a random but efficient baseline, RM.\nWe also asked whether NIL would transform efficient systems that were dissimilar to those of the WCS (namely those of RM d ) into comparably efficient systems that were more similar to the WCS. To test this, we initialized the NIL algorithm with a system sampled from RM d , ran the NIL algorithm, and compared the initial system to the one that resulted from NIL. Figure 5 illustrates the beginning and end points of this process for a small set of systems, and shows that NIL transforms systems that are efficient but unlike the WCS into systems that are similar to particular WCS systems. Figure 6 shows the same general pattern but aggregated over all RM d In each row, the left column shows an RM d system that was used to initialize NIL, the middle column shows the result of running NIL from that initialization state, and the right column shows a WCS system (from top to bottom: Bete, Colorado, Dyimini) that is similar to the NIL result.\nsystems. For each NIL chain initialized with an RM d system, we measured the dissimilarity (gNID) of that initialized system to the most similar WCS system, and the gNID of the end result of NIL to its most similar WCS system. It can be seen that NIL transforms RM d systems into systems that are more similar to the human systems of the WCS. The mean gNID to WCS was 0.38 before NIL and 0.25 after, and the reduction in dissimilarity to WCS after applying NIL was significant (onesided (paired) Wilcoxon signed-rank test, n = 302, T = 1113, P .001). The median inefficiency of RM d is ε = 0.09 and the median inefficiency of the results of NIL is slightly lower at ε = 0.07, meaning that NIL made the already-efficient RM d systems slightly more efficient (one-sided (paired) Wilcoxon signed-rank test, n = 302, T = 7716, P .001). 
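The comparisons reported in this section are standard rank tests and can be reproduced with SciPy; in the sketch below the arrays are random placeholders standing in for the measured per-system inefficiencies and gNID values, which would come from the analysis described above.

import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(0)
eps_rm_d, eps_wcs = rng.random(100), rng.random(110)        # placeholder inefficiencies
gnid_before, gnid_after = rng.random(302), rng.random(302)  # placeholder dissimilarities

# Unpaired, one-sided test: are RM_d systems less inefficient than WCS systems?
u_stat, p_u = mannwhitneyu(eps_rm_d, eps_wcs, alternative='less')

# Paired, one-sided test: does NIL reduce dissimilarity to the closest WCS system?
w_stat, p_w = wilcoxon(gnid_before, gnid_after, alternative='greater')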
Thus, NIL moves already-efficient systems closer to the attested systems of the WCS, while maintaining and even slightly improving efficiency. Finally, it is noteworthy that NIL with 3 terms converges to a system that is similar to a 3-term WCS system (see the top row of Figure 5), because 3-term systems are the one The difference score is dissimilarity to WCS (minimum gNID to any WCS system) before NIL, minus the same quantity after NIL. A higher value indicates that NIL has moved the systems closer to the WCS. There are no values below 0, meaning that NIL never caused a system to become less similar to the WCS. case in which IB optimal systems qualitatively diverge from human data (Zaslavsky et al., 2018, p. 7941). Thus, this is a case in which NIL appears to provide a better qualitative fit to the data than IB does (see also Regier et al., 2007Regier et al., , 2015))." }, { "figure_ref": [ "fig_2" ], "heading": "Other possible evolutionary processes", "publication_ref": [ "b14", "b2" ], "table_ref": [], "text": "So far, we have seen evidence that the NIL algorithm may provide a plausible model of the cultural evolutionary process by which human color naming systems become efficient. We have referred to the result of the full NIL algorithm as IL+C systems, because these systems result from both iterated learning (IL) and communication (C). This raises the question whether iterating learning alone, or communication alone, would yield comparable results.\nTo find out, we ran two variants of the NIL algorithm. One variant included only iterated learning but no communication (i.e. lines 6-8 of Algorithm 1 were omitted). The other variant included communication but no iterated learning (i.e. there was only one pass through the main loop, which stopped at line 9); this is exactly the experiment that was performed by Kågebäck et al. (2020). All other aspects of the algorithm were unchanged. We refer to the results of the iteratedlearning-only algorithm as IL (for iterated learning), and the results of the communication-only algorithm as C (for communication).\nComparison of the three panels of Figure 3 reveals that there are qualitative differences in the profiles of the systems produced by the 3 variants of the NIL algorithm (IL+C, IL, and C). We have already seen that IL+C systems (top panel) are both efficient and similar to human systems; we also note that they lie within roughly the same complexity range as the human systems of the WCS. In contrast, the IL systems (middle panel) skew toward lower complexity than is seen in human systems, and in fact about 6% of the IL systems lie at the degenerate point (0, 0) in the IB plane, at which there is a single category covering the entire color do-main. This skew toward simplicity is compatible with Carr et al.'s (2020) claim that iterated learning provides a bias toward simplicity. At the same time, the IL systems are not only simple but also quite efficient (i.e. informative for their level of complexity), which is in turn compatible with Carstensen et al.'s (2015) claim that iterated learning provides some bias toward informativeness. Finally, the C systems (bottom panel) show the opposite pattern: a bias toward higher informativeness, at the price of higher complexity, extending well above the complexity range observed in the human systems of the WCS. 
Taken together, these results suggest that iterated learning alone over-emphasizes simplicity, communication alone over-emphasizes informativeness, and iterated learning with communication provides a balance between the two that aligns reasonably well with what is observed in human color naming systems. We found that IL+C systems are slightly more efficient (mean ε = 0.07 ± 0.02) than IL (mean ε = 0.15 ± 0.08) or C (mean ε = 0.11 ± 0.04) systems, where the ± indicates plus or minus one standard deviation. IL+C systems were also closer to the most similar WCS system (mean gNID = 0.21 ± 0.05) than were IL (mean gNID = 0.57 ± 0.17) or C (mean gNID = 0.27 ± 0.07) systems. Overall, these results suggest that iterated learning plus communication is a more plausible model of the cultural evolutionary process that leads to efficient human color naming systems than is either iterated learning alone, or communication alone, as these ideas are formalized in the NIL algorithm." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b25", "b32", "b3", "b2", "b13", "b2", "b32", "b2", "b13", "b26" ], "table_ref": [], "text": "We have shown (1) that there exists a reasonably sized class of color naming systems that are highly efficient in the IB sense but dissimilar from human systems; (2) that iterated learning plus communication, as captured in the NIL algorithm, leads to color naming systems that are both efficient in the IB sense and similar to human systems, and (3) that iterated learning alone, and communication alone, do not yield that result as clearly. These findings help to answer some questions, and also open up others.\nAs we have noted, the existence of highly efficient systems that do not align with human ones is not in itself surprising. IB is a non-convex optimization problem (Tishby et al., 1999;Zaslavsky et al., 2018), so multiple optima and near-optima are to be expected. However we feel that our identification of such systems may nonetheless be helpful, because it highlights just how many such systems exist, and just how dissimilar from human systems they sometimes are -which helps to make sense of Chaabouni et al.'s (2021) finding that simulations of cultural evolution can lead to color naming systems that exhibit high IB efficiency but deviate to some extent from human systems. This in turn highlights the importance of identifying cultural evolutionary processes that avoid these local near-optima and instead converge toward systems we find in human languages.\nWe have argued that iterated learning plus communication, as cast in the NIL algorithm, is such a process, and that it provides a better account than either iterated learning alone, or communication alone. This idea, and our findings supporting it, may help to resolve a question in the literature. As we have noted, Carstensen et al. (2015) argued that iterated learning alone can lead to informative semantic systems, whereas Carr et al. (2020) argued that iterated learning provides a bias for simplicity, and communication provides a bias for informativeness (see also Kirby et al., 2015 for a similar argument concerning linguistic form). Our finding that both forces are needed to account for the data aligns with Carr et al.'s (2020) claim. 
However our finding that learning alone also converges to efficient systems -although to overly simple ones -helps to make sense of Carstensen et al.'s (2015) findings.\nIt is natural to think of NIL, or any such process of cultural evolution, as a means by which the abstract computational goal of optimal efficiency might be approximatedand for the most part, that seems an accurate and useful way to frame the matter. The optimally efficient color naming systems on the IB curve closely resemble those in human languages (Zaslavsky et al., 2018), and the IL+C systems are likewise highly efficient and similar to those in human languages. However, there is an important exception to this pattern. As noted above, in the case of 3-term systems, the IB optimal system qualitatively differs from the color naming patterns found in the WCS (Zaslavsky et al., 2018, p. 7941), whereas IL+C systems qualitatively match them (see e.g. the top row of Figure 5, middle and right panels). Thus, in this one case, it appears that human languages do not attain the optimal solution or something similar to it, and instead attain a somewhat different near-optimal solution that is apparently more easily reached by a process of cultural evolution -a possibility anticipated by Kemp andRegier (2012, p. 1054).\nA major question left open by our findings is exactly why we obtain the results we do. NIL is just one possible evolutionary process, and we have seen that that process accounts for existing data reasonably well. It makes sense intuitively that NIL strikes a balance between the simplicity bias of iterated learning and the informativeness bias of communication (Carr et al., 2020;Kirby et al., 2015) -but what is still missing is a finer-grained sense for exactly which features of this detailed process are critical, vs. replaceable by others, and what the broader class of such processes is that would account well for the data (e.g. Tucker et al., 2022). A related direction for future research concerns the fact that the evolutionary process we have explored is somewhat abstract and idealized, in that agents communicate with little context or pragmatic inference. Actual linguistic communication is highly context-dependent, and supported by rich pragmatic inference -it seems important to understand whether our results would still hold in a more realistic and richer environment for learning and interaction. Finally, we have focused here on the domain of color, but the ideas we have pursued are not specific to color, so another open question is the extent to which our results generalize to other semantic domains." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We thank Noga Zaslavsky and 3 anonymous reviewers for helpful comments on an earlier version of this paper. Any errors are our own. Author contributions: EC, DD, and TR designed the research; EC performed the research; EC analyzed the data; and EC, DD, and TR wrote the paper. EC was funded by Chalmers AI Research (CHAIR) and the Sweden-America Foundation (SweAm). Computing resources used for the experiments were provided by the Swedish National Infrastructure for Computing (SNIC)." } ]
It has been argued that semantic systems reflect pressure for efficiency, and a current debate concerns the cultural evolutionary process that produces this pattern. We consider efficiency as instantiated in the Information Bottleneck (IB) principle, and a model of cultural evolution that combines iterated learning and communication. We show that this model, instantiated in neural networks, converges to color naming systems that are efficient in the IB sense and similar to human color naming systems. We also show that iterated learning alone, and communication alone, do not yield the same outcome as clearly.
Iterated learning and communication jointly explain efficient color naming systems
[ { "figure_caption": "Figure 1 :1Figure 1: Top: Color naming stimulus grid. Bottom: 9 color naming systems displayed relative to this grid. The left column contains color naming systems from 3 languages in the WCS (from top to bottom: Bete, Colorado, Dyimini).Colored regions indicate category extensions, and the color code used for each category is the mean of that category in CIELAB color space. The named color categories are distributions, and for each category we highlight the level sets between 0.75 -1.0 (unfaded area) and 0.3 -0.75 (faded area). The middle and right columns contain randomly-generated systems of complexity comparable to that of the WCS system in the same row. The middle column shows random systems that are similar to the WCS system in the same row. The right column shows random systems that are dissimilar to the WCS system in the same row; at the same time, there is no other WCS system that is more similar to this random system.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Efficiency of color naming, following Zaslavsky et al., 2018. The dashed line is the IB theoretical limit of efficiency for color naming, indicating the greatest possible accuracy for each level of complexity. The color naming systems of the WCS are shown in orange, replicating the findings of Zaslavsky et al., 2018. Our RM systems are shown in blue. It can be seen that the RM systems are often closer to the IB curve than the WCS systems are. The inset shows the 9 color systems of Figure 1, with the dissimilar random systems shown as +.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Efficiency of the (top) IL+C, (middle) IL, and (c) C evolved color naming systems, in each case compared with the natural systems of the WCS. The black triangle indicates the end state of one run, shown in the inset color map.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :Figure 5 :45Figure 4: Distribution of dissimilarity to WCS systems (minimum gNID to any WCS system), shown for IL+C and RM systems. The RM systems include both RM s and RM d .", "figure_data": "", "figure_id": "fig_3", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure6: NIL transforms efficient RM d color naming systems to become more similar to the WCS. The difference score is dissimilarity to WCS (minimum gNID to any WCS system) before NIL, minus the same quantity after NIL. A higher value indicates that NIL has moved the systems closer to the WCS. There are no values below 0, meaning that NIL never caused a system to become less similar to the WCS.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" } ]
Emil Carlsson
[ { "authors": "A Baronchelli; T Gong; A Puglisi; V Loreto", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b0", "title": "Modeling the emergence of universality in color naming patterns", "year": "2010" }, { "authors": "T Belpaeme; J Bleys", "journal": "Adaptive Behavior", "ref_id": "b1", "title": "Explaining universal color categories through a constrained acquisition process", "year": "2005" }, { "authors": "J W Carr; K Smith; J Culbertson; S Kirby", "journal": "Cognition", "ref_id": "b2", "title": "Simplicity and informativeness in semantic category systems", "year": "2020" }, { "authors": "A Carstensen; J Xu; C T Smith; T Regier", "journal": "", "ref_id": "b3", "title": "Language evolution in the lab tends toward informative communication", "year": "2015" }, { "authors": "R Chaabouni; E Kharitonov; E Dupoux; M Baroni", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b4", "title": "Communicating artificial neural networks develop efficient color-naming systems", "year": "2021" }, { "authors": "R S Cook; P Kay; T Regier", "journal": "Elsevier", "ref_id": "b5", "title": "The World Color Survey database: History and use", "year": "2005" }, { "authors": "M Dowman", "journal": "Cognitive Science", "ref_id": "b6", "title": "Explaining color term typology with an evolutionary model", "year": "2007" }, { "authors": "B Gyevnar; G Dagan; C Haley; S Guo; F Mollica", "journal": "Entropy", "ref_id": "b7", "title": "Communicative efficiency or iconic learning: Do acquisition and communicative pressures interact to shape colour-naming systems?", "year": "2022" }, { "authors": "K A Jameson; N Komarova", "journal": "Journal of the Optical Society of America. A, Optics, image science, and vision", "ref_id": "b8", "title": "Evolutionary models of color categorization i population categorization systems based on normal and dichromat observers", "year": "2009-07" }, { "authors": "C Kemp; A Gaby; T Regier", "journal": "", "ref_id": "b9", "title": "Season naming and the local environment", "year": "2019" }, { "authors": "C Kemp; T Regier", "journal": "Science", "ref_id": "b10", "title": "Kinship categories across languages reflect general communicative principles", "year": "2012" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b11", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "S Kirby", "journal": "IEEE Transactions on Evolutionary Computation", "ref_id": "b12", "title": "Spontaneous evolution of linguistic structure -an iterated learning model of the emergence of regularity and irregularity", "year": "2001" }, { "authors": "S Kirby; M Tamariz; H Cornish; K Smith", "journal": "Cognition", "ref_id": "b13", "title": "Compression and communication in the cultural evolution of linguistic structure", "year": "2015" }, { "authors": "M Kågebäck; E Carlsson; D Dubhashi; A Sayeed", "journal": "PLoS ONE", "ref_id": "b14", "title": "A reinforcement-learning approach to efficient communication", "year": "2020" }, { "authors": "S C Levinson", "journal": "Science", "ref_id": "b15", "title": "Kinship and human thought", "year": "2012" }, { "authors": "F Mollica; G Bacon; N Zaslavsky; Y Xu; T Regier; C Kemp", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b16", "title": "The forms and meanings of grammatical markers support efficient communication", "year": "2021" }, { "authors": "T Regier; P Kay; N Khetarpal", "journal": "Proceedings of the National Academy of Sciences of the United States of 
America", "ref_id": "b17", "title": "Color naming reflects optimal partitions of color space", "year": "2007" }, { "authors": "T Regier; C Kemp; P Kay", "journal": "", "ref_id": "b18", "title": "Word meanings across languages support efficient communication", "year": "2015-01" }, { "authors": "Y Ren; S Guo; M Labeau; S B Cohen; S Kirby", "journal": "", "ref_id": "b19", "title": "Compositional languages emerge in a neural iterated learning model", "year": "2020" }, { "authors": "E Rosch", "journal": "Lawrence Erlbaum Associates", "ref_id": "b20", "title": "Principles of categorization", "year": "1978" }, { "authors": "T C Scott-Phillips; S Kirby", "journal": "Trends in Cognitive Sciences", "ref_id": "b21", "title": "Language evolution in the laboratory", "year": "2010" }, { "authors": "K Smith; S Kirby; H Brighton", "journal": "Artificial life", "ref_id": "b22", "title": "Iterated learning: A framework for the emergence of language", "year": "2003-02" }, { "authors": "L Steels; T Belpaeme", "journal": "Behavioral and brain sciences", "ref_id": "b23", "title": "Coordinating perceptually grounded categories through language: A case study for colour", "year": "2005" }, { "authors": "S Steinert-Threlkeld; J Szymanik", "journal": "Cognition", "ref_id": "b24", "title": "Ease of learning explains semantic universals", "year": "2020" }, { "authors": "N Tishby; F C Pereira; W Bialek", "journal": "", "ref_id": "b25", "title": "The information bottleneck method", "year": "1999" }, { "authors": "M Tucker; R P Levy; J Shah; N Zaslavsky", "journal": "", "ref_id": "b26", "title": "Trading off utility, informativeness, and complexity in emergent communication", "year": "2022" }, { "authors": "R J Williams", "journal": "Machine Learning", "ref_id": "b27", "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "year": "1992" }, { "authors": "J Xu; M Dowman; T L Griffiths", "journal": "Proceedings of the Royal Society B: Biological Sciences", "ref_id": "b28", "title": "Cultural transmission results in convergence towards colour term universals", "year": "2013" }, { "authors": "Y Xu; E Liu; T Regier", "journal": "Open Mind", "ref_id": "b29", "title": "Numeral systems across languages support efficient communication: From approximate numerosity to recursion", "year": "2020" }, { "authors": "Y Xu; T Regier; B C Malt", "journal": "Cognitive Science", "ref_id": "b30", "title": "Historical semantic chaining and efficient communication: The case of container names", "year": "2016" }, { "authors": "N Zaslavsky; K Garvin; C Kemp; N Tishby; T Regier", "journal": "Journal of Language Evolution", "ref_id": "b31", "title": "The evolution of color naming reflects pressure for efficiency: Evidence from the recent past", "year": "2022" }, { "authors": "N Zaslavsky; C Kemp; T Regier; N Tishby", "journal": "Proceedings of the National Academy of Sciences of the United States of America", "ref_id": "b32", "title": "Efficient compression in color naming and its evolution", "year": "2018" }, { "authors": "N Zaslavsky; M Maldonado; J Culbertson", "journal": "", "ref_id": "b33", "title": "Let's talk (efficiently) about us: Person systems achieve nearoptimal compression", "year": "2021" }, { "authors": "N Zaslavsky; T Regier; N Tishby; C Kemp", "journal": "", "ref_id": "b34", "title": "Semantic categories of artifacts and animals reflect efficient coding", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 394.09, 119.26, 163.91, 12.95 ], "formula_id": "formula_0", "formula_text": "S(w|c) \\propto e^{-\\eta \\|x_c - x_w\\|_2^2} (1)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b18", "b27", "b28", "b12", "b26", "b1", "b19", "b5", "b10", "b1", "b11" ], "table_ref": [], "text": "Ushered by advances in deep-learning in recent years, reinforcement learning (RL) has become a successful framework for solving sequential decision making problems [19,28,29,13]. Goalconditioned RL (GC-RL) is a variant of the general RL problem where an agent is tasked with fulfilling a goal that is given to it as input, instead of maximizing rewards [27,2]. In principle, GC-RL can be solved as a regular RL problem using general purpose RL algorithms, with a reward function that is high only when the goal is met. However, due to sparsity of this reward structure, RL is difficult to apply successfully [20,6]. Crafting a suitable dense reward for such tasks, on the other hand, requires significant domain knowledge.\nRecently, a simple algorithm for GC-RL, goal-conditioned supervised-learning (GCSL), was proposed [11]. GCSL works by inferring in hindsight about which goals would have been fulfilled by a trajectory, regardless if it succeeded in reaching the intended goal or not. Moreover, unlike other popular hindsight methods such as hindsight experience replay (HER, [2]), GCSL optimizes a supervised-learning objective instead of an RL objective, making GCSL easier to train in practice, as common RL objectives, which are based on policy gradients or temporal difference learning, are known to be very sensitive to hyperparameters [12]. GCSL, however, comes with its own limitations. As we found, the sampling of targets in GCSL is biased towards learning short sub-trajectories of the data, making it difficult for GCSL to correctly predict actions for goals that require a long trajectory to reach. Furthermore, learning to predict actions often disregards important \"geometric\" state-space information, such as the proximity between two states, that is very useful in GC-RL. To tackle these issues we propose Trajectory Iterative Learner (TraIL), an extension of GCSL, which learns to predict a complete trajectory from the current state towards the goal. We use TraIL's trajectory prediction for replacing the goal input of GSCL with a sub-goal, selected from the predicted trajectory to the goal. This effectively shortens the horizon for the GCSL action predictor, allowing it to predict actions with higher accuracy, so long as the sub-goal prediction is accurate enough. Key in our approach is the observation that we can learn to predict the trajectory using the exact same data that is available to GCSL. This allows us to build on the already established GCSL machinery when devising our algorithm.\nIn our empirical investigation, we seek to understand when TraIL can outperform GCSL. We handcraft environments to study this, and also evaluate performance on popular benchmarks. We find that in most cases TraIL leads to better goal coverage than GCSL, without requiring additional data.\nWe briefly summarize our contributions:\n• We show that GCSL is biased towards learning actions for closer goals.\n• We propose TraIL, a sub-goal based extension of GCSL, and explore regularization techniques and architecture design choices for implementing TraIL successfully.\n• An empirical study of the benefits and limitations of our approach, as well an ablation study of the different TraIL components." 
}, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b16", "b26", "b1", "b25", "b22", "b14", "b20", "b4", "b13", "b6", "b29" ], "table_ref": [], "text": "The sub field of GC-RL dates back to the classical goal-reaching RL algorithm by Kaelbling [17].\nMore recent ideas, such as universal value function approximation (UVFA [27]), incorporated deeplearning in GC-RL effectively. To address the issue of rewards sparsity, the idea of hindsight was developed to re-evaluate past failed experience as successes. Some hindsight methods, such as HER [2], rely on the algorithm being off-policy and inject the hindsight transitions into the replay buffer, while others apply importance-sampling corrections within the RL algorithm [26].\nSeveral previous studies suggested using sub-goals to solve GC-RL tasks [23,15,21,5]. A key difference between these approaches and our work is that we, similarly to GCSL, only require a supervised learning objective, while all previous sub-goal based approaches optimized an RL objective, which can be more difficult to train.\nRecently, several works [14,7] used attention in the form of Transformers [30] in order to learn future rewards with a supervised learning objective, without using sub-goals. We believe that these methods could also be improved to use sub-goals, based on ideas similar to the ones presented here." }, { "figure_ref": [], "heading": "Problem Formulation and Background", "publication_ref": [], "table_ref": [], "text": "Let M be a goal-conditioned MDP (S, G, A, P, T, ρ 0 ), where S and A are the state and action spaces (either discrete or continuous), with G ⊆ S being the goal space. P (s |s, g, a) is the transition function, where we assume that P (g|g, g, •) = 1 (i.e., goals are absorbing states), ρ 0 is the distribution over initial states and goals, and T is the horizon of the episode. The objective is to learn a (possibly stochastic) goal-conditioned policy π(a|s, g) that successfully reaches goals in T steps as follows:\nJ goal-reaching (π) = E s0,g∼ρ0,st∼Pπ(s|st-1) I(s T == g) ,(1)\nwhere P π (s|s t-1 ) = a∈A P (s|s t-1 , a)π(a|s t-1 ) and I is the indicator function." }, { "figure_ref": [], "heading": "GCSL:", "publication_ref": [ "b10" ], "table_ref": [], "text": "The main idea in GCSL [11] is that, in-hindsight, states visited by the agent could be considered as goals, even if they were not the intended goals when the agent executed its policy. This self-supervised policy optimization is carried out via a maximum likelihood objective on the actions given the achieved goals. Similarly to other online RL algorithms, GCSL interleaves policy optimization steps (described below) with collecting new data and appending it to a replay buffer D. However, unlike other prevalent RL methods, the learning objective in GCSL is purely supervised (no value learning or estimation). Formally, let τ = [s 0 , a 0 , . . . s T -1 , a T -1 , s T ] be a trajectory from the buffer where τ s (i) and τ a (i) denote the i'th state and action in τ . GCSL samples i ∼ U (1, T -1) which determines the current state τ s (i) and action τ a (i). The goal is then selected from the preceding Figure 1: Histograms of distances between start and goal states when computing the GCSL targets in the discrete 9 rooms (left), continuous 4 rooms (middle), and lunar lander (right) environments. The right end of the X axis is the horizon T . Y axis is the percent of the bin from the total updates. 
We see that GCSL implicitly learns targets for shorter sub-trajectories more frequently.\nvisited states j ∼ U (i + 1, T ). The optimization objective in GCSL is:\nJ gcsl = E τ ∼D,i∼U (1,T -1),j∼U (i+1,T ) log π(τ a (i)|τ s (i), τ s (j)) . (2\n)\n4 GCSL is biased towards learning short trajectories\nOur main observation, and the motivation for our work, is that GCSL's learning objective has an inherent bias toward goals that are short horizoned, i.e., a short trajectory is required to reach them. Thus, we expect that for goals that require a longer trajectory to reach, the predictions made by GCSL will be less accurate. In this section, we provide a mathematical justification this observation.\nThe GCSL objective (Eq. 2) can be written as an optimization on trajectory-suffixes. Denote by τ i the suffix of τ that starts in state τ s (i). The objective can be written as:\nJ gcsl = E τ ∼D,i∼U (1,T -1),j∼U (1,T -i) log π(τ i a (1)|τ i s (1), τ i s (j))\nWe ask, what is the probability that GCSL updates a target that is exactly j = K steps away, and we denote this event as U (K). Define p k as the probability of selecting a trajectory-suffix that is of length k. Thus, we can write U (K) as the sum:\nPr U (K) = k≥K p k k = k≥K p k k + k<K p k • 0.(3)\nThe first equality follows since τ i is required to be of length at least K (the restriction in the summation), and then specifically selecting a goal that is exactly K steps away (GCSL selects j under a uniform distribution). The leftmost expression just adds zero elements for k < K.\nFor targets that are exactly K + 1 steps away, plugging in Eq. 3,\nPr U (K + 1) = k≥K+1 p k k + k<K+1 p k • 0.\nTherefore, we have that Pr U (K) = p K K + Pr U (K + 1) , and since p K ≥ 0, we have that Pr U (k) is monotonically decreasing Pr U (K) ≥ Pr U (K + 1) . This result is independent on the exact value of p k , which can depend both on the dynamics P and the data collection policy.\nWe conclude that closer targets get updated more frequently in GCSL, biasing the model to learn more from data that corresponds to short horizoned goals. We further illustrate this issue in Figure 1, where we plot a histogram of the lengths of sub-trajectories that GCSL learns from. We see that most updates correspond to relatively short sub-trajectories suffixes, despite the fact that collected trajectories may, and often do, reach the maximal length T (the right end of the X axis). idea is that in some scenarios, learning future states leading to a goal, could be an easier task than directly learning the action to take at the first step of the trajectory. Once we have such a sub-goal at hand, we can use it instead of the original goal when querying GCSL, potentially receiving a more accurate action prediction.\nTo realize this idea, during learning a GCSL model π, we concurrently learn a trajectory encoder π S (m|s, g, t) to predict the state between a current s ∈ S and goal g ∈ G states from previous data. π S is indexed by t ∈ [0, 1] such that if t = 0 the required result should be m = s, and if t = 1 then it should be m = g. Furthermore, π S should generate subsequent states that are feasible according to P (including the transition to g). If the above requirements hold for any two reachable s and g, then we call the trajectory encoder consistent.\nNext, we describe how to train our algorithm, TraIL, in Section 5.1, and how to use TraIL sub-goals to predict GCSL actions in Section 5.2." 
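A minimal NumPy sketch of the hindsight sampling behind Eq. (2), together with an empirical illustration of the distance bias derived above, is given below; the toy trajectories, their shapes, and the 0-based indexing are placeholders chosen for illustration.

import numpy as np

def sample_gcsl_targets(trajectories, batch_size, rng):
    # Hindsight relabelling as in Eq. (2): pick a trajectory, a step i, and a later
    # visited state j to act as the goal; the supervised target is the action a_i.
    states, actions, goals = [], [], []
    for _ in range(batch_size):
        tau_s, tau_a = trajectories[rng.integers(len(trajectories))]
        T = len(tau_a)                      # tau_s holds T + 1 states
        i = rng.integers(0, T)              # current step (0-based)
        j = rng.integers(i + 1, T + 1)      # hindsight goal among later visited states
        states.append(tau_s[i]); actions.append(tau_a[i]); goals.append(tau_s[j])
    return np.array(states), np.array(actions), np.array(goals)

rng = np.random.default_rng(0)
trajs = [(rng.random((51, 2)), rng.random((50, 1))) for _ in range(20)]   # toy data
batch = sample_gcsl_targets(trajs, batch_size=64, rng=rng)

# Empirical check of the bias: the distribution of goal distances j - i concentrates
# on short sub-trajectories, mirroring the monotone decrease of Pr[U(K)] shown above.
dists = []
for _ in range(10000):
    i = rng.integers(0, 50)
    j = rng.integers(i + 1, 51)
    dists.append(j - i)
print(np.bincount(dists)[:10])   # counts shrink as the distance grows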
}, { "figure_ref": [], "heading": "TraIL optimization and training", "publication_ref": [ "b2" ], "table_ref": [], "text": "To train π S (m|s, g, t) we assume the same algorithmic settings as in GCSL, of an iterative process that shifts between collecting data and adding it to a replay buffer, and optimizing a supervisedlearning objective. The objective is based on log-likelihood to fit the data that was collected by GCSL so far during training (see below), simialr to Eq. 2. As the GCSL policy π trains, TraIL π S trains in the background on the same data, i.e., we compute more update steps but consume the same data. Our loss function for π S is given by:\nJ sub-goal = E τ ∼D,i∼U (1,T -1),j∼U (i+1,T ),k∼U (i,j) log π S (τ s (k)|τ s (i), τ s (j), k -i j -i ) . (4\n)\nThe objective J sub-goal maximizes the likelihood of the visited state τ s (k) according to its relative position in the sub-trajectory t = k-i j-i . Next, we explain our neural network architecture design. Additionally, for a state space where the Euclidean distance is meaningful, we propose a novel regularization technique to train TraIL. 1Architecture: Unlike the classical supervised learning settings where a reasonable assumption is a single data collection policy, in GCSL, where the policy iterates between optimization and data collection, this assumption no longer holds. At every point in time the replay buffer contains data of several modalities corresponding to past versions of the current policy. We thus use a model capable of representing several modalities -Mixture Density Network (MDN [3]), a neural network predicting a mixture of Gaussians, for π S . Let K be the number of Gaussians, and d be the state dimension. The network predicts p ∈ ∆ K , c 1 . . . c K ∈ R d , and σ1 . . . σK ∈ R d . The output p is the logits for selecting each mode, and the Gaussian N k for mode k is\nN (µ k = s + c k , σ 2 k = exp (2 σk\n)) (we center the means around s).\nFor ease of notation, in the next sections we define: Let k * (s, g, t) be the most likely mode (arg max k∈[K] (p)) and µ k (s, g, t) the mean of mode k (emphasizing the input dependencies). Also, let k(s, g) denote the mode that starts closest to s and terminates closest to g (for explicit description see the BestMode Algorithm 1 in the supplementary section).\nRegularization: motivated by the properties of a consistent trajectory encoder, we propose two regularization terms J edge and J self -consistency (see below), that bias π S towards smooth consistent trajectories. Both losses only require sampling states s, g from the data-set D. Our complete loss J T raIL for the trajectory encoder is (with α edge and α self -consistency are hyper-parameters):\nJ T raIL = J sub-goal + α edge • J edge + α self -consistency • J self -consistency . (5\n)\nEdge loss: For a consistent encoder we require that π S (s, g, 0) = s and π S (s, g, 1) = g (with a slight abuse of notation as π S is stochastic). Thus, we add the following loss that predicts the location of the start and goal:\nJ edge = E s,g∼D µ k * (s, g, 0) -s 2 + µ k * (s, g, 1) -g 2 . (6\n)\nSelf-consistency loss: For a consistent encoder, let s m be a state in trajectory τ , all sub-trajectories of τ that contain s m should predict it (with appropriate t values). Namely, the predictions of π S for a trajectory τ and a sub-trajectory τ i should agree on overlapping states (see Figure 2 for a graphical illustration). 
The \"self-consistency\" loss based on this motivation is:\nJ self -consistency = E s,g∼D,t1,t2∼U (0,1) µ k (s, g, t 1 • t 2 ) -m 2 2 = E s,g∼D,t1,t2∼U (0,1) µ k (s, g, t 1 • t 2 ) -µ k (s, µ k (s, g, t 1 ), t 2 ) 2(7)\nWith k = k * (s, g, t 1 ) being the mode that we regularize for.\nFigure 2: An illustration of the self-consistency regularization. We sample m 1 from s and g, and m 2 from s and m 1 , and minimize the residual of predicting m 2 using s and g with an appropriate t value.\nTrajectory post-processing: In the GC-RL setting, we are interested in reaching MDP states, therefore subsequent identical states in the data provide no value towards reaching any goal. Thus, when a trajectory τ contains subsequent identical states τ s (i) = τ s (i + 1) we trim the repeating state τ s (i + 1) (and action τ a (i)). The process repeats until all subsequent states are different and only then the trimmed τ is pushed into the replay buffer D. We found this pre-processing step to greatly enhance results (Section B), and we apply it to both GCSL and TraIL in the experimental section." }, { "figure_ref": [ "fig_0" ], "heading": "Predicting an action by first predicting a sub-goal", "publication_ref": [ "b22", "b14", "b20", "b4", "b23" ], "table_ref": [], "text": "It was previously shown that divide-and-conquer prediction approaches are effective at solving long horizon predictions [23,15,21,5]. Following these sub-goal prediction methods, when predicting an action for data collection, we query π S with t = 0.5 and generate a sub-goal m and then replace g with m in GCSL action prediction a ∼ π(a|s, m). 2 During test-time, the only difference is that the most likely action is taken (highest logit if A is discrete, or the mean of the most likely mixture component µ k * (s, g) if A is continuous). Formally:\nGetAction(s, g, i)\nInputs: current and goal states s, g ∈ S, current time index i Output: action a ∈ A To investigate this question, we design an experiment that side steps the data collection process common in RL scenarios, and instead we focus on behavioral cloning (BC [24]) scenarios where we are given a set of demonstrations, generated as the shortest path trajectories between random states and random goals, and must learn to imitate the demonstrations. This formulation allows us to investigate the prediction problem in isolation, without additional RL difficulties such as exploration. We designed two specialized mini-grid environments for this investigation (Figure 3). The first scenario, large-rooms, is a 25 rooms grid, with each room being 15x15 cells. Every room is connected to adjacent rooms by a single cell door. We expect sub-goals to be easy to predict in this domain, as prediction errors for sub-goals could be proportional to the room size without affecting the resulting action prediction. The second scenario, double-spiral, has 2 intertwined corridors that twist and change directions often. The corridors are a single cell wide, meaning that for every goal, exactly one action takes the agent in the correct direction, another one in the opposite direction, and the remaining two actions keep the agent in place. We hypothesize that learning sub-goals in this scenario would be more difficult, as two close states on different corridors correspond to very different trajectories towards the goal.\n1. t = max(0.5, i+1 T ) 2. k ← k(s, g) 3. m ← µ k (s, g, t) when deterministic, or m ∼ N k (s, g, t) if stochastic 4. 
a ∼ π(s, m)\nIn this experiment, we compare action predictions between the GCSL model, i.e. a ∼ π(a|s, g), and the TraIL model that first predicts a sub-goal m ∼ π S (m|s, g, t), and then predicts an action a ∼ π(a|s, m). For TraIL, we compare three variants: t = 1.0, without regularization, and t = 0.5 with and without regularization. 3 Success is measured by the accuracy of predicting the first action of the shortest-path demonstrated trajectory, for a fixed set of start-goals pairs that were not seen by the agents during training. The results, in Table 1, agree with our intuition. In large rooms, adding more features of TraIL improves the success rate of the model, while in double spiral it worsens it." }, { "figure_ref": [], "heading": "GCSL", "publication_ref": [], "table_ref": [], "text": "TraIL: t = 1 TraIL: t = 0.5 TraIL: t = 0. 1: Accuracy of GCSL vs. TraIL action predictions in the large-rooms and double-spiral domains. In TraIL columns, t is the index of the sub-goal, and \"reg\" indicates that both edge and self-consistency losses were active in the optimization (see text for more details). " }, { "figure_ref": [ "fig_1" ], "heading": "Experiments", "publication_ref": [ "b10", "b10", "b10", "b24", "b15" ], "table_ref": [ "tab_1", "tab_1" ], "text": "We now turn to study TraIL in the full GC-RL settings, and compare to GCSL. 4 Our scenarios (Figure 4) include:\nDiscrete 9 rooms: is a harder version of the minigrid 4 rooms problem, where 9 rooms are aligned in a 3x3 grid, and each room is 5x5 cells. The doors to adjacent rooms are in random locations (not necessarily in the center of the wall). This scenario has both discrete state and action spaces, and the dynamics function is deterministic. We chose this relatively simple scenario as it is easy to visualize the resulting policies.\nContinuous 4 rooms: a grid of 2x2 rooms similar to the previous scenario, except the state and action spaces are continuous and the dynamics function has 3 levels of difficulty no-noise, moderate-noise, and heavy-noise, corresponding to noise levels of 0.0, 0.1, and 0.5.\nLunar lander: Lunar lander is a scenario with challenging dynamics that also appeared in the original GCSL paper. [11] adapted lunar lander to the GC-RL setting as follows: (1) the goal is defined by positions (without velocities), and (2) the goal region is limited around the landing-pad flags. We further modify the settings to (1) use the velocities in goals and sub-goals, and (2) define three goal ranges: landing-pad (as in [11]), close-proximity, and wide-proximity. The latter two goal regions require the agent to perform a more sophisticated trajectory than just falling straight down. We denote the environment setting of [11] as Lunar-GCSL-settings.\nPanda motion planning: We investigate a setting of neural motion planing (NMP), a challenging GC-RL task [25,16], in which a 7-DoF Franka Panda robot must navigate between pole-like objects. 5To evaluate our agent, we measure the success rates of both agents on a held-out set of start-goal queries. Full results are shown in Tables 2 and3 We start with the most crucial question -does TraIL improve upon GCSL? According to the results in Tables 2 and3, we observe that in most of our tested scenarios TraIL indeed offers significant success rate improvements compared to GCSL. 6 Even in the challenging high-dimensional panda motion planning scenario, we find that TraIL is able to improve upon GCSL with a substantial gap. 
A notable exception is lunar lander in the GCSL settings, which we will discuss later on. We next emphasize a few interesting findings." }, { "figure_ref": [ "fig_2" ], "heading": "GCSL-settings (no-velocities)", "publication_ref": [], "table_ref": [], "text": "Extended state-space landing-pad landing-pad close-proximity wide-proximity GCSL 0.847 ± [0.019] 0.472 ± [0.149] 0.479 ± [0.060] 0.341 ± [0.054] TraIL (ours) 0.815 ± [0.018] 0.585 ± [0.142] 0.706 ± [0.059] 0.543 ± [0.013] Table 3: Success rates of GCSL and TraIL (ours) in all lunar lander. See main text for breakdown of results.\nEffects of increasing noise in dynamics function: in the continuous 4 rooms we investigated the relationship between noise in the dynamics function and the algorithms' performance. We observe that, as expected, both algorithms suffer due to increased noise, but TraIL still outperforms GCSL for the moderate noise level. We hypothesize that since GCSL breaks under heavy noise, TraIL, which uses GCSL as a low-level policy, is also unable to perform well. In general, we can expect that GCSL performing reasonably is a requirement for TraIL to succeed.\nexploration is an important consideration in most GC-RL scenarios. GC-RL tasks are more challenging when the agent's initial state distribution does not cover the full support of state-goal pairs. In such cases, the agent must explore effectively, and incorporate new information into its learned model.\nLunar lander is a good example for such a scenario. The agent starts above the flags and (in the GC-RL settings) needs to fly to specific goals. Table 3 shows that TraIL manages to cover more goals than GCSL under the challenging close-proximity and wide-proximity goal regions. We note, that because TraIL does not control the data collection policy, this gap must be due to better usage of the same collected data.\nIn Section D in the supplementary material we experimented with allowing TraIL to control the data collection process with favorable results: we modified the continuous 4 rooms scenario such that the agent can only start in the top left room. We saw that with GCSL data both algorithms had low success rates of only 0.25 for GCSL and 0.27 for TraIL. By using more data collected with TraIL, success rates increased to 0.45 for GCSL and 0.47 for TraIL (compared to a baseline that kept collecting data with GCSL and had drop in performance), suggesting that at least in some settings, using TraIL to collect data could lead to more performance gains. We leave a large scale investigation into this direction for future work.\nLength of successful trajectories: in Section 4 we hypothesized that GCSL is biased toward optimizing shorter sub-trajectories. We thus investigate the lengths of successful trajectories of both algorithms (Figure 5). Interestingly, the cases where TraIL succeeds and GCSL fails are not uniformly distributed, but concentrate on the longer trajectories. This observation confirms our intuition -TraIL improves the performance of GCSL by allowing more accurate predictions for more distant goals." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Ablation study:", "publication_ref": [ "b10" ], "table_ref": [], "text": "we used the discrete 9 rooms, continuous 4 rooms, and lunar lander as a benchmark to test the key features in our method -the number of mixture components K, and the effects of regularization terms. Our key finding are that adding a mixture usually helps, but a lower mixture count (k = 2) seems like a good general purpose choice. 
For regularization, not adding regularization at all usually results in models of lower success rates, however, setting the regularization coefficients too high also negatively impacts performance. We saw that α edge = α self -consistency = 0.01 is a good choice in the tested scenarios. See Section C for more information.\nLunar lander: Upon close inspection of the Lunar-GCSL-settings results, we saw that GCSL learns to fly head first into the ground. This undesirable behavior, which was reported in [11] as a success of reaching the goal, also makes reaching sub-goals harder, which explain the worse performance of TraIL in this experiment (see Figure 6 left).\nTo solve this environment we made two modifications. First, we added velocities to sub-goals, which is crucial information for learning to reach a sub-goal in the correct heading for later continuing to reach the goal. Second, we defined three goal regions: landing-pad, close-proximity, and wideproximity (see Figure 6 top right) to diversify the data the agent sees during data collection. Under these new settings, as shown in Table 3, TraIL has superior performance compared to GCSL. In " }, { "figure_ref": [], "heading": "Discussion and Limitations", "publication_ref": [], "table_ref": [], "text": "In this work we presented TraIL, an extension of GCSL tackling the GC-RL problem. The key idea in TraIL, is that GCSL's performance can be boosted by providing it with sub-goals, thereby reducing GCSL's prediction horizon. TraIL learns sub-goals using the same data as GCSL itself. We showed that for common scenarios, using TraIL leads to learning a better coverage of the state space. We also discussed TraIL's limitations -it may not work well in domains where predicting a sub-goal is harder than predicting actions. Interestingly, in many benchmark tasks this was not the case. An interesting future direction is to extend TraIL to visual domains, by incorporating ideas from visual planning." }, { "figure_ref": [], "heading": "A Full results discrete 9 rooms, lunar lander", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "In this section we report additional finding regarding the discrete 9 rooms and the lunar lander scenarios. We found that in both environments when we diverge from the original GCSL training of updating the model once for every environment step, and instead train three times, the success rate significantly increases. We therefore include the full results for both discrete 9 rooms in Table 4 and for lunar lander in Table 5 5: Success rates in lunar lander. We compare GCSL and TraIL on several instances of the problem: GCSL-settings the agent is only provided with positional B Does post-processing help? A study in the discrete 9 rooms scenario We investigate our scheme of post-processing episodes (Section 5.1), in the discrete 9 rooms environment. This environment is ideal as a test-bed for this feature since it is discrete and deterministic, thus repeating states are likely. The results are summarized in table 4, in the first row (labeled as \"GCSL\"). As table 4 demonstrates, post-processing of repeating states greatly improves performance. The time complexity of this operation is linear in the original episode, thus as cheap as as inserting the episode into the replay buffer, which amounts to negligible time costs. Therefore for the results in the main text (Section 7) we present results for GCSL and TraIL with post processing applied (in effect also boosting the GCSL baseline)." 
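For concreteness, the post-processing step evaluated here amounts to a few lines of array code. The snippet below is a sketch under the assumption that a trajectory is stored as a list of state vectors plus the actions taken between them (the exact replay-buffer format is not specified in the text); it drops every repeated state together with the action that produced it, as described in Section 5.1.

```python
import numpy as np

def trim_repeated_states(states, actions, atol=1e-8):
    """Drop subsequent identical states tau_s(i+1) and the actions tau_a(i) leading to them."""
    keep_s, keep_a = [states[0]], []
    for t in range(1, len(states)):
        if np.allclose(states[t], keep_s[-1], atol=atol):
            continue                      # repeated state: skip it and the action that caused it
        keep_a.append(actions[t - 1])     # action taken from the previously kept (identical) state
        keep_s.append(states[t])
    return keep_s, keep_a                 # trimmed trajectory, ready for the replay buffer
```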
}, { "figure_ref": [], "heading": "C Ablation -mixture components and regularization term coefficients", "publication_ref": [], "table_ref": [], "text": "In this section we investigate how the GMM mixture count (K) and the edge and self-consistency coefficients (α edge and α self -consistency ) effect the success rates of TraIL. To achieve that, we took a single trained GCSL model, fixed π and only trained the TraIL policy π S according to the feature under investigation. We trained on the discrete 9 rooms with 1 update per environment step, cont 4 rooms, and lunar with close-proximity goals, as these are variants of every scenario that have a noticeable gap to perfect success. The success rate of the GCSL model is specified directly under the scenario name for reference." }, { "figure_ref": [], "heading": "C.1 Ablation I: effects of mixture components", "publication_ref": [], "table_ref": [], "text": "We start by investigating the effects of TraIL's mixture count K on the success rates (Table 6). In both discrete 9 rooms and in continuous 4 rooms incorporating a mixture (K > 1) is beneficial, and in the lunar lander environment K = 1 works marginally better. We hypothesize that this due to the nature of the problem, in both room scenarios, there are many situations where a goal-reaching policy can take several directions to reach the goal, whereas in lunar lander (with non-trivial dynamics), it is probably more difficult to find a diverse goal-reaching strategies to reach the same goal." }, { "figure_ref": [], "heading": "C.2 Ablation II: self-consistency and edge loss", "publication_ref": [], "table_ref": [], "text": "Next we investigate the J edge and J self -consistency (Eq. 6 and 7) by modifying their coefficients α edge and α self -consistency in the set {0, 0.01, 1} (Table 7). We can see that setting α self -consistency = α edge = 0 is inferior to the other options. As we would like a single set of parameters that works well, we see that selecting α self -consistency = α edge = 0.01 appears to provide good results across the board. " }, { "figure_ref": [], "heading": "D Data collection with TraIL: the continuous 4 rooms with limited start states", "publication_ref": [], "table_ref": [], "text": "In many GC-RL MDPs it is often the case that many states are not in the initial state distribution support. We study this case by limiting the agent to start only in the top-left room in continuous 4 rooms -moderate noise. Using the same parameters as before, we find that after training for 10k episodes, and although the models converged, the success rates for GCSL and TraIL are 0.258 ± [0.011] and 0.272 ± [0.048] respectively. We observe that the results are much inferior to the previous experiment where the starting state was not limited, we thus allow the TraIL policy π S to control the data collection, and observe that after 10k more episodes, the gap to original scenario narrows (GCSL 0.483 ± [0.073], TraIL 0.496 ± [0.055]). To assert that the performance is not attributed to training longer, we compared against a baseline agent that doesn't switch to collect data using TraIL (training 40K episodes on GCSL collected data). We find that this model performance remains similar (and even drops by a small margin) and attains success rates of 0.227 ± [0.019] with GCSL and 0.245 ± [0.016] with TraIL. 
To conclude, we found that for this scenario where the initial state distribution does not cover the entire state-space, collecting more data with TraIL provides data of greater quality allowing both GCSL and TraIL to obtain better results." }, { "figure_ref": [], "heading": "E TraIL sub-procedures", "publication_ref": [], "table_ref": [], "text": "We explicitly describe the BestMode procedure (see Algorithm 1).\nBestMode(s, g) Get GMM means for s: µ s 1 . . . µ s k from π S (s, g, 0)\n5 # t = 1 predicts close to g:\nGet GMM means for g: µ g 1 . . . µ g k from π S (s, g, 1)" }, { "figure_ref": [], "heading": "F Technical Details", "publication_ref": [ "b21", "b3", "b7", "b3", "b8", "b9" ], "table_ref": [], "text": "In this section we provide essential technical details that was not covered in the main text.\nCode: Our code was developed using PyTorch [22], and the RL scenarios use the OpenAI gym [4] package. The minigrid scenarios (large rooms, double spiral, and discrete 9 rooms) are based on the gym-minigrid package [8]. Lunar Lander is based on OpenAI gym's Lunar lander [4]. The panda motion planning scenario is simulated in PyBullet [9] extending the PandaGym [10] package." }, { "figure_ref": [], "heading": "F.1 Behavioral Cloning Environments", "publication_ref": [ "b7" ], "table_ref": [], "text": "Both environments use the gym-minigrid package [8]. In the large rooms scenario, doors between rooms are placed in a random wall segment and are the size of a single grid cell. In the double spiral the two corridors are symmetric except for a corridor on the right that break symmetry and connects the two isolated spiraling corridors." }, { "figure_ref": [], "heading": "F.2 Behavioral Cloning Experiments", "publication_ref": [ "b17", "b0" ], "table_ref": [], "text": "Data: For both environments we collected 400 training episodes and 300 test episodes from a shortest-path planner. We verified that no test query was present in the training set.\nTraining: we conducted 4 isolated experiments per configuration and reported their means and standard deviations. We initialized a replay buffer with the training demonstrations and used the same code to sample targets and update the model as in GCSL and TraIL versions in the RL experiments. The models were trained for 80K batches in double spiral and 160K batches in large rooms using the Adam optimizer [18] and batch size of 256.\nArchitecture: all networks have two layers of 400 neurons with Relu activations [1]. The MDN mixture count for TraIL is 2 and the regularization coefficients are specified in the main text.\nMetrics: To estimate the number of batches for each scenario we waited until the loss function and accuracy converged. As the main text specifies, the label to measure accuracy is the first action in the reference shortest-path trajectory." }, { "figure_ref": [], "heading": "F.3 Reinforcement Learning Environments", "publication_ref": [], "table_ref": [], "text": "We provide specific implementation details that were not covered in the main text. 1. To allow accurate test time comparisons we froze a test set of start-goal pairs (which are published as assets within our code repository), and extended the gym interface to include the ability to reset from a specific start-goal pair.\n2. The observations of the agent are normalized to be in [-1, 1] in every data dimension." }, { "figure_ref": [], "heading": "Continuous 4 rooms:", "publication_ref": [], "table_ref": [], "text": "The agent is limited to an action of norm 0.1. 
A state is considered close to the goal if the difference is of norm 0.2 or less. Noise in moderate noise and heavy noise is a 2D diagonal Gaussian with standard-deviation of 0.1 and 0.5 respectively.\nLunar lander: Following the GCSL definition, we measure goal success by the closeness to goal in (x, y) coordinate positions only. A state is considered close to the goal if the norm of the difference is at most 0.1 (in the original lunar lander coordinates, not the normalized state the agent sees).\nPanda motion planning: The action in this environment is translated to a relative position to the current joints of the robot (with a limited distance of 0.03 in the normalized state-space). Then, the PyBullet built in position-controller takes a single simulation step using the relative position as target, and under a maximum velocity of 1. A state is considered close to the goal if the distance in the normalized state space of the joint positions it is at most 0.1 (namely, like in lunar lander we discard the velocities for measuring success)." }, { "figure_ref": [], "heading": "F.4 Reinforcement Learning Experiments", "publication_ref": [ "b17", "b0" ], "table_ref": [], "text": "Training: Like the original GCSL we use a batch size of 256, learning rate of 5e-4, and the Adam optimizer [18] (same parameters for GCSL and TraIL). Unlike GCSL we found that for both GCSL and TraIL a replay buffer of 2K episodes works best (instead of a non-limited buffer as specified in GCSL). Also, we clip the TraIL gradients in the panda motion planning to 10k.\nArchitecture: all networks have two layers of 400 neurons with Relu activations [1]. The MDN mixture count for TraIL is 2 and the regularization coefficients are as specified in the main text.\nImplementation notes:\n1. We also use MDN for π when the A is continuous (the original GCSL quantized continuous spaces, which is incompatible when A is high dimensional and hard to find the correct resolution in other scenarios). 2. As mentioned earlier, we use the suggested trajectory post-processing method to remove subsequent identical states in the data. 3. Unlike other environments where the same test set of start-goal queries (s, g) is used, in lunar lander due to technical difficulties we were unable to fix a single set of queries and instead each evaluation randomly starts for a new query." } ]
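As a concrete reference for the architecture paragraph and for implementation note 1 above, the following PyTorch sketch implements a mixture-density head with the reported sizes (two hidden layers of 400 ReLU units, K = 2 components, means centered on the current state s and variances exp(2 log sigma)). The class name, the forward signature and the way (s, g, t) are concatenated are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    """Mixture density network head: component logits, means centered on s, and log-stds."""
    def __init__(self, in_dim, state_dim, k=2, hidden=400):
        super().__init__()
        self.k, self.d = k, state_dim
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, k + 2 * k * state_dim)  # p, c_k, log sigma_k

    def forward(self, s, g, t):
        x = torch.cat([s, g, t], dim=-1)          # assumed input packing of (state, goal, t)
        out = self.head(self.trunk(x))
        logits = out[..., : self.k]
        c = out[..., self.k : self.k + self.k * self.d].reshape(*s.shape[:-1], self.k, self.d)
        log_sigma = out[..., self.k + self.k * self.d :].reshape(*s.shape[:-1], self.k, self.d)
        mu = s.unsqueeze(-2) + c                  # means are centered around the current state s
        return logits, mu, log_sigma

def mdn_log_prob(logits, mu, log_sigma, target):
    # log sum_k softmax(p)_k * N(target; mu_k, exp(2 * log sigma_k))
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_pk = torch.log_softmax(logits, dim=-1)
    log_prob_k = comp.log_prob(target.unsqueeze(-2)).sum(-1)  # sum over state dimensions
    return torch.logsumexp(log_pk + log_prob_k, dim=-1)
```

Training would maximize `mdn_log_prob` of the relabeled target as in Eq. 4, and the mean of the most likely component plays the role of mu_{k*} in the GetAction procedure.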
Recently, a simple yet effective algorithm -goal-conditioned supervised-learning (GCSL) -was proposed to tackle goal-conditioned reinforcement-learning. GCSL is based on the principle of hindsight learning: by observing states visited in previously executed trajectories and treating them as attained goals, GCSL learns the corresponding actions via supervised learning. However, GCSL only learns a goal-conditioned policy, discarding other information in the process. Our insight is that the same hindsight principle can be used to learn to predict goal-conditioned sub-goals from the same trajectory. Based on this idea, we propose Trajectory Iterative Learner (TraIL), an extension of GCSL that further exploits the information in a trajectory, and uses it for learning to predict both actions and sub-goals. We investigate the settings in which TraIL can make better use of the data, and discover that for several popular problem settings, replacing real goals in GCSL with predicted TraIL sub-goals allows the agent to reach a greater set of goal states using the exact same data as GCSL, thereby improving its overall performance.
Goal-Conditioned Supervised Learning with Sub-Goal Prediction
[ { "figure_caption": "Figure 3 :3Figure 3: The environments used in the behavioral-cloning experiments: double spiral on the left and large rooms on the right. The start (red triangle) and goal (green cell) change in each demonstration.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: GC-RL environments, and the executions of successful TraIL trajectories. From the left: discrete 9 rooms, continuous 4 rooms, lunar lander, and panda motion planning.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Histogram of lengths of successful trajectories in the discrete 9 rooms (left), continuous 4 rooms (middle), and lunar lander (right) environments. X axis is the length of the successful trajectory, Y axis is the bin count for that length. The histograms show that TraIL successes are more concentrated on long trajectories compared to GCSL.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Lunar investigation: Left: GCSL agent in GCSL-settings learns to fly head-first into the ground. Right top: Goal visualization colored by success status of a TraIL agent, green for success and purple for failure. Right bottom: Visualization of full successful trajectories by TraIL, in each trajectory the agent starts located in the turquoise color, and as the trajectory evolves corresponding states transition to light pink. Left: landing-pad, middle: close-proximity, and right wide-proximity.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 66Figure 6 top right we visualize the goal regions (green indicates success) and on the bottom right we plot successful trajectories (bottom) showing the diverse set of goals TraIL can reach. This experiment also highlights that TraIL requires a sub-goal representation with enough information for stable control (in this case -velocities).", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": ". Success rates of GCSL and TraIL (ours) in discrete 9 rooms, continuous 4 rooms, and panda motion planning. See main text for breakdown of results.", "figure_data": "discrete 9 roomsno-noisecontinuous 4 rooms moderate-noise heavy-noisepanda motion planningGCSL0.567 ± [0.051] 0.652 ± [0.015] 0.616 ± [0.056] 0.158 ± [0.003] 0.693 ± [0.017]TraIL (ours) 0.719 ± [0.059] 0.913 ± [0.015] 0.74 ± [0.039] 0.158 ± [0.005] 0.767 ± [0.006]", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": ". We can see that in both cases TraIL maintains a success rate gap compared to GCSL. Success rates in discrete 9 rooms. We compare GCSL and TraIL with and without the trajectory post-processing from Section 5.1, and with 1 and 3 updates per environment steps. 
[0.019] 0.418 ± [0.247] 0.393 ± [0.060] 0.319 ± [0.047] TraIL 0.815 ± [0.018] 0.450 ± [0.241] 0.582 ± [0.089] 0.583 ± [0.057] [0.142] 0.706 ± [0.059] 0.543 ± [0.013] Table", "figure_data": "Post process=FalsePost process=Trueupdates=1updates=3updates=1updates=3GCSL0.472 ± [0.012] 0.553 ± [0.021] 0.567 ± [0.051] 0.931 ± [0.010]TraIL (ours) 0.592 ± [0.020] 0.69 ± [0.031]0.719 ± [0.059] 0.99 ± [0.002]GCSL-settings (no-velocities)Extended state-spacelanding-padlanding-padclose-proximity wide-proximityupdates=1 GCSL 0.847 ± updates=3 GCSL -TraIL -0.472 ± [0.149] 0.479 ± [0.060] 0.341 ± [0.054] 0.585 ±", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" } ]
Tom Jurgenson; Aviv Tamar
[ { "authors": "Abien Fred; Agarap ", "journal": "", "ref_id": "b0", "title": "Deep learning using rectified linear units (relu)", "year": "2018" }, { "authors": "Marcin Andrychowicz; Filip Wolski; Alex Ray; Jonas Schneider; Rachel Fong; Peter Welinder; Bob Mcgrew; Josh Tobin; Openai ; Pieter Abbeel; Wojciech Zaremba", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Hindsight experience replay", "year": "2017" }, { "authors": "M Christopher; Bishop", "journal": "", "ref_id": "b2", "title": "Mixture density networks", "year": "1994" }, { "authors": "Greg Brockman; Vicki Cheung; Ludwig Pettersson; Jonas Schneider; John Schulman; Jie Tang; Wojciech Zaremba", "journal": "", "ref_id": "b3", "title": "Openai gym", "year": "2016" }, { "authors": "Elliot Chane-Sane; Cordelia Schmid; Ivan Laptev", "journal": "PMLR", "ref_id": "b4", "title": "Goal-conditioned reinforcement learning with imagined subgoals", "year": "2021" }, { "authors": "Henry Charlesworth; Giovanni Montana", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b5", "title": "Plangan: Model-based planning with sparse rewards and multiple goals", "year": "2020" }, { "authors": "Lili Chen; Kevin Lu; Aravind Rajeswaran; Kimin Lee; Aditya Grover; Misha Laskin; Pieter Abbeel; Aravind Srinivas; Igor Mordatch", "journal": "Advances in neural information processing systems", "ref_id": "b6", "title": "Decision transformer: Reinforcement learning via sequence modeling", "year": "2021" }, { "authors": "Maxime Chevalier-Boisvert; Lucas Willems; Suman Pal", "journal": "", "ref_id": "b7", "title": "Minimalistic gridworld environment for openai gym", "year": "2018" }, { "authors": "Erwin Coumans; Yunfei Bai", "journal": "", "ref_id": "b8", "title": "Pybullet, a python module for physics simulation for games, robotics and machine learning", "year": "" }, { "authors": "Quentin Gallouédec; Nicolas Cazin; Emmanuel Dellandréa; Liming Chen", "journal": "", "ref_id": "b9", "title": "Multigoal reinforcement learning environments for simulated franka emika panda robot", "year": "2021" }, { "authors": "Dibya Ghosh; Abhishek Gupta; Justin Fu; Ashwin Reddy; Coline Devin; Benjamin Eysenbach; Sergey Levine", "journal": "", "ref_id": "b10", "title": "Learning to reach goals without reinforcement learning", "year": "2019" }, { "authors": "Peter Henderson; Riashat Islam; Philip Bachman; Joelle Pineau; Doina Precup; David Meger", "journal": "", "ref_id": "b11", "title": "Deep reinforcement learning that matters", "year": "2018" }, { "authors": "Julian Ibarz; Jie Tan; Chelsea Finn; Mrinal Kalakrishnan; Peter Pastor; Sergey Levine", "journal": "The International Journal of Robotics Research", "ref_id": "b12", "title": "How to train your robot with deep reinforcement learning: lessons we have learned", "year": "2021" }, { "authors": "Michael Janner; Qiyang Li; Sergey Levine", "journal": "Advances in neural information processing systems", "ref_id": "b13", "title": "Offline reinforcement learning as one big sequence modeling problem", "year": "2021" }, { "authors": "Tom Jurgenson; Or Avner; Edward Groshev; Aviv Tamar", "journal": "PMLR", "ref_id": "b14", "title": "Sub-goal trees a framework for goal-based reinforcement learning", "year": "2020" }, { "authors": "Tom Jurgenson; Aviv Tamar", "journal": "", "ref_id": "b15", "title": "Harnessing reinforcement learning for neural motion planning", "year": "2019" }, { "authors": "Leslie Pack; Kaelbling ", "journal": "Citeseer", "ref_id": "b16", "title": "Learning 
to achieve goals", "year": "1993" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b17", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Volodymyr Mnih; Koray Kavukcuoglu; David Silver; Alex Graves; Ioannis Antonoglou; Daan Wierstra; Martin Riedmiller", "journal": "", "ref_id": "b18", "title": "Playing atari with deep reinforcement learning", "year": "2013" }, { "authors": "Vitchyr Ashvin V Nair; Murtaza Pong; Shikhar Dalal; Steven Bahl; Sergey Lin; Levine", "journal": "Advances in neural information processing systems", "ref_id": "b19", "title": "Visual reinforcement learning with imagined goals", "year": "2018" }, { "authors": "Giambattista Parascandolo; Lars Buesing; Josh Merel; Leonard Hasenclever; John Aslanides; Jessica B Hamrick; Nicolas Heess; Alexander Neitz; Theophane Weber", "journal": "", "ref_id": "b20", "title": "Divide-and-conquer monte carlo tree search for goal-directed planning", "year": "2020" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Köpf; Edward Z Yang; Zach Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b21", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Karl Pertsch; Oleh Rybkin; Frederik Ebert; Shenghao Zhou; Dinesh Jayaraman; Chelsea Finn; Sergey Levine", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Long-horizon visual planning with goal-conditioned hierarchical predictors", "year": "2020" }, { "authors": "A Dean; Pomerleau", "journal": "Advances in neural information processing systems", "ref_id": "b23", "title": "Alvinn: An autonomous land vehicle in a neural network", "year": "1988" }, { "authors": "Mayur J Ahmed H Qureshi; Michael C Bency; Yip", "journal": "", "ref_id": "b24", "title": "Motion planning networks", "year": "2018" }, { "authors": "Paulo Rauber; Avinash Ummadisingu; Filipe Mutz; Juergen Schmidhuber", "journal": "", "ref_id": "b25", "title": "Hindsight policy gradients", "year": "2017" }, { "authors": "Tom Schaul; Daniel Horgan; Karol Gregor; David Silver", "journal": "PMLR", "ref_id": "b26", "title": "Universal value function approximators", "year": "2015" }, { "authors": "David Silver; Aja Huang; Chris J Maddison; Arthur Guez; Laurent Sifre; George Van Den; Julian Driessche; Ioannis Schrittwieser; Veda Antonoglou; Marc Panneershelvam; Lanctot", "journal": "nature", "ref_id": "b27", "title": "Mastering the game of go with deep neural networks and tree search", "year": "2016" }, { "authors": "David Silver; Thomas Hubert; Julian Schrittwieser; Ioannis Antonoglou; Matthew Lai; Arthur Guez; Marc Lanctot; Laurent Sifre; Dharshan Kumaran; Thore Graepel", "journal": "Science", "ref_id": "b28", "title": "A general reinforcement learning algorithm that masters chess, shogi, and go through self-play", "year": "2018" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Attention is all you need", "year": "2017" } ]
[ { "formula_coordinates": [ 2, 186.48, 586.35, 317.52, 9.99 ], "formula_id": "formula_0", "formula_text": "J goal-reaching (π) = E s0,g∼ρ0,st∼Pπ(s|st-1) I(s T == g) ,(1)" }, { "formula_coordinates": [ 3, 175.41, 259.96, 324.72, 9.98 ], "formula_id": "formula_1", "formula_text": "J gcsl = E τ ∼D,i∼U (1,T -1),j∼U (i+1,T ) log π(τ a (i)|τ s (i), τ s (j)) . (2" }, { "formula_coordinates": [ 3, 500.13, 260.31, 3.87, 8.64 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 3, 175, 389.31, 257.3, 12.69 ], "formula_id": "formula_3", "formula_text": "J gcsl = E τ ∼D,i∼U (1,T -1),j∼U (1,T -i) log π(τ i a (1)|τ i s (1), τ i s (j))" }, { "formula_coordinates": [ 3, 209.44, 443.58, 294.56, 26.88 ], "formula_id": "formula_4", "formula_text": "Pr U (K) = k≥K p k k = k≥K p k k + k<K p k • 0.(3)" }, { "formula_coordinates": [ 3, 213.3, 533.27, 185.4, 26.88 ], "formula_id": "formula_5", "formula_text": "Pr U (K + 1) = k≥K+1 p k k + k<K+1 p k • 0." }, { "formula_coordinates": [ 4, 131.27, 320.17, 368.85, 22.31 ], "formula_id": "formula_6", "formula_text": "J sub-goal = E τ ∼D,i∼U (1,T -1),j∼U (i+1,T ),k∼U (i,j) log π S (τ s (k)|τ s (i), τ s (j), k -i j -i ) . (4" }, { "formula_coordinates": [ 4, 500.13, 327.23, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 4, 369.73, 482.17, 127.02, 12.55 ], "formula_id": "formula_8", "formula_text": "N (µ k = s + c k , σ 2 k = exp (2 σk" }, { "formula_coordinates": [ 4, 152.18, 612.19, 347.95, 9.65 ], "formula_id": "formula_9", "formula_text": "J T raIL = J sub-goal + α edge • J edge + α self -consistency • J self -consistency . (5" }, { "formula_coordinates": [ 4, 500.13, 612.51, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 4, 182.69, 673.54, 317.43, 11.72 ], "formula_id": "formula_11", "formula_text": "J edge = E s,g∼D µ k * (s, g, 0) -s 2 + µ k * (s, g, 1) -g 2 . (6" }, { "formula_coordinates": [ 4, 500.13, 675.94, 3.87, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 5, 124.37, 126.64, 379.63, 33.95 ], "formula_id": "formula_13", "formula_text": "J self -consistency = E s,g∼D,t1,t2∼U (0,1) µ k (s, g, t 1 • t 2 ) -m 2 2 = E s,g∼D,t1,t2∼U (0,1) µ k (s, g, t 1 • t 2 ) -µ k (s, µ k (s, g, t 1 ), t 2 ) 2(7)" }, { "formula_coordinates": [ 5, 131.41, 550.54, 287.22, 58.05 ], "formula_id": "formula_14", "formula_text": "1. t = max(0.5, i+1 T ) 2. k ← k(s, g) 3. m ← µ k (s, g, t) when deterministic, or m ∼ N k (s, g, t) if stochastic 4. a ∼ π(s, m)" } ]
10.1021/ja508154e
2024-02-12
[ { "figure_ref": [ "fig_1" ], "heading": "MOTIVATION AND PRIOR WORK", "publication_ref": [ "b32", "b9", "b22", "b38", "b16", "b13", "b40", "b7", "b23", "b9", "b14", "b32", "b0", "b38", "b31", "b1", "b12", "b34", "b26", "b6", "b3" ], "table_ref": [], "text": "Understanding the behavior of machine learning models is receiving considerable attention. Many researchers seek to identify what features are important and to describe their effects. Recently, several works have taken one step further from explaining feature attribution towards interaction attribution. This shift aims to explore the interplay between features and uncover how their interactions shape the behavior of the model. Such interaction occurs when the influence of one feature, denoted as x i , on the model's prediction can not be decomposed into a sum of subfunctions that do not involve the corresponding feature Sorokina et al. (2008); Friedman & Popescu (2008); Lou et al. (2013); Tsang et al. (2021), denoted as X \\i , statistically defined as: f ̸ = i∈I f i (X \\i ). For example, there exists feature interaction effects in sin(•) function between x 1 and x 2 , where sin(x 1 + x 2 ) ̸ = f 1 (x 2 ) + f 2 (x 1 ). Higher-order interactions among features are defined similarly. The feature interaction can provide a more comprehensive and accurate understanding of the underlying factors driving model predictions, leading to robust and improved models; prominent applications include recommendation systems Guo et al. (2017); Tsang et al. (2020a), DNA sequence analysis and modelling Greenside et al. (2018), and text-image information retrieval Wang et al. (2019).\nExisting approaches that detect and explain the feature interactions mainly include (a) traditional statistical approaches such as analysis of variance (ANOVA) and H-statistics Fisher (1992); Mandel (1961); Friedman & Popescu (2008); Greenwell et al. (2018) and (b) more recent machine learning model-based methods such as Additive Groves (AG) Sorokina et al. (2008). AG is a non-parametric method of identifying interactions by imposing structural constraints on an additive model of regression trees that compare the performance of unrestricted and restricted prediction models. LASSO-based methods Bien et al. (2013) select interactions by shrinking the coefficients of insignificant terms to zero. Specifically, neural network-based models, e.g. feed-forward neural networks, Bayesian neural networks and convolutional neural networks, are utilized to examine the weight matrices Tsang et al. (2021); Singh et al. (2019); Cui et al. (2020). Several more advanced explainable frameworks have been proposed to capture feature interactions from well-defined models, e.g. Shapley Taylor Interaction Index (STI) Grabisch & Roubens (1999); Sundararajan et al. (2020), Integrated Hessians (IH) Janizek et al. ( 2021) and Archipelago Tsang et al. (2020b).\nLimitations of previous work Previous works can be summarized as detecting and explaining feature interaction effects in a single trained model f . Here, we argue that the high accuracy of a welltrained model can not guarantee the preservation of true feature interactions. As a simple example, we consider the two error-free models\nf 1 (a, b, c) = -b+ √ b 2 -4ac 2a and f 2 (a, b, c) = -b- √ b 2 -4ac 2a\nfor finding a root of the quadratic equation ax 2 + bx + c = 0; the input variables a, b, and c exhibit different feature interactions in these two models as there is a sign difference in the numerator. See Fig. 
1(a) for another general example where a well-performing linear model with no feature interaction might be modelled using a deep neural network with the same model performance but with feature interaction. A detailed proof of concept is provided in Appendix B.\nIn general, for a given task, there exists a class of equally accurate models that lead to different feature interactions. To address the identified limitations, one potential solution is to analyze the feature interaction in a set of well-performing models. Emerging research Rudin (2019); Fisher et al. (2019); Dong & Rudin (2020); Li & Barnard (2022a) has demonstrated the necessity of examining feature importance not just within a single well-trained model but across a set of high-performing models, but none of these works delved into a comprehensive analysis of feature interactions1 ." }, { "figure_ref": [], "heading": "Our contributions", "publication_ref": [], "table_ref": [], "text": "In this paper, we study feature interaction and demonstrate that investigating feature interactions in a single model may fall short of accurately identifying the complex relationships between features. A more meaningful approach involves exploring feature interactions within a class of models. Our primary contributions can be summarized as:\n• We introduce feature interaction score (FIS) as a means to quantify the feature interaction strength in a pre-specified predictive model. In Sec. 2.1 and Sec. 4.1, we establish and validate the connection between FIS and the existing literature.\n• We consider FIS in a model set and introduce FIS cloud (FISC) in Sec. 2.1. We demonstrate in Sec. 4 how feature interactions contribute to predictions and how feature interactions can vary across models with reasonable and explainable accurate predictions in terms of FIS.\n• We present a non-linear example in multilayer perceptron (MLP) to mathematically characterize the Rashomon set and FISC. For the general case, we propose a greedy search algorithm to explore FIS and to characterize FISC in a Rashomon set with two novel visualization tools, namely Halo and swarm plots." }, { "figure_ref": [], "heading": "FEATURE INTERACTION SCORE AND THE CLOUD OF SCORES", "publication_ref": [ "b38", "b14", "b32" ], "table_ref": [], "text": "The terms \"statistical interaction\", \"feature interaction\", and \"variable interaction\" in the literature are highly related (Tsang et al., 2021;Greenwell et al., 2018;Sorokina et al., 2008;Tsang et al., 2020b). Here, we use the term \"feature interaction\" and \"feature interaction score\" to indicate a statistical non-additive interaction of features in a model. In general, we use bold lowercase letters such as v to represent a vector and v i denotes its i-th element. Let the bold uppercase letters such as A denote a matrix with a [i,j] being its i-th row and j-th column entry. The vectors a [i,•] and a [•,j] are its i-th row and j-th column, respectively. Let (X, y) ∈ R n×(p+1) denote the dataset where\nX = [x [•,1] , x [•,2] , ..., x [•,p]\n] is a n × p covariate input matrix and y is a n-length output vector. Let I be a subset of feature indices: I ⊂ {1, 2, ..., p} and its cardinality is denoted by |I|. All possible subsets are referred as I = {I | I ⊂ {1, 2, ..., p}}. In the context of no ambiguity, we drop the square brackets on the vector and simply use x s to denote the feature. X \\s is the input matrix when the feature of interest (denoted as s here) is replaced by an independent variable. 
Let f : R n×p → R n be a predictive model and L : (f (X), y) → R be the loss function. The expected loss and empirical loss are defined as\nL exp = E[L(f (X), y)] and L emp = n i=1 L(f (x [i,•]\n), y i ), respectively." }, { "figure_ref": [ "fig_1" ], "heading": "DEFINITIONS", "publication_ref": [ "b6", "b30", "b22", "b33", "b18", "b6", "b43", "b15", "b2" ], "table_ref": [], "text": "Mask-based Rashomon set Our goal is to explore feature interactions in a set of well-performing models, also known as a Rashomon set (Fisher et al., 2019). For a given pre-specified class of predictive models\nF ⊂ {f | f : X → y}, the Rashomon set is defined as R(ϵ, f * , F) = {f ∈ F | E[L(f (X), y)] ≤ E[L(f * (X), y)] + ϵ},(1)\nwhere f * is a trained model (or reference model), and ϵ > 0 is a pre-defined acceptable loss tolerance. In view of the existence of equivalent simple and complex models discussed in Semenova et al. (2022), one way to characterize a model is to concatenate a layer, denoted as\nm ∈ R, into a backbone model. That is, ∀f ∈ F with E[L(f (X), y)], ∃m s.t. E[L(f • m(X), y)] ≤ E[L(f * (X), y)] + ϵ.\n(2) With this \"input pre-processing\" layer, we refer to the resulting Rashonmon set as the Mask-based Rashomon set and claim that there always exists an alternative for any model in the Rashomon set, logically expressed as:\n∀f ∈ F (E[L(f (X), y)] → (∃m (E[L(f * • m(X), y)] ≃ E[L(f (X), y)]).\nSee Appendix A for the detailed natural deduction.\nFeature interaction score (FIS) The common way to quantify feature interaction based on the statistical non-additivity Lou et al. (2013) is to measure the performance change of feature importance from the baseline (Sundararajan et al., 2017;Janizek et al., 2021). The performance change can be decomposed into main feature effects and feature interaction effects. Several feature attribution methods have identified the necessity of a baseline value representing a lack of information when generating explanations. We refer to individual feature effects which do not interact with other features as the main effect. Inspired by model reliance and similar methods Fisher et al. (2019); Zhu et al. (2015); Gregorutti et al. (2017) that measure the change in the loss by replacing the variable of interest with a new random independent variable, we denote by\nφ i (f ) = E[L(f (X \\i ), y)] -E[L(f (X), y)] and φ I (f ) = E[L(f (X \\I ), y)] -E[L(f (X), y)]\nas the main effect and joint effect respectively. Herein, E[L(f (X), y)] is the baseline effect that provides interpretability. In practice, we usually permute the features of interest multiple times to achieve a similar measurement (Datta et al., 2016).\nWith this in mind, the FIS is defined as the difference between the loss change of replacing multiple features simultaneously and the sum of the loss change of replacing multiple features individually:\nF IS I (f ) = φ I (f ) - i∈I φ i (f ).\n(3)\nFeature interaction score cloud (FISC) We can generate the set of FISs as:\nF IS I (f ) = {F IS I1 (f ), F IS I2 (f ), ..., F IS I 2 p (f )}.\nThus, the FIS of certain feature set I in the Rashomon set R(ϵ, f * , F) is given by F ISC I (R) = {F IS I (f ) : f ∈ R}, illustrated in Fig. 1 (b). We refer to the set of FISs with respect to a feature set in a model class as FISC. This cloud of scores has the range\n[F ISC I (R) min , F ISC I (R) max ] = [min f ∈R F IS I (f ), max f ∈R F IS I (f )]. (4) The complete set of FISs in a Rashomon set is F ISC I (R) = {F IS I (f ) : f ∈ R}." 
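The quantities in Eq. 3 can be estimated directly by permutation, in the spirit of model reliance. The sketch below is not tied to any released code: it assumes `model` is a callable returning predictions on a NumPy matrix and `loss_fn` returns a scalar expected loss such as RMSE, and each feature in I is permuted independently as the empirical stand-in for replacement by an independent variable.

```python
import numpy as np

def perm_loss(model, loss_fn, X, y, idx, n_repeats=10, seed=0):
    """Average loss after independently permuting the columns listed in idx."""
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_repeats):
        Xp = X.copy()
        for i in idx:
            Xp[:, i] = rng.permutation(Xp[:, i])
        losses.append(loss_fn(model(Xp), y))
    return float(np.mean(losses))

def feature_interaction_score(model, loss_fn, X, y, I, n_repeats=10):
    """FIS_I(f) = phi_I(f) - sum_{i in I} phi_i(f)  (Eq. 3), estimated by permutation."""
    base = loss_fn(model(X), y)                              # baseline effect E[L(f(X), y)]
    phi_I = perm_loss(model, loss_fn, X, y, list(I), n_repeats) - base
    phi_i = [perm_loss(model, loss_fn, X, y, [i], n_repeats) - base for i in I]
    return phi_I - sum(phi_i)
```

Sweeping this estimator over the models sampled from the Rashomon set (Section 3.2) then yields the cloud FISC_I(R) and its range in Eq. 4.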
}, { "figure_ref": [], "heading": "RELATION TO EXISTING FEATURE INTERACTION ATTRIBUTION APPROACH", "publication_ref": [], "table_ref": [], "text": "Since FISC focuses on explaining feature interactions across a set of models rather than a single model, existing explainable methods for a single model should be viewed as a special case of our framework. We demonstrate that the state-of-the-art (SOTA) ArchDetect method Tsang et al. (2020b) can be derived from our approach; see Appendix C." }, { "figure_ref": [], "heading": "COMPUTATIONAL CHALLENGES OF FISC", "publication_ref": [ "b30", "b17", "b6", "b3" ], "table_ref": [], "text": "The first challenge is establishing the theoretical upper and lower bound of FISC for non-linear models characterized by uncertain, potentially non-convex loss functions. The second challenge arises from the NP-hard nature of searching the Rashomon set defined by hypothesis space Semenova et al. (2022); Hsu & Calmon (2022). We first show that it is possible to find the boundary of FISC under certain assumptions and conditions in Sec. 3.1, aligning with previous work in this area (Fisher et al., 2019;Dong & Rudin, 2020). Then we present a sampling algorithm for more general scenarios in Sec. 3.2, where model structures and loss functions might be uncertain. This approach enables us to empirically approximate the FISC." }, { "figure_ref": [], "heading": "AN EXAMPLE APPLICATION OF FISC IN MULTILAYER PERCEPTRON (MLP)", "publication_ref": [], "table_ref": [], "text": "Here, we consider a non-trivial MLP with a sigmoid activation function as an example. Let f : X → y be a predictive model defined as\nf (X) = α T 1 1 + e -β T X + b.\n(5)\nThe expected RMSE loss is\nE[L(f (X), y)] = E[ (y -f (X)) 2 ] that is mathematically equiv- alent to E[y -f (X)] or E[f (X) -y].\nFor simplicity, we consider the case E[y -f (X)] to derive the condition on the mask m to characterize the Rashomon set. Without loss of generality, we assume α > 0 and a scaling on ϵ by α. We further introduce an assumption2 that\nϵ ≤ min{E[ 1 1+e -m T •β T X ], E[ e -m T •β T X 1+e -m T •β T X ]}.\nInvoking Eq. 2, this assumption allows us to derive the following inequality\nE ln 1 -ϵ -ϵe -β T X e -β T X + ϵ + ϵe -β T X ≤ E[(β T X)m T ] ≤ E ln 1 + ϵ + ϵe -β T X e -β T X -ϵ -ϵe -β T X , (6)\nwhich gives the condition on m that characterises the mask-based Rashomon set. We refer to Appendix D for its derivation." }, { "figure_ref": [], "heading": "GREEDY SEARCH FOR EMPIRICAL FISC LOWER AND UPPER BOUNDS", "publication_ref": [], "table_ref": [], "text": "Unfortunately, due to the nonlinear nature of FIS, it is difficult to get a closed-form expression of the exact hypothetical bounds in general, especially when f and L functions vary. Here we propose a general way to sample from the Rashomon set and approximate the range of FISs. The RMSE loss function and a pairwise feature interaction I = {i, j} are chosen to illustrate the idea, as RMSE will show the connection between our method and ArchDetect in Archipelago." }, { "figure_ref": [], "heading": "SAMPLING FROM A MASK-BASED RASHOMON SET", "publication_ref": [], "table_ref": [], "text": "Instead of searching for all possible models, we sample models from the mask-based Rashomon set by concatenating a multi-valued mask\nm = {m 1 , m 2 , • • • , m p } into a backbone f , resulting in a new model M m = f • m. 
We can sample the Rashomon set by exploring different m such that R(ϵ, f, F) = {M m ∈ F | E[L(M m (X), y)] ≤ E[L(f (X), y)] + ϵ}.(7)\nAll models are characterized by m and only masks that potentially affect the FIS are of interest.\nRedundant models such as the composition of identity functions can be ignored to save resources.\nWe illustrate this through a one-layer mask, denoted by m I with the component of feature x i :\n(m I ) i = m i i ∈ I, 1 i / ∈ I.(8)\nThe FIS defined in Eq. 3 with the above setting can be rewritten as:\nF IS I (M m I ) = φ I (M m I ) - i∈I φ i (M m I ).(9)\nIn practice, instead of calculating all possible I, we can calculate all main effects in the feature space {M mi ∈ F | φ i (M mi )} p i=1 first and then inversely calculate any order of interaction by decomposing m I = i∈I m i ." }, { "figure_ref": [], "heading": "GREEDY SEARCH ALGORITHM", "publication_ref": [], "table_ref": [], "text": "To search all possible (m i ) p i=1 for features' main effects within the Rashomon set, we propose a greedy search algorithm. For each feature's main effect φ i (M mi ), we set two p-length vectors of ones m i+ and m i-for upper bound search and for lower bound search, as both actions can increase loss values if the model f is well-trained. During the search process, we set a uniform learning rate to regulate the quantity of models generated for all dimensions. Higher learning rates lead to a reduced number of models. The searching process continues until the loss condition is not satisfied M mi / ∈ F. This condition is imposed on the loss difference, denoted as\nϕ i = E[L(M mi (X), y)] -E[L(f (X), y)].\nHere, each mask corresponds to a concatenated model, resulting in an associated loss value. Any mask m i during training meets M mi ∈ F and is one of the target models for feature x i in R(ϵ, f * , F). Finally, we can obtain a model class with main effects\n{M mi ∈ F | φ i (M mi )} p i=1\n, and calculate any order of feature interaction scores:\nF IS I (R) = {F IS I (M m I ) : M m I ∈ R}\nwith Eq. 9; see the pseudocode in Algorithm 1." }, { "figure_ref": [], "heading": "THE INFLUENCE OF JOINT EFFECTS ON THE BOUNDARY OF THE RASHOMON SET", "publication_ref": [], "table_ref": [], "text": "When searching for all main effects of features in the Rashomon set through the loss difference, we isolate the feature of interest and record the loss difference to ensure the Rashomon set condition. In theory, the joint effects of features should be the sum of main effects of features when there is no feature interaction. If the sum of the main effects is less than ϵ, then the joint effect would also be less than ϵ. However, in reality, the presence of feature interaction causes the joint effects to surpass the boundary. This motivates us to study the influences of interaction and the extent to which it varies.\nFigure 2: Exploring feature interaction of function f = x i + x j + x i * x j and f = x i +x j +x k +x i * x j * x k in Rashomon set. The data points are sampled from both functions. Exploring ϕ i and ϕ j separately enables us to draw red circles representing i∈I ϕ i = ϵ, where ϵ is the radius, e.g. ϵ = 0.1 can be potentially expressed by ϕ i = 0.01 and ϕ j = 0.09; a detailed example is provided in Appendix F. From inner to outer, the radii are 0.1, 0.2, and 0.3, respectively and the corresponding joint effects ε can be calculated by ϕ i,j and ϕ i,j,k , from which we plot the blue curves. This can be achieved in 2D (left) and higher dimensions, e.g. 3D (right)." 
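To make the greedy search of the previous subsection concrete, a minimal single-feature version of the procedure (cf. Algorithm 1 below) can be written as follows. Masking is implemented here as element-wise multiplication of the input by m; the step-size schedule, the stopping counter, the step cap and the function names are assumptions consistent with the description above rather than the exact released code.

```python
import numpy as np

def greedy_mask_search(f, loss_fn, X, y, i, eps, lr=0.1, sign=+1,
                       max_decays=4, max_steps=10_000):
    """Greedily grow (sign=+1) or shrink (sign=-1) the mask entry of feature i
    while the masked model stays inside the Rashomon set (loss increase <= eps)."""
    loss_ref = loss_fn(f(X), y)
    m = np.ones(X.shape[1])
    accepted = []                                # each accepted mask is one model M_m in R
    decays = steps = 0
    while decays < max_decays and steps < max_steps:
        steps += 1
        m_try = m.copy()
        m_try[i] += sign * lr
        phi = loss_fn(f(X * m_try), y) - loss_ref
        if phi <= eps:
            m = m_try
            accepted.append((m.copy(), phi))     # keep the mask and its main-effect loss change
        else:
            lr *= 0.1                            # boundary crossed: decay the step and retry
            decays += 1
    return accepted
```

Running this once per feature gives the main-effect models {M_{m_i}}, from which interaction scores of any order follow via Eq. 9 without re-searching.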
}, { "figure_ref": [ "fig_13" ], "heading": "Algorithm 1 Greedy search algorithm", "publication_ref": [], "table_ref": [], "text": "Require: Halo plot A Halo plot fixes the sum of main effects within the boundary of the Rashomon set and visualizes the change in the joint effect. The fixed loss change is formulated as: i∈I ϕ i (M mi , f ) = t, where t ∈ (0, ϵ] to ensure the loss condition. Then we map an n-sphere with radius t and plot corresponding empirical loss ϕ i,j (M mi,j , f ) : E[L(M mi,j (X), y)] -E[L(f (X), y)], from which we can directly visualize how the joint effect varies and affects the Rashomon set boundary. Any order of interaction Halo plot can be similarly derived and two simple examples are used to illustrate the idea when |I| = 2 (2D) and |I| = 3 (3D) in Fig. 10. For higher order interaction visualizations, we discussed in Appendix E.\nlearning rate > 0 ∧ ϵ ≥ 0 Ensure: L(f • m(X), y) ≤ L(f * (X), y) + ϵ m+ = 1 p i=1 , m-= 1 p i=1 , counter = 0 loss ref ⇐ L(f * (X), y) while counter ≤ 4 do if searching for the upper bound of m then m ′ + ⇐ Copy(m+) m ′ +[s] ⇐ m ′ +[s] + learning rate ϕs ⇐ L(f * • m ′ +(X s), y) -loss ref if ϕs ≤ ϵ then m+ ⇐ m ′ + else learning rate ⇐ learning rate × 0.1 counter ⇐ counter + 1 else searching for the lower bound of m m ′ -⇐ Copy(m-) m ′ -[s] ⇐ m ′ -[s] -learning rate ϕs ⇐ L(f * • m ′ -(X s), y) -loss ref if ϕs ≤ ϵ then m-⇐ m ′ - else learning rate ⇐ learning rate × 0.1 counter ⇐ counter + 1\nComputational time Our framework effectively addresses the computational expense associated with calculating interactions, eliminating the need to calculate all pairs, especially in scenarios involving higher-order interactions. By performing a single-time search for all main effects and subsequently calculating the desired interactions, the proposed algorithm significantly reduces computational complexity. Specifically, a subset of FISC range [F ISC I (R) min , F ISC I (R) max ] bounded by t can be approximated using representative points on i∈I ϕ i (M mi , f ) = t. Collecting ordered pairs S = {(ϕ i x, ϕ j y) | (x, y) ∈ R, 0.1 ≤ x, y ≤ 0.9, x+y = 1} is sufficient to characterize pairwise interactions, which requires |S| × 2 2 calculations. Similarly, any order of interaction requires |S| × 2 |I| model predictions." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In order to showcase the practical value of FISC, we have set two primary objectives and they are: (a) to demonstrate the concept of FISC using a well-trained model that overlooks true feature interactions, and (b) to examine the impact of feature interactions and explore the variation in FIS on accurate model predictions from our greedy sampling method. In the experiments below, we identify the main feature effects within the Rashomon set and calculate the FIS and FISC using representative models, followed by visualizations using Halo and swarm plots. Noted that the meaning of axis in Halo plots refers to Sec. 3.2.3." }, { "figure_ref": [], "heading": "QUANTITATIVE COMPARISON AND EXTENSION", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3", "fig_5" ], "heading": "Synthetic Validation", "publication_ref": [], "table_ref": [ "tab_0", "tab_0", "tab_1" ], "text": "In order to validate the performance of FIS, we use SOTA Tsang et al. (2020b) to generate ground truth feature interactions and apply our framework to the synthetic functions in Table 1 in the same context. 
The interaction detection area under the curve (AUC) on these functions from different baseline methods is provided in Tables 1 and2. As the relation to ArchDetect provided in Appendix C, our FIS is designed under certain conditions to detect the interactions of their proposed contexts and the results are as expected. More details can be found in the original paper (Tsang et al., 2020b).\nExploring FISC from an accurate MLP relying on \"inaccurate\" interactions The above accuracy is based on the knowledge of given functions, but in practice, we usually have a dataset with unknown functional relationships. This prompts us to consider how we account for the performance of a well-trained model that achieves high accuracy in making predictions, yet fails to preserve ground truth interactions with FISC. In this study, we conduct a general black box explanation task. Specifically, we utilize a dataset generated by a model F 4 (assuming unknown to us) and train an MLP using the same neural interaction detection (NID) setting. \n* = [1, 1, 1..., 1] ∈ R 40 , x ′ = [-1, -1, -1..., -1] ∈ R 40 and z[i] =\nx i is a key-value pair function. (x; z) is defined as 1 if index of x belongs to any keys of function z, otherwise -1. Based on the ranking metrics AUC of the receiver operating characteristic curve (ROC), the MLP achieves an interaction detection accuracy of 56%. The trained MLP is set as the reference model and we explore FISC in the Rashomon set with t = [0.2ϵ, 0.4ϵ, 0.6ϵ, 0.8ϵ, ϵ] respectively, to observe that the range of each pairwise feature interaction exceeds the threshold extracted above; see Fig. 3 and Fig. 5(a). This provides insights into the role of FISC in bridging the gap between unknown ground truth function f and a well-trained model that fails to preserve true feature interactions. The potential loss of feature interaction preservation in a model can be mitigated by other models in the Rashomon set. In this case, one can conclude that there always exists a model in the Rashomon set in which certain features interact.\nF 1 (x) = 10 i=1 10 j=1 x i x j + 20 i=11 30 j=21 x i x j + 40 k=1 x k F 2 (x) = (x; {x * i } 20 i=1 ) + (x; {x * i } 30 i=11 ) + 40 j=1 x j F 3 (x) = (x; {x ′ i } 20 i=1 ) + (x; {x * i } 30 i=11 ) + 40 j=1 x j F 4 (x) = (x; {x * 1 , x * 2 } ∪ x ′ 3 ) + (x; {x * i } 30 i=11 ) + 40 j=1 x j" }, { "figure_ref": [], "heading": "RECIDIVISM PREDICTION", "publication_ref": [ "b8", "b27", "b6", "b3" ], "table_ref": [], "text": "The use of the COMPAS score, as reported by the ProPublica news organization, has been instrumental in examining its applicability in proprietary recidivism prediction, as evidenced by previous studies (Flores et al., 2016;Rudin et al., 2020). However, it has been demonstrated that interpreting a pre-specified model has its limitations, as exemplified by the Rashomon set (Fisher et al., 2019;Dong & Rudin, 2020). While the Rashomon set has been studied, the significance of feature interactions within the set has not yet been explored." }, { "figure_ref": [ "fig_4" ], "heading": "COMPARISON WITH EXISTING RELEVANT LITERATURE", "publication_ref": [ "b3", "b41", "b3", "b3" ], "table_ref": [ "tab_2" ], "text": "Feature importance comparison-variable importance cloud (VIC) Given the absence of an identical task in existing literature, we adopt the experiment settings utilized in prior research Dong & Rudin (2020) that aligns with our research focus. 
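For the synthetic comparison above, the AUC numbers come from ranking candidate feature pairs by interaction strength against the ground-truth interacting pairs. Below is a small sketch of that evaluation step, assuming a `score_fn(i, j)` that returns |FIS| for a pair and using scikit-learn's `roc_auc_score`; the toy ground-truth pattern loosely mimics F1 and the 0-based indices are purely illustrative.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import roc_auc_score

def pairwise_detection_auc(score_fn, ground_truth_pairs, p):
    """Rank all feature pairs by interaction strength and score them against the
    ground-truth set of interacting pairs with ROC AUC."""
    pairs = list(combinations(range(p), 2))
    scores = np.array([abs(score_fn(i, j)) for (i, j) in pairs])
    labels = np.array([1 if (i, j) in ground_truth_pairs else 0 for (i, j) in pairs])
    return roc_auc_score(labels, scores)

# Example with a noisy toy scorer on an F1-style interaction pattern (indices illustrative).
if __name__ == "__main__":
    p = 40
    truth = {(i, j) for i in range(10) for j in range(10) if i < j}
    truth |= {(i, j) for i in range(10, 20) for j in range(20, 30)}
    rng = np.random.default_rng(0)
    noisy_scorer = lambda i, j: float((i, j) in truth) + 0.1 * rng.normal()
    print(pairwise_detection_auc(noisy_scorer, truth, p))
```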
We use the dataset of 7,214 defendants in Broward County, Florida Washington (2018), the same logistic model, and set ϵ = 0.05. To guarantee a fair and unbiased comparison, we explore the Rashomon set using our greedy search algorithm and evaluate the feature importance (main effect in our definition) using model class reliance, which is the range of feature importance values in the Rashomon set. The resulting outcomes are presented in Table 3 and it demonstrates the promising range of feature importance in a general case. Although the feature importance is not the focus of our study, it's still worthy of further work. Dong Dong & Rudin (2020) proposed the information compensation structure and stated that information loss in \"prior\" was most efficiently compensated by 'juvenile', but the underlying reason was missing. We find that VIC can not capture the higher-order information compensation structure. Going beyond the literature The logistic model from Dong & Rudin (2020) is not optimal so we train and improve the model as in Appendix G. We apply FISC to this model with settings t = [0.2ϵ, 0.4ϵ, 0.6ϵ, 0.8ϵ, ϵ], and the results are shown in Fig. 5(b). From the Fig. 5(b) we observe that the interaction of 'prior' and 'juvenile' or 'prior' and 'charge' can be extended to a similar extent. However, the compensation structure proposed by VIC fails to acknowledge the potential of the latter interaction. Additionally, we provide confirmation and explanation for their statement, as the latter case necessitates a greater loss change (indicated by a darker red color) compared to the former, thereby facilitating easier compensation. We present the pairwise interaction and a triplewise interaction in Fig. 4, allowing us to observe the reliance on higher-order interactions. For instance, we observe that the decrease in reliance on the feature interaction 'prior' and 'juvenile' (x-y plane) can be compensated by the interaction between 'juvenile' and 'charge' (y-z plane) or 'prior' and 'charge' (x-z plane). Likewise, the color scheme depicted in the figure conveys equivalent information." }, { "figure_ref": [ "fig_6", "fig_6", "fig_6", "fig_6" ], "heading": "FEATURE INTERACTION IN IMAGE CLASSIFICATION WITH FISC", "publication_ref": [ "b28", "b4", "b39" ], "table_ref": [], "text": "Over the past decade, large and pre-trained transformer architectures have achieved SOTA performance on a wide variety of tasks (Sarzynska-Wawer et al., 2021;Dosovitskiy et al., 2021). Here we aim to study how a large-scale model relies on feature interactions and how reasonably accurate predictions can differ in terms of their variances in FIS. We adopt the pre-trained SOTA vision transformer ViT-B-16 developed in Dosovitskiy et al. ( 2021 The image is segmented as features by the quickshift method Vedaldi & Soatto (2008), following the protocol Tsang et al. (2020b) shown in Fig. 6(a). We use FISC to explain the interaction effects. The top 5 important segmentations are coloured blue in Fig. 6(b), where importance increases according to colour lightness. The most important part of the image is the forehead of the dog. A similar observation is reported in (Tsang et al., 2020b). Considering the joint effect resulting in the dog classification, the top 5 crucial pairwise interactions are given in Fig. 6(c). Notably, the lightest component interacts with other components more than once. 
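The segment-level analysis just described can be sketched as follows: quickshift superpixels from scikit-image define the "features", and removing a segment (here, filling it with the per-channel image mean) yields main effects and Eq. 9-style pairwise scores from the drop in the target-class probability. The `predict_proba` callable, the baseline fill, and the quickshift parameters are assumptions of this sketch — a pretrained ViT would additionally need its own preprocessing — and the Rashomon-set sweep would repeat this for each sampled model.

```python
import numpy as np
from itertools import combinations
from skimage.segmentation import quickshift

def mask_segments(image, segments, seg_ids):
    """Replace the given superpixels with the per-channel image mean (one possible baseline)."""
    out = image.astype(float).copy()
    fill = out.mean(axis=(0, 1))
    for s in seg_ids:
        out[segments == s] = fill
    return out.astype(image.dtype)

def segment_interactions(predict_proba, image, target_class, top_k=5):
    """Main effect of each superpixel and an FIS-style score for each superpixel pair,
    measured as the drop in the target-class probability when segments are removed."""
    segments = quickshift(image, kernel_size=4, max_dist=200, ratio=0.2)  # illustrative parameters
    ids = np.unique(segments)
    base = predict_proba(image[None])[0, target_class]
    main = {s: base - predict_proba(mask_segments(image, segments, [s])[None])[0, target_class]
            for s in ids}
    inter = {}
    for i, j in combinations(ids, 2):                 # quadratic in the number of superpixels
        joint = base - predict_proba(mask_segments(image, segments, [i, j])[None])[0, target_class]
        inter[(i, j)] = joint - main[i] - main[j]     # segment-level analogue of Eq. 9
    top_pairs = sorted(inter, key=lambda k: abs(inter[k]), reverse=True)[:top_k]
    return segments, main, top_pairs
```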
Results shows that the model relies heavily on single forehead segmentation to predict a dog, but this segmentation is disappearing in the top 5 features for predicting a dog relying on interactions. If our objective is to classify the image as a cat, our findings suggest in Fig. 6(d) that the model would rely on both the dog and cat segmentations." }, { "figure_ref": [], "heading": "DISCUSSION", "publication_ref": [], "table_ref": [], "text": "The motivation behind the development of FIS and FISC is to explain feature interaction in a model class, rather than focusing on a specific model. In this study, we highlight that feature interaction can vary significantly across models with similar performance. This work initializes the exploration of feature interaction in a Rashomon set. By leveraging the FISC, we approach the true feature interaction from a set of well-trained models. The variation of FIS allows for the explanation of diverse models that rely on distinct feature interactions and this property holds significant potential for selecting models that exhibit the desired feature interaction dependency, illustrated in synthetic validation, recidivism prediction and image classification with promising results. Further exploration of feature interaction with other ways of characterizing the Rashomon set is subject to future work. Additionally, we have developed a visualization tool that effectively illustrates the joint effect on the Rashomon set exploration and we adopt swarm plots to depict the variability of FISs in a model class. These tools has the potential to inspire future investigations in various domains. Our code is readily available and will be made publicly accessible upon acceptance.\n∀f ∈ F (E[L(f (X), y)] → (∃m (E[L(f * • m(X), y)] ≃ E[L(f (X), y)]\n) with details below: .\n1 ∀f ∈ F (E[L(f (X), y)] → E[L(f (X), y)] ≤ EL(f * (X), y) + ϵ) 2 ∀f ∈ F (E[L(f (X), y)] → ∃m (E[L(f • m(X), y)] ≤ E[L(f (X), y)])) 3 f ′ (E[L(f ′ (X), y)] → E[L(f ′ (X), y)] ≤ E[L(f * (X), y)] + ϵ) 4 (E[L(f ′ (X), y)] → ∃m (E[L(f ′ • m(X), y)] ≤ E[L(f ′ (X), y)])) 5 E[L(f ′ (X), y)] 6 E[L(f ′ (X), y)] ≤ E[L(f * (X), y)] + ϵ) →-E, 3, 5 7 ∃m (E[L(f * • m(X), y)] ≤ E[L(f * (X), y)]) →-E, 4, 5 8 m ′ E[L(f * • m ′ (X), y)] ≃ E[L(f * (X), y)] 9 E[L(f ′ (X), y)] ≤ E[L(f * • m ′ (X), y)] + ϵ R, 6, 8 10 E[L(f ′ (X), y)] ≤ E[L(f ′ (X), y)] + ϵ Logic, 6, 8 11 ∃m E[L(f * • m(X), y)] ≃ EL(f ′ (X), y) ∃-I, 9, 10 12 ∃m (E[L(f * • m(X), y)] ≃ EL(f ′ (X), y) ∃-E, 7, 8-11 13 E[L(f ′ (X), y)] → ∃m (EL(f * • m(X), y) ≃ E[L(f ′ (X), y)] →-I, 5-12 14 ∀f ∈ F (E[L(f (X), y)] → (∃m (E[L(f * • m(X), y)] ≃ E[L(f (X), y)]) ∀-I,\nGiven the function, we randomly sampled 12,000 data points from the uniform distribution a, b, c ∼ U(0, 1) with fixed seed as input and calculated outputs accordingly. It's noted that the outputs might be complex numbers. The train/test/validation set is split as 0.8/0.1/0.1. The regression problem can be fit using different types of models. Here we applied a traditional artificial neural network and achieved mse 0.3353 in the training set. " }, { "figure_ref": [], "heading": "B.2 GROUND TRUTH INTERACTIONS", "publication_ref": [], "table_ref": [], "text": "The given function\nf * = -b± √ b 2 -4ac 2a\ncan be seen as an ideal model with E[L(f * (X))] = 0, which corresponds to a true feature interaction. 
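A compact sketch of the quadratic-root study just described: sample a, b, c ~ U(0, 1), treat the closed-form root as the ideal model f*, and compute main effects and the a-b interaction in the spirit of Eq. 10. Taking the real part of complex roots and "removing" features by mean imputation are choices made here only to keep the example runnable, not choices stated in the appendix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12_000
A = rng.uniform(size=(n, 3))                      # columns: a, b, c ~ U(0, 1)

def f_star(Z):
    """Closed-form root x = (-b + sqrt(b^2 - 4ac)) / (2a); real part taken for complex roots."""
    a, b, c = Z[:, 0], Z[:, 1], Z[:, 2]
    disc = (b ** 2 - 4 * a * c).astype(complex)
    return np.real((-b + np.sqrt(disc)) / (2 * a))

y = f_star(A)

def loss_without(model, Z, y, cols):
    """Empirical squared-error loss with the given columns replaced by their means."""
    Zm = Z.copy()
    Zm[:, cols] = Z[:, cols].mean(axis=0)
    return np.mean((model(Zm) - y) ** 2)

base = np.mean((f_star(A) - y) ** 2)              # zero, since f* reproduces y exactly
phi = {i: loss_without(f_star, A, y, [i]) - base for i in range(3)}
fis_ab = loss_without(f_star, A, y, [0, 1]) - base - phi[0] - phi[1]   # Eq. 10 analogue
print(phi, fis_ab)
```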
The expected FIS of a, b can be calculated as follows:\nF IS a,b (f * ) =φ a,b (f * ) -(φ a (f * ) + φ b (f * )) (10) =E[L(f * (X \\{a,b} ))] -E[L(f * (X \\{a} ))] -E[L(f * (X \\{b} ))] ≃ n i=1 (f * (X \\{a,b} ) -y) 2 - n i=1 (f * (X \\{a} ) -y) 2 - n i=1 (f * (X \\{b} ) -y) 2\nWhere n is the number of samples and interaction between b, c and a, c can be calculated in the same way. The interaction values are absolute to indicate the strength. The interaction values are absolute to indicate the strength, summarized in the Table. 5.\nTable 5: Ground truth of feature importance and interaction\nφ a (f * ) φ b (f * ) φ c (f * ) φ a,b (f * ) φ a,c (f * ) φ b,c (f * )\n116.4602 24.2023 0.2924 21.2866 1.3813 0.5945" }, { "figure_ref": [ "fig_9" ], "heading": "B.3 APPROXIMATED FISC", "publication_ref": [], "table_ref": [], "text": "Now, assume that we do not have an optimal model in the real-world and our objective is to train a model to fit the data, the regression problem is fitted using a simple neural network and achieved MSE=0.33 in the test set. The learning curve and prediction versus ground truth are illustrated in Fig. 7. After training the model, we applied the FISC to it and plotted the swarm plots, which are provided in Fig. 8. Each point in the swarm plot represents an FIS value from a specific model.\nwhen we consider the leftmost point among the feature pair a, b, it implies that there exists a model with a lower FIS for a, b compared to a, c. This means that there was a model trained that happened to have a lower FIS for a, b than b, c using the same dataset with promising loss. This observation contradicts the truth interaction calculated previously. However, through the analysis of FISC, we found that the range of FIS values for each feature pair covers the truth interaction. By examining the statistics of the results, we can approach true feature interactions, which will be the focus of our further work." }, { "figure_ref": [], "heading": "C RELATION TO AR C HDE T E C T", "publication_ref": [], "table_ref": [], "text": "The framework ArchDetectTsang et al. (2020b) uses δ hihj to detect the interaction existence δ defined as\nδ = f (x * {i,j} + x \\{i,j} ) + f (x ′ {i,j} + x \\{i,j} ) -f (x ′ {i} + x * {j} + x \\{i,j} ) -f (x ′ {j} + x * {i} + x \\{i,j} )(11)\nbased on mixed partial derivatives, where f is a black box model, x * represents the sample we want to explain and x ′ represents a neutral reference sample, which is often zero vectors. If δ > 0, then interaction exists, otherwise not. The author claimed 100% accuracy in a synthetic dataset and defined the feature interaction strength as the square of δ hihj . Here we show that for any model M m in the model class, the FIS value is equivalent to δ under conditions that the loss is set as RMSE. In our setting, for any model M m , the expected RMSE loss is\nE[L(M m I (X), y)] = E[ (y -M m I (X)) 2 ].\nFor simplicity, we consider E[y-M m I (X)] (meaning assuming the model approximate the true solutions from either below or above). For I = {i, j} and ∀M mij ∈ R(ϵ, g, F), following the definition, we can derive\nF IS I (M m I ) = (E[M mi,j (X \\{i,j} )] + E[M mi,j (X)] -E[M mij (X \\{i} )] -E[M mi,j (X \\{j} )]),(12)\nwhich is mathematically equivalent to (11) in view of the decomposition of the interaction into four terms. " }, { "figure_ref": [], "heading": "D DERIVATION OF THE", "publication_ref": [], "table_ref": [], "text": "E[L(M m (X))] ≤ E[L(f (X))] + ϵ. (13\n)\nAfter applying the above settings, we can rewrite Eq. 
13 as:\nE[y -α T 1 1 + e -β T X + b] -ϵ ≤ E[y -α T 1 1 + e -m T •β T X + b] ≤ E[y -α T 1 1 + e -β T X + b] + ϵ.\nWithout loss of generality, we assume α > 0 and a scaling on ϵ by α. This can be simplified to\nE[ 1 1 + e -β T X ] -ϵ ≤ E[ 1 1 + e -m T •β T X ] ≤ E[ 1 1 + e -β T X ] + ϵ.(14)\nWith further assumption3 that ϵ ≤ min{E[\n1 1+e -m T •β T X ], E[ e -m T •β T X 1+e -m T •β T X ]}, this inequality is re- duced to E ln 1 -ϵ -ϵe -β T X e -β T X + ϵ + ϵe -β T X ≤ E[(β T X)m T ] ≤ E ln 1 + ϵ + ϵe -β T X e -β T X -ϵ -ϵe -β T X , (15)\nwhich gives the condition on m that characterises the mask-based Rashomon set." }, { "figure_ref": [], "heading": "D.0.2 CALCULATING THE EXPECTED FISC LOWER AND UPPER BOUNDS", "publication_ref": [], "table_ref": [], "text": "With the above setting, we calculate the FIS: \nF IS I (M m ) = E[y -(α T 1 1 + e -m T •β T X \\i,j + b)] -E[y -(α T 1 1 + e -m T •β T X + b)]- (E[y -(α T 1 1 + e -m T •β T X \\i + b)] -E[y -(α T 1 1 + e -m T •β T X + b)]+ E[y -(α T 1 1 + e -m T •β T X \\j + b)] -E[y -(α T 1 1 + e -m T •β T X + b)]). (16)\nm 1 = 1 β T X ln 1 -ϵ -ϵe -β T X e -β T X + ϵ + ϵe -β T X , m 2 = 1 β T X ln 1 + ϵ + ϵe -β T X e -β T X -ϵ -ϵe -β T X . (17)\nNow, one can denote\nF IS min := inf m F IS I (M m ) = min{F IS I (M m1 ), F IS I (M m2 ), F IS I (M m⋆ )}, F IS max := sup m F IS I (M m ) = max{F IS I (M m1 ), F IS I (M m2 ), F IS I (M m⋆ )}. (18\n)\nThe range of FISC is then characterized as\n[F ISC I (R) min , F ISC I (R) max ] = [F IS min , F IS max ].\nRemark. The exact critical point m = m ⋆ such that F IS I (Mm) ∂m = 0 is difficult to obtain as it requires solutions involving a polynomial of degree seven and exponential/logarithmic functions. However, this can be approximately by root-finding algorithms such as Newton's method. Another approximation is to use a first-order Taylor expansion of F IS I (M m ) at m = 1. The analytical expression is still extremely complicated, posing difficulties in finding extreme values of FIS, so we obtain this critical point by a root-finding algorithm. We present a generic method in Sec. 3.2." }, { "figure_ref": [ "fig_12" ], "heading": "E HIGHER ORDER INTERACTION VISUALIZATION DISCUSSION", "publication_ref": [ "b0", "b3" ], "table_ref": [], "text": "In our paper, we provide visualizations of pairwise interaction in halo plot and swarm plot, and triplewise interaction in halo plot. These two-level interactions are the most studied in the literature and higher-order interactions are sometimes considered redundant in terms of attribution (Bien et al., 2013;Dong & Rudin, 2020). Theoretically, halo plot and swarm plot can visualize any order of interactions and swarm plots can visualize any order of interactions empirically. However, visualizing higher-dimensional spaces (>3) can be challenging due to the limitations of human perception, which makes directly visualizing higher-order interactions using halo plots not feasible.\nWe acknowledge this limitation and offer suggestions for users who insist visualizing higher-order interactions (>3) using halo plots. To visualize higher-order interactions (>3) in halo plots, we can apply some commonly used higher-order visualization methods, e.g., encoding color information as 4th dimension and applying dimensionality reduction. To demonstrate this, we provided an example of visualizing 4-way interactions. The experiment is based on the recidivism prediction in Sec. 4. 
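Returning to the single-hidden-unit sigmoid model of Appendix D, the boundary characterisation of Eq. 14 can also be found numerically: bisect for the smallest and largest scalar mask that keep the masked model inside the ε-band, then evaluate the FIS at those endpoints and at m = 1 as candidates for the range in Eq. 18. This sketch replaces expectations by empirical means, uses absolute error as in the simplified loss, and does not search for the interior critical point m*; all concrete numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, size=(2000, 3))            # non-centred features so the band is informative
beta = np.array([0.8, -0.5, 0.3])
alpha, b, eps = 1.0, 0.0, 0.05
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

pred = lambda m, Z: alpha * sigmoid(m * (Z @ beta)) + b              # scalar mask on beta^T X
y = pred(1.0, X)                                                     # reference model output
gap = lambda m: abs(np.mean(pred(m, X)) - np.mean(pred(1.0, X)))     # empirical version of Eq. 14

def boundary(direction, lo=0.0, hi=5.0, iters=60):
    """Bisection for the mask value at which the eps-band of Eq. 14 becomes tight."""
    a, c = (1.0, hi) if direction > 0 else (lo, 1.0)
    for _ in range(iters):
        mid = 0.5 * (a + c)
        inside = gap(mid) <= eps
        if direction > 0:
            a, c = (mid, c) if inside else (a, mid)   # keep a inside the band
        else:
            a, c = (a, mid) if inside else (mid, c)   # keep c inside the band
    return a if direction > 0 else c

def fis01(m):
    """FIS of features (0, 1) for the masked model, cf. Eq. 16, with mean-imputation removal."""
    loss = lambda Z: np.mean(np.abs(y - pred(m, Z)))
    def drop(cols):
        Z = X.copy(); Z[:, cols] = X[:, cols].mean(axis=0)
        return loss(Z) - loss(X)
    return drop([0, 1]) - drop([0]) - drop([1])

m_lo, m_hi = boundary(-1), boundary(+1)
print(m_lo, m_hi, [fis01(m) for m in (m_lo, 1.0, m_hi)])   # candidate endpoints of the FIS range
```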
On the existence of 3-way interaction visualization, we encoded color as 4th feature, where lighter colors indicate the larger interactions, as shown in Fig. 9. " }, { "figure_ref": [ "fig_13" ], "heading": "F ILLUSTRATION OF HA L O PLOT WITH A CONCRETE EXAMPLE", "publication_ref": [], "table_ref": [], "text": "Following the setting in the main text that function is f = x i + x j + x i * x j and ϵ = 0.1. We choose a scenario when the main effect of feature x i is ϕ i = 0.01 and the main effect of feature x j is ϕ j = 0.09. Given ϕ i = 0.01, we can derive the following with Mean Squared Error (MSE) loss function\nϕ i = L(f • m i (X) -L(f (X)) = (y -(m i x i + x j + m i x i * x j )) 2 -(y -(x i + x j + x i * x j )) 2 = 0.01.(19)\nSimilarly,\nϕ j = L(f • m j (X) -L(f (X)) = (y -(x i + m j x j + x i * m j x j )) 2 -(y -(x i + x j + x i * x j )) 2 = 0.09.(20)\nConsidering variables sampling from the normal distribution x i ∼ N (µ, σ 2 ) ; x j ∼ N (µ, σ 2 ) and y is ground truth from the function. We can determine two possible values for m i , namely 0.95 and 1.05, as well as two possible values for m j , namely 0.85 and 1.15. From these options, we can identify four potential sets that guarantee the sum of main effects to be ϵ = 0.1: (m i , m j ) = (0.95, 0.85), (0.95, 1.15), (1.05, 0.85), (1.05, 1.15). Next, we proceed to compute ϕ i,j = (y -\n(m i x i + m j x j + m i x i * m j x j )) 2 -(y -(x i + x j + x i * x j )\n) 2 using the aforementioned sets. These calculations allow us to generate four values of ε, which can be plotted along a circle with a radius of ϵ. Following the above procedure, we collected 9 ordered pairs {(ϕ i * x, ϕ j * y) | (x, y) ∈ R, 0.1 ≤ x, y ≤ 0.9, x + y = 1} and plotted 36 points, shown in Fig. 10." }, { "figure_ref": [ "fig_14" ], "heading": "G OPTIMIZING THE REGRESSOR FROM VIC", "publication_ref": [], "table_ref": [], "text": "We downloaded the public code from https://zenodo.org/record/4065582# .ZGIHVHZBzHI and extracted model parameters from the logistic model in VIC. We realized that the model can be optimized further and showed the comparison between two models in loss vs individual masks of each feature in Fig. 11." }, { "figure_ref": [], "heading": "H COMPLETE RESULTS FROM THREE SAMPLING METHODS", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Due to the page limitation, here we provided the complete results from three sampling methods: sampling from adversarial weight perturbations (AWP), sampling with different weight initialization seeds and greedy search approach, summarized in Table 6. To provide a more comprehensive understanding of FISC, swarm plots are displayed below. In these plots, each data point represents a FIS derived from the sampled models, without absolute value transformation, as this comparison more directly illustrates the distribution of FISC across different sampling methods." }, { "figure_ref": [], "heading": "I AN APPLICATION IN REAL-WORLD DATASET", "publication_ref": [ "b10", "b11", "b5" ], "table_ref": [], "text": "To illustrate the usage of FISC, we applied our method to MXenes, an early transition metal carbides with two-dimensional (2D) structures exhibiting metallic conductivity and hydrophilicity. These materials are denoted by M n+1 X n T x , with M representing an early transition metal such as Sc or Ti, X signifying either Carbon (C) or Nitrogen (N) and functional groups T (e.g., O, F, OH) for surface termination Gogotsi & Anasori (2019); Gogotsi & Huang (2021). 
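Before moving on, the concrete Halo computation of Appendix F can be reproduced numerically: solve Eq. 19-20 for the two mask values that achieve a target main-effect loss change, then evaluate the joint effect ϕ_{i,j} for every combination of those masks on the circle ϕ_i + ϕ_j = ε. With standard-normal features the solved masks differ from the appendix's illustrative 0.95/1.05 and 0.85/1.15, and the grid-based root finding is only an approximation.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
xi, xj = rng.normal(size=1000), rng.normal(size=1000)
y = xi + xj + xi * xj                                   # values of f(x_i, x_j) = x_i + x_j + x_i*x_j

def phi_i(m):   # loss change when only x_i is masked, cf. Eq. 19 (reference loss is zero here)
    return np.mean((y - (m * xi + xj + m * xi * xj)) ** 2)

def phi_j(m):   # loss change when only x_j is masked, cf. Eq. 20
    return np.mean((y - (xi + m * xj + xi * m * xj)) ** 2)

def solve_masks(phi_fn, target, grid=np.linspace(0.5, 1.5, 2001)):
    """The two mask values (one below 1, one above 1) whose loss change is closest to the target."""
    vals = np.array([phi_fn(m) for m in grid])
    below, above = grid < 1, grid > 1
    return (grid[below][np.argmin(np.abs(vals[below] - target))],
            grid[above][np.argmin(np.abs(vals[above] - target))])

def phi_ij(mi, mj):   # joint loss change when both features are masked
    return np.mean((y - (mi * xi + mj * xj + mi * xi * mj * xj)) ** 2)

eps, t_i = 0.1, 0.01                                    # split eps into phi_i = 0.01 and phi_j = 0.09
mi_pair = solve_masks(phi_i, t_i)
mj_pair = solve_masks(phi_j, eps - t_i)
print([phi_ij(mi, mj) for mi, mj in product(mi_pair, mj_pair)])   # four joint effects on the circle
```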
By intercalating different ions and molecules such as Li+ and K+, MXenes' electrochemical attributes such as voltage, induced charge, and capacity can be modulated, making them a potential battery material.\nWe used a dataset that contains 360 compounds intercalated with Li+, Na+, K+, and Mg2+ ions from Eames & Islam (2014); Li & Barnard (2022b). We represented the dataset by categories; for instance, category Z is encoded by 0-3 to describe Li, Na, K, and Mg, respectively, aiming to discover the interactions between structures. The dataset is split 90/10 for training/testing. We trained 3 MLPs to predict the 3 properties (voltage, induced charge, and capacity) individually. The R² of each reference model on the testing set is reported in Figs. 15, 16, 17. We searched the Rashomon set based on each reference model. The results reveal a noteworthy observation: FIS exhibits a broad range when category Z is involved in interactions during the prediction of induced charge. This aligns with common knowledge, given that the ions forming category Z are the cause of the induced charge. These results offer a more comprehensive understanding of feature interactions within the Rashomon set and can guide researchers in making informed decisions in future research. " }, { "figure_ref": [], "heading": "A UNVEILING THE MASK-BASED SAMPLING RASHOMON SET", "publication_ref": [], "table_ref": [], "text": "Since it is always easier to find an accurate but complex model, where complexity refers to a model with more complex structures, it can safely be claimed that concatenating additional layers into a backbone model increases its complexity; we therefore argue that a model with additional mask layers can achieve similar or better performance. We then apply propositional logic to prove that, for any model in the Rashomon set, we can find an alternative by adding mask layers to the reference model (taken as the backbone model). Our aim can be mathematically represented as ∀f ∈ F (E[L(f(X), y)] → ∃m (E[L(f* • m(X), y)] ≃ E[L(f(X), y)])), with details given in the deduction below. " } ]
Interactions among features are central to understanding the behavior of machine learning models. Recent research has made significant strides in detecting and quantifying feature interactions in single predictive models. However, we argue that the feature interactions extracted from a single pre-specified model may not be trustworthy for two reasons: a well-trained predictive model may not preserve the true feature interactions, and there exist multiple well-performing predictive models that differ in feature interaction strengths. Thus, we recommend exploring feature interaction strengths in a model class of approximately equally accurate predictive models. In this work, we introduce the feature interaction score (FIS) in the context of a Rashomon set, representing a collection of models that achieve similar accuracy on a given task. We propose a general and practical algorithm to calculate the FIS in the model class. We demonstrate the properties of the FIS via synthetic data and draw connections to other areas of statistics. Additionally, we introduce a Halo plot for visualizing the feature interaction variance in high-dimensional space and a swarm plot for analyzing FIS in a Rashomon set. Experiments with recidivism prediction and image classification illustrate how feature interactions can vary dramatically in importance for similarly accurate predictive models. Our results suggest that the proposed FIS can provide valuable insights into the nature of feature interactions in machine learning models.
EXPLORING THE CLOUD OF FEATURE INTERACTION SCORES IN A RASHOMON SET
[ { "figure_caption": "(a) A linear model y = p i wi * xi shows no feature interaction (left); see e.g. Sorokina et al. (2008). A neural network with activation layer y = α( p i wi * xi) shows feature interaction (right). (b) The x-axis shows the FIS for the feature set I in each model f while the y-axis shows the expected loss of the model in the model class. The FISC range is highlighted in blue.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Two accurate models with different feature interactions and an illustration of FIS in a hypothetical Rashomon set R(ϵ, f * , F).", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "(a) x0 and x1 (b) x0 and x2 (c) x0 and x3 (d) x1 and x2", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Pairwise Halo plot from FISC applied to an MLP trained from NID setting. The ground truth interaction defined in function f (x) = (x; {x * 0 , x * 1 } ∪ x ′ 2 ) + (x; {x * i } 30 i=11 ) + 40 j=1 x j from left to right are: true, true, false, true, respectively.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Pairwise Halo: 'prior' and 'juvenile' (top); and triplewise Halo: 'prior', 'juvenile' and 'charge' (bottom).", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "( a )aFigure 5: Illustration of FISC using swarm plots, where the black vertical line is the threshold and the yellow star is the reference FIS. Each point in the plot is coloured based on loss value.", "figure_data": "", "figure_id": "fig_5", "figure_label": "a", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The utility of FISC in an image classification task. From left to right: the original segmented image, top 5 main effect segmentations for accurate dog predictions by the transformer (coloured in blue), top 5 important interacted segmentations for accurate dog predictions by the transformer (coloured in red), and top 5 important interacted segmentations for accurate cat predictions by the transformer (coloured in purple).", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": ");Paszke et al. (2019) with a top-5 accuracy of 95.32% on ImageNet and we chose an image from the COCO dataset that encompasses two objects, a cat and a dog, sharing similar visual characteristicsLin et al. (2014). The 3 primary predictions from the model classifying the image ranks as: Pekinese 0.49, Japanese spaniel 0.16, and Tiger cat 0.06. The model identifies the image as resembling a dog more than a cat.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "idea, we designed an additional experiment in the synthetic dataset and solved the quadratic problem. The problem is to train a model to solve the quadratic equation ax 2 + bx + c = 0, where the variables a, b, and c are inputs and x 1 and x 2 are the outputs. Based on mathematical principles, two error-free models can be found as x = -b± √ b 2 -4ac 2a", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Left: Prediction VS Ground Truth. 
Right: Learning curve.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8: Results from a set of models sampled from the well-trained neural network.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "UPPER AND LOWER BOUND OF FISC IN MLP D.0.1 CHARACTERIZING THE MASK-BASED RASHOMON SET We first apply the mask m = {m 1 , m 2 , ..., , m p } to the reference model as M m = f • m. To characterize the mask-based Rashomon set, the goal is to find the conditions for m such that:", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Left: 3-D halo plot. Right: 4-D halo plot", "figure_data": "", "figure_id": "fig_12", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Exploring feature interaction of function f = x i + x j + x i * x j in the Rashomon set.From inner to outer, the radii are 0.1, 0.2, 0.3, 0.4, and 0.5, respectively.", "figure_data": "", "figure_id": "fig_13", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Left: greedy search algorithm applied in a logistic classifier extracted from VIC Dong & Rudin (2020). Right: greedy algorithm applied in another optimal logistic classifier. The x-axis refers to individual masks while the y-axis represents the loss change from the base model.", "figure_data": "", "figure_id": "fig_14", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: FISC from random sampling method", "figure_data": "", "figure_id": "fig_15", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: FISC from charge prediction. The R 2 of the reference model (depicted by the yellow star) achieves 0.98 on the testing set.", "figure_data": "", "figure_id": "fig_16", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: FISC from voltage prediction. The R 2 of the reference model (depicted by the yellow star) achieves 0.71 on the testing set.", "figure_data": "", "figure_id": "fig_17", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: FISC from capacity prediction. The R 2 of the reference model (depicted by the yellow star) achieves 0.90 on the testing set.", "figure_data": "", "figure_id": "fig_18", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Functions with ground truth interactions, where x", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of pairwise feature interaction detection AUC. 
FIS is set in the same context to detect feature interactions.", "figure_data": "MethodF 1F 2F 3F 4Two-way ANOVA1.00.51 0.51 0.55Neural Interaction Detection0.94 0.52 0.48 0.56Shapley Interaction Index1.00.50 0.50 0.51Shapley Taylor Interaction Index 1.00.50 0.50 0.51ArchDetect1.01.01.01.0FIS in the context (this work)1.01.01.01.0", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "MCRAgeRacePriorGenderJuvenileChargeAVG(MCR)1.113 1.021 0.969 1.013 1.330 1.048 0.951 1.009 1.367 1.038 1.065 1.004MAX(MCR+)1.141 1.060 1.007 1.037 1.372 1.116 0.986 1.032 1.390 1.097 1.095 1.021MIN(MCR-)1.072 0.987 0.919 0.994 1.123 0.988 0.900 0.989 1.278 0.992 1.004 0.990(MCR+)-(MCR-) 0.068 0.073 0.088 0.043 0.249 0.129 0.086 0.043 0.112 0.105 0.091 0.031", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison of FISC calculated from different sampling methods (partial). We use vs to denote the interaction pairs and best results are highlighted in bold.Semenova et al. (2019). Together with our introduced greedy algorithm, we utilized these methods to sample models from the Rashomon set. We adopt an MLP as the reference model and keep it consistent among all methods. With set ϵ = 0.05, we constructed the Rashomon set and calculated the corresponding FISC, partial results presented in Table.4. It is clear that our approach explored a broader range of FISC in comparison to the alternatives. Complete results are provided in Appendix H, with separate swarm plot visualizations.", "figure_data": "AWP samplingGreedy sampling (ours)Random samplingInteraction pairsF ISmin F ISmax F ISmin F ISmaxF ISminF ISmaxage vs race-0.00026 0.000113 -0.00089 0.002069-8.04E-06 6.32E-06age vs prior-0.00017 0.00014-0.00122 0.001975-1.02E-05 1.19E-05age vs gender-0.00011 0.000135 -0.00178 0.001341-1.99E-05 1.68E-05age vs juvenilecrime -0.00019 0.000139 -0.00167 0.002496-1.31E-05 8.15E-06age vs currentcharge -0.00030.000351 -0.00257 0.00268-1.24E-05 1.61E-05FISC calculated from different sampling methods-AWP and random sampling Due to thelimited availability of relevant sampling benchmarks in the current literature, we established anevaluation framework by considering two well-accepted methods as baselines: sampling fromadversarial weight perturbations (AWP) Wu et al. (2020); Tsai et al. (2021) and sampling withdifferent weight initialization seeds Li & Barnard (2022a);", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison of FISC calculated from different sampling methods (complete). 
We use vs to denote the interaction pairs and best results are highlighted in bold.", "figure_data": "AWP samplingGreedy samplingRandom samplingInteraction pairsF ISmin F ISmax F ISmin F ISmax F ISminF ISmaxage vs race-0.00026 0.000113 -0.00089 0.002069 -8.04E-06 6.32E-06age vs prior-0.00017 0.00014-0.00122 0.001975 -1.02E-05 1.19E-05age vs gender-0.00011 0.000135 -0.00178 0.001341 -1.99E-05 1.68E-05age vs juvenilecrime-0.00019 0.000139 -0.00167 0.002496 -1.31E-05 8.15E-06age vs currentcharge-0.00030.000351 -0.00257 0.00268-1.24E-05 1.61E-05race vs prior-0.00029 0.000225 -0.00151 0.006728 -1.23E-05 1.81E-05race vs gender-0.00020.000158 -0.00224 0.000606 -9.43E-06 1.34E-05race vs juvenilecrime-0.00019 0.000189 -0.00851 0.001587 -2.13E-05 1.13E-05race vs currentcharge-0.00034 0.00027-0.00719 0.002191 -1.87E-05 1.43E-05prior vs gender-0.00018 0.000191 -0.00060.002526 -1.58E-05 2.50E-05prior vs juvenilecrime-0.00019 0.00022-0.00680.002592 -2.06E-05 1.91E-05prior vs currentcharge-0.00029 0.00022-0.00423 0.002219 -2.23E-05 2.16E-05gender vs juvenilecrime-0.00012 0.000148 -0.00828 0.000754 -1.24E-05 1.04E-05gender vs currentcharge-0.00035 0.000211 -0.01362 0.003708 -2.01E-05 1.61E-05juvenilecrime vs currentcharge -0.00022 0.000498 -0.00917 0.000693 -2.14E-05 2.28E-05", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" } ]
Sichao Li; Rong Wang; Quanling Deng; Amanda S Barnard
[ { "authors": "Jacob Bien; Jonathan Taylor; Robert Tibshirani", "journal": "Annals of statistics", "ref_id": "b0", "title": "A lasso for hierarchical interactions", "year": "2013" }, { "authors": "Tianyu Cui; Pekka Marttinen; Samuel Kaski", "journal": "", "ref_id": "b1", "title": "Learning global pairwise interactions with Bayesian neural networks", "year": "2020" }, { "authors": "Anupam Datta; Shayak Sen; Yair Zick", "journal": "IEEE", "ref_id": "b2", "title": "Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems", "year": "2016" }, { "authors": "Jiayun Dong; Cynthia Rudin", "journal": "Nature Machine Intelligence", "ref_id": "b3", "title": "Exploring the cloud of variable importance for the set of all good models", "year": "2020" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "ICLR", "ref_id": "b4", "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "year": "2021" }, { "authors": "Christopher Eames; M Saiful Islam", "journal": "Journal of the American Chemical Society", "ref_id": "b5", "title": "Ion intercalation into two-dimensional transition-metal carbides: global screening for new high-capacity battery materials", "year": "2014" }, { "authors": "Aaron Fisher; Cynthia Rudin; Francesca Dominici", "journal": "J. Mach. Learn. Res", "ref_id": "b6", "title": "All models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously", "year": "2019" }, { "authors": "Aylmer Ronald; Fisher", "journal": "Springer", "ref_id": "b7", "title": "Statistical methods for research workers", "year": "1992" }, { "authors": "Anthony W Flores; Kristin Bechtel; Christopher T Lowenkamp", "journal": "Fed. Probation", "ref_id": "b8", "title": "False positives, False negatives, and False analyses: A rejoinder to \"Machine Bias: There's software used across the country to predict future criminals. 
and it's biased against blacks", "year": "2016" }, { "authors": "H Jerome; Bogdan E Friedman; Popescu", "journal": "The annals of applied statistics", "ref_id": "b9", "title": "Predictive learning via rule ensembles", "year": "2008" }, { "authors": "Yury Gogotsi; Babak Anasori", "journal": "ACN Nano", "ref_id": "b10", "title": "The rise of MXenes", "year": "2019" }, { "authors": "Yury Gogotsi; Qing Huang", "journal": "", "ref_id": "b11", "title": "Mxenes: two-dimensional building blocks for future materials and devices", "year": "2021" }, { "authors": "Michel Grabisch; Marc Roubens", "journal": "International Journal of game theory", "ref_id": "b12", "title": "An axiomatic approach to the concept of interaction among players in cooperative games", "year": "1999" }, { "authors": "Peyton Greenside; Tyler Shimko; Polly Fordyce; Anshul Kundaje", "journal": "Bioinformatics", "ref_id": "b13", "title": "Discovering epistatic feature interactions from neural network models of regulatory DNA sequences", "year": "2018" }, { "authors": "Bradley C Brandon M Greenwell; Andrew J Boehmke; Mccarthy", "journal": "", "ref_id": "b14", "title": "A simple and effective model-based variable importance measure", "year": "2018" }, { "authors": "Baptiste Gregorutti; Bertrand Michel; Philippe Saint-Pierre", "journal": "Statistics and Computing", "ref_id": "b15", "title": "Correlation and variable importance in random forests", "year": "2017" }, { "authors": "Huifeng Guo; Tang Ruiming; Yunming Ye; Zhenguo Li; Xiuqiang He", "journal": "", "ref_id": "b16", "title": "DeepFM: A Factorization-Machine based Neural Network for CTR prediction", "year": "2017" }, { "authors": "Hsiang Hsu; Flavio Calmon", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Rashomon capacity: A metric for predictive multiplicity in classification", "year": "2022" }, { "authors": "Pascal Joseph D Janizek; Su-In Sturmfels; Lee", "journal": "The Journal of Machine Learning Research", "ref_id": "b18", "title": "Explaining explanations: Axiomatic feature interactions for deep networks", "year": "2021" }, { "authors": "Sichao Li; Amanda Barnard", "journal": "", "ref_id": "b19", "title": "Variance tolerance factors for interpreting neural networks", "year": "2022" }, { "authors": "Sichao Li; Amanda S Barnard", "journal": "Chemistry of Materials", "ref_id": "b20", "title": "Inverse design of MXenes for high-capacity energy storage materials using multi-target machine learning", "year": "2022" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b21", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Yin Lou; Rich Caruana; Johannes Gehrke; Giles Hooker", "journal": "", "ref_id": "b22", "title": "Accurate intelligible models with pairwise interactions", "year": "2013" }, { "authors": "John Mandel", "journal": "Journal of the American Statistical Association", "ref_id": "b23", "title": "Non-additivity in two-way analysis of variance", "year": "1961" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Kopf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b24", "title": "Pytorch: An imperative style, highperformance 
deep learning library", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b25", "title": "", "year": "2019" }, { "authors": "Cynthia Rudin", "journal": "Nature machine intelligence", "ref_id": "b26", "title": "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead", "year": "2019" }, { "authors": "Cynthia Rudin; Caroline Wang; Beau Coker", "journal": "Harvard Data Science Review", "ref_id": "b27", "title": "The age of secrecy and unfairness in recidivism prediction", "year": "2020" }, { "authors": "Justyna Sarzynska-Wawer; Aleksander Wawer; Aleksandra Pawlak; Julia Szymanowska; Izabela Stefaniak; Michal Jarkiewicz; Lukasz Okruszek", "journal": "Psychiatry Research", "ref_id": "b28", "title": "Detecting formal thought disorder by deep contextualized word representations", "year": "2021" }, { "authors": "Lesia Semenova; Cynthia Rudin; Ronald Parr", "journal": "", "ref_id": "b29", "title": "A study in rashomon curves and volumes: A new perspective on generalization and model simplicity in machine learning", "year": "2019" }, { "authors": "Lesia Semenova; Cynthia Rudin; Ronald Parr", "journal": "", "ref_id": "b30", "title": "On the existence of simpler machine learning models", "year": "2022" }, { "authors": "Chandan Singh; W James Murdoch; Bin Yu", "journal": "", "ref_id": "b31", "title": "Hierarchical interpretations for neural network predictions", "year": "2019" }, { "authors": "Daria Sorokina; Rich Caruana; Mirek Riedewald; Daniel Fink", "journal": "", "ref_id": "b32", "title": "Detecting statistical interactions with additive groves of trees", "year": "2008" }, { "authors": "Mukund Sundararajan; Ankur Taly; Qiqi Yan", "journal": "PMLR", "ref_id": "b33", "title": "Axiomatic attribution for deep networks", "year": "2017" }, { "authors": "Mukund Sundararajan; Kedar Dhamdhere; Ashish Agarwal", "journal": "PMLR", "ref_id": "b34", "title": "The Shapley Taylor interaction index", "year": "2020" }, { "authors": "Yu-Lin Tsai; Chia-Yi Hsu; Chia-Mu Yu; Pin-Yu Chen", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b35", "title": "Formalizing generalization and adversarial robustness of neural networks to weight perturbations", "year": "2021" }, { "authors": "Michael Tsang; Dehua Cheng; Hanpeng Liu; Xue Feng; Eric Zhou; Yan Liu", "journal": "", "ref_id": "b36", "title": "Feature interaction interpretability: A case for explaining ad-recommendation systems via neural interaction detection", "year": "2020" }, { "authors": "Michael Tsang; Sirisha Rambhatla; Yan Liu", "journal": "Advances in neural information processing systems", "ref_id": "b37", "title": "How does this interaction affect me? Interpretable attribution for feature interactions", "year": "2020" }, { "authors": "Michael Tsang; Dehua Cheng; Yan Liu", "journal": "ICLR", "ref_id": "b38", "title": "Detecting Statistical Interactions from Neural Network Weights", "year": "2021" }, { "authors": "Andrea Vedaldi; Stefano Soatto", "journal": "Springer", "ref_id": "b39", "title": "Quick shift and kernel methods for mode seeking", "year": "2008" }, { "authors": "Zihao Wang; Xihui Liu; Hongsheng Li; Lu Sheng; Junjie Yan; Xiaogang Wang; Jing Shao", "journal": "", "ref_id": "b40", "title": "Camp: Cross-Modal Adaptive Message Passing for Text-Image Retrieval", "year": "2019-10" }, { "authors": "Anne L Washington", "journal": "Colo. Tech. 
LJ", "ref_id": "b41", "title": "How to argue with an algorithm: Lessons from the compas-propublica debate", "year": "2018" }, { "authors": "Dongxian Wu; Shu-Tao Xia; Yisen Wang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b42", "title": "Adversarial weight perturbation helps robust generalization", "year": "2020" }, { "authors": "Ruoqing Zhu; Donglin Zeng; Michael R Kosorok", "journal": "Journal of the American Statistical Association", "ref_id": "b43", "title": "Reinforcement learning trees", "year": "2015" } ]
[ { "formula_coordinates": [ 2, 263.54, 375.61, 225.17, 19.27 ], "formula_id": "formula_0", "formula_text": "f 1 (a, b, c) = -b+ √ b 2 -4ac 2a and f 2 (a, b, c) = -b- √ b 2 -4ac 2a" }, { "formula_coordinates": [ 3, 108, 187.02, 108.32, 9.99 ], "formula_id": "formula_1", "formula_text": "X = [x [•,1] , x [•,2] , ..., x [•,p]" }, { "formula_coordinates": [ 3, 183.26, 252.4, 213.83, 14.11 ], "formula_id": "formula_2", "formula_text": "L exp = E[L(f (X), y)] and L emp = n i=1 L(f (x [i,•]" }, { "formula_coordinates": [ 3, 173.48, 322.02, 331.18, 23 ], "formula_id": "formula_3", "formula_text": "F ⊂ {f | f : X → y}, the Rashomon set is defined as R(ϵ, f * , F) = {f ∈ F | E[L(f (X), y)] ≤ E[L(f * (X), y)] + ϵ},(1)" }, { "formula_coordinates": [ 3, 108, 371.31, 396, 33.92 ], "formula_id": "formula_4", "formula_text": "m ∈ R, into a backbone model. That is, ∀f ∈ F with E[L(f (X), y)], ∃m s.t. E[L(f • m(X), y)] ≤ E[L(f * (X), y)] + ϵ." }, { "formula_coordinates": [ 3, 202.86, 429.95, 302.89, 10.87 ], "formula_id": "formula_5", "formula_text": "∀f ∈ F (E[L(f (X), y)] → (∃m (E[L(f * • m(X), y)] ≃ E[L(f (X), y)])." }, { "formula_coordinates": [ 3, 108, 553.02, 396, 22 ], "formula_id": "formula_6", "formula_text": "φ i (f ) = E[L(f (X \\i ), y)] -E[L(f (X), y)] and φ I (f ) = E[L(f (X \\I ), y)] -E[L(f (X), y)]" }, { "formula_coordinates": [ 3, 240.15, 632.25, 131.71, 20.06 ], "formula_id": "formula_7", "formula_text": "F IS I (f ) = φ I (f ) - i∈I φ i (f )." }, { "formula_coordinates": [ 3, 108, 662.84, 396, 21.47 ], "formula_id": "formula_8", "formula_text": "F IS I (f ) = {F IS I1 (f ), F IS I2 (f ), ..., F IS I 2 p (f )}." }, { "formula_coordinates": [ 3, 107.69, 709.39, 396.98, 23.71 ], "formula_id": "formula_9", "formula_text": "[F ISC I (R) min , F ISC I (R) max ] = [min f ∈R F IS I (f ), max f ∈R F IS I (f )]. (4) The complete set of FISs in a Rashomon set is F ISC I (R) = {F IS I (f ) : f ∈ R}." }, { "formula_coordinates": [ 4, 246.03, 343, 119.94, 23.04 ], "formula_id": "formula_10", "formula_text": "f (X) = α T 1 1 + e -β T X + b." }, { "formula_coordinates": [ 4, 108, 375.06, 397.65, 21.09 ], "formula_id": "formula_11", "formula_text": "E[L(f (X), y)] = E[ (y -f (X)) 2 ] that is mathematically equiv- alent to E[y -f (X)] or E[f (X) -y]." }, { "formula_coordinates": [ 4, 108, 418.73, 182.82, 18.66 ], "formula_id": "formula_12", "formula_text": "ϵ ≤ min{E[ 1 1+e -m T •β T X ], E[ e -m T •β T X 1+e -m T •β T X ]}." }, { "formula_coordinates": [ 4, 121.42, 455.23, 383.25, 26.26 ], "formula_id": "formula_13", "formula_text": "E ln 1 -ϵ -ϵe -β T X e -β T X + ϵ + ϵe -β T X ≤ E[(β T X)m T ] ≤ E ln 1 + ϵ + ϵe -β T X e -β T X -ϵ -ϵe -β T X , (6)" }, { "formula_coordinates": [ 4, 108, 644.37, 396.67, 39.05 ], "formula_id": "formula_14", "formula_text": "m = {m 1 , m 2 , • • • , m p } into a backbone f , resulting in a new model M m = f • m. We can sample the Rashomon set by exploring different m such that R(ϵ, f, F) = {M m ∈ F | E[L(M m (X), y)] ≤ E[L(f (X), y)] + ϵ}.(7)" }, { "formula_coordinates": [ 5, 258.34, 99.38, 246.33, 19.7 ], "formula_id": "formula_15", "formula_text": "(m I ) i = m i i ∈ I, 1 i / ∈ I.(8)" }, { "formula_coordinates": [ 5, 210.8, 141.05, 293.87, 20.06 ], "formula_id": "formula_16", "formula_text": "F IS I (M m I ) = φ I (M m I ) - i∈I φ i (M m I ).(9)" }, { "formula_coordinates": [ 5, 108, 294.59, 396, 20.61 ], "formula_id": "formula_17", "formula_text": "ϕ i = E[L(M mi (X), y)] -E[L(f (X), y)]." 
}, { "formula_coordinates": [ 5, 161.31, 335.68, 121.12, 13.68 ], "formula_id": "formula_18", "formula_text": "{M mi ∈ F | φ i (M mi )} p i=1" }, { "formula_coordinates": [ 5, 108, 349.38, 176.19, 10.27 ], "formula_id": "formula_19", "formula_text": "F IS I (R) = {F IS I (M m I ) : M m I ∈ R}" }, { "formula_coordinates": [ 5, 309.32, 499.95, 179.2, 228.07 ], "formula_id": "formula_20", "formula_text": "learning rate > 0 ∧ ϵ ≥ 0 Ensure: L(f • m(X), y) ≤ L(f * (X), y) + ϵ m+ = 1 p i=1 , m-= 1 p i=1 , counter = 0 loss ref ⇐ L(f * (X), y) while counter ≤ 4 do if searching for the upper bound of m then m ′ + ⇐ Copy(m+) m ′ +[s] ⇐ m ′ +[s] + learning rate ϕs ⇐ L(f * • m ′ +(X s), y) -loss ref if ϕs ≤ ϵ then m+ ⇐ m ′ + else learning rate ⇐ learning rate × 0.1 counter ⇐ counter + 1 else searching for the lower bound of m m ′ -⇐ Copy(m-) m ′ -[s] ⇐ m ′ -[s] -learning rate ϕs ⇐ L(f * • m ′ -(X s), y) -loss ref if ϕs ≤ ϵ then m-⇐ m ′ - else learning rate ⇐ learning rate × 0.1 counter ⇐ counter + 1" }, { "formula_coordinates": [ 7, 237.73, 129.97, 257.4, 21.83 ], "formula_id": "formula_21", "formula_text": "* = [1, 1, 1..., 1] ∈ R 40 , x ′ = [-1, -1, -1..., -1] ∈ R 40 and z[i] =" }, { "formula_coordinates": [ 7, 243.71, 186.2, 251.58, 55.66 ], "formula_id": "formula_22", "formula_text": "F 1 (x) = 10 i=1 10 j=1 x i x j + 20 i=11 30 j=21 x i x j + 40 k=1 x k F 2 (x) = (x; {x * i } 20 i=1 ) + (x; {x * i } 30 i=11 ) + 40 j=1 x j F 3 (x) = (x; {x ′ i } 20 i=1 ) + (x; {x * i } 30 i=11 ) + 40 j=1 x j F 4 (x) = (x; {x * 1 , x * 2 } ∪ x ′ 3 ) + (x; {x * i } 30 i=11 ) + 40 j=1 x j" }, { "formula_coordinates": [ 13, 108, 83.45, 296.12, 10.87 ], "formula_id": "formula_23", "formula_text": "∀f ∈ F (E[L(f (X), y)] → (∃m (E[L(f * • m(X), y)] ≃ E[L(f (X), y)]" }, { "formula_coordinates": [ 13, 112.98, 108.29, 399.93, 267.36 ], "formula_id": "formula_24", "formula_text": "1 ∀f ∈ F (E[L(f (X), y)] → E[L(f (X), y)] ≤ EL(f * (X), y) + ϵ) 2 ∀f ∈ F (E[L(f (X), y)] → ∃m (E[L(f • m(X), y)] ≤ E[L(f (X), y)])) 3 f ′ (E[L(f ′ (X), y)] → E[L(f ′ (X), y)] ≤ E[L(f * (X), y)] + ϵ) 4 (E[L(f ′ (X), y)] → ∃m (E[L(f ′ • m(X), y)] ≤ E[L(f ′ (X), y)])) 5 E[L(f ′ (X), y)] 6 E[L(f ′ (X), y)] ≤ E[L(f * (X), y)] + ϵ) →-E, 3, 5 7 ∃m (E[L(f * • m(X), y)] ≤ E[L(f * (X), y)]) →-E, 4, 5 8 m ′ E[L(f * • m ′ (X), y)] ≃ E[L(f * (X), y)] 9 E[L(f ′ (X), y)] ≤ E[L(f * • m ′ (X), y)] + ϵ R, 6, 8 10 E[L(f ′ (X), y)] ≤ E[L(f ′ (X), y)] + ϵ Logic, 6, 8 11 ∃m E[L(f * • m(X), y)] ≃ EL(f ′ (X), y) ∃-I, 9, 10 12 ∃m (E[L(f * • m(X), y)] ≃ EL(f ′ (X), y) ∃-E, 7, 8-11 13 E[L(f ′ (X), y)] → ∃m (EL(f * • m(X), y) ≃ E[L(f ′ (X), y)] →-I, 5-12 14 ∀f ∈ F (E[L(f (X), y)] → (∃m (E[L(f * • m(X), y)] ≃ E[L(f (X), y)]) ∀-I," }, { "formula_coordinates": [ 14, 185.08, 337.55, 72.98, 19.27 ], "formula_id": "formula_25", "formula_text": "f * = -b± √ b 2 -4ac 2a" }, { "formula_coordinates": [ 14, 125.31, 374.05, 379.36, 59.91 ], "formula_id": "formula_26", "formula_text": "F IS a,b (f * ) =φ a,b (f * ) -(φ a (f * ) + φ b (f * )) (10) =E[L(f * (X \\{a,b} ))] -E[L(f * (X \\{a} ))] -E[L(f * (X \\{b} ))] ≃ n i=1 (f * (X \\{a,b} ) -y) 2 - n i=1 (f * (X \\{a} ) -y) 2 - n i=1 (f * (X \\{b} ) -y) 2" }, { "formula_coordinates": [ 14, 173.94, 512.44, 264.12, 11.22 ], "formula_id": "formula_27", "formula_text": "φ a (f * ) φ b (f * ) φ c (f * ) φ a,b (f * ) φ a,c (f * ) φ b,c (f * )" }, { "formula_coordinates": [ 15, 142.47, 135.22, 362.2, 29.37 ], "formula_id": "formula_28", "formula_text": "δ = f (x * {i,j} + x \\{i,j} ) + f (x ′ {i,j} + x \\{i,j} ) -f (x ′ {i} + x * {j} + x \\{i,j} ) 
-f (x ′ {j} + x * {i} + x \\{i,j} )(11)" }, { "formula_coordinates": [ 15, 108, 228.09, 396, 23.1 ], "formula_id": "formula_29", "formula_text": "E[L(M m I (X), y)] = E[ (y -M m I (X)) 2 ]." }, { "formula_coordinates": [ 15, 130.34, 278.68, 374.33, 25.15 ], "formula_id": "formula_30", "formula_text": "F IS I (M m I ) = (E[M mi,j (X \\{i,j} )] + E[M mi,j (X)] -E[M mij (X \\{i} )] -E[M mi,j (X \\{j} )]),(12)" }, { "formula_coordinates": [ 15, 231.89, 418.44, 268.63, 9.65 ], "formula_id": "formula_31", "formula_text": "E[L(M m (X))] ≤ E[L(f (X))] + ϵ. (13" }, { "formula_coordinates": [ 15, 500.52, 418.76, 4.15, 8.64 ], "formula_id": "formula_32", "formula_text": ")" }, { "formula_coordinates": [ 15, 108, 448.73, 409.62, 23.04 ], "formula_id": "formula_33", "formula_text": "E[y -α T 1 1 + e -β T X + b] -ϵ ≤ E[y -α T 1 1 + e -m T •β T X + b] ≤ E[y -α T 1 1 + e -β T X + b] + ϵ." }, { "formula_coordinates": [ 15, 174.29, 490.95, 330.38, 23.04 ], "formula_id": "formula_34", "formula_text": "E[ 1 1 + e -β T X ] -ϵ ≤ E[ 1 1 + e -m T •β T X ] ≤ E[ 1 1 + e -β T X ] + ϵ.(14)" }, { "formula_coordinates": [ 15, 108, 518.76, 397.65, 58.13 ], "formula_id": "formula_35", "formula_text": "1 1+e -m T •β T X ], E[ e -m T •β T X 1+e -m T •β T X ]}, this inequality is re- duced to E ln 1 -ϵ -ϵe -β T X e -β T X + ϵ + ϵe -β T X ≤ E[(β T X)m T ] ≤ E ln 1 + ϵ + ϵe -β T X e -β T X -ϵ -ϵe -β T X , (15)" }, { "formula_coordinates": [ 15, 114.53, 638.53, 390.13, 76.45 ], "formula_id": "formula_36", "formula_text": "F IS I (M m ) = E[y -(α T 1 1 + e -m T •β T X \\i,j + b)] -E[y -(α T 1 1 + e -m T •β T X + b)]- (E[y -(α T 1 1 + e -m T •β T X \\i + b)] -E[y -(α T 1 1 + e -m T •β T X + b)]+ E[y -(α T 1 1 + e -m T •β T X \\j + b)] -E[y -(α T 1 1 + e -m T •β T X + b)]). (16)" }, { "formula_coordinates": [ 16, 117.18, 324.8, 387.48, 26.82 ], "formula_id": "formula_37", "formula_text": "m 1 = 1 β T X ln 1 -ϵ -ϵe -β T X e -β T X + ϵ + ϵe -β T X , m 2 = 1 β T X ln 1 + ϵ + ϵe -β T X e -β T X -ϵ -ϵe -β T X . (17)" }, { "formula_coordinates": [ 16, 126.19, 391.72, 374.33, 34.7 ], "formula_id": "formula_38", "formula_text": "F IS min := inf m F IS I (M m ) = min{F IS I (M m1 ), F IS I (M m2 ), F IS I (M m⋆ )}, F IS max := sup m F IS I (M m ) = max{F IS I (M m1 ), F IS I (M m2 ), F IS I (M m⋆ )}. (18" }, { "formula_coordinates": [ 16, 500.52, 404.46, 4.15, 8.64 ], "formula_id": "formula_39", "formula_text": ")" }, { "formula_coordinates": [ 16, 274.54, 439.26, 231.2, 9.65 ], "formula_id": "formula_40", "formula_text": "[F ISC I (R) min , F ISC I (R) max ] = [F IS min , F IS max ]." }, { "formula_coordinates": [ 17, 167.63, 352.24, 337.04, 38.78 ], "formula_id": "formula_41", "formula_text": "ϕ i = L(f • m i (X) -L(f (X)) = (y -(m i x i + x j + m i x i * x j )) 2 -(y -(x i + x j + x i * x j )) 2 = 0.01.(19)" }, { "formula_coordinates": [ 17, 166.31, 416, 338.36, 38.78 ], "formula_id": "formula_42", "formula_text": "ϕ j = L(f • m j (X) -L(f (X)) = (y -(x i + m j x j + x i * m j x j )) 2 -(y -(x i + x j + x i * x j )) 2 = 0.09.(20)" }, { "formula_coordinates": [ 17, 106.83, 518.44, 241.65, 11.23 ], "formula_id": "formula_43", "formula_text": "(m i x i + m j x j + m i x i * m j x j )) 2 -(y -(x i + x j + x i * x j )" } ]
2023-05-17
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b5", "b6", "b8", "b11", "b4", "b18", "b26", "b31", "b3", "b15" ], "table_ref": [], "text": "Multilingual neural machine translation (MNMT) (Dong et al., 2015;Firat et al., 2016;Ha et al., 2016;Johnson et al., 2017;Dabre et al., 2021) systems enable translation between multiple language pairs within a single model by learning shared representations across different languages. One of the key challenges in building effective MNMT systems is zero-shot translation performance involving unseen language pairs. Previous work reveals that improving the language-independency of encoded representations is critical for zero-shot translation performance, with neural interlingua representations (Lu et al., 2018;Vázquez et al., 2019;Zhu et al., 2020) being proposed as an effective method for achieving this. Neural interlingua representations are shared, language-independent representations that behave as a neural pivot between different natural languages. As shown in Figure 1 (a), it enables sentences in different languages with the same meaning to have the same interlingua representations. Previous work has shown the effective-ness of fixed-length neural interlingua representations for zero-shot translation. However, a fixed length can limit neural interlingua representations' flexibility and representation ability. It is highly model size and training data size-sensitive according to our experimental results for different settings of model and training data size.\nThis paper proposes a novel method for improving neural interlingua representations by making their length variable. As shown in Figure 1 (b), our method enables the length of the interlingua representations to vary according to different lengths of source sentences, which may provide more flexible neural interlingua representations. Specifically, we utilize the sentence length in the centric language1 (e.g., English) as the length of neural interlingua representations. We propose a variablelength interlingua module to project sentences in different source languages with the same meaning into an identical neural interlingua representation sequence. To enable translating from noncentric language source sentences during inference, we also introduce a length predictor within the variable-length interlingua module. Moreover, as for the initialization of the interlingua module, we propose a novel method that facilitates knowledge sharing between different interlingua lengths, which can avoid introducing redundant model parameters. We expect that variable-length interlingua representations provide enhanced representations according to different source sentence lengths, which mitigates the model size and training data size-sensitive problem of previous work in low-resource scenarios and improves performance for zero-shot translation.\nWe conduct experiments on three MNMT datasets, OPUS (Zhang et al., 2020), IWSLT (Cettolo et al., 2017), and Europarl (Koehn, 2005) with different settings of training data size and model size. Results demonstrate that our proposed method yields superior results for zero-shot translation compared to previous work. Our method exhibits stable convergence in different settings while previous work (Zhu et al., 2020) is highly sensitive to different model and training data sizes. However, we also observe the inferior performance of our method for translation from non-centric language source languages. 
We attribute it to the accuracy of the interlingua length predictor and point out the possible directions of this research line." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "This paper focuses on variable-length interlingua representations for zero-shot NMT." }, { "figure_ref": [], "heading": "Zero-shot Translation", "publication_ref": [ "b5", "b6", "b8", "b11", "b0", "b24", "b4", "b31", "b17", "b1", "b2", "b30", "b19", "b7", "b31", "b28", "b18", "b26" ], "table_ref": [], "text": "In recent years, MNMT (Dong et al., 2015;Firat et al., 2016;Ha et al., 2016;Johnson et al., 2017;Aharoni et al., 2019;Tan et al., 2019;Dabre et al., 2021;Zhang et al., 2020) has been a popular research topic, where the generalization ability of MNMT models to zero-shot translation is a critical problem as obtaining sufficient training data for all translation directions is often impractical. An MNMT model's zero-shot translation performance usually benefits from the encoder-side representations being language-independent and decoderside representations being language-specific. To achieve this, some studies have proposed removing encoder-side residual connections (Liu et al., 2021) or introducing language-independent constraints (Al-Shedivat and Parikh, 2019;Pham et al., 2019;Arivazhagan et al., 2019;Yang et al., 2021;Mao et al., 2023). Other methods involve decoder pre-training and back-translation (Gu et al., 2019;Zhang et al., 2020), denoising autoencoder objectives (Wang et al., 2021), and encoderside neural interlingua representations (Lu et al., 2018;Vázquez et al., 2019;Zhu et al., 2020)." }, { "figure_ref": [], "heading": "Neural Interlingua Representations for Zero-shot Translation", "publication_ref": [ "b18", "b26", "b25" ], "table_ref": [], "text": "As mentioned above, constructing neural interlingua representations is a powerful method to improve shared encoder representations across various source languages and enhance zero-shot translation. Lu et al. (2018) first proposed the concept of neural interlingua representations for MNMT, intending to bridge multiple language-specific encoders and decoders using an intermediate interlingua attention module, which has a fixed sequence length. Vázquez et al. (2019) Language-specific Interlingua Query 𝐐 𝐈 (zh-1, zh-2, …, zh-5) former (Vaswani et al., 2017) model architecture and proposed a position-wise alignment objective to ensure consistent neural interlingua representations across different languages. However, these methods utilized fixed-length neural interlingua representations, which may reduce the model's representation ability for source sentences with different lengths. This paper focuses on revisiting and improving neural interlingua approaches." }, { "figure_ref": [ "fig_2" ], "heading": "Variable-length Neural Interlingua Representations", "publication_ref": [], "table_ref": [], "text": "We present an MNMT model that comprises three distinct components: a source language encoder, a neural interlingua module, and a decoder. The source language encoder converts source sentences to language-specific representations, the neural interlingua module generates language-agnostic representations, and the decoder converts these representations into the target language translation. In this section, we introduce a novel neural interlingua module. Specifically, we propose variable-length neural interlingua representations surpassing prior work's fixed-length constraint. 
To achieve this breakthrough, we have developed a module that includes interlingua encoder layers, an interlingua length predictor, and a language-specific interlin-gua query. Our module uses an embedding sharing mechanism, as shown in Figure 2. Moreover, we introduce the objectives that guide the training of variable-length neural interlingua representations." }, { "figure_ref": [ "fig_2" ], "heading": "Variable-length Interlingua Module", "publication_ref": [ "b25", "b18", "b26", "b27" ], "table_ref": [], "text": "Interlingua Encoder Layers In accordance with Zhu et al. ( 2020), we construct a variable-length interlingua module within a Transformer model architecture. Our model utilizes N Transformer encoder layers and 6 Transformer decoder layers, with M interlingua encoder layers introduced between them. To maintain consistency with a standard 6-layer Transformer encoder, we set M + N = 6, ensuring that the number of model parameters remains almost the same. Each interlingua encoder layer consists of a sequential series of operations, including self-attention mechanisms (or feed-forward networks),2 encoder-interlingua attention, and feed-forward networks, as illustrated in Figure 2.\nThe input representations for interlingua encoder layers are denoted as Q I ∈ R d×len I (X) , where d and len I (X) respectively indicates the di-mension of hidden representations and the length of the neural interlingua representations given a source sentence X = x 1 , x 2 , ..., x k . Specifically, we define len I (X) as follows:\nlen I (X) = len(X),\nXis in centric len(CT(X)), Xis in non-centric ,\n(1) where CT(X) denotes the translation of X in the centric language. We use teacher forcing to generate interlingua length during training. For instance, if we use English-centric parallel sentences as training data, len I (X) for each sentence pair will be the length of English sentences. Thus, sentences that convey the same semantic meaning can have the same interlingua length, and interlingua length is variable according to different sentences. For the initialization of Q I , we will provide a detailed explanation of how to generate it later in this section.\nSubsequently, Q I undergoes self-attention (or feed-forward networks), and we obtain the output Q I . Assume that the contextualized representations on top of N Transformer encoder layers are H S ∈ R d×k . Then we establish an encoderinterlingua attention mechanism:\nH EI = Attn(Q I , H S , H S ),(2)\nwhere Attn(Q, K, V) indicates the multi-head attention mechanism (Vaswani et al., 2017). This encoder-interlingua attention inherits the design in previous studies of neural interlingua representations (Lu et al., 2018;Vázquez et al., 2019;Zhu et al., 2020). Finally, we pass H EI through position-wise feed-forward networks to obtain H I , the output of the interlingua encoder layers. H I serves as a language-agnostic neural interlingua and can vary in length depending on the source sentence. Once we have H I , we feed it into a standard Transformer decoder to generate the translation. Interlingua Length Predictor Length of interlingua representations is not readily available during inference when translating from non-centric source sentences (e.g., non-English source sentences) using Eq. (1). To address this, we propose using an interlingua length predictor to obtain len I (X) for inference. 
Specifically, we treat the length prediction of translation in the centric language as a classification task, addressed utilizing mean pooled contextualized representations atop the Transformer encoder. 3 More precisely, we predict X's interlingua length as:\nlen I (X) = arg max i softmax( 1 H T S k W + b) i ,\n(3) where k is the length of X, 1 ∈ R 1×k denotes a vector with all the elements of 1, W ∈ R d×K and b ∈ R 1×K indicates the weight and bias of a linear layer, and K is the maximum sequence length allowed in the model. Language-specific Interlingua Query Here, we present the method for obtaining input representations Q I for the interlingua encoder layers. Initially, we randomly initialize an embedding matrix E l ∈ R d×K containing K embeddings for the source language l. Next, we extract the first len I (X) embeddings from E l to obtain Q I .\nQ I = E l I S ,(4)\nwhere I S ∈ R K×len I (X) has 1s as main diagonal elements and 0s for other elements. Note that the language-specific nature of E l allows the model to learn a unique mapping from each language to the neural interlingua representations. Zhu et al. (2020) used the technique of languageaware positional embedding (Wang et al., 2019) for both the neural interlingua representations and the source and target sentences, resulting in ambiguity regarding whether the improvements were from the neural interlingua representations or not.\nIn contrast, our proposed language-specific interlingua query clarifies whether a language-specific mapping to neural interlingua representations benefits zero-shot translation." }, { "figure_ref": [], "heading": "Training Objectives", "publication_ref": [], "table_ref": [], "text": "Given a training sample sentence pair (X, Y ), we introduce the following training objective, combining an NMT loss, an interlingua alignment loss, and a length prediction loss. The interlingua alignment loss is utilized to guarantee the consistency of the neural interlingua representations for each training sentence pair sample. In contrast, the length prediction loss ensures the generation of variable interlingua length during inference. Specifically, the training objective is defined as follows:\nL(X, Y ) = αL NMT + βL IA + γL LP , (5) where α, β, and γ are weight hyperparameters for each loss, L LP is a cross-entropy loss computed from the softmax outputs from Eq. ( 3), and L IA is a position-wise alignment loss using cosine similarity following Zhu et al. ( 2020):\nL IA = 1 -1 len I (X) i cos < H I (X) i , H I (Y ) i > .(6)\nHere H I (•) i denotes the i-th column of H I (•). 4 Please note that during training, we always have len I (X) = len I (Y ) because we apply teacher forcing to generate the interlingua length for the sentence pair (X, Y ). With L IA , different sentence pairs with varying lengths of translation in centric language can be represented using variable-length neural interlingua representations. 
This can enhance the bridging ability for zero-shot translation.\n4 Experimental Settings" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b31", "b3", "b15" ], "table_ref": [], "text": "Our study involves conducting experiments on zero-shot translation using three distinct datasets, OPUS (Zhang et al., 2020), IWSLT (Cettolo et al., 2017), and Europarl (Koehn, 2005) " }, { "figure_ref": [], "heading": "Overall Training and Evaluation Details", "publication_ref": [ "b16", "b22" ], "table_ref": [], "text": "For the OPUS and IWSLT datasets, we utilize a Transformer-base model, while for Eu- 4 To derive HI(Y ), it is necessary to feed the target sentence to both the encoder and interlingua encoder layers, which can potentially result in increased computational requirements.\nroparl, we employ a Transformer-big model, to evaluate the performance of Transformer with both sufficient and insufficient training data. Regarding language tag strategies to indicate the source and target languages to the model, we adopt the method of appending the source language tag to the encoder input and the target language tag to the decoder input (Liu et al., 2020). This approach allows for the creation of fully languageagnostic neural interlingua representations in between. 5 The maximum sentence length is set as 256, which indicates that K = 256 (Section 3.1).\nRefer to Appendix B for other training details.\nFor evaluation, we choose the evaluation checkpoint based on the validation L NMT with the lowest value. We use a beam size of 5 during inference on the trained models to conduct inference. We report SacreBLEU (Post, 2018).6 " }, { "figure_ref": [], "heading": "Baselines and Respective Training Details", "publication_ref": [ "b11", "b14", "b26", "b17" ], "table_ref": [], "text": "To compare our variable-length neural interlingua method with previous fixed-length neural interlingua methods, we trained the following settings: MNMT (Johnson et al., 2017) is a system trained with standard Transformer-base or Transformer-big for multiple language pairs. We applied the language tag strategy of source language tag for encoder input and target language tag for decoder input. Pivot translation (Zoph and Knight, 2016) involves translating a source language into a pivot language, usually English, and then translating the pivot language into the target language. This system constitutes a robust baseline for zero-shot translation, which we include for reference. We implement this setting by feeding the pivot language output of the MNMT model to itself to generate the target language. Len-fix. Uni. Intl. We follow the setting described by Zhu et al. ( 2020), but we remove its language-aware positional embedding to test whether a single interlingua module can improve zero-shot translation. Compared to our variablelength interlingua representations presented in Section 3.1, these fixed interlingua representations have a universal len I (Eq. ( 1)) for different source 18.9 † Table 3: BLEU results of zero-shot translation on OPUS. We randomly select six zero-shot language pairs and report the results. The best result among all the settings except \"Pivot\" is in bold. We mark the results significantly (Koehn, 2004) better than \"Len-fix. Uni. Intl.\" with †.\nsentences and a universal E ∈ R d×len I for different languages and without a Q I (Eq. ( 4)). The fixed interlingua length is set to 17, 21, and 30, which are the average lengths of each dataset following Zhu et al. ( 2020) and Vázquez et al. (2019). 
Len-fix. LS. Intl. The only difference between this system and the \"Len-fix. Uni. Intl.\" system mentioned above is the initialization of the interlingua query. We use a language-specific E l ∈ R d×len I for each source language l without a Q I (Eq. ( 4)). Len-vari. Intl. (ours) This refers to variablelength neural interlingua representations proposed in Section 3. For the last three neural interlingua settings, we set M and N to 3 for both the Transformer encoder and interlingua encoder layers. The values of α, β, and γ (Eq. ( 5)) are set as 1.0, 1.0, and 0.1, respectively. We remove the first residual connection within the first interlingua encoder layer to improve the language-independency of the interlingua representations, inspired by Liu et al. (2021)." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "We now present in tables 2, 3, and 4 the results of our variable-length interlingua approach and compare them against several baselines." }, { "figure_ref": [], "heading": "Main results", "publication_ref": [], "table_ref": [ "tab_3", "tab_3", "tab_4" ], "text": "Firstly, Tables 2 and3 indicate that our proposed variable-length interlingua representations outper-form previous work in zero-shot directions. The severe overfitting issue of \"Len-fix. Uni. Intl.\" and \"Len-fix. LS. Intl.\" on IWSLT and Europarl suggests that they are limited to model size and training data size settings, while our proposed method can converge stably on all three settings. These results demonstrate that our flexible interlingua length can benefit zero-shot translation more effectively. Secondly, our proposed method performs better than previous work in \"from en\" supervised directions as shown in Tables 2 and4, but still falls short of the MNMT baseline. This may be attributed to the interlingua module's weak sourcetarget awareness. Thirdly, our variable-length neural interlingua representations perform significantly worse on \"to en\" directions than \"Len-fix.\" methods on OPUS and MNMT on all datasets. We provide analysis of this phenomenon next." }, { "figure_ref": [ "fig_4" ], "heading": "Validation NMT Loss", "publication_ref": [ "b14" ], "table_ref": [], "text": "We investigate why variable-length neural interlingua representations perform poorly in \"to en\" supervised directions by analyzing the validation NMT loss, an approximate measure of NMT performance on the validation set. Figure 3 displays the validation NMT loss for all settings on OPUS. We observe that variable-length interlingua representations can converge well, even smaller than the validation loss of \"Len-fix. Uni. Intl.\" and \"Lenfix. LS. Intl.\" However, the interlingua length predictor was teacher-forced during training, indicat- Table 5: Accuracy of the interlingua length predictor, averaged absolute difference between predicted length and gold length, and \"to en\" BLEU scores of each non-English source language on OPUS. \"w/ Len. Pre.\" and \"w/ gold\" indicate using the predicted interlingua length and the correct interlingua length (length of the English translation), respectively. Accuracy of the length predictor and average abosulute difference are evaluated using OPUS's test set. We mark the results significantly (Koehn, 2004) better than \"BLEU w/ Len. Pre.\" with †. ing the validation NMT loss was calculated with a 100% accurate interlingua length predictor. 
As a result, the inaccurate interlingua length predictor is likely the primary cause of our method's inferior performance in \"to en\" directions, despite its well-converged validation NMT loss." }, { "figure_ref": [], "heading": "Impact of the Interlingua Length Predictor", "publication_ref": [], "table_ref": [], "text": "We analyze the interlingua length predictor and identify the reason for the subpar performance in \"to en\" translations. We input the source sentences of the test set in non-English languages into the model and check whether the predicted length in interlingua is identical to the length of its English reference. We present the accuracy on the OPUS dataset in Table 5. The results show that the accuracy for each language is approximately 20.0%, which can result in error propagation when trans-lating from those languages. To further understand the impact of the length predictor quality on translation performance, we attempt to provide the model with the correct interlingua length instead of relying on the length predictor. As shown in Table 5, the results reveal significant BLEU improvements when the correct interlingua length is applied. This suggests that the performance issue encountered when translating from a non-centric source language can be addressed by upgrading the interlingua length predictor's accuracy. Furthermore, we can also enhance zero-shot translation performance if we have a better length predictor. Nevertheless, we observe that even with a low length prediction accuracy of approximately 20.0%, we can still achieve solid BLEU performance, averaging 34.0 BLEU points. This indicates that an incorrectly predicted length with just a trivial difference, as shown in Table 5, will not result in the enormous information loss required for translation." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This study introduced a novel variable-length neural interlingua approach that improved zero-shot translation results while providing a more stable model than previous fixed-length interlingua methods. Although our analysis revealed a performance downgrade in \"to en\" directions, we have identified the problematic model component and plan to address it in future studies." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by JSPS KAKENHI Grant Number 22KJ1843." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Zhu, Changfeng, Heng Yu, Shanbo Cheng, and Weihua Luo. 2020. Language-aware interlingua for multilingual neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1650-1655, Online, July. Association for Computational Linguistics.\nZoph, Barret and Kevin Knight. 2016. Multi-source neural translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 30-34, San Diego, California, June. Association for Computational Linguistics." }, { "figure_ref": [], "heading": "A Preprocessing Details", "publication_ref": [ "b13", "b23" ], "table_ref": [], "text": "Jieba 7 is used to segment Chinese while Moses 8 (Koehn et al., 2007) is utilized to tokenize other languages. 
We employ BPE (Sennrich et al., 2016) with 50, 000, 40, 000, and 50, 000 merge operations to create a joint vocabulary for each dataset, resulting in the vocabulary sizes of 66, 158, 40, 100, and 50, 363, respectively." }, { "figure_ref": [], "heading": "B Training Details", "publication_ref": [ "b12", "b20" ], "table_ref": [], "text": "Our models are trained Fairseq. 9 As the data size for each language pair is relatively similar, oversampling is not implemented for MNMT. The dropout rate was set to 0.1, 0.4, and 0.3 for each dataset, and we use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 5e-4, 1e-3, and 5e-4, respectively, employing 4, 000 warmup steps. The Transformer-base model was trained using four 32 GB V100 GPUs, and the Transformer-big model was trained using eight 32 GB V100 GPUs, with a batch size of 4, 096 tokens. To speed up training, mixed precision training (Micikevicius et al., 2018) is also employed. Each dataset is trained for 500, 200, and 500 epochs." }, { "figure_ref": [], "heading": "C Limitations", "publication_ref": [ "b9" ], "table_ref": [], "text": "While this study proposed a novel method for improving neural interlingua representations for zero-shot translation, the following limitations should be addressed in future work:\n• The inaccurate interlingua length predictor currently leads to inferior performance 7 https://github.com/fxsjy/jieba 8 https://github.com/moses-smt/ mosesdecoder 9 https://github.com/facebookresearch/ fairseq for translation from non-centric languages. Therefore, a better predictor should be explored to improve the performance.\n• We used the length of centric language sentences as the interlingua length, which may limit the application for using parallel sentences not involving the centric language. Therefore, a better way to generate variable lengths for neural interlingua representations should be developed in future work.\n• We have yet to test whether the neural interlingua representations obtained in this study can act as a semantic pivot among all the languages. Thus, it would be interesting to evaluate the effectiveness of our variablelength interlingua representations on crosslingual language understanding tasks (Hu et al., 2020)." } ]
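The module described in Section 3.1 can be summarised in a short sketch. The following PyTorch snippet is a minimal, hedged illustration of the encoder-interlingua attention (Eq. (2)), the language-specific interlingua query (Eq. (4)), and the interlingua length predictor (Eq. (3)); the class name, the use of a single interlingua layer instead of the paper's M layers, the hidden size, and the head count are assumptions for illustration, not the authors' Fairseq-based implementation.

```python
import torch
import torch.nn as nn

class VariableLengthInterlingua(nn.Module):
    """Sketch of one interlingua encoder layer plus a length predictor.

    Assumed for illustration (not from the paper): d=512 hidden size,
    8 attention heads, a single interlingua layer, no residual handling.
    """
    def __init__(self, n_langs, d=512, n_heads=8, max_len=256):
        super().__init__()
        # Language-specific interlingua query embeddings E_l (Eq. 4),
        # one block of max_len embeddings per source language.
        self.query_emb = nn.Embedding(n_langs * max_len, d)
        self.max_len = max_len
        self.self_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        # Encoder-interlingua attention (Eq. 2).
        self.enc_inter_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
        # Interlingua length predictor over mean-pooled encoder states (Eq. 3).
        self.len_predictor = nn.Linear(d, max_len)

    def predict_length(self, enc_out):
        # enc_out: (batch, src_len, d); mean-pool over source positions,
        # then treat the argmax class index as the predicted len_I(X).
        pooled = enc_out.mean(dim=1)
        return self.len_predictor(pooled).argmax(dim=-1)

    def forward(self, enc_out, lang_id, inter_len):
        # Q_I: the first inter_len language-specific query embeddings (Eq. 4);
        # lang_id is an integer index of the source language.
        idx = lang_id * self.max_len + torch.arange(inter_len, device=enc_out.device)
        q = self.query_emb(idx).unsqueeze(0).expand(enc_out.size(0), -1, -1)
        q, _ = self.self_attn(q, q, q)
        # Attend from the interlingua queries to the encoder states (Eq. 2).
        h_ei, _ = self.enc_inter_attn(q, enc_out, enc_out)
        return self.ffn(h_ei)  # H_I: (batch, inter_len, d), fed to the decoder
```

During training, `inter_len` would be the teacher-forced length of the centric-language sentence; at inference from a non-centric source, `predict_length` would supply it, mirroring the description in Section 3.1.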
The language-independency of encoded representations within multilingual neural machine translation (MNMT) models is crucial for their generalization ability on zero-shot translation. Neural interlingua representations have been shown to be an effective method for achieving this. However, the fixed-length neural interlingua representations introduced in previous work can limit their flexibility and representation ability. In this study, we introduce a novel method that enhances neural interlingua representations by making their length variable, thereby overcoming the constraint of a fixed length. Our empirical results on zero-shot translation on the OPUS, IWSLT, and Europarl datasets demonstrate stable model convergence and superior zero-shot translation results compared to fixed-length neural interlingua representations. However, our analysis reveals the suboptimal efficacy of our approach when translating from certain source languages, and we pinpoint the model component responsible for this behavior.
Variable-length Neural Interlingua Representations for Zero-shot Neural Machine Translation
[ { "figure_caption": "Figure 1 :1Figure 1: (a) Previous fixed-length neural interlingua representations; (b) Our proposed variable-length neural interlingua representations. Each colored box denotes the representation (R d×1 ) on the corresponding position. \"Enc.\", \"Dec.\", and \"d\" are encoder, decoder, and dimension of model hidden states.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Variable-length interlingua module. \"zh-x\" denotes the x-th embedding of a Chinese-specific interlingua query.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Validation NMT loss curve on OPUS.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Overall BLEU results on OPUS, IWSLT, and Europarl. The best result among all the settings except Pivot is in bold. We mark the results significantly(Koehn, 2004) better than \"Len-fix. Uni. Intl.\" with † for OPUS dataset. Len-vari. Intl. (ours) 20.6 † 18.3 † 26.0 † 23.4 † 20.2 † 22.1 † 20.8 31.8 † 20.0 31.9 † 6.3 14.5", "figure_data": "MethodsZero-shot OPUS IWSLT Europarl OPUS IWSLT Europarl OPUS IWSLT Europarl Supervised: From en Supervised: To enPivot22.019.929.5------MNMT16.513.129.031.229.632.936.833.536.1Len-fix. Uni. Intl.18.212.717.429.619.620.135.322.221.8Len-fix. LS. Intl.18.44.75.830.17.36.735.712.97.1Len-vari. Intl. (ours)18.9 †14.829.630.2 †26.232.634.027.133.8Methodsde-fr →←ru-fr →←nl-de →←zh-ru →←zh-ar →nl-ar ← →←Zero-shot Avg.Pivot23.4 21.2 31.0 26.0 21.8 23.6 24.8 37.9 24.0 38.9 7.4 17.422.0MNMT17.6 15.0 21.5 17.7 17.9 21.4 15.3 27.6 18.0 28.6 5.3 13.316.5Len-fix. Uni. Intl.20.1 17.0 25.0 22.4 19.5 21.3 20.3 30.9 19.6 30.4 6.1 14.418.2Len-fix. LS. Intl.20.7 17.7 25.7 21.7 19.8 21.6 19.9 31.5 20.1 31.6 6.5 14.518.4", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Len-vari. Intl. (ours) 23.3 † 33.8 30.1 † 32.3 32.9 † 32.6 27.3 27.9 29.5 † 32.2 38.0 45.3 † 30.2 † 34.0 BLEU results of supervised translation on OPUS. The best result among all the settings is in bold. We mark the results significantly(Koehn, 2004) better than \"Len-fix. Uni. Intl.\" with †. † 33.4 † 33.3 † 29.4 † 33.4 † 46.0 † 35.2 †", "figure_data": "Methodsen-ar →←en-de →←en-fr →←en-nl →←en-ru →←en-zh →Supervised Avg. ← From en To enMNMT23.9 37.8 30.8 34.6 33.9 35.5 27.8 31.5 29.4 35.1 41.2 46.431.236.8Len-fix. Uni. Intl.22.6 36.6 28.9 33.0 31.7 33.5 27.4 30.1 28.4 34.0 38.8 44.629.635.3Len-fix. LS. Intl.22.9 36.8 29.0 33.8 32.3 33.9 27.7 30.6 28.9 34.3 39.5 44.830.135.7ardefrnlruzh Avg.Acc. of Len. Pre.20.6 26.5 17.6 19.3 21.1 13.8 19.8Avg. of | Len. Pre. -gold |2.43.43.83.13.33.93.3BLEU w/ Len. Pre.33.8 32.3 32.6 27.9 32.2 45.3 34.0BLEU w/ gold35.5", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Zhuoyuan Mao; Haiyue Song; Raj Dabre; Chenhui Chu; Sadao Kurohashi
[ { "authors": "Roee Aharoni; Melvin Johnson; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Massively multilingual neural machine translation", "year": "2019-06" }, { "authors": "Maruan Al-Shedivat; Ankur Parikh", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Consistency by agreement in zero-shot neural machine translation", "year": "2019-06" }, { "authors": "Naveen Arivazhagan; Ankur Bapna; Orhan Firat; Roee Aharoni; Melvin Johnson; Wolfgang Macherey", "journal": "", "ref_id": "b2", "title": "The missing ingredient in zero-shot neural machine translation", "year": "2019" }, { "authors": "Mauro Cettolo; Marcello Federico; Luisa Bentivogli; Jan Niehues; Sebastian Stüker; Katsuhito Sudoh; Koichiro Yoshino; Christian Federmann", "journal": "", "ref_id": "b3", "title": "Overview of the IWSLT 2017 evaluation campaign", "year": "2017-12-14" }, { "authors": "Raj Dabre; Chenhui Chu; Anoop Kunchukuttan", "journal": "ACM Comput. Surv", "ref_id": "b4", "title": "A survey of multilingual neural machine translation", "year": "2021" }, { "authors": "Daxiang Dong; Hua Wu; Wei He; Dianhai Yu; Haifeng Wang", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Multi-task learning for multiple language translation", "year": "2015-07" }, { "authors": "Orhan Firat; Kyunghyun Cho; Yoshua Bengio", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Multi-way, multilingual neural machine translation with a shared attention mechanism", "year": "2016-06" }, { "authors": "Jiatao Gu; Yong Wang; Kyunghyun Cho; O K Victor; Li", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Improved zero-shot neural machine translation via ignoring spurious correlations", "year": "2019-07" }, { "authors": "Thanh - Ha; Jan Le; Alex Niehues; Waibel", "journal": "", "ref_id": "b8", "title": "Toward multilingual neural machine translation with universal encoder and decoder", "year": "2016" }, { "authors": "Junjie Hu; Sebastian Ruder; Aditya Siddhant; Graham Neubig; Orhan Firat; Melvin Johnson", "journal": "", "ref_id": "b9", "title": "XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation", "year": "2020-07" }, { "authors": " Pmlr", "journal": "", "ref_id": "b10", "title": "", "year": "" }, { "authors": "Melvin Johnson; Mike Schuster; Quoc V Le; Maxim Krikun; Yonghui Wu; Zhifeng Chen; Nikhil Thorat; Fernanda Viégas; Martin Wattenberg; Greg Corrado; Macduff Hughes; Jeffrey Dean", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b11", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "year": "2017" }, { "authors": "Diederik P Kingma; Jimmy Ba", "journal": "", "ref_id": "b12", "title": "Adam: A method for stochastic optimization", "year": "2015-05-07" }, { "authors": "Philipp Koehn; Hieu Hoang; Alexandra Birch; Chris Callison-Burch; Marcello Federico; Nicola Bertoldi; Brooke Cowan; Wade Shen; Christine Moran; Richard Zens; Chris Dyer; Ondřej Bojar; Alexandra Constantin; Evan Herbst", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Moses: Open source toolkit for statistical machine translation", "year": "2007-06" }, { "authors": "Philipp Koehn", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Statistical significance tests for machine translation evaluation", "year": 
"2004-07" }, { "authors": "Philipp Koehn", "journal": "", "ref_id": "b15", "title": "Europarl: A parallel corpus for statistical machine translation", "year": "2005-09-13" }, { "authors": "Yinhan Liu; Jiatao Gu; Naman Goyal; Xian Li; Sergey Edunov; Marjan Ghazvininejad; Mike Lewis; Luke Zettlemoyer", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b16", "title": "Multilingual denoising pre-training for neural machine translation", "year": "2020" }, { "authors": "Danni Liu; Jan Niehues; James Cross; Francisco Guzmán; Xian Li", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Improving zero-shot translation by disentangling positional information", "year": "2021-08" }, { "authors": "Yichao Lu; Phillip Keung; Faisal Ladhak; Vikas Bhardwaj; Shaonan Zhang; Jason Sun", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "A neural interlingua for multilingual machine translation", "year": "2018-10" }, { "authors": "Zhuoyuan Mao; Raj Dabre; Qianying Liu; Haiyue Song; Chenhui Chu; Sadao Kurohashi", "journal": "", "ref_id": "b19", "title": "Exploring the impact of layer normalization zero-shot neural machine translation", "year": "2023" }, { "authors": "Paulius Micikevicius; Sharan Narang; Jonah Alben; Gregory F Diamos; Erich Elsen; David García; Boris Ginsburg; Michael Houston; Oleksii Kuchaiev; Ganesh Venkatesh; Hao Wu", "journal": "", "ref_id": "b20", "title": "Mixed precision training", "year": "2018-04-30" }, { "authors": "Ngoc - Pham; Jan Quan; Thanh-Le Niehues; Alexander Ha; Waibel", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Improving zero-shot translation with language-independent constraints", "year": "2019-08" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "A call for clarity in reporting BLEU scores", "year": "2018-10" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Neural machine translation of rare words with subword units", "year": "2016-08" }, { "authors": "Xu Tan; Jiale Chen; Di He; Yingce Xia; Tao Qin; Tie-Yan Liu", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Multilingual neural machine translation with language clustering", "year": "2019-11" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b25", "title": "Attention is all you need", "year": "2017-09" }, { "authors": "Raúl Vázquez; Alessandro Raganato; Jörg Tiedemann; Mathias Creutz", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Multilingual NMT with a language-independent attention bridge", "year": "2019-08" }, { "authors": "Xinyi Wang; Hieu Pham; Philip Arthur; Graham Neubig", "journal": "", "ref_id": "b27", "title": "Multilingual neural machine translation with soft decoupled encoding", "year": "2019-05-06" }, { "authors": "Weizhi Wang; Zhirui Zhang; Yichao Du; Boxing Chen; Jun Xie; Weihua Luo", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Rethinking zeroshot neural machine translation: From a perspective of latent variables", "year": "2021-11" }, { "authors": "Liwei Wu; Shanbo Cheng; Mingxuan Wang; Lei Li", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Language 
tags matter for zero-shot neural machine translation", "year": "2021-08" }, { "authors": "Yilin Yang; Akiko Eriguchi; Alexandre Muzio; Prasad Tadepalli; Stefan Lee; Hany Hassan", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Improving multilingual translation by representation and gradient regularization", "year": "2021-11" }, { "authors": "Biao Zhang; Philip Williams; Ivan Titov; Rico Sennrich", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Improving massively multilingual neural machine translation and zero-shot translation", "year": "2020-07" } ]
[ { "formula_coordinates": [ 4, 72, 142.07, 118.79, 18.97 ], "formula_id": "formula_0", "formula_text": "len I (X) = len(X)," }, { "formula_coordinates": [ 4, 121.68, 441.37, 170, 11.73 ], "formula_id": "formula_1", "formula_text": "H EI = Attn(Q I , H S , H S ),(2)" }, { "formula_coordinates": [ 4, 312.59, 107.5, 206.21, 27.64 ], "formula_id": "formula_2", "formula_text": "len I (X) = arg max i softmax( 1 H T S k W + b) i ," }, { "formula_coordinates": [ 4, 389.77, 319.42, 135.78, 10.81 ], "formula_id": "formula_3", "formula_text": "Q I = E l I S ,(4)" }, { "formula_coordinates": [ 5, 74.68, 274.3, 217, 28.23 ], "formula_id": "formula_4", "formula_text": "L IA = 1 -1 len I (X) i cos < H I (X) i , H I (Y ) i > .(6)" } ]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b5", "b6", "b7", "b2" ], "table_ref": [], "text": "Inspired by the way humans learn, deep reinforcement learning (DRL) is a machine learning paradigm in which a system, or agent, autonomously learns from gathered experience. Most famously, DRL has been successfully applied to board and video games [1,2] with superhuman performance. In recent years, DRL has also shown promising results in industrial use-cases and combinatorial optimization problems such as the job shop scheduling problem (JSSP) [3][4][5].\nScheduling problems deal with the allocation of resources to jobs over time to optimize criteria such as total time spent to process all jobs, called makespan [6]. The JSSP in particular is a problem formulation, where each job must visit each machine in a factory in a fixed order, and is considered NP-hard to solve optimally. In practice, scheduling problems are often solved using priority dispatching rules (PDRs) consisting of simple rules for determining the priority of jobs over a scheduling sequence [6]. The main promise of DRL for the scheduling problems compared to alternative solution approaches is that it may yield better solutions than commonly used PDRs but with much shorter computation times and less formulation effort than optimal solvers [7]. Yet, DRL for scheduling problems is only in its infancy. On the one hand, the field still neglects some problem conditions inherent to real-world problems [8], and on the other hand it lags behind in the application of promising DRL paradigms such as curriculum learning (CL).\nCL is a recent but very active research field in DRL and is built on the premise that, as with human learning, curricula play a critical role in effective learning behaviors in DRL. More precisely, CL is concerned with generating and learning from suitable experience sequences for the DRL agent. These sequences form the curriculum, which typically progressively varies the task difficulty leading up a final goal. The transfer of CL to the JSSP domain, has only recently been attempted [3]. Such existing methods design curriculums which vary between different problem sizes, i.e. numbers of jobs and machines per problem instance. While applicable to toy-box scenarios, the number of machines is often constant in real-world scenarios and corresponding usable training data. More granular CL within one fixed problem size, however, has not been studied yet. The missing component to accomplish CL in this granularity is a common definition of a degree of difficulty of problem instances within the same problem size.\nIn this work, we present such a definition and propose a new CL strategy for solving the JSSP with DRL. Comparing the learning behavior with and without CL, we empirically show the superiority of our approach with respect to the achieved average makespan. Our main contributions are summarized as follows:\n• The introduction of a measure for the relative difficulty of a problem instance in JSSPs of the same problem size. • A curriculum learning strategy for JSSPs suitable to steer the learning behavior of DRL agents and to receive shorter average makespans (compare Figure 1). 
The observed behavior shows that starting training on the most difficult instances decreases the resulting makespans by 3%.\nThe remainder of this paper is structured as follows: In section 2 we summarize latest achievements in DRLbased JSSP solutions and introduce CL in this context. Section 3 details our solution method and experimental setup, followed by the presentation of the results and insights in section 4 and their discussions in section 5. Finally, section 6 provides a conclusion and outlook to future work. " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Deep Reinforcement Learning for Job Shop Scheduling Problems", "publication_ref": [ "b9", "b10", "b4", "b2", "b3", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20" ], "table_ref": [], "text": "Literature on DRL based JSSP solutions is rapidly increasing in volume and can roughly be divided into two classes: ground research and applied research. Ground research is generally concerned with new architectures [10,11,5], learning design decisions [3,4] and their comparison to existing solution methods, such as priority dispatching rules, meta heuristics and optimal solvers [12]. Here, we find continuously better performance of DRL on standard JSSP problems and benchmark datasets, matching and outperforming PDRs in recent years.\nApplied research often considers an additional dimension in the problem formulation inspired by real-world use-cases, such as stochasticity [13,14], machine flexibility [15][16][17], dynamic job releases [18], machine failures [19] or multi-objective optimization criteria [20,21]. These studies show the general feasibility of DRL to learn, but are typically not very competitive with expert systems. Our contribution lies closer to the first class, as it methodologically extends an existing approach by means of a new learning paradigm for CL in job shop scheduling." }, { "figure_ref": [], "heading": "Curriculum Learning in Deep Reinforcement Learning based Job Shop Scheduling", "publication_ref": [ "b8", "b21", "b22", "b23", "b24", "b10", "b17", "b25", "b2", "b2" ], "table_ref": [], "text": "According to Narvekar et al. [9], curriculum learning consists of three key elements: task generation, which deals with the division of the overall goal into easier sub-goals and the generation of suitable training experience; sequencing, dealing with the order in which to present the training experience; and transfer learning, comprising methods to tackle forgetting skills acquired from past experience when confronted with new experience.\nCL for DRL-based JSSP is not much investigated in the current state of research. In a wider sense, CL is used in several approaches to DRL-based job shop scheduling by applying variations of experience replay [22][23][24][25]11,18], in which the gathered experience is rearranged and sampled aiming to make learning more stable. In that way, it is loosely related to the sequencing element of CL. However, experience replay works with the experience once it is already gathered, skipping the task generation element of CL. Task generation is less studied and a remaining challenge for solving combinatorial optimization problems with DRL [26].\nIn our work, we propose an own metric for the importance of experience based on the performance of priority dispatching rules, which serves as a discriminating factor for easy and hard tasks.\nIklassov et al. 
[3] explicitly propose CL in the JSSP domain. They define the easiness of sub-goals of JSSPs through problem sizes, as most common in combinatorial problems because of the solution space scales with the problem size [27]. By this definition, a problem instance with more jobs and machines is harder than one with less jobs and machines. Making use of a problem size agnostic neural network architecture, the authors introduce an automatic sequencing algorithm which favors collecting experiences from the currently hardest problem size. Their results indicate that models trained with CL drastically outperform those trained without CL. Our approach differs from that of Iklassov et al. [3] in that we apply CL for problem instances of the same size. Hence, we are closing an important gap that enables applying CL to those manufacturing scenarios in which the number of machines remains the same." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b4" ], "table_ref": [], "text": "In the following, we first summarize the work by Zhang et al. [5], which serves as our methodological testbed and baseline, followed by the details of our CL extension and experimental setup." }, { "figure_ref": [], "heading": "Deep Reinforcement Learning Approach", "publication_ref": [ "b4" ], "table_ref": [], "text": "Our approach extends the method and framework presented in Zhang et al. [5], which shows competitive results on recognized benchmark datasets of the JSSP with makespan optimization. Specifically, our method adapts the interaction logic of the DRL agent with the simulation (action-space and environment step), the action evaluation signal (reward), the input formulation (observation-space), and the network architecture.\nThe studied DRL agent iteratively plans tasks of a JSSP by choosing from the list of still unfinished jobs in each iteration step. The corresponding next task of this job is scheduled to start at the earliest still possible time by a mechanism called left-shifting: left shifting means that if the current plan, consisting of all scheduled tasks up to that point, can be optimized by switching the position of the chosen task with the previous one on the used machine, this switch is executed by the simulation. The corresponding reward signal consists of the difference of the makespan of the already scheduled tasks before and after the last step, such that the cumulative reward received throughout the planning process equals the negative makespan of the final plan. The scheduling decision is based on a size-agnostic embedding: For each task, the embedding contains the information whether it is done and what its current lower bound of the makespan is. Each task represents a node in a graph neural network in which the corresponding information is propagated from node to node and finally aggregated by summation.\nIn the original paper, the 40.000 training instances per agent were generated on the fly by randomly sorting processing orders on machines and drawing processing times randomly from a normal distribution. Our central extension is a different sampling procedure as part of the CL approach." }, { "figure_ref": [ "fig_0" ], "heading": "Curricular Training Procedure", "publication_ref": [ "b4", "b4", "b24" ], "table_ref": [], "text": "Task Generation (definition of instance difficulty): In order to carry out curriculum learning, a feature to divide problem instances into subtasks that vary in difficulty is essential. 
Since instances of the same problem size by definition share the same computational complexity, we resort to a feature defined by how well we are already able to solve instances through an established set of rules, i.e. PDRs. We call this discriminative feature difficulty to solve (DTS). DTS is defined as the makespan, which the most competitive PDR achieves on any given problem instance. Accordingly, we speak of those instances on which a shorter than average makespan is realized through the best PDR as easy tasks and those on which a longer makespan is realized as hard tasks. Applied to our use case, we proceed as follows (cf. Figure 1, step 1 and 1.1 of our method):\nAs in Zhang et al. [5], we generate 40.000 random 6x6 JSSP training instances from normal distributions with respect to machine orders and processing times. After solving the training data with six commonly used priority dispatching rules jointly with the left-shifting procedure used in Zhang et al. [5], we find that the most tasks remaining (MTR) prioritization rule performs best with an average of a 16% larger makespan compared to the optimal makespan (optimality gap). The results of all considered PDRs are shown in the appendix (Table A1). MTR only performed marginally better than the least remaining processing time (LRPT) prioritization, but much better than the most often used shortest processing time (SPT). Optimal solutions were generated using the CP-SAT solver by OR-Tools [25]. Figure 2a) depicts the distribution of achieved makespans through MTR, which is our used DTS metric.\nSequencing: The creation of training sequences is the next step. Often the difficulty is gradually increased over training in CL, following the intuition that an agent learns a basic strategy first and refines it later to match more difficult scenarios. To cover this sequence, but also others, we sort the training instances by DTS, as depicted in Figure 2b), then split it into the easy and hard halves and keep the original, or normal, order (e_n, h_n) or reverse it (e_r, h_r). For example, e_n (red line in Figure 2b)) consists of half of the training data in normal order, i.e. starting from the lowest DTS around 300 and ending at the mean DTS of about 580. The four portions make up our ordered training curriculum elements (CE). One entire training curriculum consists of two concatenated CEs, e.g. [e_n, e_n] or [e_n, h_r], resulting in the 16 possible curricula, schematically depicted in Figure 3." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b4" ], "table_ref": [], "text": "The experiments are designed such that differences in the agent behavior and performance may only be attributed to the training curricula. To this end, a separate RL-agent is trained for each of the 16 curricula until all training instances within the curriculum have been shown once. As baseline, we also train three RL-agents on unordered training data, where the problem instances were randomly shuffled and the agents are randomly initialized with varying random seeds. Training hyperparameters are fixed in accordance with Zhang et al. [5] for all experiments. All agents are tested on a fixed test dataset containing 1000 problem instances each time 2000 training instances have been shown. For more statistically significant results, we sampled three different training datasets with varying random seeds as described in section 3.2. The above experiments are carried out separately on all three datasets." 
}, { "figure_ref": [ "fig_2" ], "heading": "Experiment Results", "publication_ref": [], "table_ref": [], "text": "Figure 4 shows the results of agents tested on the validation instances over the course of training. Agents trained on the same CE in the first half of the training period, e.g." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "on e_n in [e_n, e_n], [e_n, e_r], [e_n, h_n],", "publication_ref": [], "table_ref": [], "text": "[e_n, h_r], are averaged across the three datasets and depicted as solid lines. Generally, one can observe a rapid decline to a first dip of the optimality gap from the first validation point after 2.000 training instances to 6.000 training instances, followed by an increase in the optimality gap and a gradual subsequent convergence to final values towards the end of the training. Interestingly, more than 70% of the agents reach their global minimum in the first dip. This indicates that the agents develop the most successful strategy in the very beginning of training and never return to it, but converge towards a higher (worse) optimality gap instead. Moreover, the best agents 10% of all agents reached their minimum in the first dip.\nA closer look at the first dip (cf. the zoom-in on the right in Figure 4) reveals that the lowest point is directly related to the easiness of the first data shown to the agent, hence the training curriculum. More precisely, the lowest point corresponds to agents trained with the h_r CE (blue line), meaning that they have been trained on the hardest training data. Inversely, agents trained on e_n (red line) remain highest among all points at the first dip. Noteworthily, all agents trained on a CE perform better than those trained without a curriculum (black line) on average. This means that it is advisable to use a curriculum, specifically the h_r curriculum in the beginning of the training, to achieve the best results. In our case, we achieve 7% better results (1.1%p) in the first dip. Overall, we achieve 3% better results (0.5%p).\nNext, we analyze the training behavior of the agents regarding the second half of training, where the second CEs are presented. Note that in some cases, such as [e_n, e_n], the agent sees only one half of all instances.\nWe study two main questions: Firstly, does the second CE have a consistent impact on the final result? Secondly, does the curriculum have a reproducible impact upon introduction in the middle of the training (difference between test after 20,000 and 22,000 training instances)? The latter may help to steer the agent away from a local minimum. Figure 5 shows the learning curves of agents trained on different curriculums composed from the same training dataset. In each plot, learning curves of four agents are displayed. The plots overlap in the first half of the training because of being trained on the same first CE, but diverge in the middle upon introduction of the second CE. Across all plots we were not able to find significant correlations between second curriculum elements and the learning curve with respect to optimal performance. This answers the first question: the curriculum element in the second half does not have a consistent impact on the final overall result. However, we observed trends regarding the local behavior in the beginning of the second half of training. Similar to the behavior in the first half, h_r generally invokes the largest drop in optimality gap compared to the other CEs in three out of four cases. 
In some cases, this goes so far that while h_r invokes a drop in the optimality gap, e_n invokes a rise in the optimality gap. Analogously, the last bar indicates that in five cases, e_n achieves the lowest performance. Generally, we find that h_r ranks highest and e_n lowest, whereas h_n and e_r rank in between. Similarly, in Figure 6 b), we can look at the absolute impact and count the number of times a CE caused an immediate jump towards better or worse optimality gap. Evidently h_r and h_n rather cause jumps towards better optimality gaps, whereas e_r is neutral and e_n causes jumps towards worse optimality gaps more often than not. " }, { "figure_ref": [ "fig_2" ], "heading": "Discussion of Results", "publication_ref": [], "table_ref": [], "text": "The presented results suggest that the learning behavior of the DRL agent can be positively influenced through the CEs defined in this study. As a practical consequence, we achieve better global results after a comparatively short training period. We therefore propose using CL according to our methodology, which is easily implemented and integrated into existing solution approaches. On a more fundamental level, the results suggest that the proposed DTS metric is useful to evaluate the easiness of a JSSP problem instance, a novum in this particular domain. During the experiments, we further observed the global minimum in the dip (cf. Figure 4) during training. Though useful in this particular case, in RL we much rather observe smooth, almost monotonically decreasing learning curves. An investigation for the reason behind the learning curve may be subject of future work.\nAnother noteworthy observation is that learning on the hardest problems first achieves the best outcomes. CL methods otherwise typically start from easier sub-problems and transfer this knowledge into the actual final problem. Our initial explanation attempt, is that the harder problems introduce a stronger negative reward signal through the larger makespan (note that our definition of DTS is related to the achieved makespan), pushing the agent more towards a certain initial strategy. Another intuitive hypothesis is that a strategy working well on harder problems, which inhibit a larger makespan, very effectively decreases the large optimality gap of these problems and leads to the strong results. To test whether a strategy that minimizes the particularly large optimality gaps may be incentivized through a curriculum, one may try using the optimality gap instead of the makespan achieved by MTR as DTS metric in the future. Note, however, that this requires solving every training instance optimally, which is much slower than our MTRbased approach especially when applying the method to larger problem instances." }, { "figure_ref": [], "heading": "Conclusion and Outlook", "publication_ref": [], "table_ref": [], "text": "CL is a promising DRL paradigm, yet not well studied in the context of JSSP solutions. In this study we investigated the impact of a learning curriculum within a fixed problem size of the JSSP. We found that ordering training instances by how well an established priority dispatching rule, MTR, performs on these instances provides meaningful metric for forming curricula that allow us to improve the learning behavior of DRL agents and to increase the scheduling performance. 
By starting the training with instances sorted from worst to best performances of MTR, our approach consistently outperforms agents trained on randomly ordered training data.\nMotivated by the presented results, in our future work we will investigate other metrics for the difficulty of problem instances of the same problem size. These may stem from priority dispatching rules that are combined for better performance or well suited for certain modifications of the JSSP. This is especially necessary for the successful transfer the methodology to other scheduling problems which include more challenging optimization objectives and additional constraints." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research work was undertaken within the research project AlphaMES funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK)." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "" } ]
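As a compact illustration of the task-generation and sequencing steps in Section 3.2, the sketch below ranks training instances by their DTS (the makespan achieved by the MTR rule with left-shifting) and builds the four curriculum elements e_n, e_r, h_n, and h_r; the `mtr_makespan` helper is a hypothetical placeholder for a JSSP simulator call and is not part of the authors' published code.

```python
import numpy as np

def mtr_makespan(instance):
    """Hypothetical placeholder: schedule `instance` with the Most Tasks
    Remaining (MTR) rule plus left-shifting in a JSSP simulator and return
    the resulting makespan. The simulator interface is assumed here."""
    raise NotImplementedError

def build_curriculum_elements(instances):
    """Rank instances by DTS (= MTR makespan) and form the four curriculum
    elements: e_n / e_r (easy half, normal / reversed order) and
    h_n / h_r (hard half, normal / reversed order)."""
    dts = np.array([mtr_makespan(inst) for inst in instances])
    order = np.argsort(dts)              # ascending DTS, easiest first
    half = len(order) // 2
    easy, hard = order[:half], order[half:]
    return {
        "e_n": [instances[i] for i in easy],
        "e_r": [instances[i] for i in easy[::-1]],
        "h_n": [instances[i] for i in hard],
        "h_r": [instances[i] for i in hard[::-1]],  # hardest instances first
    }

# One full curriculum concatenates two elements (16 combinations in total);
# the paper finds that starting training on h_r (hardest first) works best:
# elements = build_curriculum_elements(train_instances)
# curriculum = elements["h_r"] + elements["h_n"]   # one of the 16 curricula
```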
Solving job shop scheduling problems (JSSPs) with a fixed strategy, such as a priority dispatching rule, may yield satisfactory results for some problem instances but insufficient results for others. From this single-strategy perspective, finding a near-optimal solution to a specific JSSP varies in difficulty even if the machine setup remains the same. A recent, intensively researched, and promising method to deal with this variability in difficulty is Deep Reinforcement Learning (DRL), which dynamically adjusts an agent's planning strategy in response to difficult instances not only during training, but also when applied to new situations. In this paper, we further improve DRL as an underlying method by actively incorporating the variability of difficulty within the same problem size into the design of the learning process. We base our approach on a state-of-the-art methodology that solves the JSSP by means of DRL and graph neural network embeddings. Our work supplements the training routine of the agent with a curriculum learning strategy that ranks the problem instances shown during training by a new metric of problem instance difficulty. Our results show that certain curricula lead to significantly better performance of the DRL solutions. Agents trained on these curricula beat the top performance of those trained on randomly distributed training data, reaching 3.2% shorter average makespans.
Curriculum Learning in Job Shop Scheduling Using Reinforcement Learning
[ { "figure_caption": "Figure 1: Comparison of the common method with our proposed method. Extension through calculations on the training instances and difference in training procedure.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figures 2 and 3: Training data consisting of 40,000 unique instances. a) Histogram of training instances by their DTS (makespan through the MTR dispatching rule); b) Elements of the curriculum: portions of training data sorted by DTS. (e_n = easy, normal order; e_r = easy, reversed order; h_n = hard, normal order; h_r = hard, reversed order)", "figure_data": "", "figure_id": "fig_1", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 4: Performance of the agents on the test instances over training progress. Lines indicate the mean across three random seeds for the training instance generation. Shaded areas indicate the minimum and maximum values across three runs. Colored lines represent agents trained on curricula, where the first curriculum element is indicated in the legend. The second curriculum element is h_r for all depicted agents. The black line represents agents trained on randomly ordered training instances.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6: Statistics of the immediate local impact of the CEs in the second half of training. Figure 6 a) shows the relative statistical impact of the CEs compared to the CEs by rank. The first bar indicates that in nine out of twelve cases, h_r invoked the largest immediate drop in optimality gap. Analogously, the last bar indicates that in five cases, e_n achieves the lowest performance. Generally, we find that h_r ranks highest and e_n lowest, whereas h_n and e_r rank in between. Similarly, in Figure 6 b), we can look at the absolute impact and count the number of times a CE caused an immediate jump towards a better or worse optimality gap. Evidently, h_r and h_n tend to cause jumps towards better optimality gaps, whereas e_r is neutral and e_n causes jumps towards worse optimality gaps more often than not.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 6: Statistics of the immediate local impact of the CEs in the second half of training. Figure 6 a) shows the relative statistical impact of the CEs compared to the CEs by rank. The first bar indicates that in nine out of twelve cases, h_r invoked the largest immediate drop in optimality gap. Analogously, the last bar indicates that in five cases, e_n achieves the lowest performance. Generally, we find that h_r ranks highest and e_n lowest, whereas h_n and e_r rank in between. Similarly, in Figure 6 b), we can look at the absolute impact and count the number of times a CE caused an immediate jump towards a better or worse optimality gap. Evidently, h_r and h_n tend to cause jumps towards better optimality gaps, whereas e_r is neutral and e_n causes jumps towards worse optimality gaps more often than not.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figures 5 and 6: Zoom-ins on the second half of trainings.
In each plot, training was performed on the same curriculum element in the first half (top left on [h_r], top right on [h_n], bottom left on [e_r] and bottom right on [e_n])", "figure_data": "", "figure_id": "fig_5", "figure_label": "56", "figure_type": "figure" } ]
Constantin Waubert De Puiseau; Hasan Tercan; Tobias Meisen
[ { "authors": "A P Badia; B Piot; S Kapturowski; P Sprechmann; A Vitvitskyi; D Guo; C Blundell", "journal": "", "ref_id": "b0", "title": "Agent57: Outperforming the Atari Human Benchmark", "year": "2020" }, { "authors": "O Vinyals; I Babuschkin; W M Czarnecki; M Mathieu; A Dudzik; J Chung; D H Choi; R Powell; T Ewalds; P Georgiev; J Oh; D Horgan; M Kroiss; I Danihelka; A Huang; L Sifre; T Cai; J P Agapiou; M Jaderberg; A S Vezhnevets; R Leblond; T Pohlen; V Dalibard; D Budden; Y Sulsky; J Molloy; T L Paine; C Gulcehre; Z Wang; T Pfaff; Y Wu; R Ring; D Yogatama; D Wünsch; K Mckinney; O Smith; T Schaul; T Lillicrap; K Kavukcuoglu; D Hassabis; C Apps; D Silver", "journal": "Nature", "ref_id": "b1", "title": "Grandmaster level in StarCraft II using multi-agent reinforcement learning", "year": "2019" }, { "authors": "Z Iklassov; D Medvedev; R Solozabal; M Takac", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b2", "title": "Learning to generalize Dispatching rules on the Job Shop Scheduling", "year": "2020" }, { "authors": "V Samsonov; K B Hicham; T Meisen", "journal": "Engineering Applications of Artificial Intelligence", "ref_id": "b3", "title": "Reinforcement Learning in Manufacturing Control: Baselines, Challenges and Ways Forward", "year": "2022" }, { "authors": "C Zhang; W Song; Z Cao; J Zhang; P S Tan; X Chi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b4", "title": "Learning to Dispatch for Job Shop Scheduling via Deep Reinforcement Learning", "year": "2020" }, { "authors": "M Pinedo", "journal": "Springer International Publishing", "ref_id": "b5", "title": "Scheduling: Theory, algorithms, and systems", "year": "2016" }, { "authors": "I Bello; H Pham; V Le; Q Norouzi; M Bengio", "journal": "", "ref_id": "b6", "title": "Neural Combinatorial Optimization with Reinforcement Learning", "year": "2016" }, { "authors": "C W Puiseau; De; R Meyes; T Meisen", "journal": "Journal of Intelligent Manufacturing", "ref_id": "b7", "title": "On reliability of reinforcement learning based production scheduling systems: a comparative survey", "year": "2022" }, { "authors": "J Park; J Chun; S H Kim; Y Kim; J Park", "journal": "International journal of production research", "ref_id": "b8", "title": "Learning to schedule job-shop problems: representation and policy learning using graph neural network and reinforcement learning", "year": "2021" }, { "authors": "R Magalhães; M Martins; S Vieira; F Santos; J Sousa", "journal": "", "ref_id": "b9", "title": "Encoder-Decoder Neural Network Architecture for solving Job Shop Scheduling Problems using Reinforcement Learning", "year": "2021" }, { "authors": "T Van Ekeris; R Meyes; T Meisen", "journal": "", "ref_id": "b10", "title": "Discovering Heuristics And Metaheuristics For Job Shop Scheduling From Scratch Via Deep Reinforcement Learning", "year": "2021" }, { "authors": "S Abderrazzak; A Hamid; S Omar", "journal": "IFIP Advances in Information and Communication Technology", "ref_id": "b11", "title": "Stochastic Dynamic Programming for Earliness-Tardiness Single Machine Scheduling with Maintenance Considerations", "year": "2021" }, { "authors": "J Brammer; B Lutz; D Neumann", "journal": "OR Spectrum", "ref_id": "b12", "title": "Stochastic mixed model sequencing with multiple stations using reinforcement learning and probability quantiles", "year": "2022" }, { "authors": "S Luo", "journal": "Applied Soft Computing", "ref_id": "b13", "title": "Dynamic scheduling for flexible job shop with new job insertions 
by deep reinforcement learning", "year": "2020" }, { "authors": "S Baer; J Bakakeu; R Meyes; T Meisen", "journal": "", "ref_id": "b14", "title": "Multi-Agent Reinforcement Learning for Job Shop Scheduling in Flexible Manufacturing Systems", "year": "2019" }, { "authors": "B Waschneck; A Reichstaller; L Belzner; T Altenmüller; T Bauernhansl; A Knapp; A Kyek", "journal": "Procedia CIRP", "ref_id": "b15", "title": "Optimization of global production scheduling with deep reinforcement learning", "year": "2018" }, { "authors": "Y Zeng; Z Liao; Y Dai; R Wang; X Li; B Yuan", "journal": "", "ref_id": "b16", "title": "Hybrid intelligence for dynamic job-shop scheduling with deep reinforcement learning and attention mechanism", "year": "2022" }, { "authors": "A Kuhnle; J.-P Kaiser; F Theiß; N Stricker; G Lanza", "journal": "Journal of Intelligent Manufacturing", "ref_id": "b17", "title": "Designing an adaptive production control system using reinforcement learning", "year": "2021" }, { "authors": "P C Luo; H Q Xiong; B W Zhang; J Y Peng; Z F Xiong", "journal": "International journal of production research", "ref_id": "b18", "title": "Multi-resource constrained dynamic workshop scheduling based on proximal policy optimisation", "year": "2022" }, { "authors": "A Rinciog; C Mieth; P M Scheikl; A Meyer", "journal": "", "ref_id": "b19", "title": "Sheet-Metal Production Scheduling Using AlphaGo Zero", "year": "2020" }, { "authors": "S Narvekar; B Peng; M Leonetti; J Sinapov; E Taylor; M Stone; P ", "journal": "Journal Of Machine Learning Research", "ref_id": "b20", "title": "Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey", "year": "2020" }, { "authors": "N Gandhi; S Mishra", "journal": "", "ref_id": "b21", "title": "Modelling resource allocation in uncertain system environment through deep reinforcement learning", "year": "2021" }, { "authors": "M S A Hameed; A Schwung", "journal": "", "ref_id": "b22", "title": "Reinforcement Learning on Job Shop Scheduling Problems Using Graph Networks", "year": "2020" }, { "authors": "B.-A Han; J.-J Yang", "journal": "IEEE Access", "ref_id": "b23", "title": "Research on Adaptive Job Shop Scheduling Problems Based on Dueling Double DQN", "year": "2020" }, { "authors": "C.-L Liu; C.-C Chang; C.-J Tseng", "journal": "IEEE Access", "ref_id": "b24", "title": "Actor-Critic Deep Reinforcement Learning for Solving Job Shop Scheduling Problems", "year": "2020" }, { "authors": "Y Bengio; A Lodi; A Prouvost", "journal": "", "ref_id": "b25", "title": "Machine Learning for Combinatorial Optimization: a Methodological Tour d'Horizon", "year": "2018" }, { "authors": "Constantin Waubert Biography; De Puiseau", "journal": "", "ref_id": "b26", "title": "", "year": "1994" }, { "authors": " Hasan Tercan", "journal": "", "ref_id": "b27", "title": "holds a Master's Degree from TU Darmstadt in Computer Science", "year": "1988" }, { "authors": "Tobias Meisen", "journal": "", "ref_id": "b28", "title": "is a Professor of Digital Transformation Technologies and Management at the University of Wuppertal since", "year": "1981" } ]
[]
10.1109/MSPEC.2021.9423818
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b55", "b13", "b21", "b37", "b1", "b59", "b41", "b10", "b39", "b43", "b16", "b48", "b6" ], "table_ref": [], "text": "Demands of the modern world are increasingly responsible for causing severe psychological distress in people. World Health Organization estimates psychological distress affects 29% of people in their lifetime (Steel et al., 2014). The shortage of mental health workers and the stigma associated with mental health further demotivates people from actively seeking help. With the expansion of the internet, many people are seen resorting to peer support platforms such as Reddit and Talklife to vent their distress. 1 The anonymity associated with these platforms makes it easier for people to discuss their concerns without being affected by the stigma. Distress consolation through AIdriven chatbots has also become an emerging solution (Fitzpatrick et al., 2017;Inkster et al., 2018;Mousavi et al., 2021). Due to the lack of availability of large-scale psycho-therapeutic conversations, researchers are using data scraped from online peer support forums to train such chatbots (Alambo et al., 2019;Welivita and Pu, 2022). High levels of perceived empathy and information richness make them good candidates for training (Nambisan, 2011;De Choudhury and De, 2014;Sharma et al., 2020a,b). But since peers are not professionals, the responses contained in such forums can sometimes be unfavourable to address distress (e.g. confrontations, judgments, orders etc.). So, using this data can have severe risks. One solution for this is identifying favourable and unfavourable response types that appear in distress support dialogues and developing automatic means that can propose omission or rephrasing of such unfavourable response types. Figure 1 shows an example.\nTo analyze the types of responses in distress support dialogues, we use labels adapted from a wellestablished behavioral coding system named Motivational Interviewing Treatment Integrity (MITI) code (Moyers et al., 2014). It is used in psychology to evaluate how well a mental health provider responds. Specific response types from the MITI code have shown to increase the likelihood of positive health outcomes (Pérez-Rosas et al., 2018;Gaume et al., 2009). It defines favourable response types such as Questioning, Reflecting, and Advising with permission and unfavourable response types such as Advising without permission, Confronting, and Self-Disclosing (extra-session). In our previous work, we developed a dataset called the MI dataset, to have a comparative understanding of the differences between online support provided by peers and trained counselors. For this, we hired professional counselors to annotate responses given by peers and counselors with labels derived from the MITI code. During analysis, we observed that peers' responses tend to be more supportive, and encouraging than counselors' (as observed by the increased percentage of Support and Affirm labels). But it was also observed that important therapeutic techniques, such as asking more open questions than closed ones, reflections, giving information, advices with permission, and emphasizing speaker's autonomy were lacking in peers' responses and hence require further boosting. One of the major observations was that among the advises given by the peers, 92.86% of them belonged to the category Advise without permission, which is MI non-adherent. 
This percentage was lower in counselor responses, but still accounted for 77.22% of the advises given by counselors.\nIn this work, we aim to detect such Advise without permission responses among distress support dialogues and build a rephraser that can rephrase such responses into Advise with permission, which is more MI-adherent. First, we detect such responses through a classifier trained on an augmented version of the MI dataset. Next, as we do not have human written responses rephrasing Advise without permission responses into Advise with permission, we use automatic methods such as template-based replacement and retrieval to construct a pseudo-parallel training corpus containing pairs of Advise without permission and Advise with permission sentences. Since rephrasing is a labor-intensive task compared to labeling and we require professionally trained counselors to do this in the distress consolation setting, using our already labeled dataset to construct a pseudo-parallel corpus saved us both time and cost. We apply the same methods on the augmented version of the MI dataset to form a much larger pseudo-parallel training corpus and use these corpora to fine-tune BlenderBot (Roller et al., 2021) and GPT3 (Brown et al., 2020). Some of the models we fine-tune incorporate different forms of prompting with the aim of obtaining a better outcome with less training examples. We evaluate the rephrasers using automatic and human evaluation. The results mainly show when the training dataset is small, prompting improves the performance of the rephrasers across style transfer and semantic similarity dimensions. They also suggest that when the training dataset is large (in our case through data augmentation), pseudo-parallel data generated through simpler methods such as template replacement produce better results.\nOur contributions are four-fold. 1) We develop an MI classifier that can predict 15 different favourable and unfavourable response types derived from the MITI code. 2) We propose a methodology to rephrase responses detected as Advise without Permission into more MI-adherent Advise with Permission. We show how this can be done in the absence of human written rephrasings by developing pseudo-parallel corpora using different automatic methods. 3) We evaluate these rephrasers using automatic and human evaluation and show how prompting and data augmentation can improve the performance of the rephrasers when there is less training data. 4) Finally, we discuss how this method can be applied to boost chatbot responses, making them more compliant with the MI strategy. Our code and the datasets can be found at https://github.com/ anuradha1992/Boosting-with-MI-Strategy" }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b22", "b33", "b53", "b46", "b50", "b61", "b28", "b23", "b30", "b54", "b20", "b46", "b32", "b15" ], "table_ref": [], "text": "Rephrasing responses recognized as Advise without Permission into Advise with Perrmission can be identified as a sub-task falling under the task of Text Style Transfer (TST), in which the goal is to automatically control the style attributes (e.g. sentiment, politeness, humor, etc.) of text while preserving the content (Jin et al., 2022). The field of TST involves traditional linguistic approaches as well as deep learning approaches. Traditional approaches to TST rely on term replacement and templates (Mairesse and Walker, 2011;Sheikha and Inkpen, 2011). 
With the success of deep learning, various neural methods have been recently proposed for TST. Given datasets in which there are direct mappings between the text of the source style and the text of the target style, which are referred to as parallel corpora, standard sequence-to-sequence models are often directly applied for TST (Rao and Tetreault, 2018;Shang et al., 2019;Xu et al., 2019). But parallel corpora are challenging to find because the development of such data often requires costly human labor. Thus, TST on non-parallel corpora has become an emerging area of research (Li et al., 2018;Jin et al., 2019;Liu et al., 2022).\nParallel and nonparallel datasets have been proposed for common sub-tasks of TST such as sentiment (Shen et al., 2017), topic (Huang et al., 2020), formality (Rao and Tetreault, 2018), politeness (Madaan et al., 2020), and humor (Gan et al., 2017) transfer. But to the best of our knowledge, this is the first attempt at introducing a new subtask and releasing an nonparallel corpus for style transfer between MI non-adherent Advise without Permission and MI adherent Advise with Permission responses. This task is more challenging than the other sub-tasks because it requires the expertise of professional counselors to generate training data. In this work, we release a nonparallel corpus that can be utilized for this task, which is annotated by professional counselors. We also show how automatic methods could be applied to create pseudo-parallel corpora using this dataset, which can be used to train neural models for this task." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b40", "b39" ], "table_ref": [], "text": "For this work, we used dialogues curated from two online support platforms. The first one is Coun-selChat (counselchat.com), in which verified counselors respond to distress-related posts. The Coun-selChat dataset available publicly2 contains 2,129 post-response pairs spanning 31 distress-related topics. We also curated dialogues from a carefully selected set of 8 subreddits: mentalhealthsupport; offmychest; sad; suicidewatch; anxietyhelp; depression; depressed; and depression_help, which are popular among Reddit users to vent their distress.\nThis dataset, which we call RED (Reddit Emotional Distress), contains 1,275,486 dyadic conversations having on average of 2.66 turns per dialogue.\nIn our previous work, we recruited professional counselors to annotate a subset of 1,000 dialogues each from CounselChat and RED datasets with labels adapted from the MITI code 2.0 (Moyers et al., 2003) and 4.2.1 (Moyers et al., 2014). We call this the MI dataset. We used 15 labels for annotation. They are elaborated in the appendices. Out of them, we are interested in the labels Advise with Permission and Advise without Permission, which are respectively considered MI-adherent and MI non-adherent response types. The MI dataset contains 16,811 annotated responses, out of which 2.87% (484) and 13.5% (2,285) responses are labeled as Advise with Permission and Advise without Permission, respectively.\nTo further augment the MI dataset, we used automatic labeling to expand the 15 labels into unlabeled dialogue responses from CounselChat and RED datasets. We used two automatic methods for this purpose: 1) N-gram-based matching; and 2) Similarity based retrieval.\nN-gram Based Matching: By tokenizing the responses in the MI dataset and computing the frequencies, we discovered the most frequent N-grams (four-grams and five-grams) occurring among the 15 labels. 
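As a rough sketch of this frequency analysis (the tokenizer and the per-label response lists are placeholders, not the authors' script), the per-label four-gram and five-gram counts could be gathered as follows:

```python
# Sketch of per-label four-/five-gram frequency counting (assumed, not the paper's code).
from collections import Counter
from typing import Dict, List


def indicative_ngrams(
    responses_by_label: Dict[str, List[str]],  # label -> labeled sentences
    sizes=(4, 5),
    min_freq: int = 5,
) -> Dict[str, Counter]:
    """Count four-grams and five-grams per label; keep only the frequent ones."""
    ngram_counts: Dict[str, Counter] = {}
    for label, sentences in responses_by_label.items():
        counts: Counter = Counter()
        for sentence in sentences:
            tokens = sentence.lower().split()  # placeholder tokenization
            for n in sizes:
                for i in range(len(tokens) - n + 1):
                    counts[" ".join(tokens[i : i + n])] += 1
        # the appendix reports keeping N-grams with frequency above 5
        ngram_counts[label] = Counter({g: c for g, c in counts.items() if c > min_freq})
    return ngram_counts
```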
Examples of them are shown in the appendices. Next, we searched for the presence of these indicative N-grams (first five-gram and then four-grams) among individual sentences that appear in dialogue responses of the unlabeled Coun-selChat and RED datasets. If an indicative N-gram was found in a sentence, we labeled that sentence with the label that N-gram is indicative of. The sentences with overlapping labels were discarded due to ambiguity. In this way, we were able to automatically label 1,918 and 340,361 sentences in CounselChat and RED datasets, respectively.\nSimilarity Based Retrieval: For each unlabeled sentence among the responses in Coun-selChat and RED datasets, we computed the cosine similarity with each of the labeled sentences in the MI dataset. Next, for each unlabeled sentence, we retrieved the labeled sentences whose cosine similarity is higher than a certain threshold (the thresholds were different for each of the 15 labels, which were selected after manually inspecting randomly selected pairs of unlabeled and labeled sentences corresponding to different labels). Next, we used a majority voting scheme to select the label we can associate the unlabeled sentence with. When we encountered ties, we computed the average similarities across the clusters of retrieved sentences with different labels that held a tie and selected the label based on maximum average similarity. Using this method, we were able to automatically annotate 2,881 and 1,196,012 sentences in CounselChat and RED datasets, respectively.\nUsing the union and the intersection of the labels retrieved from N-gram-based matching and similarity-based retrieval and combining them with the gold labels from the MI dataset, we created two augmented-labeled MI datasets having 1,378,469 and 84,052 labeled sentences, respectively. For simplicity, we will refer to them as MI Augmented (Union) and MI Augmented (Intersection) datasets." }, { "figure_ref": [], "heading": "MI Classifier", "publication_ref": [ "b12", "b31" ], "table_ref": [], "text": "We developed a classifier to automatically classify responses in distress-support dialogues into one of the 15 labels mentioned above. This is an important step that should be followed before rephrasing, since first it should identify the unfavourable responses types. For this purpose, we developed a classifier that consists of a representation network that uses the BERT architecture (Devlin et al., 2019), an attention layer that aggregates all hidden states at each time step, a hidden layer, and a softmax layer. We used the BERT-base architecture with 12 layers, 768 dimensions, 12 heads, and 110M parameters as the representation network. It was initialized with weights from RoBERTa (Liu et al., 2019). We trained three classifiers. The first one was trained on the smaller human-annotated MI dataset (MI Gold) taking 80% of the data for training and leaving 10% each for validation and testing. The other two were trained on the MI Augmented (Union) and MI Augmented (Intersection) datasets, leaving out the data used for validation and testing in the first case. In all cases, the optimal model was chosen based on average cross entropy loss calculated between the ground truth and predicted labels in the human-annotated validation set.\nThe classifiers trained on MI Gold, MI Augmented (Intersection), and MI Augmented (Union) datasets reported accuracies of 68.31%, 67.13%, and 73.44% on the MI Gold test set, respectively. 
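For concreteness, the classifier just described (a representation network initialised from RoBERTa, an attention layer that pools all hidden states, a hidden layer, and a softmax over the 15 labels) might be sketched roughly as follows; this is an approximation built on the Hugging Face transformers library, not the released implementation.

```python
# Rough sketch (assumptions, not the released model): RoBERTa encoder,
# attention pooling over all hidden states, a hidden layer, and a linear
# output over the 15 MITI-derived labels.
import torch
import torch.nn as nn
from transformers import AutoModel


class MIClassifier(nn.Module):
    def __init__(self, num_labels: int = 15, hidden: int = 768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("roberta-base")
        self.attn_score = nn.Linear(hidden, 1)        # scores each time step
        self.hidden = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                            # (batch, seq, hidden)
        scores = self.attn_score(states).squeeze(-1)   # (batch, seq)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        pooled = (weights * states).sum(dim=1)         # aggregate all hidden states
        return self.out(torch.tanh(self.hidden(pooled)))  # logits over labels
```

Training such a model with cross-entropy loss against the gold labels and selecting the checkpoint by validation loss mirrors the procedure described above.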
The reported accuracies on the MI Gold validation set were 67.08%, 64.07%, and 72.67%, respectively for the three classifiers. Accordingly, the labels collected through the union of N-gram matching and cosine similarity-based methods improved the accuracy of the classifier by 8.33% and 7.5%, respectively on the validation and test sets compared to the accuracies reported when trained on the gold-labeled MI dataset." }, { "figure_ref": [], "heading": "MI Rephraser", "publication_ref": [], "table_ref": [], "text": "After identifying the favourable and unfavourable response types, we can choose to omit the unfavourable responses or if possible, rephrase them into a more MI adherent form. A label pair that this rephrasing strategy can be applied directly are Advise without Permission and Advise with Permission. Through N-gram analysis, we could discover some N-gram patterns that are indicative of the label pair Advise without Permission (e.g. You should, You need to, You musn't) and Advise with Permission (e.g. It maybe helpful to, I wonder if you can, You may want to consider). These could be identified as style attributes that vary across the responses identified as Advise without Permission and Advise with Permission. Thus, given a response identified as Advise without Permission, the goal of the rephraser would be to rephrase the response to be indicative of Advise with Permission, without changing the semantic content of the response.\nAs mentioned in Section 2, this can be identified as a sub-task under the task of Text Style Transfer (TST). TST is formally defined as, given a target utterance x and the target discourse style attribute a , model p(x |a, x), where x is a given text carrying a source attribute value a. In our case, x corresponds to the response identified as Advise without Permission, a corresponds to Advise without Permission, and a corresponds to Advise with Permission." }, { "figure_ref": [], "heading": "Pseudo-Parallel Corpora", "publication_ref": [], "table_ref": [], "text": "As discussed in Section 2, the most recent methods for TST involve data-driven deep learning models. The prerequisite for using such models is that there exist style-specific corpora for each style of interest, either parallel or nonparallel. With the human-annotated MI dataset, we are in possession of a non-parallel corpus containing 2,285 Advise without Permission and 484 Advise with Permission type of responses. With the MI Augmented (Union) dataset, we have 199,885 Advise without Permission and 3,541 Advise with Permission type of responses. Since creating parallel corpora consumes human labor and cost, using the above data, we de-cided to create pseudo-parallel corpora that contain pairs of Advise without Permission and Advise with Permission responses to train our rephrasers. We used two automatic methods to create these pseudoparallel corpora: 1) Template-based replacement method; and 2) Retrieval method." }, { "figure_ref": [], "heading": "Template-Based Replacement Method", "publication_ref": [], "table_ref": [ "tab_14" ], "text": "We used frequency-based N-gram analysis accompanied by human inspection to determine the linguistic templates that represent Advise with Permission and Advise without Permission responses. Table 11 shows some templates discovered for Advise without Permission (on left) and Advise with Permission (on right). 
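As a hypothetical illustration of how such templates can be applied, the replacement step, which the next paragraph describes in detail, might look like the following sketch; the pattern lists below contain only the few examples quoted in the text, not the full contents of Table 11.

```python
# Hypothetical illustration (partial template lists, not the authors' tables):
# swap an "Advise without Permission" opener for a randomly chosen
# "Advise with Permission" opener to form a pseudo-parallel training pair.
import random
import re
from typing import Optional

WITHOUT_PERMISSION = [r"^you should\b", r"^you need to\b"]
WITH_PERMISSION = [
    "It may be helpful to",
    "You may want to consider",
    "I wonder if you can",
]


def template_rephrase(sentence: str) -> Optional[str]:
    """Return an 'Advise with Permission' variant, or None if no template matches."""
    text = sentence.strip()
    for pattern in WITHOUT_PERMISSION:
        if re.match(pattern, text, flags=re.IGNORECASE):
            rest = re.sub(pattern, "", text, flags=re.IGNORECASE).strip()
            return f"{random.choice(WITH_PERMISSION)} {rest}"
    return None
```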
In template-based replacement, if the algorithm detects any linguistic template on the left among the responses labeled as Advise without Permission, it will randomly select a template from the right to replace it with, giving a pair of Advise without Permission and Advise with Permission responses that contain the same semantic content but differ in style. We constructed two pseudo-parallel corpora by applying this method to the MI Gold and MI Augmented (Union) datasets, which contained 2,285 and 199,885 responses labeled as Advise without Permission, respectively. They respectively gave us 240 and 38,559 response pairs." }, { "figure_ref": [], "heading": "Advise without Advise with Permission Permission", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Retrieval Method", "publication_ref": [ "b47" ], "table_ref": [], "text": "Given the non-parallel corpus containing Advise without Permission and Advise with Permission responses, we computed the semantic similarity between the Advise without Permission and Advise with Permission responses and retrieved the response pairs whose similarity is above a certain threshold. We used Sentence-BERT (Reimers and Gurevych, 2019) to generate embeddings of the two types of responses and compared them using cosine similarity. After manually inspecting a random subset of response pairs over a range of similarity thresholds, we chose 0.7 as the final threshold to determine the semantically similar response pairs. Similar to template-based replacement, we used this method to construct two pseudoparallel corpora by applying the method to the goldlabeled and augmented-labeled MI datasets and obtained 104 and 54,956 response pairs, respectively. For simplicity, we will refer to the corpus constructed using the gold-labeled MI dataset as pseudo-parallel (PP) corpus and the corpus constructed using the augmented-labeled MI dataset as pseudo-parallel augmented (PPA) corpus. We used 80% of the data from each of the corpora for training our rephrasers, and 10% each for validation and testing. In section 7, we gauge the quality of the above corpora using human ratings." }, { "figure_ref": [], "heading": "Rephrasing Models", "publication_ref": [ "b48", "b6", "b6" ], "table_ref": [ "tab_1" ], "text": "Using the above corpora, we fine-tuned two pretrained language generation architectures Blender (Roller et al., 2021) and GPT-3 (Brown et al., 2020). Blender is a standard Seq2Seq transformer-based dialogue model. We used the 90M parameter version of Blender. Though it is a dialogue generation model, we used it mainly because it is pretrained on Reddit discussions containing ≈1.5B comments and is already aware of the language constructs used in peer support. GPT-3 is a language model that utilizes standard transformer network having 175 billion parameters. We used the smallest but fastest version of GPT-3, Ada, to build our rephrasers. The main reason to use GPT-3 is that it has demonstrated strong few-shot learning capability on many text-based tasks. Both Blender and GPT-3 were fine-tuned on template-based, retrievalbased, and combined PP and PPA corpora.\nPrior work has shown large language models can perform various tasks given a clever prompt prepended to the input (Brown et al., 2020). So, we developed two variations of Blender and GPT3 models by appending a generic prompt and an Ngram-based prompt to the end of the training data. 
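Concretely, the two prompt variants explained below could be constructed along the following lines; this is a sketch under assumptions about the preprocessing, not the exact script used to build the training files.

```python
# Sketch of the two prompt variants (assumed preprocessing): the generic
# variant appends the target label to the input, while the N-gram variant
# appends an indicative "Advise with Permission" N-gram found in the target.
from typing import Optional, Sequence


def generic_prompt(source: str) -> str:
    return f"{source} Advise with permission:"


def ngram_prompt(source: str, target: str, indicative_ngrams: Sequence[str]) -> Optional[str]:
    """Use the first indicative N-gram that occurs in the target sentence, if any."""
    target_lower = target.lower()
    for gram in indicative_ngrams:
        if gram.lower() in target_lower:
            return f"{source} {gram}:"
    return None  # no indicative N-gram found for this pair
```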
In generic prompting, we simply appended the label Advise with permission: to the end of the input text. In N-gram prompting, we detected if there is any N-gram that is indicative of Advise with permission in the output text. If there is, we appended it to the end of the input text. Table 2 shows training examples with generic and N-gram-based prompts.\nAltogether we developed 10 different rephrasing models by fine-tuning Blender and GPT-3 on: 1)\nTraining example with generic prompting: Input:\ntry to learn from your mistakes and meet some new people . Advise with permission: Output: It may be important to try to learn from your mistakes and meet some new people.\nTraining example with N-gram based prompting: Input: try to learn from your mistakes and meet some new people . It may be important to: Output: It may be important to try to learn from your mistakes and meet some new people. " }, { "figure_ref": [], "heading": "Automatic Evaluation", "publication_ref": [ "b22", "b14", "b42", "b29", "b2", "b24", "b44", "b47", "b57", "b35", "b18", "b28", "b45" ], "table_ref": [ "tab_2" ], "text": "A successful style-transferred output should be able to demonstrate the correct target style and at the same time preserve the semantic content of the original text (Jin et al., 2022;Fu et al., 2018). We refer to the first criterion as Style Transfer Strength and the second as Semantic Similarity. Automatic metrics used to evaluate text generation methods such as the BLEU score (Papineni et al., 2002), ROUGE (Lin and Och, 2004), METEOR (Banerjee and Lavie, 2005), Word Mover Distance (WMD) (Kusner et al., 2015), Character N-gram F-score (chrf) (Popović, 2015), BERTScore (Zhang et al., 2019) and cosine similarity based on sentence embeddings (Reimers and Gurevych, 2019) are used in the literature to evaluate the semantic similarity between the original and the rephrased text. The Part-of-Speech distance (Tian et al., 2018), a metric specific to TST, is also used to measure semantic similarity. Mir et al. (2019) suggest deleting all attribute-related expressions in the text when applying these metrics to evaluate the output of TST tasks. Thus, before evaluation, we removed the style-specific phrases discovered during N-gram analysis from the input and output text.\nTo evaluate the style transfer strength, most works use a style classifier to predict if the output conforms to the target style (Hu et al., 2017;Li et al., 2018;Prabhumoye et al., 2018). We used the MI classifier trained on the MI Augmented (Union) dataset to compute the style transfer strength. It is calculated as the percentage of samples classified as Advise with Permission out of all test samples.\nTable 3 shows the results of automatic evaluation of the rephrasers on the combined PP test dataset, which contains data from both template and retrieval-based PP test sets. Accordingly, GPT3-based rephrasers show better performance compared to Blender-based rephrasers in 85% of the time across the metrics. It could also be observed that data augmentation improves the scores across most metrics irrespective of the backbone model used. Combining the pseudo-parallel corpora obtained from template-based and retrievalbased methods could improve the performance scores of Blender-based rephrasers across most automatic metrics. 
But GPT-3 based rephrasers trained only on template-based pseudo-parallel data seem to achieve better scores across almost all the metrics when compared to those trained on retrieval-based and combined corpora.\nBlender-based rephrasers that incorporated generic prompting ranked the best across most metrics over all the other Blender-based rephrasers. With the smaller PP training corpus, the GPT-3based rephraser that incorporated generic prompting ranked the best across most metrics. But with the larger PPA training corpus, the GPT-3 based rephraser that was trained on simple templatereplaced pseudo-parallel corpora ranked the best across most automatic metrics." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [ "b0", "b58" ], "table_ref": [], "text": "Similar to automatic evaluation, we used two human evaluation criteria to rate the rephrased sentences. The first is how close the rephrased sentence is to Advise with permission (Style transfer strength). The second is to what extent the rephrased sentence preserves the context/meaning of the original sentence (Semantic similarity).\nWe used the UpWork crowdsourcing platform (www.upwork.com) and recruited four professional counselors to rate the rephrased sentences. Given the original Advise without Permission sentence and a list of rephrased sentences generated by the 10 different rephrasers, we asked two questions from the counselors: 1) Is the rephrased sentence indicative of Advise with permission?; and 2) Does the rephrased sentence preserve the original context? The counselors were asked to answer these questions by indicating a rating on a Likert scale ranging from 0 (Not at all) to 4 (Yes it is). Along with the rephrased sentences, we also presented them the corresponding Advise with permission sentence obtained from the pseudo-parallel corpora in order to gauge the quality of the corpora used for training. The sentences to be rated were presented to them in a random order to reduce bias.\nAs the combined PP test corpus developed on the MI Gold dataset is small (only 34 samples), we used 200 randomly selected samples from the combined PPA test corpus developed on the augmented MI dataset to be rated by the human workers. This was to verify the trend of results reported on the PP test corpus. We bundled 9 randomly selected test cases in one batch and allocated two workers to rate each batch. Results were calculated based on the average rating given by the two workers. Following Adiwardana et al. (2020) we also calculated the average of style transfer strength and semantic similarity ratings to obtain a single score. We computed the inter-rater agreement based on weighted Kappa that uses Fleiss-Cohen weights (Wan et al., 2015) and the scores were 0.5870 (moderate agreement) and 0.6933 (substantial agreement) for style transfer strength and semantic similarity, respectively.\nTable 4 shows the results of the human evaluation experiment. According to the results, GPT3-based rephrasers win over Blender-based rephrasers 70% and 85% of the time along style transfer and semantic similarity dimensions, respectively. And when it comes to the smaller PP training corpus, using generic prompting during training increases the scores across most cases. 
But when it comes to the larger PPA corpus, simply training the rephrasers with template-replaced pseudo-parallel pairs gives the best results irrespective of the underlying backbone model.\nThe average ratings obtained for style transfer strength and semantic similarity for sentence pairs in the PP test corpus were 3.21 and 3.16, respectively. The sentence pairs in the PPA test corpus scored 3.12 and 2.69 in the above two dimensions, respectively. The average ratings being close to 3 with most of them being above 3 suggests that the training corpora used are of substantial quality." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "In this paper, we presented an example on how distress-consoling responses could be boosted with MI strategy. For this, we first developed a classifier that can identify favourable and unfavourable response types as defined by the MITI code. Then we narrowed our focus to the MI non-adherent response type Advise without Permission and developed several rephrasers that can rephrase Advise without Permission responses into MI adherent response type Advise with Permission. As curating human written rephrasings was costly, we used templated-based replacement and retrieval methods to create pseudo-parallel corpora from gold-labeled and augmented-labeled MI datasets that contained responses from Reddit and CounselChat platforms. We used this data to train several Blender and GPT3-based rephrasers. We also used generic and N-gram-based prompts to see if prompting can improve the rephrasers' performance.\nAutomatic as well as human evaluation results suggested fine-tuning on GPT3 gives better results in rephrasing Advise without permission responses into Advise with permission. Data augmentation techniques we used by expanding the MITI labels using N-gram-based matching and similaritybased retrieval improved the performance of the MI classifier as well as the Blender and GPT3based rephrasers. The results also suggested when the training datasets are small, the use of generic prompting can enable the rephrasing models to produce better results across style transfer and semantic similarity dimensions. But if you are dealing with large datasets (in our case through data augmentation), pseudo-parallel data generated through simpler methods such as template-based replacement can enable the models to generate substantially good rephrasings closer to the required style and semantically similar to the original sentence.\nIn the future, we hope to develop a chatbot that can respond to psychological distress using the RED dataset that contain dialogues curated from several mental health-related subreddits. Then we hope to improve the responses generated by this chatbot by applying MI boosting at two different levels: one at the data level; and the other at the model level. At data level boosting, we hope to apply the MI classifier and automatically label the responses in the training data itself. By doing so, we will be able to rephrase the MI non-adherent responses such as Advise without Permission into more MI-adherent responses and omit the other unfavourable responses from the training data. The MI-boosted training data can then be used to train the chatbot. At model-level boosting, a similar methodology can be applied at the level the chatbot is decoding responses (e.g. beam search). Not only generative chatbots but also retrieval-based chatbots could be benefited from this methodology." 
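Under assumptions about the classifier and rephraser interfaces, the data-level boosting loop outlined above can be sketched as follows:

```python
# Sketch of data-level boosting (interfaces assumed): keep favourable
# responses, rephrase "Advise without Permission", and omit the remaining
# MI non-adherent responses from the training data.
from typing import Callable, Iterable, List, Tuple

MI_NON_ADHERENT = {"Advise without Permission", "Confront", "Direct", "Warn"}


def boost_training_data(
    dialogues: Iterable[Tuple[str, str]],   # (context, response) pairs
    classify: Callable[[str], str],         # trained MI classifier (placeholder)
    rephrase: Callable[[str], str],         # trained rephraser (placeholder)
) -> List[Tuple[str, str]]:
    boosted = []
    for context, response in dialogues:
        label = classify(response)
        if label == "Advise without Permission":
            boosted.append((context, rephrase(response)))
        elif label in MI_NON_ADHERENT:
            continue                        # omit other unfavourable responses
        else:
            boosted.append((context, response))
    return boosted
```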
}, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b26", "b37" ], "table_ref": [], "text": "Certain parts of our proposed methodology, for example, template-based replacement and n-grambased prompting are applicable only when stylespecific linguistic attributes could be identified between the source and the target text. And due to the cost of human labor and the lack of publicly available client-therapist dialogues, the sample size drawn in the study is small and thus may have an impact on the conclusions drawn. Our methods have only been tested for the English language. But we believe similar methods could be applied to other languages given they have unparallel corpora tagged with Advise without Permission and Advise with Permission labels. The rephrasing methods described in this paper are tested for short sentences with a maximum sentence length of 98 tokens. Thus, the scalability of these methods for long text still remains to be tested.\nWhen testing the rephrasers, there are some combinations that could be tried other than the ones already tested. For example, more models can be fine-tuned and tested separately on templatereplaced and retrieval-based PP and PPA corpora but incorporating generic and N-gram prompting. In this work, we first combined these two types of corpora before attempting prompting since we could observe better performance on Blender when the corpora were combined.\nIn order to have more data, we combined the Advise with Permission and Advise without Permission responses present in CounselChat and RED datasets. But studies show that there are differences in the language used by counselors and peers (Lahnala et al., 2021;Mousavi et al., 2021). So, there can be linguistic differences between the same type of response in CounselChat and RED datasets. Future work should attempt to identify these differences and ideally rephrase the responses given by peers to reflect the language of the counselors." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b3", "b4", "b27", "b36", "b56", "b11", "b9" ], "table_ref": [], "text": "Data Curation: Only publicly available data in Reddit and CounselChat websites were used in this work. Analysis of posts on websites such as Reddit is considered \"fair play\" since individuals are anonymous and users are aware their responses remain archived on the site unless explicitly deleted.\nIt is also stated in Reddit's privacy policy that it allows third parties to access public Reddit content.3 Also, Reddit's data is already widely available in larger dumps such as Pushshift (Baumgartner et al., 2020). Even though the policies allow it, it should be thoroughly noted that this data contains sensitive information. Thus, we adhere to the guidelines suggested by Benton et al. (2017) for working with social media data in health research, and share only anonymized and paraphrased excerpts from the dataset so that it is not possible to recover usernames through a web search with the verbatim post text. In addition, references to usernames as well as URLs are removed from dialogue content for de-identification.\nHuman Evaluation: The human raters recruited from the crowdsourcing platform, UpWork, were all trained in the practice of counseling. Since the methods were tested on English-only text, we recruited workers who had professional competency in the English language. We paid them $10 for evaluating each batch of rephrased sentences that required on average ≈30 minutes to complete. 
Thus, the amount paid to the human raters was ≈2.75 times above the US minimum wage of $7.25 per hour. We also paid an extra $2 as a bonus per each batch for workers who obtained an aboveaverage agreement with the other worker who rated the same batch.\nChatbots for Distress-Consolation: One of the main applications of the proposed methodology is boosting chatbot responses for distress consolation with motivational interviewing strategy. Using chatbots for distress consolation or other mental health interventions has raised ethical concerns among many (Lanteigne, 2019;Montemayor et al., 2021;Tatman, 2022). However, chatbots that intervene in mental health-related matters have already been developed and have been quite popular for a while. Some examples are SimSensei (DeVault et al., 2014), Dipsy (Xie, 2017), Woebot (woebothealth.com), and Wysa (www.wysa.io). Czerwinski et al. (2021) state, About 1 billion people globally are affected by mental disorders; a scalable solution such as an AI therapist could be a huge boon. The current technology to develop such chatbots rely heavily on deep learning and pre-trained language models. But due to the inherently unpredictable nature of these models, they pose a threat of delivering unfavourable responses when such chatbots are used for distress consolation. We believe the methodology we suggest in this work can help them become more reliable and fail-safe by adhering to the motivational interviewing strategy, a guiding style of communication heavily practiced in psychotherapy. However, since the unfavourable response detection and rephrasing methods still rely on neural network models, the artifacts produced in this paper should be used for research purposes only and real-world deployment of them should be done under human supervision. It should be noted that these demographic biases can subtly skew our data and models from representing average human behavior. The data we curated were English-only and they may perpetuate an English bias in NLP systems." }, { "figure_ref": [], "heading": "A.2 The MI Dataset", "publication_ref": [ "b40", "b39", "b49" ], "table_ref": [ "tab_6", "tab_8" ], "text": "Altogether, 15 labels adapted from the MITI code 2.0 (Moyers et al., 2003) and 4.2.1 (Moyers et al., 2014) were used for annotation. They included Closed Question, Open Question, Simple Reflection, Complex Reflection, and Give Information, which are generally considered favourable. They also included labels recognized specifically as MI adherent, which are Advise with Permission, Affirm, Emphasize Autonomy, and Support. There are another four labels recognized as MI non-adherent, which are Advise without Permission, Confront, Direct, and Warn. We also included two other labels Self-Disclose and Other, which are not included in the MITI code. The label Self-Disclose was included because, in peer support conversations, peers are mostly seen to share their lived experiences. Though it is believed that Self-Disclosure contributes in building rapport between the speaker and listener, as suggested by R. Schwartz (2021), this type of disclosure must be used wisely with caution since it can as well be counterproductive distorting client's transference. Thus, it is important to be able to recognize this response type.\nTable 5 shows the full list of labels we adapted from the MITI code along with descriptions and examples. Table 6 shows the statistics of the annotated responses in the MI dataset, corresponding to each label." 
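The label groupings described in this appendix can be captured in a small constant shared by the classifier and the boosting logic; the grouping below simply restates the lists given above.

```python
# The 15 MITI-derived labels, grouped as described in this appendix.
GENERALLY_FAVOURABLE = [
    "Closed Question", "Open Question", "Simple Reflection",
    "Complex Reflection", "Give Information",
]
MI_ADHERENT = ["Advise with Permission", "Affirm", "Emphasize Autonomy", "Support"]
MI_NON_ADHERENT = ["Advise without Permission", "Confront", "Direct", "Warn"]
OTHER = ["Self-Disclose", "Other"]

ALL_LABELS = GENERALLY_FAVOURABLE + MI_ADHERENT + MI_NON_ADHERENT + OTHER
assert len(ALL_LABELS) == 15
```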
}, { "figure_ref": [], "heading": "A.3 Data Augmentation: N-gram Based Matching", "publication_ref": [], "table_ref": [ "tab_10", "tab_11" ], "text": "We denote examples of the most frequent N-grams corresponding to each label in Table 7. For simplicity, we list only some of them along with their corresponding frequencies. For data augmentation, we used all four-grams and five-grams, which had a frequency of above 5.\nTable 8 shows the statistics of the labels extended through N-gram based matching in CC and RED datasets. We also encountered 518 and 53,196 sentences in CounselChat and RED datasets respectively that had overlapping labels, which were discarded due to ambiguity." }, { "figure_ref": [ "fig_2" ], "heading": "A.4 Data Augmentation: Similarity Based Retrieval", "publication_ref": [ "b47", "b5", "b8" ], "table_ref": [ "tab_11" ], "text": "To derive semantically meaningful sentence embeddings that can be compared using cosine-similarity, we used Sentence-BERT (SBERT) proposed by Reimers and Gurevych (2019), which uses siamese and triplet network structures to compute sentence embeddings. Among several models the authors have proposed, we used the roberta-base-nli-stsbmean-tokens model, fine-tuned on the NLI (Bowman et al., 2015) and STS benchmark (STSb) (Cer et al., 2017) datasets, since it has reported a high Spearman's rank correlation of 84.79 ± 0.38 between the cosine-similarity of the sentence embeddings and the gold labels in the STS benchmark test set outperforming the existing state-of-the-art.\nIt is also more efficient to use than roberta-large.\nAs described in Section 3, we used majority voting followed by computing the average similarity of retrieved sentences with the same label (in case of ties) to choose the final label for an unlabeled sentence. In Figure 2, we show an example elaborating this procedure.\nTable 8 shows the statistics of the labels extended through similarity-based retrieval in CC and RED datasets." }, { "figure_ref": [], "heading": "A.5 Augmented MI Datasets", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "Table 9 shows the statistics corresponding to each label in the MI Augmented (Union) and MI Augmented (Intersection) datasets developed by taking the union and the intersection of the sentences automatically annotated by N-gram based matching and similarity based retrieval methods." }, { "figure_ref": [], "heading": "MITI label", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Description Examples", "publication_ref": [], "table_ref": [], "text": "1. Closed Question Questions that can be answered with an yes/no response or a very restricted range of answers. Speaker: Mostly, I would change for future generations. If we waste everything, then there will be nothing left.\nListener: It sounds like you have a strong feeling of responsibility." }, { "figure_ref": [], "heading": "Give Information", "publication_ref": [], "table_ref": [], "text": "The listener gives information, educates, provides feedback, or gives an opinion without advising.\nThis assignment on logging your cravings is important because we know that cravings often lead to relapses. MI Adherent Behaviour Codes: 6. Advise with Permission Advising when the speaker asks directly for the information or advice. Indirect forms of permission can also occur, such as when the listener invites the speaker to disregard the advice as appropriate.\nIf you agree with it, we could try to brainstorm some ideas that might help you." 
}, { "figure_ref": [], "heading": "Affirm", "publication_ref": [], "table_ref": [], "text": "Encouraging the speaker by saying something positive or complimentary.\nYou should be proud of yourself for your past's efforts." }, { "figure_ref": [], "heading": "Emphasize Autonomy", "publication_ref": [], "table_ref": [], "text": "Emphasizing the speaker's control, freedom of choice, autonomy, and ability to decide. " }, { "figure_ref": [], "heading": "Yes", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3" ], "heading": "B MI Classifier", "publication_ref": [ "b31", "b17" ], "table_ref": [ "tab_13" ], "text": "We used the same hyper-parameter setting used in RoBERTa (Liu et al., 2019) when training the MI classifier. We used the Adam optimizer with β 1 of 0.9, β 2 of 0.98, an value of 1 × 10 -6 , and a learning rate of 2 × 10 -5 . A dropout of 0.1 was used on all layers and attention weights, and a GELU activation function (Hendrycks and Gimpel, 2016). We limited the maximum number of input tokens to 100, and used a batch size of 32. All models were trained for 20 epochs. In all cases, the optimal epoch was selected based on the average cross entropy loss calculated between the ground-truth and predicted labels of the human-annotated (MI Gold) validation set. All the experiments were conducted on a machine with [email protected], 256 GB RAM, 2x200 GB SSD, and 4xGPU (NVIDIA Titan X Pascal). Experiments were also done using GPT3 as the pre-trained language model, however, RoBERTa was seen to outperform GPT3 in this classification task.\nFigure 3 shows the architectural diagram of the MI classifier used for annotation. Table 10 shows the performance scores of the MI classifier when trained on gold-labeled and augmented MI datasets. In Figure 4, we visualize the process of creating Pseudo-Parallel (PP) and Pseudo-Parallel Augmented (PPA) corpora along with statistics corresponding to each dataset." }, { "figure_ref": [], "heading": "C.2 Rephrasing Models", "publication_ref": [ "b48", "b34", "b5", "b8" ], "table_ref": [ "tab_1" ], "text": "For developing rephrasing models, we used the 90M parameter version of Blender (Roller et al., 2021). It contains an 8 layer encoder, an 8-layer decoder with 512-dimensional embeddings, and 16 attention heads. It has a maximum input length of 1024 tokens. All code for fine-tuning is available in ParlAI (Miller et al., 2017). All the models were fine-tuned for 200 epochs, with a batch size of 8, and a learning rate of 1 × 10 -6 . For other hyperparameters, we used the default values defined in their documentation at https://parl.ai/projects/ recipes. Fine-tuning the models was conducted in a machine with [email protected], 256 GB RAM, 2x200 GB SSD, and 4xGPU (NVIDIA Titan X Pascal).\nWe also used GPT3 pretrained language model having 175 billion parameters. The smallest but fastest version of GPT3, Ada was used in our experiments.\nFine-tuning of GPT3 models were done through the paid API provided by OpenAI (www.openai.com) following API guide at https://beta.openai.com/docs/ guides/fine-tuning. We used the default set of hyperparameters for fine-tuning all GPT3 based were fine-tuned for 4 epochs, with a batch size ≈0.2% of the number of examples in the training set (capped at 256), and a learning rate of 0.05. 
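At the time, the linked fine-tuning guide expected a JSONL file of prompt/completion pairs; a sketch of preparing such a file from the pseudo-parallel pairs with the generic prompt appended might look as follows (the file name and whitespace conventions are assumptions, not the authors' exact preprocessing).

```python
# Sketch (assumed, not the authors' script): write pseudo-parallel pairs to
# the JSONL prompt/completion format expected by the GPT-3 fine-tuning API,
# with the generic prompt appended to the input as described earlier.
import json
from typing import Iterable, Tuple


def write_finetune_file(
    pairs: Iterable[Tuple[str, str]],      # (advise_without_permission, advise_with_permission)
    path: str = "rephraser_train.jsonl",   # hypothetical file name
    generic_prompt: str = " Advise with permission:",
) -> None:
    with open(path, "w", encoding="utf-8") as f:
        for source, target in pairs:
            record = {
                "prompt": source.strip() + generic_prompt,
                "completion": " " + target.strip(),
            }
            f.write(json.dumps(record) + "\n")
```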
Table 12 shows some examples of rephrased sen-tences by the different rephraser models we finetuned.\nhave proposed, we used the roberta-base-nli-stsbmean-tokens model, fine-tuned on the NLI (Bowman et al., 2015) and STS benchmark (STSb) (Cer et al., 2017) datasets to generate the embeddings. All the automatic evaluation scores are reported for a single run. " }, { "figure_ref": [], "heading": "E Human Evaluation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "F Other Remarks", "publication_ref": [], "table_ref": [], "text": "In human evaluation results, we observed in 97.5% of the cases, the average scores obtained for style transfer strength are better than the average scores obtained for semantic similarity. This observation is invariant of the type of backbone model used in training. This implies template-based and retrievalbased methods used in creating pseudo parallel data to train the rephrasers make it easier for the rephrasers to generate rephrased sentences that reflect a particular style (in this case, Advise with permission) than preserving the semantic meaning of the original sentence. This is a matter to be further investigated. To improve the scores on semantic similarity, future work can explore ways to take into account the context that precedes the sentence to be rephrased. In this way, though the rephrased version may not reflect exactly what was in the original sentence, it might still be able to generate rephrasings relevant to the preceding context.\nIt should be noted that the application of this work is not limited to improving chatbot responses for distress consolation. This could also be applied for the development of intelligent writing assistants that can suggest better responses when peers untrained in the practice of counseling attempt to respond to distress-related posts on peer support platforms such as Reddit." }, { "figure_ref": [], "heading": "G Distribution and Use of Artifacts", "publication_ref": [ "b12", "b31", "b48", "b6" ], "table_ref": [], "text": "The artifacts produced, including the datasets and the models, will be released under the CC BY-NC-SA 3.0 license https://creativecommons.org/ licenses/by-nc-sa/3.0, providing only noncommercial access to the users. We use artifacts such as the CounselChat dataset, and pretrained language architectures such as BERT (Devlin et al., 2019), RoBERTA (Liu et al., 2019), Blender (Roller et al., 2021), and GPT3 (Brown et al., 2020) for research purposes only, which does not violate their intended use." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Advise without permission: try to learn from your mistakes and meet some new people . Advise with permission (ground-truth): It may be important to try to learn from your mistakes and meet some new people . Template-based: Blender: it would be a good idea to try to learned from your mistakes and meet some new people like me . GPT3: I encourage you to try to learn from your mistakes and meet some new people . Retrieval-based: Blender: In addition to being open to meeting new friends , involve yourself in projects from which you ' ll feel satisfied by your own results . GPT3: I think it would be valuable for you to start mingling with other young people . Template+Retrieval-based: Blender: i would encourage you to look up ' ' toxic relationships ' ' and try to make your relationships stronger . GPT3: I would encourage you to try to learn from your mistakes and meet some new people. 
Template+Retrieval-based (with generic prompting): Blender: in addition to being open to meeting new friends , involve yourself in projects from which you ' ll feel satisfied by your own results . GPT3: I would encourage you to try to learn from your mistakes and meet some new people. Template+Retrieval-based (with N-gram prompting): Prompt: it may be important to Blender: it may be important to try to learn from your mistakes and meet a new person . : ) it may also be important for you to start engaging in things outside of the home that make you happy . GPT3: It may be important to try to learn from your mistakes and meet some new people Advise without permission: find something you enjoy and work towards it . Advise with permission (ground-truth): It maybe helpful to find something you enjoy and work towards it . Template-based: Blender: it would be a good idea to find something yourself and work towards it . . . i wonder if you have heard of this ? GPT3: It may be important to find something you enjoy and work towards it . Retrieval-based: Blender: i think it would be valuable for you to start engaging in things outside of the home that make you happy . GPT3: Give yourself some credit for being awesome and tell yourself you can build new friendships . Template+Retrieval-based: Blender: i think it would be valuable for you to start engaging in things outside of the home that make you happy . GPT3: It would be a good idea to find something you enjoy and work towards it . Template+Retrieval-based (with generic prompting): Blender: i think it would be valuable for you to start engaging in things outside of the home that make you happy . GPT3: An option would be to find something you enjoy and work towards it . Template+Retrieval-based (with N-gram prompting): Prompt: it maybe helpful to Blender: it's perhaps helpful to find something yourself and work towards it . GPT3: It maybe helpful to find something you enjoy and work towards it .\nAdvise without permission: Talk to a therapist . Advise with permission (ground-truth): I encourage you to meet with a therapist and ask them how they might be able to help you . Template-based: Blender: it would be a good idea to talk to one of the therapists . i wonder if you can talk to an therapist . GPT3: I encourage you to talk to a therapist . Retrieval-based: Blender: I encourage you to meet with a therapist and ask them how they might be able to help you . GPT3: I encourage you to meet with a therapist and ask them how they might be able to help you Template+Retrieval-based: Blender: i encourage you to meet with a therapist and ask them how they might be able to help you . GPT3: I encourage you to meet with a therapist and ask them how they might be able to help Template+Retrieval-based (with generic prompting): Blender: i encourage you to meet with a therapist and ask them how they might be able to help you . GPT3: I encourage you to meet with a therapist and ask them how they might be able to help Template+Retrieval-based (with N-gram prompting): Prompt: able to help you Blender: i encourage you to meet with a therapist . they ' ll help you find some strategies or tools to make it easier . GPT3: I encourage you to meet with a therapist and ask them how they might be able to help " }, { "figure_ref": [], "heading": "D Automatic Evaluation", "publication_ref": [ "b57", "b47" ], "table_ref": [], "text": "We used the the NLTK package to compute the BLEU 4 , METEOR 5 , and chrf 6 scores. 
The ROUGE score and the BERTScore were computed using the rouge 7 and bert_score 8 Python libraries, respectively. The POS distance was calculated as described by Tian et al. (2018), following the code released by the authors on GitHub. 9 For computing the Word Mover's Distance (WMD), we used Gensim's implementation. 10 We used sentence embeddings generated with Sentence-BERT (Reimers and Gurevych, 2019) to compute the cosine similarity between the original and rephrased text. Among the models the authors have proposed, we used the roberta-base-nli-stsb-mean-tokens model, fine-tuned on the NLI (Bowman et al., 2015) and STS benchmark (STSb) (Cer et al., 2017) datasets, to generate the embeddings. All the automatic evaluation scores are reported for a single run." } ]
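To make the automatic evaluation of Appendix D concrete, the snippet below sketches how a single original/rephrased pair could be scored. It assumes the standard NLTK, bert-score, and sentence-transformers APIs named above; the smoothing choice, library versions, and the example sentences are illustrative assumptions, not the authors' exact evaluation script.

# Sketch of the Appendix D automatic evaluation; settings are illustrative.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score  # requires nltk wordnet data
from bert_score import score as bert_score
from sentence_transformers import SentenceTransformer, util

reference = "It may be important to try to learn from your mistakes ."
hypothesis = "I encourage you to try to learn from your mistakes ."
ref_tok, hyp_tok = reference.split(), hypothesis.split()

# BLEU-4 on a single sentence pair (smoothing avoids zero n-gram counts).
bleu4 = sentence_bleu([ref_tok], hyp_tok,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)

# METEOR expects tokenised inputs in recent NLTK releases.
meteor = meteor_score([ref_tok], hyp_tok)

# BERTScore F1 between candidate and reference.
_, _, f1 = bert_score([hypothesis], [reference], lang="en")

# Cosine similarity of Sentence-BERT embeddings (semantic preservation).
sbert = SentenceTransformer("sentence-transformers/roberta-base-nli-stsb-mean-tokens")
emb = sbert.encode([reference, hypothesis], convert_to_tensor=True)
cosine = util.cos_sim(emb[0], emb[1]).item()

print(bleu4, meteor, f1.item(), cosine)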
AI-driven chatbots have become an emerging solution to address psychological distress. Due to the lack of psychotherapeutic data, researchers use dialogues scraped from online peer support forums to train them. But since the responses in such platforms are not given by professionals, they contain both conforming and non-conforming responses. In this work, we attempt to recognize these conforming and non-conforming response types present in online distress-support dialogues using labels adapted from a well-established behavioral coding scheme named Motivational Interviewing Treatment Integrity (MITI) code and show how some response types could be rephrased into a more MI adherent form that can, in turn, enable chatbot responses to be more compliant with the MI strategy. As a proof of concept, we build several rephrasers by fine-tuning Blender and GPT3 to rephrase MI non-adherent Advise without permission responses into Advise with permission. We show how this can be achieved with the construction of pseudo-parallel corpora avoiding costs for human labor. Through automatic and human evaluation we show that in the presence of less training data, techniques such as prompting and data augmentation can be used to produce substantially good rephrasings that reflect the intended style and preserve the content of the original text.
Boosting Distress Support Dialogue Responses with Motivational Interviewing Strategy
[ { "figure_caption": "Figure 1 :1Figure 1: Example of detecting unfavourable and favourable response types in distress support dialogues and boosting the responses by omitting unfavourable responses or rephrasing them into more favourable ones.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "-You can (verb) -It maybe helpful to (verb) -You could (verb) -You may want to (verb) -You need to (verb) -I encourage you to (verb) -You should (verb) -Perhaps you can (verb) -(Verb) -, if you would like.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An example of automatically labeling an unlabeled sentence by computing the cosine-similarity with labeled sentences. The label is chosen based on majority voting. But this example shows a tie. Thus, we compute the average similarity of the sentence clusters that hold a tie and select the label of the sentence cluster with the maximum average similarity.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The architecture of the MI classifier.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figures 55Figures 5, 6, and 7 shows the user interfaces developed for the human evaluation task. The first one shows the task description, the second one shows the self-evaluating practice task designed to get the counselors familiarized with the rating task, and the last one shows the actual human evaluation task itself.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Human evaluation task description.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Self-evaluating practice task offered to the counselors to get familiarized with the rating task.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The human evaluation task interface.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Examples with generic and N-gram prompts.", "figure_data": "template-based PP and PPA corpora; 2) retrieval-based PP and PPA corpora; 3) combined template-based and retrieval-based PP and PPA corpora; 4)combined template and retrieval based PP and PPAcorpora appending generic prompts; 5) combinedtemplate and retrieval based PP and PPA corporaappending N-gram prompts. Some examples ofthe rephrased output by these different models areshown in the appendices.", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Automatic evaluation results on PP test set. Under each method (Template, Retrieval etc.), the score of the rephraser that performs the best is made bold. The best score obtained for each of BB and GPT3-based rephrasers along each criteria is highlighted in green. 
Out of them, the best overall score is highlighted with a darker green.", "figure_data": "CriteriaTemplateRetrievalTemplate +Template +Template +RetrievalRetrievalRetrieval(with generic(with N-gramprompting)prompting)BBGPT3BBGPT3BBGPT3BBGPT3BBGPT3Training dataset: PPBLEU-10.13150.3464 0.0787 0.1308 0.1429 0.29770.17630.38210.1585 0.2751BLEU-20.03660.3225 0.0131 0.0501 0.0496 0.26710.06130.35560.0677 0.2374BLEU-30.00460.3120 0.0046 0.0328 0.0000 0.25430.00310.34650.0000 0.2269BLEU-40.00330.2994 0.0000 0.0326 0.0000 0.22620.00000.33010.0000 0.2164ROUGE-L0.17600.5333 0.1176 0.1608 0.1843 0.44950.21670.54500.2135 0.4404METEOR0.15680.4622 0.0994 0.1323 0.1879 0.42100.20840.50140.2108 0.3726WMD ↓1.03110.7068 1.1122 1.0800 1.0345 0.79281.00730.67461.0163 0.8447Chrf Score0.26900.5008 0.1678 0.2095 0.2690 0.47370.30820.53410.2955 0.4245BERTScore0.86560.9138 0.8382 0.8658 0.8683 0.90480.88210.91370.8693 0.9003POS dist. ↓5.47712.5523 9.8218 7.1482 5.8271 2.70424.83782.58305.8854 3.6298Cos Similarity0.61160.7524 0.4429 0.4291 0.6129 0.65160.69180.74030.6571 0.6471Style Strength29.4173.530.0047.0638.2479.4194.1261.7623.5358.82Training dataset: PPABLEU-10.20390.3751 0.2122 0.0987 0.2308 0.32290.25880.36880.2021 0.3349BLEU-20.09130.3456 0.1468 0.0263 0.1591 0.28360.18490.33320.1455 0.3034BLEU-30.00310.3352 0.1370 0.0172 0.1319 0.27250.15360.31610.1239 0.2922BLEU-40.00000.3217 0.1286 0.0069 0.1213 0.25360.14370.29870.1169 0.2798ROUGE-L0.26420.5363 0.2419 0.1216 0.2718 0.44670.30160.52780.2352 0.5178METEOR0.30810.4673 0.2436 0.1063 0.2932 0.42610.31020.46070.2557 0.4381WMD ↓0.97160.6849 1.0069 1.1584 0.9451 0.97540.90950.72581.0000 0.7927Chrf Score0.37580.5038 0.3550 0.1782 0.4005 0.46480.40480.50470.3672 0.4897BERTScore0.87700.9116 0.8748 0.8582 0.8795 0.90210.88370.91400.8700 0.9028POS dist. ↓7.47451.9593 8.0439 7.0396 6.9338 2.86956.17472.6637 10.1620 3.0649Cos Similarity0.64280.7481 0.5910 0.4605 0.6277 0.65010.63030.73180.5717 0.6807Style Strength73.5376.4758.8232.3570.5961.7667.6555.8852.9452.94CriteriaTemplateRetrievalTemplate +Template +Template +RetrievalRetrievalRetrieval(with generic(with N-gramprompting)prompting)BB GPT3BB GPT3BB GPT3BB GPT3BB GPT3Training dataset: PP; Tested on: PPSemantic Similarity (SS)1.743.35 0.321.07 1.622.652.492.72 1.882.31Style Transfer Strength (STS)2.783.88 0.442.16 2.723.473.993.21 2.473.21(Average of SS and STS)2.263.62 0.541.62 2.173.063.242.97 2.182.76Training dataset: PP; Tested on: PPASemantic Similarity (SS)2.070.69 0.790.94 2.222.602.822.87 2.102.50Style Transfer Strength (STS)2.513.70 0.652.00 2.613.173.963.14 2.263.02(Average of SS and STS)2.292.20 0.721.47 2.422.893.393.01 3.232.76Training dataset: PPA; Tested on: PPSemantic Similarity (SS)2.633.19 1.210.81 1.692.571.742.53 1.212.32Style Transfer Strength (STS)3.943.82 2.741.44 3.153.283.003.47 2.572.99(Average of SS and STS)3.293.51 1.981.13 2.422.932.373.00 1.892.66Training dataset: PPA; Tested on: PPASemantic Similarity (SS)2.783.26 1.401.00 1.702.311.712.36 1.222.31Style Transfer Strength (STS)3.923.82 2.301.92 2.592.852.603.06 2.402.98(Average of SS and STS)3.353.54 1.851.46 2.152.582.162.71 1.812.65", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Do you think this is an advantage? Did you use herion this week? 2. 
Open Question Questions that allow a wide range of possible answers.It may seek information or may invite the speaker's perspective or may encourage self-exploration.", "figure_data": "What do you think are the advantagesof changing this behavior?What is your take on that?3. Simple ReflectionSimple reflections include repetition, rephrasing, or para-It seems that you are not sure what isphrasing of speaker's previous statement. It conveysgoing to come out of this talk.understanding or facilitate speaker-listener exchanges.It sounds like you're feeling worried.4. Complex Reflection Complex reflections include repeating or rephrasing theprevious statement of the speaker but adding substantialmeaning or emphasis to it. It serves the purposeconveying a deeper or more complex picture of what thespeaker has said.", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": ", you're right. No one can force you to stop drinking. It is really up to you to decide.", "figure_data": "9. SupportSupporting the client with statements of compassion orI'm here to help you with thissympathy.I know it's really hard to stop drinkingMI Non-Adherent Behaviour Codes:10. Advise without Per-Making suggestions, offering solutions or possible ac-You should simply scribble a note thatmissiontions without first obtaining permission from the speaker.reminds you to turn the computer offduring breaks.11. ConfrontDirectly and unambiguously disagreeing, arguing, cor-You think that is any way to treat peoplerecting, shaming, blaming, criticizing, labeling, moraliz-you love?ing, ridiculing, or questioning the speaker's honesty.Yes, you are an alcoholic. You mightnot think so, but you are.12. DirectGiving the speaker orders, commands, or imperatives.Don't do that!Keep track of your cravings, using thislog, and bring it in next week to reviewwith me.13. WarnA statement or event that warns of something or thatBe careful, DO NOT stop taking medsserves as a cautionary example.without discussing with your doctor.Other:14. Self-DiscloseThe listener discloses his/her personal information orI used to be similar where I get ob-experiences.sessed about how people look but aftermaturing some I got over that.15. OtherAll other statements that are not classified under any ofGood morning.the above codesHi there.", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The set of labels adapted from the MITI code that the MI classifier is able to recognize.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Statistics of human annotated MITI labels in CounselChat (CC) and RED datasets.", "figure_data": "", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Examples of most frequent four-grams and five-grams corresponding to each label. 
Their frequencies are denoted within brackets.", "figure_data": "LabelN-gram based matchingSimilarity-based retrieval# Labels # LabelsTotal # Labels# LabelsTotalin CCin REDin CCin REDClosed Question7517,19017,26513271,50561,637Open Question2912,24212,2714936,10736,156Simple Reflection719,6749,7454321,82721,870Complex Reflection11020,53920,6492017,24317,263Give Information57171,99672,567893166,586167,479Advise w/ Permission1615,9796,14053,7283,733Affirm13616,40716,543187106,066106,253Emphasize Autonomy00032,8392,842Support21394,67094,883482528,469528,951Advise w/o Permission52058,85759,377969171,502172,471Confront00012,5812,582Direct0001621,05821,074Warn00062,3422,348Self-Disclose528,30928,314814,70214,710Other274,4984,5256729,45728,524Total1,918340,361 342,2792,881 1,196,012 1,198,893", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Statistics of the labels extended through N-gram-based matching and similarity-based retrieval in CC and RED datsets.", "figure_data": "LabelMI Augmented (Intersection)MI Augmented (Union)# Labels # LabelsTotalTotal # Labels# LabelsTotalTotalin CCin RED+ MI Goldin CCin RED+ MI GoldClosed Question95,5985,6076,51213578,93279,06779,972Open Question12,3532,3542,8306040,80540,86541,341Simple Reflection11851867424119,96120,00220,558Complex Reflection22012031,4974421,24721,29122,585Give Information773,3793,4568,3121083203,110204,193209,049Advise w/ Permission0282851253,0523,0573,541Affirm488989461,891208106,575106,783107,728Emphasize Autonomy00025332,7002,7032,956Support7644,635 44,71145,944551592,220592,771594,004Advise w/o Permission1448,8729,01611,3011,029196,571197,600199,885Confront00031802,4682,4682,786Direct0008981520,69020,70521,603Warn00011362,2782,2842,397Self-Disclose07297292,1191236,52236,53437,924Other0558106731,26831,33532,140Total35866,883 67,24184,0523,259 1,358,399 1,361,6581,378,469", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Statistics of the annotated responses in MI Augmented (Intersection) and MI Augmented (Union) datasets.", "figure_data": "DatasetSizeOptimalTrainValidTestEpochLoss Acc. (%) Acc. (%)F1-score(weighted avg.)Train:13,449MI GoldValid (Gold):1,6817 0.300267.0868.3168.07Test (Gold):1,681MITrain:80,690AugmentedValid (Gold):1,6812 0.227764.0767.1365.85(Intersection)Test (Gold):1,681MITrain: 1,375,107AugmentedValid (Gold):1,68113 0.132472.6773.4472.92(Union)Test (Gold):1,681", "figure_id": "tab_12", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "The performance scores of the MI classifier when trained on gold-labeled and augmented MI datasets. All scores are reported on the human-annotated validation and test sets. All scores are reported for a single run. 
You can try to (verb) -It would be good idea to (verb) -I think you should (verb) -It may be important to (verb) -I suggest that you (verb) -I would encourage you to (verb) -I suggest you (verb) -I wonder if you can (verb) -Maybe you can (verb) -Maybe it is important to (verb) -Maybe you could (verb) -An option would be to (verb) -You may want to consider (present continuous form of the verb) -You may consider (present continuous form of the verb) -I would recommend (present continuous form of the verb) -I wonder if you can consider (present continuous form the verb)", "figure_data": "Advise without PermissionAdvise with Permission-You can (verb)-It maybe helpful to (verb)-You could (verb)-You may want to (verb)-You need to (verb)-I encourage you to (verb)-You should (verb)-Perhaps you can (verb)-(Verb)-, if you would like.-", "figure_id": "tab_13", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Linguistic templates corresponding to Advise without Permission and Advise with Permission responses.", "figure_data": "", "figure_id": "tab_14", "figure_label": "11", "figure_type": "table" } ]
Anuradha Welivita; Pearl Pu
[ { "authors": "Daniel Adiwardana; Minh-Thang Luong; David R So; Jamie Hall; Noah Fiedel; Romal Thoppilan; Zi Yang; Apoorv Kulshreshtha; Gaurav Nemade; Yifeng Lu", "journal": "", "ref_id": "b0", "title": "Towards a human-like open-domain chatbot", "year": "2020" }, { "authors": "Amanuel Alambo; Manas Gaur; Usha Lokala; Ugur Kursuncu; Krishnaprasad Thirunarayan; Amelie Gyrard; Amit Sheth; Jyotishman Randon S Welton; Pathak", "journal": "IEEE", "ref_id": "b1", "title": "Question answering for suicide risk assessment using reddit", "year": "2019" }, { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "Jason Baumgartner; Savvas Zannettou; Brian Keegan; Megan Squire; Jeremy Blackburn", "journal": "", "ref_id": "b3", "title": "The pushshift reddit dataset", "year": "2020" }, { "authors": "Adrian Benton; Glen Coppersmith; Mark Dredze", "journal": "", "ref_id": "b4", "title": "Ethical research protocols for social media health research", "year": "2017" }, { "authors": "R Samuel; Gabor Bowman; Christopher Angeli; Christopher D Potts; Manning", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "A large annotated corpus for learning natural language inference", "year": "2015" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b6", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b7", "title": "", "year": "" }, { "authors": "Daniel Cer; Mona Diab; Eneko Agirre; Iñigo Lopez-Gazpio; Lucia Specia", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "year": "2017" }, { "authors": "Mary Czerwinski; Javier Hernandez; Daniel Mc-Duff", "journal": "IEEE Spectrum", "ref_id": "b9", "title": "Building an ai that feels: Ai systems with emotional intelligence could learn faster and be more helpful", "year": "2021" }, { "authors": "Munmun De; Choudhury ; Sushovan De", "journal": "", "ref_id": "b10", "title": "Mental health discourse on reddit: Self-disclosure, social support, and anonymity", "year": "2014" }, { "authors": "David Devault; Ron Artstein; Grace Benn; Teresa Dey; Ed Fast; Alesia Gainer; Kallirroi Georgila; Jon Gratch; Arno Hartholt; Margaux Lhommet", "journal": "", "ref_id": "b11", "title": "Simsensei kiosk: A virtual human interviewer for healthcare decision support", "year": "2014" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Kathleen Kara Fitzpatrick; Alison Darcy; Molly Vierhile", "journal": "JMIR mental health", "ref_id": "b13", "title": "Delivering cognitive behavior therapy 
to young adults with symptoms of depression and anxiety using a fully automated conversational agent (woebot): a randomized controlled trial", "year": "2017" }, { "authors": "Zhenxin Fu; Xiaoye Tan; Nanyun Peng; Dongyan Zhao; Rui Yan", "journal": "", "ref_id": "b14", "title": "Style transfer in text: Exploration and evaluation", "year": "2018" }, { "authors": "Chuang Gan; Zhe Gan; Xiaodong He; Jianfeng Gao; Li Deng", "journal": "", "ref_id": "b15", "title": "Stylenet: Generating attractive visual captions with styles", "year": "2017" }, { "authors": "Jacques Gaume; Gerhard Gmel; Mohamed Faouzi; Jean-Bernard Daeppen", "journal": "Journal of substance abuse treatment", "ref_id": "b16", "title": "Counselor skill influences outcomes of brief motivational interventions", "year": "2009" }, { "authors": "Dan Hendrycks; Kevin Gimpel", "journal": "", "ref_id": "b17", "title": "Gaussian error linear units (gelus)", "year": "2016" }, { "authors": "Zhiting Hu; Zichao Yang; Xiaodan Liang; Ruslan Salakhutdinov; Eric P Xing", "journal": "", "ref_id": "b18", "title": "Toward controlled generation of text", "year": "2017" }, { "authors": " Pmlr", "journal": "", "ref_id": "b19", "title": "", "year": "" }, { "authors": "Yufang Huang; Wentao Zhu; Deyi Xiong; Yiye Zhang; Changjian Hu; Feiyu Xu", "journal": "", "ref_id": "b20", "title": "Cycle-consistent adversarial autoencoders for unsupervised text style transfer", "year": "2020" }, { "authors": "Becky Inkster; Shubhankar Sarda; Vinod Subramanian", "journal": "JMIR mHealth and uHealth", "ref_id": "b21", "title": "An empathy-driven, conversational artificial intelligence agent (wysa) for digital mental well-being: real-world data evaluation mixed-methods study", "year": "2018" }, { "authors": "Di Jin; Zhijing Jin; Zhiting Hu; Olga Vechtomova; Rada Mihalcea", "journal": "Computational Linguistics", "ref_id": "b22", "title": "Deep learning for text style transfer: A survey", "year": "2022" }, { "authors": "Zhijing Jin; Di Jin; Jonas Mueller; Nicholas Matthews; Enrico Santus", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "IMaT: Unsupervised text attribute transfer via iterative matching and translation", "year": "2019" }, { "authors": "Matt Kusner; Yu Sun; Nicholas Kolkin; Kilian Weinberger", "journal": "", "ref_id": "b24", "title": "From word embeddings to document distances", "year": "2015" }, { "authors": " Pmlr", "journal": "", "ref_id": "b25", "title": "", "year": "" }, { "authors": "Allison Lahnala; Yuntian Zhao; Charles Welch; Jonathan K Kummerfeld; Kenneth Lawrence C An; Rada Resnicow; Verónica Mihalcea; Pérez-Rosas", "journal": "Association for Computational Lingfgfg", "ref_id": "b26", "title": "Exploring self-identified counseling expertise in online support forums", "year": "2021" }, { "authors": "Camylle Lanteigne", "journal": "", "ref_id": "b27", "title": "Social robots and empathy: The harmful effects of always getting what we want", "year": "2019" }, { "authors": "Juncen Li; Robin Jia; He He; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer", "year": "2018" }, { "authors": "Chin-Yew Lin; Franz Josef; Och ", "journal": "", "ref_id": "b29", "title": "Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics", "year": "2004" }, { "authors": "Ruibo Liu; Chongyang Gao; Chenyan Jia; Guangxuan Xu; Soroush Vosoughi", "journal": "", "ref_id": 
"b30", "title": "Non-parallel text style transfer with self-parallel supervision", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b31", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Aman Madaan; Amrith Setlur; Tanmay Parekh; Barnabas Poczos; Graham Neubig; Yiming Yang; Ruslan Salakhutdinov; Alan W Black; Shrimai Prabhumoye", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Politeness transfer: A tag and generate approach", "year": "2020" }, { "authors": "François Mairesse; Marilyn A Walker", "journal": "Computational Linguistics", "ref_id": "b33", "title": "Controlling user perceptions of linguistic style: Trainable generation of personality traits", "year": "2011" }, { "authors": "Alexander Miller; Will Feng; Dhruv Batra; Antoine Bordes; Adam Fisch; Jiasen Lu; Devi Parikh; Jason Weston", "journal": "", "ref_id": "b34", "title": "ParlAI: A dialog research software platform", "year": "2017" }, { "authors": "Remi Mir; Bjarke Felbo; Nick Obradovich; Iyad Rahwan", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Evaluating style transfer for text", "year": "2019" }, { "authors": "Carlos Montemayor; Jodi Halpern; Abrol Fairweather", "journal": "AI & society", "ref_id": "b36", "title": "In principle obstacles for empathic ai: why we can't replace human empathy in healthcare", "year": "2021" }, { "authors": "Mahed Seyed; Alessandra Mousavi; Morena Cervone; Giuseppe Danieli; Riccardi", "journal": "", "ref_id": "b37", "title": "Would you like to tell me more? generating a corpus of psychotherapy dialogues", "year": "2021" }, { "authors": " Tb", "journal": "", "ref_id": "b38", "title": "", "year": "" }, { "authors": "J K Moyers; D Manuel; T Ernst; J Moyers; D Manuel; C Ernst; Fortini", "journal": "", "ref_id": "b39", "title": "Motivational interviewing treatment integrity coding manual 4", "year": "2014" }, { "authors": "Theresa B Moyers; Tim Martin; Jennifer K Manuel; William R Miller; Ernst", "journal": "Retrieved from Verfübar unter", "ref_id": "b40", "title": "The motivational interviewing treatment integrity (miti) code: Version 2.0", "year": "2003" }, { "authors": "Priya Nambisan", "journal": "Journal of the American Medical Informatics Association", "ref_id": "b41", "title": "Information seeking and social support in online health communities: impact on patients' perceived empathy", "year": "2011" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Verónica Pérez-Rosas; Xuetong Sun; Christy Li; Yuchen Wang; Kenneth Resnicow; Rada Mihalcea", "journal": "", "ref_id": "b43", "title": "Analyzing the quality of counseling conversations: the tell-tale signs of high-quality counseling", "year": "2018" }, { "authors": "Maja Popović", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "chrF: character n-gram F-score for automatic MT evaluation", "year": "2015" }, { "authors": "Yulia Shrimai Prabhumoye; Ruslan Tsvetkov; Alan W Salakhutdinov; Black", "journal": "", "ref_id": "b45", "title": "Style transfer through back-translation", "year": "2018" }, { "authors": "Sudha Rao; Joel Tetreault", "journal": 
"Association for Computational Linguistics", "ref_id": "b46", "title": "Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer", "year": "2018" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "year": "2019" }, { "authors": "Stephen Roller; Emily Dinan; Naman Goyal; Da Ju; Mary Williamson; Yinhan Liu; Jing Xu; Myle Ott; Eric Michael Smith; Y-Lan Boureau; Jason Weston", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "Recipes for building an open-domain chatbot", "year": "2021" }, { "authors": "Robert Schwartz", "journal": "", "ref_id": "b49", "title": "The big reveal | ethical implications of therapist self-disclosure", "year": "2021" }, { "authors": "Mingyue Shang; Piji Li; Zhenxin Fu; Lidong Bing; Dongyan Zhao; Shuming Shi; Rui Yan", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "Semi-supervised text style transfer: Cross projection in latent space", "year": "2019" }, { "authors": "Ashish Sharma; Monojit Choudhury; Tim Althoff; Amit Sharma", "journal": "", "ref_id": "b51", "title": "a. Engagement patterns of peerto-peer interactions on mental health platforms", "year": "2020" }, { "authors": "Ashish Sharma; Adam S Miner; David C Atkins; Tim Althoff", "journal": "", "ref_id": "b52", "title": "A computational approach to understanding empathy expressed in text-based mental health support", "year": "2020" }, { "authors": "Abu Fadi; Diana Sheikha; Inkpen", "journal": "", "ref_id": "b53", "title": "Generation of formal and informal sentences", "year": "2011" }, { "authors": "Tianxiao Shen; Tao Lei; Regina Barzilay; Tommi Jaakkola", "journal": "Advances in neural information processing systems", "ref_id": "b54", "title": "Style transfer from non-parallel text by cross-alignment", "year": "2017" }, { "authors": "Zachary Steel; Claire Marnane; Changiz Iranpour; Tien Chey; John W Jackson; Vikram Patel; Derrick Silove", "journal": "International journal of epidemiology", "ref_id": "b55", "title": "The global prevalence of common mental disorders: a systematic review and metaanalysis 1980-2013", "year": "2014" }, { "authors": "Rachael Tatman", "journal": "", "ref_id": "b56", "title": "", "year": "2022" }, { "authors": "Youzhi Tian; Zhiting Hu; Zhou Yu", "journal": "", "ref_id": "b57", "title": "Structured content preservation for unsupervised text style transfer", "year": "2018" }, { "authors": " Tang Wan; Hui Hu Jun; Zhang; Wu Pan; Hua", "journal": "Shanghai archives of psychiatry", "ref_id": "b58", "title": "Kappa coefficient: a popular measure of rater agreement", "year": "2015" }, { "authors": "Anuradha Welivita; Pearl Pu", "journal": "", "ref_id": "b59", "title": "Heal: A knowledge graph for distress management conversations", "year": "2022" }, { "authors": "Xing Xie", "journal": "", "ref_id": "b60", "title": "Dipsy: A digital psychologist", "year": "2017" }, { "authors": "Ruochen Xu; Tao Ge; Furu Wei", "journal": "", "ref_id": "b61", "title": "Formality style transfer with hybrid textual annotations", "year": "2019" } ]
[]
10.18653/v1/2022.findings-naacl.154
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b26", "b9", "b30", "b19", "b8", "b8", "b0", "b16" ], "table_ref": [], "text": "Spoken Language Understanding (SLU), which is used to extract the semantic frame of user queries (e.g., intents and slots) (Tur and De Mori, 2011). Typically, SLU consists of two sub-tasks: intent detection and slot filling. Take the utterance shown in Figure 1 as an example, given \"Listen to Rock Music\", the outputs include an intent class label (i.e., Listen-to-Music) and a slot label sequence (i.e., O, O, B-music-type, I-music-type).\nSince intent detection and slot filling are highly tied (Qin et al., 2021c), dominant methods in the literature explore joint models for SLU to capture shared knowledge (Goo et al., 2018;Wang et al., 2018;Qin et al., 2019). Recently, Gangadharaiah and Narayanaswamy (2019) shows that, in the amazon internal dataset, 52% of examples contain multiple intents. Inspired by this observation, various SLU work shift their eye from single-intent * Equal Contribution SLU to multi-intent SLU scenario (Gangadharaiah and Narayanaswamy, 2019;Qin et al., 2020b;Casanueva et al., 2022;Moghe et al., 2022).\nThanks to the development of neural network, especially the successful use of large pretrained models, remarkable success have been witnessed in SLU. Nevertheless, there still lacks a unified opensource framework to facilitate the SLU community. In this work, we make the first attempt to introduce OpenSLU, a unified, modularized, and extensible toolkit for SLU, which aims to help researchers to set up experiments and develop their new models quickly. The main features of OpenSLU are:\n• Unified and modularized toolkit. OpenSLU is the first unified toolkit to support both single-intent and multi-intent SLU scenarios. Meanwhile, it is highly modularized by decoupling SLU models into a set of highly reusable modules, including data module, model module, evaluation module, as well as various common components and functions. Such modularization allows users to quickly reimplement SLU baselines or develop their new SLU models by re-using provided modules or adding new modules. files. This enables users can easily develop their models by simply extending the configurations. Additionally, we provide various interfaces of various common functions or modules in SLU models, including Encoder and Decoder module. Besides, the interfaces of our toolkit are fully compatible with the Py-Torch interface, which allows seamless integration and flexibly rewriting any sub-module in the toolkit.\n• Visualization Tool. We provide a visualization tool to help users to view all errors of the model directly. With the help of visualization tool, we can get a clearer picture: where we are and where we should focus our efforts to improve the performance of the model, which helps to develop a more superior framework.\nTo our knowledge, this is the first unified, modularized, and extensible toolkit for SLU. We hope our work can help researchers to quickly initiate experiments and spur more breakthroughs in SLU 1 . " }, { "figure_ref": [], "heading": "Architecture and Design", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Data Module", "publication_ref": [], "table_ref": [], "text": "OpenSLU offers an integrated data format in the data module (see Figure 2(a)) for 1 Video introduction about OpenSLU is available at https: //youtu.be/uOXh47m_xhU. 
SLU models, which can be denoted as:\nraw text → Preprocessor → Dataset → DataLoaderFactory → model input.\nGiven the input raw text, Preprocessor submodule first pre-process different raw texts to an integrated .jsonl format that contains slot, text and intent, which is formatted as:\n{ \" slot \": [ List of Slot Value ] , \" text \": [ List of Text ] , \" intent \": [ Intent Value ] } .\nThe Dataset sub-module offers a range of data processing operations to support both pretrained and non-pretrained models. For pretrained models, these operations include lowercase conversion, BPE-tokenization, and slot alignment, while for non-pretrained models, the sub-module handles word-tokenization and vocabulary construction.\nFinally, DataLoaderFactory sub-model is used for creating DataLoader to manage the data stream for models." }, { "figure_ref": [ "fig_1" ], "heading": "Model Module", "publication_ref": [], "table_ref": [], "text": "As shown in Figure 2(b), the overall model module contains encoder module( §2.2.1) and decoder module ( §2.2.2)." }, { "figure_ref": [], "heading": "Encoder", "publication_ref": [ "b27", "b19", "b12", "b9", "b18", "b6", "b4", "b10" ], "table_ref": [], "text": "For the encoder module, we implement both nonpretrained models and pretrained models. In nonpretrained models, we offer the widely used SLU encoders including self-attentive (Vaswani et al., 2017;Qin et al., 2019) and BiLSTM (Hochreiter and Schmidhuber, 1997;Goo et al., 2018;Liu et al., 2020b) encoder. Additionally, we support autoload GloVe embedding (Pennington et al., 2014).\nIn pretrained models, OpenSLU supports various encoders including BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2020a), ELECTRA (Clark et al., 2020), DeBERTa v3 (He et al., 2021)." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Decoder", "publication_ref": [ "b9", "b30", "b2", "b9", "b13", "b19", "b30", "b7", "b14", "b19", "b9", "b19" ], "table_ref": [], "text": "Since slot filling and intent detection are highly related, dominant methods in the literature employ joint models to capture the shared knowledge across the related tasks (Goo et al., 2018;Wang et al., 2018;Chen et al., 2019). To support the joint modeling paradigm, decoder in OpenSLU contains two sub-modules: (1) interaction module for capturing interaction knowledge for slot filling and intent detection and (2) classification module for the final prediction results.\nInteraction Module. As summarized in Qin et al. (2021c), interaction module consists of two widely used the interaction types, including single flow interaction and bidirectional flow interaction.\n• Single Flow Interaction refers to the flow of information from intent to slot in one direction as illustrated in Figure 3(a). A series of studies (Goo et al., 2018;Li et al., 2018;Qin et al., 2019) have achieved remarkable improvements in performance by guiding slot filling with intent detection information.\n• Bidirectional Flow Interaction stands for that the bidirectional cross-impact between intent detection and slot filling can be considered, which is shown in Figure 3(b). Another series of works (Wang et al., 2018;E et al., 2019;Liu et al., 2019;Qin et al., 2021a) build the bidirectional connection across slot filling and intent detection to enhance each other.\nBased on the two types of interaction, users can easily design the interaction module and interaction order via our provided classic interaction modules and customized configurations.\nClassification Module. 
It aims to transform the hidden states produced by the interaction module into the final classification logits. There are two types of classification modules supported by OpenSLU:\n• MLP Classifier. The Multi-Layer Perceptron (MLP) classifier is a fundamental classification decoding algorithm. Nevertheless, it ignores the dependency across tokens.\n• LSTM Classifier. Here, an LSTM classifier is adopted for the final prediction, which has the advantage of modeling the dependency of tokens (from left to right). However, it is an autoregressive classification module, so decoding cannot be parallelized for speed.\nTo improve the quality of SLU prediction results, we also implement several SLU tricks, like teacher-forcing and token-level intent detection (Qin et al., 2019). Users can switch between different prediction strategies by simply setting the corresponding hyper-parameter." }, { "figure_ref": [], "heading": "Evaluation and Metrics", "publication_ref": [ "b9", "b19", "b8", "b9", "b19" ], "table_ref": [], "text": "• Slot F1 Score (Goo et al., 2018;Qin et al., 2019) is used for assessing slot filling performance. This metric is calculated as the harmonic mean between precision and recall.\n• Intent Accuracy (Goo et al., 2018;Qin et al., 2019) is used to evaluate the accuracy of intent detection, based on the ratio of correctly predicted intents.\n• Intent F1 Score (Gangadharaiah and Narayanaswamy, 2019;Qin et al., 2020b) is adopted to evaluate the macro F1 score of the predicted intents in multi-intent detection.\n• Exact Match Accuracy (Goo et al., 2018;Qin et al., 2019, 2020b) takes intent detection as well as slot filling into account simultaneously. This metric is calculated as the ratio of sentences for which both the intent and all slots are predicted correctly." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Common Modules", "publication_ref": [], "table_ref": [], "text": "Logger. We provide a generic Logger component to help users track the model building process, with support for wandb.ai, fitlog and local file logging (see Figure 2(d)).\nApplications. We provide complete scripts in the Applications module (see Figure 2(e)) for training, prediction, visual error analysis, and the final stage of model deployment.\nConfiguration. As shown in Figure 2(f), our toolkit employs the Configuration module to manage the model configuration, training parameters, and training and analysis data. We introduce more details in the Toolkit Usage section ( §3)." }, { "figure_ref": [], "heading": "Toolkit Usage", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Reproducing Existing Models", "publication_ref": [ "b9", "b11" ], "table_ref": [], "text": "To reproduce an existing model implemented in OpenSLU on different datasets, users only need to specify the dataset and model by setting the corresponding hyper-parameters, i.e., model and dataset. Experiments can then be reproduced with a simple command line instruction, as shown in Figure 4(a). This instruction fine-tunes the Slot-Gated (Goo et al., 2018) model on the ATIS (Hemphill et al., 1990) dataset. With YAML configuration files, hyper-parameters can be modified conveniently, so users can reproduce various experiments quickly without modifying the source code. In addition, we designed OpenSLU to work on a variety of hardware platforms. 
If the hyper-parameter device is set to \"cuda\", CUDA devices will be used; otherwise, the CPU will be used by default. As shown in Figure 4(b), we also support distributed training on multiple GPUs by setting hyper-parameters and command-line parameters." }, { "figure_ref": [ "fig_5" ], "heading": "Customizable Combination of Existing Components", "publication_ref": [], "table_ref": [], "text": "As the model is designed as a set of reusable modules, users can easily reuse modules via interface calls or configuration files. More specifically, through the interface, users can call commonly used encoder and decoder modules in one line of code from the pre-configured library. Through configuration files, users can combine existing components into a customized model without writing code. This can be useful for users in cross-cutting areas, such as biology, who are unfamiliar with Python, as it allows them to create their own models without writing any Python code. Such a feature can make it easier to build and test models rapidly. Similarly, the customized model can be trained by specifying the relevant configuration file path and running simple command line instructions, as shown in Figure 4(d)." }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Implementing a New SLU Model", "publication_ref": [ "b9", "b30", "b19", "b2", "b4", "b10", "b32", "b3" ], "table_ref": [], "text": "Since OpenSLU splits the model into fine-grained components, users can directly reuse modules through configuration files. Specifically, when users aim to implement a new SLU model, only a few key innovative modules need to be rewritten, namely a specific Model class and the following two functions:\n• __init__() function. This function handles parameter initialization, global variable definition, and so on. All modules can be inserted into the system by configuring the __model_target__ hyper-parameter, so as to quickly and automatically build the model.\n• forward() function. This function mainly focuses on the forward data flow and learns the parameters according to the pre-defined configuration.\nIn most cases, rewriting the Interaction module is enough for building a new SLU model. As shown in Figure 4(e), this module accepts a HiddenData object as input and returns a HiddenData object. HiddenData contains the hidden_states for intent and slot, and other helpful information. With the advancement of SLU research, patterns of decoders become increasingly complex (Xing and Tsang, 2022;Cheng et al., 2022). Therefore, to further meet the needs of complex exploration, we provide the BaseDecoder class, and the user can simply override its forward() function, which accepts HiddenData as the input data format and OutputData as the output data format, as shown in Figure 4(f)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "Extensive reproduction experiments are conducted to evaluate the effectiveness of OpenSLU." }, { "figure_ref": [], "heading": "Data Settings", "publication_ref": [ "b11", "b5" ], "table_ref": [], "text": "In single-intent SLU, we employ two widely used benchmarks, ATIS (Hemphill et al., 1990) and SNIPS (Coucke et al., 2018).\nIn the multi-intent SLU scenario, we support two widely used datasets, MixATIS and MixSNIPS (Qin et al., 2020b), which are constructed from ATIS and SNIPS by connecting sentences with different intents using simple conjunctions, e.g., \"and\"." }, { "figure_ref": [ "fig_7" ], "heading": "Result Reproduction", "publication_ref": [ "b9", "b30", "b19", "b2", "b4", "b10" ], "table_ref": [], "text": "We implement various state-of-the-art SLU models. For single-intent SLU methods, we re-implement the following baselines: (1) Slot Gated (Goo et al., 2018); (2) Bi-Model (Wang et al., 2018); (3) Stack Propagation (Qin et al., 2019); (4) DCA Net (Qin et al., 2021a); (5) Joint BERT (Chen et al., 2019); (6) RoBERTa (Liu et al., 2020a); (7) ELECTRA (Clark et al., 2020); (8) DeBERTa v3 (He et al., 2021). For multi-intent SLU methods, we adopt the following baselines: (1) AGIF (Qin et al., 2020b); (2) GL-GIN (Qin et al., 2021b).\nThe reproduction results are illustrated in Table 1, where each model is reported with Slot F1 / Intent Accuracy / EMA on ATIS and then SNIPS:\nSlot Gated: 94.7 / 94.5 / 82.5 and 93.2 / 97.6 / 85.1\nBi-Model: 95.2 / 96.2 / 85.6 and 93.1 / 97.6 / 84.1\nStack Propagation: 95.4 / 96.9 / 85.9 and 94.6 / 97.9 / 87.1\nDCA Net: 95.9 / 97.3 / 87.6 and 94.3 / 98.1 / 87.3\nJoint BERT: 95.8 / 97.9 / 88.6 and 96.4 / 98.4 / 91.9\nRoBERTa: 95.8 / 97.8 / 88.1 and 95.7 / 98.1 / 90.6\nELECTRA: 95.8 / 96.9 / 87.1 and 95.7 / 98.3 / 90.1\nDeBERTa v3: 95.8 / 97.8 / 88.4 and 97.0 / 98.4 / 92.7\nWe observe that the OpenSLU toolkit reproduces results comparable to those reported in previous works, which verifies the effectiveness of OpenSLU. In addition, OpenSLU outperforms some results reported in previously published work, which further shows its strength. Meanwhile, the same trend can be observed in the multi-intent SLU setting, as shown in Figure 5." }, { "figure_ref": [], "heading": "Visualization Analysis", "publication_ref": [ "b29", "b31", "b25", "b17" ], "table_ref": [], "text": "According to a number of studies (Vilar et al., 2006;Wu et al., 2019;Ribeiro et al., 2020;Paleyes et al., 2022), metric scores alone no longer adequately reflect a model's performance. To help researchers further improve their models, we provide a visualization tool that contains three functions: Error Distribution Analysis, Label Transfer Analysis, and Instance Analysis." }, { "figure_ref": [ "fig_8" ], "heading": "Error Distribution Analysis.", "publication_ref": [ "b1" ], "table_ref": [], "text": "We provide an error distribution analysis that presents the number and percentage of label errors predicted by the model. By viewing the error distributions, the model can be easily analyzed and studied qualitatively (Caubrière et al., 2020). As a result, the weaknesses of each system can be better understood and improvements can be made to the model in the future.\nTake the errors in Figure 6(a) as an example: a large number of atis_flight labels are predicted incorrectly compared with all other labels. Therefore, more attention should be paid to improving performance on the atis_flight label." }, { "figure_ref": [ "fig_8" ], "heading": "Label Transfer Analysis.", "publication_ref": [ "b31", "b25" ], "table_ref": [], "text": "The Label Transfer Analysis module first reports the percentage of incorrect predictions for each label and then provides the probability of it being incorrectly predicted as each of the other labels, presenting fine-grained statistics for a better understanding of issues such as invisible bias in the model (Wu et al., 2019;Ribeiro et al., 2020).\nFor example, Figure 6(b) shows the details of incorrect predictions for 'B-fromloc.city_name'. 
We observe that 34% of 'B-fromloc.city_name' tokens are predicted incorrectly, and 77.3% of these errors are predicted as 'O'. With access to this information, users can be better guided to improve their data or label learning methods to prevent such error predictions." }, { "figure_ref": [ "fig_8", "fig_8" ], "heading": "Instance Analysis.", "publication_ref": [], "table_ref": [], "text": "In order to provide a better case study, OpenSLU offers an instance-level analysis view by highlighting error results and interactively checking all golden labels (shown in Figure 6(c)). Such instance analysis allows users to examine data on a case-by-case basis in an intuitive way. This can be seen easily in Figure 6(c), where the token 'a' is predicted as 'B-fromloc.city_name' instead of 'O'.\nFurthermore, we also deploy OpenSLU on the Gradio 2 platform, which allows users to connect the demo directly to the public network and access it via a computer or mobile device." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b33" ], "table_ref": [], "text": "This paper introduces OpenSLU, a unified, modularized, and extensible toolkit for spoken language understanding. In our toolkit, we implement 10 models for both single-intent and multi-intent SLU settings, covering both non-pretrained and pretrained language models. Our toolkit is plug-and-play, can be easily applied to other SLU settings, and is extensible to support the seamless incorporation of external modules. To the best of our knowledge, this is the first open-source toolkit for SLU, and we hope OpenSLU can spur more breakthroughs in SLU. In the future, we plan to extend OpenSLU to support cross-lingual scenarios (Qin et al., 2020a;Zheng et al., 2022)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science Foundation of China (NSFC) via grants 62236004 and 61976072." } ]
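To make the extension interface of Section 3.3 concrete, the sketch below shows what a minimal custom decoder could look like. The names BaseDecoder, HiddenData, OutputData and hidden_states follow the paper's description, but the stand-in data classes, constructor signature, and dimensions are illustrative assumptions rather than the toolkit's actual API.

# Illustrative sketch of Section 3.3; not the official OpenSLU API.
from dataclasses import dataclass
import torch
import torch.nn as nn

@dataclass
class HiddenData:                 # simplified stand-in for the toolkit's HiddenData
    intent_hidden: torch.Tensor   # [batch, hidden]
    slot_hidden: torch.Tensor     # [batch, seq_len, hidden]

@dataclass
class OutputData:                 # simplified stand-in for the toolkit's OutputData
    intent_logits: torch.Tensor
    slot_logits: torch.Tensor

class BaseDecoder(nn.Module):     # placeholder base class
    def forward(self, hidden: HiddenData) -> OutputData:
        raise NotImplementedError

class MyDecoder(BaseDecoder):
    """A new decoder only needs __init__() and forward()."""
    def __init__(self, hidden_dim: int, num_intents: int, num_slots: int):
        super().__init__()
        self.intent_classifier = nn.Linear(hidden_dim, num_intents)
        self.slot_classifier = nn.Linear(hidden_dim, num_slots)

    def forward(self, hidden: HiddenData) -> OutputData:
        # Map interaction-module hidden states to classification logits.
        return OutputData(
            intent_logits=self.intent_classifier(hidden.intent_hidden),
            slot_logits=self.slot_classifier(hidden.slot_hidden),
        )

# Usage on dummy tensors (dimensions are arbitrary examples):
decoder = MyDecoder(hidden_dim=256, num_intents=21, num_slots=120)
out = decoder(HiddenData(torch.randn(2, 256), torch.randn(2, 16, 256)))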
Spoken Language Understanding (SLU) is one of the core components of a task-oriented dialogue system, which aims to extract the semantic meaning of user queries (e.g., intents and slots). In this work, we introduce OpenSLU, an open-source toolkit to provide a unified, modularized, and extensible toolkit for spoken language understanding. Specifically, OpenSLU unifies 10 SLU models for both single-intent and multi-intent scenarios, which support both non-pretrained and pretrained models simultaneously. Additionally, OpenSLU is highly modularized and extensible by decomposing the model architecture, inference, and learning process into reusable modules, which allows researchers to quickly set up SLU experiments with highly flexible configurations. OpenSLU is implemented based on PyTorch, and released at https://github.com/ LightChen233/OpenSLU.
OpenSLU: A Unified, Modularized, and Extensible Toolkit for Spoken Language Understanding
[ { "figure_caption": "Figure 1 :1Figure 1: An example of spoken language understanding. Listen-to-Music stands for the intent label while {O, O, B-music-type, I-music-type} denotes the slot sequence labels.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overall workflow of OpenSLU, which consists of (a) Data Module, (b) Model Module, (c) Evaluation and Metrics, (d) Logger, (e) Applications and (f) Configuration.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 22Figure 2 illustrates the overall workflow of OpenSLU. In this section, we describe the (a) Data Module ( §2.1); (b) Model Module; ( §2.2); (c) Evaluation and Metrics ( §2.3) and other common modules (Logger, Applications and Configuration module) ( §2.4).", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Single flow interaction decoder (a) vs. bidirectional flow interaction decoder (b).", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "FollowingGoo et al. (2018); Qin et al. (2021c), we support various metrics for SLU (shown in Figure 2(c)), including Slot F1 Score, Intent Accuracy, Intent F1, and Exactly Match Accuracy (EMA).", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Example usage of OpenSLU.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(%) Intent Acc.(%) EMA(%) Slot F1.(%) Intent Acc.(%) EMA(%)", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Multi-intent SLU main results on EMA. All baseline results are re-implemented by OpenSLU.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Visualization Analysis. It mainly contains three functions including Error Distribution Analysis (a), Label Transfer Analysis (b) and Instance Analysis (c).", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Single-intent SLU main Results. All baseline results are re-implemented by OpenSLU.", "figure_data": ".897.888.497.098.492.78070.274.874.940 6036.439.542.420MixATISMixSNIPS", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" } ]
Libo Qin; Qiguang Chen; Xiao Xu; Yunlong Feng; Wanxiang Che
[ { "authors": "Inigo Casanueva; Ivan Vulić; Georgios Spithourakis; Paweł Budzianowski", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "NLU++: A multilabel, slot-rich, generalisable dataset for natural language understanding in task-oriented dialogue", "year": "2022" }, { "authors": "Antoine Caubrière; Sahar Ghannay; Natalia Tomashenko; Renato De Mori; Antoine Laurent; Emmanuel Morin; Yannick Estève", "journal": "IEEE", "ref_id": "b1", "title": "Error analysis applied to end-to-end spoken language understanding", "year": "2020" }, { "authors": "Qian Chen; Zhu Zhuo; Wen Wang", "journal": "", "ref_id": "b2", "title": "Bert for joint intent classification and slot filling", "year": "2019" }, { "authors": "Lizhi Cheng; Wenmian Yang; Weijia Jia", "journal": "", "ref_id": "b3", "title": "A scope sensitive and result attentive model for multi-intent spoken language understanding", "year": "2022" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b4", "title": "ELECTRA: Pretraining text encoders as discriminators rather than generators", "year": "2020" }, { "authors": "Alice Coucke; Alaa Saade; Adrien Ball; Théodore Bluche; Alexandre Caulier; David Leroy; Clément Doumouro; Thibault Gisselbrecht; Francesco Caltagirone; Thibaut Lavril", "journal": "", "ref_id": "b5", "title": "Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces", "year": "2018" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "E Haihong; Peiqing Niu; Zhongfu Chen; Meina Song", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "A novel bi-directional interrelated model for joint intent detection and slot filling", "year": "2019" }, { "authors": "Rashmi Gangadharaiah; Balakrishnan Narayanaswamy", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Joint multiple intent detection and slot labeling for goal-oriented dialog", "year": "2019" }, { "authors": "Chih-Wen Goo; Guang Gao; Yun-Kai Hsu; Chih-Li Huo; Tsung-Chieh Chen; Keng-Wei Hsu; Yun-Nung Chen", "journal": "", "ref_id": "b9", "title": "Slot-gated modeling for joint slot filling and intent prediction", "year": "2018" }, { "authors": "Pengcheng He; Jianfeng Gao; Weizhu Chen", "journal": "", "ref_id": "b10", "title": "Debertav3: Improving deberta using electra-style pre-training with gradient-disentangled embedding sharing", "year": "2021" }, { "authors": "Charles T Hemphill; John J Godfrey; George R Doddington", "journal": "", "ref_id": "b11", "title": "The ATIS spoken language systems pilot corpus", "year": "1990-06-24" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural Computation", "ref_id": "b12", "title": "Long Short-Term Memory", "year": "1997" }, { "authors": "Changliang Li; Liang Li; Ji Qi", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "A selfattentive model with gate mechanism for spoken language understanding", "year": "2018" }, { "authors": "Yijin Liu; Fandong Meng; Jinchao Zhang; Jie Zhou; Yufeng Chen; Jinan Xu", "journal": "", "ref_id": "b14", "title": "CM-net: A novel collaborative memory network for spoken language understanding", "year": "2019" }, { "authors": "Yinhan Liu; 
Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov ; Zihan; Genta Liu; Zhaojiang Indra Winata; Peng Lin; Pascale Xu; Fung", "journal": "", "ref_id": "b15", "title": "Attention-informed mixed-language training for zero-shot cross-lingual task-oriented dialogue systems", "year": "2020" }, { "authors": "Nikita Moghe; Evgeniia Razumovskaia; Liane Guillou; Ivan Vulić; Anna Korhonen; Alexandra Birch", "journal": "", "ref_id": "b16", "title": "Multi3nlu++: A multilingual, multi-intent, multi-domain dataset for natural language understanding in task-oriented dialogue", "year": "2022" }, { "authors": "Andrei Paleyes; Raoul-Gabriel Urma; Neil D Lawrence", "journal": "ACM Comput. Surv", "ref_id": "b17", "title": "Challenges in deploying machine learning: A survey of case studies", "year": "2022" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher Manning", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "GloVe: Global vectors for word representation", "year": "2014" }, { "authors": "Libo Qin; Wanxiang Che; Yangming Li; Haoyang Wen; Ting Liu", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "A stack-propagation framework with token-level intent detection for spoken language understanding", "year": "2019" }, { "authors": "Libo Qin; Tailu Liu; Wanxiang Che; Bingbing Kang; Sendong Zhao; Ting Liu", "journal": "", "ref_id": "b20", "title": "a. A cointeractive transformer for joint slot filling and intent detection", "year": "2021" }, { "authors": "Libo Qin; Minheng Ni; Yue Zhang; Wanxiang Che; ; ", "journal": "International Joint Conferences on Artificial Intelligence Organization", "ref_id": "b21", "title": "Cosda-ml: Multi-lingual code-switching data augmentation for zero-shot cross-lingual nlp", "year": "2020" }, { "authors": "Libo Qin; Fuxuan Wei; Tianbao Xie; Xiao Xu; Wanxiang Che; Ting Liu", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "GL-GIN: Fast and accurate non-autoregressive model for joint multiple intent detection and slot filling", "year": "2021" }, { "authors": "Libo Qin; Tianbao Xie; Wanxiang Che; Ting Liu", "journal": "Survey Track", "ref_id": "b23", "title": "A survey on spoken language understanding: Recent advances and new frontiers", "year": "2021" }, { "authors": "Libo Qin; Xiao Xu; Wanxiang Che; Ting Liu", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "AGIF: An adaptive graph-interactive framework for joint multiple intent detection and slot filling", "year": "2020" }, { "authors": "Marco Tulio Ribeiro; Tongshuang Wu; Carlos Guestrin; Sameer Singh", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Beyond accuracy: Behavioral testing of NLP models with CheckList", "year": "2020" }, { "authors": "Gokhan Tur; Renato De Mori", "journal": "John Wiley & Sons", "ref_id": "b26", "title": "Spoken language understanding: Systems for extracting semantic information from speech", "year": "2011" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b28", "title": "", "year": "" }, { "authors": "David Vilar; Jia Xu; L F Haro; Hermann Ney", "journal": "", 
"ref_id": "b29", "title": "Error analysis of statistical machine translation output", "year": "2006" }, { "authors": "Yu Wang; Yilin Shen; Hongxia Jin", "journal": "", "ref_id": "b30", "title": "A bimodel based RNN semantic frame parsing model for intent detection and slot filling", "year": "2018" }, { "authors": "Tongshuang Wu; Marco Tulio Ribeiro; Jeffrey Heer; Daniel Weld", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Errudite: Scalable, reproducible, and testable error analysis", "year": "2019" }, { "authors": "Bowen Xing; Ivor Tsang", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Co-guiding net: Achieving mutual guidances between multiple intent detection and slot filling via heterogeneous semantics-label graphs", "year": "2022" }, { "authors": "Bo Zheng; Zhouyang Li; Fuxuan Wei; Qiguang Chen; Libo Qin; Wanxiang Che", "journal": "", "ref_id": "b33", "title": "HIT-SCIR at MMNLU-22: Consistency regularization for multilingual spoken language understanding", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 306.14, 302.8, 218.27, 23.12 ], "formula_id": "formula_0", "formula_text": "raw text → Preprocessor → Dataset → DataLoaderFactory → model input." }, { "formula_coordinates": [ 2, 306.59, 392.98, 166.03, 47.09 ], "formula_id": "formula_1", "formula_text": "{ \" slot \": [ List of Slot Value ] , \" text \": [ List of Text ] , \" intent \": [ Intent Value ] } ." } ]
2023-05-17
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16" ], "table_ref": [], "text": "The exponential growth of digital images on the internet, fueled by the increasing diversity of social media platforms, has brought forth significant challenges. One of the prominent issues is the ease with which digital images can be manipulated using easily accessible tools, leading to a surge in incidents of image tampering. Among various techniques employed for image tampering, copy-move forgery [1] stands out as one of the most commonly utilized and easily executed methods. This technique involves duplicating a specific region, known as the source region, within an image. Subsequently, the duplicated region is manipulated through processes such as scaling, rotating, or color adjustment before being pasted into another region, termed the target region, within the same image. Copy-move forgery serves as a means to conceal or duplicate objects within an image for malicious purposes [2]. Instances of copy-move forgery, such as the circulation of fake news containing manipulated images in political contexts, have the potential to confuse the public and contribute to political biases. Similarly, the malicious manipulation of evidence within legal proceedings or the falsification of experimental results in academic papers can have severe consequences, including judicial injustice and academic misconduct.\nHence, the development of image forensic methods for copy-move forgery detection is of paramount importance. In particular, distinguishing between the source and target regions within an image is crucial to accurately identify manipulated areas-a process commonly referred to as copy-move source/target distinguish (CMSTD). This technique finds relevance in numerous scenarios, including legal investigations, journalism, scientific research, and digital art forensics. In legal investigations, CMSTD serves the purpose of discerning original evidence from manipulated content. For instance, it can aid in determining whether a weapon has been added or removed from an image, thus assisting in establishing the authenticity of evidence. Similarly, in journalism, CMSTD plays a pivotal role in verifying the genuineness of images employed in news stories. Furthermore, CMSTD finds application in scientific research to identify tampered images in published papers. By doing so, it safeguards the reliability and integrity of research findings. Additionally, in the realm of digital art forensics, CMSTD aids in the detection of image manipulation within artistic works. This aspect becomes crucial for purposes such as copyright protection and plagiarism detection. Given the increasing prevalence of digital image tampering, CMSTD has emerged as a critical technique for ensuring the authenticity and dependability of digital images across various domains.\nAs the prevalence of copy-move forgery continues to increase, there is a growing demand for the development of accurate and efficient detection methods. In recent years, various techniques have been proposed to address this issue. Feature-based algorithms [3], [4] employ feature extraction techniques such as SIFT and segmentation to identify duplicated regions. Similarly, block-based algorithms [5], [6] utilize block comparisons for detection purposes. 
However, distinguishing between the source and target regions remains a challenging task in copy-move forgery detection. The rapid advancements in deep learning within the field of computer vision [7]-[9] have motivated numerous studies that leverage deep feature analysis for copy-move forgery detection. For instance, Rao et al. [10] proposed a convolutional neural network (CNN) that utilizes high-pass filters as the initialized layer to identify copy-move regions. Wu et al. [11] developed an end-to-end deep neural network capable of extracting block features and analyzing self-correlation between feature pixels to identify similar regions in copy-move forgery images. Additionally, Zhong et al. [12] proposed a dense inception network incorporating pyramid feature extractors, correlation matching blocks, and hierarchical post-processing modules to accurately localize copy-move regions. However, the previously mentioned methods only perform binary detection, in which the duplicated regions are highlighted and the pristine background is suppressed. In recent years, research that aims to distinguish whether the detected regions belong to the source or the target has also attracted attention. Wu et al. [13] proposed BusterNet, which consists of a similarity detection branch and a manipulation detection branch to detect and distinguish copy-move regions in images. Later, Chen et al. [14] proposed a cascaded network based on the architecture of BusterNet, which contains a copy-move similarity detection network (CMSDNet) and a source/target region distinguishment network (STRDNet). Different from BusterNet, the source/target localization is learned from the detection map of the CMSDNet; therefore, this method is a two-step network in which the two subnetworks must be trained separately. Islam et al. [15] proposed a dual-order attentive generative adversarial network (DOA-GAN) that adopts a dual-order attention module to extract location-aware features and atrous spatial pyramid pooling blocks to extract global features. Besides, Barni et al. [16] designed a multi-branch CNN to differentiate the source and target regions between two nearly duplicated regions with a hypothesis-testing framework, given the binary localization mask. Zhang et al. [17] proposed a CNN-transformer-based generative adversarial network that is the first to incorporate a transformer for feature extraction in copy-move forgery localization. However, the detection and distinction of copy-move regions still pose significant challenges due to several factors. Firstly, the presence of substantial differences among various datasets makes it difficult to develop a method that can be effectively transferred across different scenarios. Secondly, accurately co-localizing the source and target regions as forged areas proves to be a complex task. Existing methods typically address these two tasks independently or rely on binary-class detection obtained from a three-class segmentation mask. In this paper, we aim to investigate whether a single CNN can achieve a balanced performance in terms of forgery detection and source/target localization. The remaining sections of the paper are organized as follows: Section II introduces the proposed network. In Section III, we present the experimental settings and provide an analysis of the results. Finally, Section IV summarizes the findings and concludes this study." }, { "figure_ref": [ "fig_0" ], "heading": "II. METHODOLOGY A.
Overview", "publication_ref": [], "table_ref": [], "text": "The proposed research aims to evaluate the effectiveness of an end-to-end convolutional neural network in achieving a balanced performance between copy-move detection and localization. The network's ability to learn the co-relationship between source and target regions, instead of relying solely on memorizing their locations during training, is crucial due to variations in size and texture characteristics.\nIn Fig. 1, the network architecture is illustrated. It consists of three main components: a feature extractor, a transformerbased encoder, and a two-branch decoder responsible for generating the detection map and source/target distinguishment map. The input image is assumed to have dimensions of 256 × 256 × 3. Initially, the input image X undergoes feature extraction using ResNet18 to extract deep features. Subsequently, a two-vision transformer is employed as the encoder to highlight the manipulated regions within the image feature map. Finally, the resulting features are fed into forgery detection and localization decoders, which share a similar architecture. These decoders upsample the feature maps to match the original image size, generating the binary-class detection map Ŷ f and the three-class segmentation map Ŷ d , respectively. " }, { "figure_ref": [ "fig_0" ], "heading": "B. Transformer-based Encoder", "publication_ref": [ "b17", "b18" ], "table_ref": [], "text": "The transformer-based encoder leverages the power of two vision transformers to accurately localize copy-move regions within the deep feature. This aspect of the encoder plays a crucial role in capturing subtle and intricate modifications by computing the inner relationship among different feature patches. This is particularly challenging for convolutional neural networks that rely on direct memorization of related features from annotations. The design of the encoder takes inspiration from Twins [18], a notable work in the field.\nTo illustrate the functioning of the encoder, refer to Fig. 1. The local attention mechanism divides the features into multiple feature patches, enabling multi-head self-attention to be performed on each individual patch. This local attention mechanism helps in capturing fine-grained details and local dependencies within the feature space. On the other hand, the global attention mechanism conducts multi-head selfattention at a global level, allowing the encoder to capture broader contextual information and long-range dependencies. The enhanced deep features obtained from the attention mechanisms are further processed by a residual block [19] . This block comprises three 2D convolutional layers, with each layer followed by a Rectified Linear Unit (ReLU) activation function. The purpose of the residual block is to introduce nonlinearity and learn more abstract representations that contribute to improved feature representations." }, { "figure_ref": [ "fig_1" ], "heading": "C. Two Branch Decoder", "publication_ref": [], "table_ref": [], "text": "In order to effectively identify and delineate the copied regions within an image, our approach employs a dualbranch decoder architecture. This architecture comprises two distinct branches, namely the forgery detection branch and the source/target distinguishment branch. The decoder component itself is composed of four layers of 2D convolution, each of which is subsequently followed by an up-sampling layer.\nFig. 
Fig. 2 illustrates the structure of the decoders, highlighting the difference between the forgery detection branch, denoted as $\mathrm{Decoder}_f$, and the source/target distinguishment branch, denoted as $\mathrm{Decoder}_d$. The differentiating factor lies in the final layer of each branch: the forgery detection branch produces a binary-class map through a two-channel output, while the source/target distinguishment branch generates a three-channel output, resulting in a three-class map. This design choice enables the system to accurately identify the areas where copying has occurred within the image. The binary-class map indicates the presence or absence of copy-move regions, while the three-class map differentiates between the source and target regions involved in the forgery. By utilizing these two distinct decoder branches, our method aims to improve both the precision and the comprehensiveness of copy-move region detection. The entire network is trained by minimizing a cross-entropy loss for each of the two branches, comparing the predicted outputs with their corresponding annotations. The loss functions of the two branches are defined as $L^{f}_{ce} = -\frac{1}{hw}\big(Y_f \log \hat{Y}_f + (1 - Y_f)\log(1 - \hat{Y}_f)\big)$ and $L^{d}_{ce} = -\frac{1}{hw}\sum_{c=1}^{3} Y^{c}_{d}\log(\hat{Y}^{c}_{d})$ (1), where $\hat{Y}_f$ and $\hat{Y}_d$ represent the predicted results of $\mathrm{Decoder}_f$ and $\mathrm{Decoder}_d$, respectively, and $Y_f$ and $Y_d$ denote the binary-class and three-class annotation maps, respectively. To ensure balanced performance between the two tasks, it is important for the predicted binary detection map and the three-class segmentation map to be consistent. To achieve this, the source and target classes of $\hat{Y}_d$ are combined into a single class, and the mean squared error (MSE) between $\hat{Y}_f$ and $\hat{Y}_d$ is calculated as $L_{mse} = \mathbb{E}_{(\hat{Y}_f; \hat{Y}_d)}\big[(\hat{Y}_f - \hat{Y}_d)^2\big]$ (2). Finally, the MSE loss is added as a regularization term, and the entire network is optimized through back-propagation to minimize the total loss $L = L^{f}_{ce} + L^{d}_{ce} + \gamma L_{mse}$ (3), where $\gamma$ is a hyperparameter that controls the influence of the MSE loss term relative to the cross-entropy losses.
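As a concrete illustration, the training objective above could be implemented roughly as follows in PyTorch. The helper name, the tensor shapes, and the label encodings (0 for pristine, 1 for source, 2 for target) are assumptions of this sketch rather than details taken from the paper; the default gamma of 1000 follows the value suggested in the parametric analysis of Section III.

```python
import torch.nn.functional as F

def total_loss(y_f_logits, y_d_logits, y_f_gt, y_d_gt, gamma=1000.0):
    """Cross-entropy on both branches plus the MSE consistency regularizer of Eqs. (1)-(3).

    y_f_logits: (B, 2, H, W) detection logits; y_d_logits: (B, 3, H, W) distinguishment logits.
    y_f_gt: (B, H, W) in {0: pristine, 1: forged}; y_d_gt: (B, H, W) in {0: pristine, 1: source, 2: target}
    (the label encodings are assumptions of this sketch).
    """
    l_ce_f = F.cross_entropy(y_f_logits, y_f_gt)      # Eq. (1), detection branch
    l_ce_d = F.cross_entropy(y_d_logits, y_d_gt)      # Eq. (1), distinguishment branch

    # Eq. (2): merge the source and target probabilities into one "forged" class and
    # penalise disagreement with the detection branch's forged probability.
    p_forged_f = F.softmax(y_f_logits, dim=1)[:, 1]
    p_forged_d = F.softmax(y_d_logits, dim=1)[:, 1:].sum(dim=1)
    l_mse = F.mse_loss(p_forged_f, p_forged_d)

    return l_ce_f + l_ce_d + gamma * l_mse            # Eq. (3)
```

Calling this helper on the two decoder outputs and the corresponding ground-truth masks yields a single scalar loss that is back-propagated through the whole network.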
" }, { "figure_ref": [], "heading": "III. EXPERIMENTS AND ANALYSIS A. Datasets and Implementation Details", "publication_ref": [ "b12", "b19", "b20", "b12", "b21", "b22" ], "table_ref": [], "text": "To enhance the data scale for copy-move forgery, Wu et al. [13] introduced the USCISI dataset, which was compiled from two existing datasets: the MIT SUN2012 dataset [20] and the Microsoft COCO dataset [21]. The USCISI dataset comprises over 100,000 image samples, ranging in size from 340 × 260 to 1068 × 994 pixels. Following the experimental methodology outlined in [13], 80,000 samples were utilized for training, while the remaining samples were divided equally between validation and testing. The generalization ability of the proposed network was assessed on the CoMoFoD dataset [22], a representative copy-move dataset. CoMoFoD contains a total of 5,000 images, including 200 base forged images and 25 categories of attacks. The proposed method was implemented using the PyTorch framework and trained and evaluated on an NVIDIA A100 graphics processing unit. During training, we utilized the Adam optimizer [23] with an initial learning rate of 0.001 and a weight decay of 0.0005. To adapt the learning rate during training, we employed the "poly" learning rate decay policy, which rescales the initial learning rate by $(1 - \mathrm{iter}/\mathrm{max\_iter})^{power}$ after each iteration, where $power$ is set to 0.9. The training process was terminated after 30 epochs, and a batch size of 64 was used. After each epoch, we evaluated the model's performance on the validation set, and the best-performing model, determined by the highest mean F1 score, was selected for evaluation on the test set.
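For reference, the optimizer and the poly schedule described above can be reproduced with a LambdaLR wrapper as sketched below; the function name is hypothetical, max_iters depends on the actual number of training iterations, and the commented loop reuses the earlier total_loss and TwoBranchNet sketches.

```python
import torch

def build_optimizer_and_scheduler(model, max_iters, base_lr=1e-3, weight_decay=5e-4, power=0.9):
    """Adam with 'poly' decay: lr(it) = base_lr * (1 - it / max_iters) ** power."""
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr, weight_decay=weight_decay)
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda it: (1.0 - it / max_iters) ** power)
    return optimizer, scheduler

# Sketch of the per-iteration update (scheduler.step() is called once per iteration):
# for it, (images, y_f_gt, y_d_gt) in enumerate(loader):
#     y_f, y_d = model(images)
#     loss = total_loss(y_f, y_d, y_f_gt, y_d_gt)
#     optimizer.zero_grad(); loss.backward(); optimizer.step(); scheduler.step()
```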
" }, { "figure_ref": [ "fig_2", "fig_3", "fig_4", "fig_5" ], "heading": "B. Experimental Result", "publication_ref": [ "b12", "b14" ], "table_ref": [ "tab_0" ], "text": "To evaluate the performance of our proposed method, we conducted a rigorous comparison with established forgery detection and source/target distinguishment techniques, namely BusterNet [13] and DOA-GAN [15]. For the task of copy-move forgery detection, all manipulated regions within the images were treated as forged regions. We measured pixel-level precision, recall, and F1-score for each image and report the average scores across the entire USCISI dataset. The detection accuracy, along with the visualized results, is presented in Table I and Fig. 3. For the CoMoFoD dataset, which applies diverse post-processing operations to the base images to evaluate the robustness of the methods, we compute the mean F1-score within each category. For the purpose of counting correctly identified images, we regard F1-scores exceeding 0.5 as successful detections; this criterion is detailed in Tables II-III. Additionally, we present a subset of visual examples in Fig. 4. It can be observed that the proposed method outperforms the other competitive methods in accurately extracting the forged regions. The quantitative results and visualization examples of the source/target distinguishment task are presented in Table IV and Figs. 5 and 6. It is important to note that the model was trained and validated specifically on the USCISI dataset. Among the evaluated models, DOA-GAN demonstrated the highest performance on the original data source, while BusterNet exhibited the best transferability across different sources and targets. Although our network achieved the second-best performance compared to BusterNet and DOA-GAN on both datasets, it is worth mentioning that the best performance on the binary detection task does not necessarily translate into the best results for the source/target distinguishment task. This observation highlights the importance of considering specific task requirements when evaluating model performance. Furthermore, it is crucial to explore the transferability of copy-move detection networks across various datasets. While BusterNet demonstrated superior transferability in this study, its relatively low scores on the source and target classes indicate that its robustness in locating the manipulated regions still needs improvement. Continued research in this area will provide valuable insights into the generalization capabilities of copy-move detection networks and their effectiveness in different contexts." }, { "figure_ref": [], "heading": "C. Ablation Study", "publication_ref": [], "table_ref": [], "text": "To comprehensively evaluate the effectiveness of the mean squared error (MSE) loss and the transformer-based encoder, an ablation study was carried out on the USCISI dataset. The objective of this study was to investigate the impact of these components on the overall performance of the proposed network. The results are presented in Table V. It is evident from the table that both the MSE loss and the transformer contribute to enhancing the segmentation performance. The MSE loss helps minimize the discrepancy between the two predicted maps, facilitating accurate boundary delineation, while the transformer-based encoder, with its ability to capture long-range dependencies and contextual information, enriches the network's understanding of spatial relationships and structural patterns within the dataset. As a result, it effectively improves the overall segmentation accuracy and robustness of the system." }, { "figure_ref": [], "heading": "D. Parametric Analysis", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "In this subsection, we examine the impact of the value of γ and the depth of the transformer encoder on network performance. Our investigation, presented in Table VI, reveals that the detection and distinguishment performance improves significantly with higher values of γ; the suggested value of γ for optimal results is 1000. Moreover, the depth of the transformer encoder also plays a crucial role in network performance. Based on the findings in Table VII, it is recommended to employ a depth of either 1 or 2 for the transformer encoder in order to achieve desirable outcomes." }, { "figure_ref": [], "heading": "IV. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this study, we present an approach for enhancing forgery detection and source/target distinguishment using an end-to-end network. Our proposed method simultaneously generates a forgery detection map and a source/target distinguishment map. To achieve this, we employ a transformer-based encoder comprising two vision transformers. Rather than directly memorizing the entire dataset, the proposed approach focuses on capturing the copied and moved regions by emphasizing the inner features associated with the manipulated areas. To strike a balance between detection accuracy and localization, we introduce the mean squared error (MSE) similarity between the two maps. This similarity measure is calculated and utilized during back-propagation to minimize the overall training loss. By leveraging this strategy, we aim to improve the network's ability to accurately identify and locate manipulated regions. Although the experimental results demonstrate that the proposed toy end-to-end network surpasses existing methods in the binary detection task, there is still room for further exploration in developing reliable and robust models that can effectively disambiguate the changed regions and perform consistently on other datasets." } ]
Copy-move forgery detection is a crucial research area within digital image forensics, as it focuses on identifying instances where objects in an image are duplicated and placed in different locations. The detection of such forgeries is particularly important in contexts where they can be exploited for malicious purposes. Recent years have witnessed an increased interest in distinguishing between the original and duplicated objects in copy-move forgeries, accompanied by the development of larger-scale datasets to facilitate this task. However, existing approaches to copy-move forgery detection and source/target differentiation often involve two separate steps or the design of individual end-to-end networks for each task. In this paper, we propose a method that employs the transformer architecture in an end-to-end deep neural network. Our method aims to detect instances of copy-move forgery while simultaneously localizing the source and target regions. By utilizing this approach, we address the challenges posed by multi-object copy-move scenarios and report whether a balance between the detection and differentiation tasks can be achieved. To evaluate the performance of our proposed network, we conducted experiments on two publicly available copy-move datasets. The results and analysis aim to show the potential significance of balancing the detection and distinguishment results and of transferring the trained model across different datasets in the field.
Can Deep Network Balance Copy-Move Forgery Detection and Distinguishment?
[ { "figure_caption": "Fig. 1 .1Fig. 1. The overall flowchart of the proposed network. The network consisting of three main components: a feature extractor, a transformer-based encoder, and a two-branch decoder. The ResNet18 is utilized as the backbone network. The extracted features are then passed through a transformer-based encoder, which is constructed using a combination of local attention and global attention mechanisms. The final stage involves the two-branch decoder, which is responsible for forgery detection and source/target distinguishment.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. The structure of the decoders. (a) Decoder f and (b) Decoder d .", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Visualized examples of the copy-move forgery detection maps on the USCISI dataset.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Visualized examples of the copy-move forgery detection maps on the CoMoFoD dataset.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Visualized examples of the source/target distinguishment maps on the USCISI dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Visualized examples of the source/target distinguishment maps on the CoMoFoD dataset.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "DETECTION PERFORMANCES ON THE USCISI DATASET EXPRESSED IN TERMS OF PIXEL-LEVEL EVALUATION", "figure_data": "AlgorithmPrecision RecallF1BusterNet55.6157.7452.16DOA-GAN78.0955.9560.80Ours69.8965.9065.55", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "", "figure_data": "precision97.7594.7695.21pristinerecall99.4599.6398.02F198.5897.0096.415235Image551415224065 7953LabelBusterNetDOA-GANOurs", "figure_id": "tab_1", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "STUDY OF THE PROPOSED NETWORK ON THE USCISI DATASET.", "figure_data": "Copy-Move DetectionSource/Target DistinguishmentMSE loss TransformerSource & TargetPristineSourceTargetPristinePrecision RecallF1Precision RecallF1Precision RecallF1Precision RecallF1Precision RecallF163.0748.0751.3094.6697.2995.8747.9338.4840.1071.3166.9466.5994.6397.3495.8861.9350.9452.9794.9196.9395.8248.4942.7743.0273.0066.8967.3294.9096.9295.8167.5663.3962.6896.4497.1096.7157.7756.4955.1672.2876.7672.7396.4397.1396.7267.8365.0864.1396.6597.0996.8258.9058.5056.9174.3477.4974.2996.6597.1296.83", "figure_id": "tab_2", "figure_label": "V", "figure_type": "table" } ]
Shizhen Chang
[ { "authors": "V Christlein; C Riess; J Jordan; C Riess; E Angelopoulou", "journal": "IEEE Transactions on information forensics and security", "ref_id": "b0", "title": "An evaluation of popular copy-move forgery detection approaches", "year": "2012" }, { "authors": "G K Birajdar; V H Mankar", "journal": "Digital investigation", "ref_id": "b1", "title": "Digital image forgery detection using passive techniques: A survey", "year": "2013" }, { "authors": "I Amerini; L Ballan; R Caldelli; A Del; G Bimbo; Serra", "journal": "IEEE transactions on information forensics and security", "ref_id": "b2", "title": "A sift-based forensic method for copy-move attack detection and transformation recovery", "year": "2011" }, { "authors": "J Li; X Li; B Yang; X Sun", "journal": "IEEE transactions on information forensics and security", "ref_id": "b3", "title": "Segmentation-based image copymove forgery detection scheme", "year": "2014" }, { "authors": "D Cozzolino; G Poggi; L Verdoliva", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b4", "title": "Efficient dense-field copymove forgery detection", "year": "2015" }, { "authors": "E Ardizzone; A Bruno; G Mazzola", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b5", "title": "Copy-move forgery detection by matching triangles of keypoints", "year": "2015" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b6", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Y Xu; T Bai; W Yu; S Chang; P M Atkinson; P Ghamisi", "journal": "", "ref_id": "b7", "title": "Ai security for geoscience and remote sensing: Challenges and future trends", "year": "2022" }, { "authors": "S Chang; M Kopp; P Ghamisi", "journal": "", "ref_id": "b8", "title": "Dsfer-net: A deep supervision and feature retrieval network for bitemporal change detection using modern hopfield networks", "year": "2023" }, { "authors": "Y Rao; J Ni", "journal": "IEEE", "ref_id": "b9", "title": "A deep learning approach to detection of splicing and copy-move forgeries in images", "year": "2016" }, { "authors": "Y Wu; W Abd-Almageed; P Natarajan", "journal": "IEEE", "ref_id": "b10", "title": "Image copy-move forgery detection via an end-to-end deep neural network", "year": "2018" }, { "authors": "J.-L Zhong; C.-M Pun", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b11", "title": "An end-to-end dense-inceptionnet for image copy-move forgery detection", "year": "2019" }, { "authors": "Y Wu; W Abd-Almageed; P Natarajan", "journal": "", "ref_id": "b12", "title": "Busternet: Detecting copymove image forgery with source/target localization", "year": "2018" }, { "authors": "B Chen; W Tan; G Coatrieux; Y Zheng; Y.-Q Shi", "journal": "IEEE Transactions on Multimedia", "ref_id": "b13", "title": "A serial image copy-move forgery localization scheme with source/target distinguishment", "year": "2020" }, { "authors": "A Islam; C Long; A Basharat; A Hoogs", "journal": "", "ref_id": "b14", "title": "Doa-gan: Dual-order attentive generative adversarial network for image copy-move forgery detection and localization", "year": "2020" }, { "authors": "M Barni; Q.-T Phan; B Tondi", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b15", "title": "Copy move source-target disambiguation through multi-branch cnns", "year": "2020" }, { "authors": "Y Zhang; G Zhu; X Wang; X Luo; Y Zhou; H Zhang; L Wu", "journal": "IEEE Transactions on Circuits and 
Systems for Video Technology", "ref_id": "b16", "title": "Cnn-transformer based generative adversarial network for copy-move source/target distinguishment", "year": "2022" }, { "authors": "X Chu; Z Tian; Y Wang; B Zhang; H Ren; X Wei; H Xia; C Shen", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Twins: Revisiting the design of spatial attention in vision transformers", "year": "2021" }, { "authors": "S Chang; P Ghamisi", "journal": "", "ref_id": "b18", "title": "Changes to captions: An attentive network for remote sensing change captioning", "year": "2023" }, { "authors": "J Xiao; J Hays; K A Ehinger; A Oliva; A Torralba", "journal": "IEEE", "ref_id": "b19", "title": "Sun database: Large-scale scene recognition from abbey to zoo", "year": "2010" }, { "authors": "T.-Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "Springer", "ref_id": "b20", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "D Tralic; I Zupancic; S Grgic; M Grgic", "journal": "IEEE", "ref_id": "b21", "title": "Comofod-new database for copy-move forgery detection", "year": "2013" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b22", "title": "Adam: A method for stochastic optimization", "year": "2014" } ]
[ { "formula_coordinates": [ 3, 324.53, 256.28, 238.51, 54.3 ], "formula_id": "formula_0", "formula_text": "L f ce = - 1 hw (Y f log Ŷf + (1 -Y f ) log(1 -Ŷf )), L d ce = - 1 hw 3 c=1 (Y c d log( Ŷ c d )),(1)" }, { "formula_coordinates": [ 3, 372.74, 448.17, 190.3, 12.17 ], "formula_id": "formula_1", "formula_text": "L mse = E( Ŷf ; Ŷd )[( Ŷf -Ŷd ) 2 ].(2)" }, { "formula_coordinates": [ 3, 385.31, 509.28, 177.72, 12.69 ], "formula_id": "formula_2", "formula_text": "L = L f ce + L d ce + γL mse ,(3)" } ]